
Towards a holistic perspective on personal data and the data-driven election paradigm


This commentary is part of Data-driven elections, a special issue of Internet Policy Review guest-edited by Colin J. Bennett and David Lyon.

Politics is an art and not a science, and what is required for its mastery is not the rationality of the engineer but the wisdom and the moral strength of the statesman. - Hans Morgenthau, Scientific Man versus Power Politics

Voters, industry representatives, and lawmakers – and not infrequently, journalists and academics as well – have asked one question more than any other when presented with evidence of how personal data is changing modern-day politicking: “Does it work?” As my colleagues and I have detailed in our report, Personal Data: Political Persuasion, the convergence of politics and commercial data brokering has transformed personal data into a political asset, a means for political intelligence, and an instrument for political influence. The practices we document are varied and global: an official campaign app requesting camera and microphone permissions in India, experimentation to select slogans designed to trigger emotional responses from Brexit voters, a robocalling-driven voter suppression campaign in Canada, attack ads used to control voters’ first impressions on search engines in Kenya, and many more.

Asking “Does it work?” is understandable for many reasons, including to address any real or perceived damage to the integrity of an election, to observe shifts in attitudes or voting behaviour, or perhaps to ascertain and harness the democratic benefits of the technology in question. However, discourse fixated on the efficacy of data-intensive tools is fraught with abstraction and reflects a shortsighted appreciation for the full political implications of data-driven elections.

“Does it work?”

The question “Does it work?” is very difficult to answer with any degree of confidence regardless of the technology in question: personality profiling of voters to influence votes, natural language processing applied to the Twitter pipeline to glean information about voters’ political leanings, political ads delivered in geofences, or a myriad of others.

First, the question is too general: it glosses over crucial details. The technologies themselves are a heterogeneous mix, and their real-world implementations are manifold. Furthermore, questions of efficacy are often divorced from context, and a technology’s usefulness to a campaign very likely depends on the sociopolitical context in which it lives. Finally, the question of effectiveness continues to be studied extensively. Predictably, the conclusions of peer-reviewed research vary.

As one example, evidence on the effectiveness of implicit social pressure in direct mail in the United States remains inconclusive. The motivation for this research is the observation that voting is a social norm responsive to others’ impressions (Blais, 2000; Gerber & Rogers, 2009). However, some evidence suggests that explicit social pressure to mobilise voters (e.g., by disclosing their vote histories) may seem invasive and can backfire (Matland & Murray, 2013). In an attempt to preserve the benefits of social pressure while mitigating its drawbacks, researchers have explored whether implicit social pressure in direct mail (i.e., mailers with an image of eyes, reminding recipients of their social responsibility) boosts turnout on election day. Evaluating implicit social pressure, which had apparently been regarded as effective, political scientists Richard Matland and Gregg Murray concluded that “The effects are substantively and statistically weak at best and inconsistent with previous findings” (Matland & Murray, 2016). In response to similar, repeated findings from Matland and Murray, Costas Panagopoulos wrote that their work in fact “supports the notion that eyespots likely stimulate voting, especially when taken together with previous findings” (Panagopoulos, 2015). Panagopoulos soon thereafter authored a paper arguing that the true impact of implicit social pressure varies with political identity, claiming that the effect is pronounced for Republicans but not for Democrats or Independents, while Matland maintained that the effect is "fairly weak" (Panagopoulos & van der Linden, 2016; Matland, 2016).

Similarly, studies on the effects of door-to-door canvassing lack consensus (Bhatti et al., 2019). Donald Green, Mary McGrath, and Peter Aronow published a review of seventy-one canvassing experiments and found their average impact to be robust and credible (Green, McGrath, & Aronow, 2013). A number of other experiments have demonstrated that canvassing can boost voter turnout outside the American-heavy literature: among students in Beijing in 2003, with British voters in 2005, and for women in rural Pakistan in 2008 (Guan & Green, 2006; John & Brannan, 2008; Giné & Mansuri, 2018). Studies from Europe, however, call into question the generalisability of these findings. Two studies on campaigns in 2010 and 2012 in France both produced ambiguous results, as the true effect of canvassing was not credibly positive (Pons, 2018; Pons & Liegey, 2019). Experiments conducted during the 2013 Danish municipal elections observed no definitive effect of canvassing, while Enrico Cantoni and Vincent Pons found that visits by campaign volunteers in Italy helped increase turnout, but those by the candidates themselves did not (Bhatti et al., 2019; Cantoni & Pons, 2017). In some cases, the effect of door-to-door canvassing was neither positive nor ambiguous but distinctly counterproductive. Florian Foos and Peter John observed that voters contacted by canvassers and given leaflets for the 2014 British European Parliament elections were 3.7 percentage points less likely to vote than those in the control group (Foos & John, 2018). Taken together, the effects of canvassing still seem positive in Europe, but they are less pronounced than in the US. These findings have led some scholars to note that “practitioners should be cautious about assuming that lessons from a US-dominated field can be transferred to their own countries’ contexts” (Bhatti et al., 2019).

A cursory glance at a selection of literature related to these two cases alone – implicit social pressure and canvassing – illustrates how tricky answering “Does it work?” is. Although many of the technologies in use today are personal data-supercharged analogues of these antecedents (e.g., canvassing apps with customised scripts and talking points based on data about each household’s occupants instead of generic, door-to-door knocking), I have no reason to suspect that analyses of data-powered technologies would be any different. The short answer to “Does it work?” is that it depends. It depends on baseline voter turnout rates, print vs. digital media, online vs. offline vs. both combined, targeting young people vs. older people, reaching members of a minority group vs. a majority group, partisan vs. nonpartisan messages, cultural differences, the importance of the election, local history, and more. Indeed, factors like the electoral setup may alter the effectiveness of a technology altogether. A tool for political persuasion might work in a first-past-the-post contest in the United States but not in a European system of proportional representation in which winner-take-all stakes may be tempered. This is not to suggest that asking “Does it work?” is a futile endeavour – indeed there are potential democratic benefits to doing so – but rather that it is both limited in scope and rather abstract given the multitude of factors and conditions at play in practice.

Political calculus and algorithmic contagion

With this in mind, I submit that a more useful approach to appreciating the full impact of data-driven elections is to consider the preconditions that allow data-intensive practices to thrive and to examine their consequences, rather than to remain preoccupied with the efficacy of the practices themselves.

In a piece published in 1986, philosopher Ian Hacking coined the term ‘semantic contagion’ to describe the process of ascribing linguistic and cultural currency to a phenomenon by naming it and thereby also contributing to its spread (Hacking, 1999). I propose that the prevailing political calculus, spurred on by the commercial success of “big data” and “AI”, appears overtaken by an ‘algorithmic contagion’ of sorts. On one level, algorithmic contagion speaks to the widespread logic of quantification. For example, understanding an individual is difficult, so data brokers instead measure people along a number of dimensions like level of education, occupation, credit score, and others. On another level, algorithmic contagion in this context describes an interest in modelling anything that could be valuable to political decision-making, as Market Predict’s political page suggests. It presumes that complex phenomena, like an individual’s political whims, can be predicted and known within the structures of formalised algorithmic process, and that they ought to be. According to the Wall Street Journal, a company executive claimed that Market Predict’s “agent-based modelling allows the company to test the impact on voters of events like news stories, political rallies, security scares or even the weather” (Davies, 2019).

Algorithmic contagion also encompasses a predetermined set of boundaries. Thinking within the capabilities of algorithmic methods prescribes a framework to interpret phenomena within bounds that enable the application of algorithms to those phenomena. In this respect, algorithmic contagion can influence not only what is thought but also how. This conceptualisation of algorithmic contagion encompasses the ontological (through efforts to identify and delineate components that structure a system, like an individual’s set of beliefs), the epistemological (through the iterative learning process and distinction drawn between approximation and truth), and the rhetorical (through authority justified by appeals to quantification).

Figure 1: The political landing page of Market Predict, a marketing optimisation firm for brand and political advertisers, that explains its voter simulation technology. It claims to, among other things, “Account for the irrationality of human decision-making”. Hundreds of companies offer related services. Source: Market Predict Political Advertising

This algorithmic contagion-informed formulation of politics bears some connection to the initial “Does it work?” query but expands the domain in question to include not only the applications themselves but also the components of the system in which they operate – a shift that an honest analysis of data-driven elections, and not merely ad-based micro-targeting, demands. It explains why and how a candidate for mayor of Taipei in 2014 created a viral social media sensation by going to a tattoo parlour. He did not visit the parlour to get a tattoo, to chat with an artist about possible designs, or out of a genuine interest in meeting the people there. He went because a digital listening company that mines troves of data and services campaigns across Southeast Asia had generated a list of actions that would create the most buzz online for his campaign, and visiting a tattoo parlour was at the top of the list.

Figure 2: A still from a video documenting Dr Ko-Wen Je’s visit to a tattoo parlour, prompting a social media sensation. His campaign uploaded the video a few days before municipal elections in which he was elected mayor of Taipei in 2014. The post on Facebook has 15,000 likes, and the video on YouTube has 153,000 views. Against a backdrop of creeping voter surveillance, Dr Ko-Wen Je’s visit to this tattoo parlour raises questions about the authenticity of political leaders. (Image brightened for clarity) Sources: Facebook and YouTube

As politics continues to evolve in response to algorithmic contagion and to the data industrial complex governing the commercial (and now also political) zeitgeist, it is increasingly concerned with efficiency and speed (Schechner & Peker, 2018). Which influencer voters must we win over, and whom can we afford to ignore? Who is both the most likely to turn out to vote and also the most persuadable? How can our limited resources be allocated as efficiently as possible to maximise the probability of winning? In this nascent approach to politics as a practice to be optimised, who is deciding what is optimal? Relatedly, as the infrastructure of politics changes, who owns the infrastructure upon which more and more democratic contests are waged, and to what incentives do they respond?

Voters are increasingly treated as consumers – measured, ranked, and sorted by a logic imported from commerce. Instead of being sold shoes, plane tickets, and lifestyles, voters are being sold political leaders, and structural similarities to other kinds of business are emerging. One challenge posed by data-driven election operations is the manner in which responsibilities have effectively been transferred. Voters expect their interests to be protected by lawmakers while indiscriminately clicking “I Agree” to free services online. Efforts to curtail problems through laws are proving to be slow, mired in legalese, and vulnerable to technological circumvention. Based on my conversations with them, venture capitalists are reluctant to champion a transformation of the whole industry by imposing unprecedented privacy standards on their budding portfolio companies, which claim to be merely responding to the demands of users. The result is an externalised cost shouldered by the public. In this case, however, the externality is not an environmental or a financial cost but a democratic one. The manifestations of these failures include the disintegration of the public sphere and of a shared understanding of facts, polarised electorates embroiled in 365-day-a-year campaign cycles online, and open campaign finance and conflict of interest loopholes introduced by data-intensive campaigning, all of which are exacerbated by a growing revolving door between the tech industry and politics (Kreiss & McGregor, 2017).

Personal data and political expediency

One response to Cambridge Analytica is to ask “Does psychometric profiling of voters work?” (Rosenberg et al., 2018). A better response examines what the use of psychometric profiling reveals about the intentions of those attempting to acquire political power. It asks what it means that a political campaign was apparently willing to invest the time and money to build personality profiles of every single adult in the United States in order to win an election, regardless of the accuracy of those profiles, even when surveys of Americans indicate that they do not want political advertising tailored to their personal data (Turow et al., 2012). And it explores the ubiquity of services that may lack Cambridge Analytica’s sensationalised scandal but share the company’s practice of collecting and using data in opaque ways for clearly political purposes.

The ‘Influence Industry’ underlying this evolution has evangelised the value of personal data, but to whatever extent personal data is an asset, it is also a liability. What risks do the collection and use of personal data expose? In the language of the European Union’s General Data Protection Regulation (GDPR), who are the data controllers, and who are the data subjects in matters of political data which is, increasingly, all data? In short, who gains control, and who loses it?

As a member of a practitioner-oriented group based in Germany with a grounding in human rights, I worry about data-intensive practices in elections and the larger political sphere going awry, especially as much of our collective concern seems focused on questions of efficacy while companies race to capitalise on the market opportunity. By the historical standards of its time, the Holocaust was a ruthlessly data-driven, calculated, and efficient undertaking fuelled by vast amounts of personal data. As Edwin Black documents in IBM & The Holocaust: The Strategic Alliance between Nazi Germany and America's Most Powerful Corporation, personal data managed by IBM was an indispensable resource for the Nazi regime. IBM’s president at the time, Thomas J. Watson Sr., the namesake of today’s IBM Watson, went to great lengths to profit from dealings between IBM’s German subsidiary and the Nazi party. The firm was such an important ally that Hitler awarded Watson the Order of the German Eagle for his invaluable service to the Third Reich. IBM aided the Nazis’ record-keeping across several phases of the Holocaust, including the identification of Jews, ghettoisation, deportation, and extermination (Black, 2015). Black writes that “Prisoners were identified by descriptive Hollerith cards, each with columns and punched holes detailing nationality, date of birth, marital status, number of children, reason for incarceration, physical characteristics, and work skills” (Black, 2001). These Hollerith cards were sorted in machines physically housed in concentration camps.

The precursors to these Hollerith cards were originally developed to track personal details for the first American census. The next American census, to be held in 2020, has already been a highly politicised affair with respect to the addition of a citizenship question (Ballhaus & Kendall, 2019). President Trump recently abandoned an effort to formally add a citizenship question to the census, vowing to seek this information elsewhere, and the US Census Bureau has already published work investigating the quality of alternate citizenship data sources for the 2020 Census (Brown et al., 2018). For stakeholders interested in upholding democratic ideals, focusing on the accuracy of this alternate citizenship data is myopic; that an alternate source of data is being investigated to potentially advance an overtly political goal is the crux of the matter.

Figure 3: A card showing the personal data of Symcho Dymant, a prisoner at Buchenwald Concentration Camp. The card includes many pieces of personal data, including name, birth date, condition, number of children, place of residence, religion, citizenship, residence of relatives, height, eye colour, description of his nose, mouth, ears, teeth, and hair. Source: US Holocaust Memorial Museum

This prospect may seem far-fetched and alarmist to some, but I do not think it is. If the political tide were to turn, the same personal data used for a benign digital campaign could be employed in insidious and downright unscrupulous ways if it were ever expedient to do so. What if a door-to-door canvassing app instructed volunteers walking down a street to skip your home and not remind your family to vote because a campaign had profiled you as supporters of the opposition candidate? What if a data broker classified you as Muslim, or if an algorithmic analysis of your internet browsing history suggested that you are prone to dissent? Possibilities like these are precisely why a fixation on efficacy is parochial. Given the breadth and depth of personal data used for political purposes, the line between consulting data to inform a political decision and appealing to data – given the rhetorical persuasiveness it enjoys today – in order to weaponise a political idea is extremely thin.

A holistic appreciation of data-driven elections’ democratic effects demands more than simply measurement, and answering “Does it work?” is merely one component of grasping how campaigning transformed by technology and personal data is influencing our political processes and the societies they engender. As digital technologies continue to rank, prioritise, and exclude individuals even when – indeed, especially when – inaccurate, we ought to consider the larger context in which technological practices shape political outcomes in the name of efficiency. The infrastructure of politics is changing, charged with an algorithmic contagion, and a well-rounded perspective requires that we ask not only how these changes are affecting our ideas of who can participate in our democracies and how they do so, but also who derives value from this infrastructure and how they are incentivised, especially when benefits are enjoyed privately but costs sustained democratically. The quantitative tools underlying the ‘datafication’ of politics are neither infallible nor safe from exploitation, and issues of accuracy grow moot when data-intensive tactics are enlisted as pawns in political agendas. A new political paradigm is emerging whether or not it works.

References

Ballhaus, R., & Kendall, B. (2019, July 11). Trump Drops Effort to Put Citizenship Question on Census, The Wall Street Journal. Retrieved from https://www.wsj.com/articles/trump-to-hold-news-conference-on-census-citizenship-question-11562845502

Bhatti, Y., Olav Dahlgaard, J., Hedegaard Hansen, J., & Hansen, K. M. (2019). Is Door-to-Door Canvassing Effective in Europe? Evidence from a Meta-Study across Six European Countries. British Journal of Political Science, 49(1), 279–290. https://doi.org/10.1017/S0007123416000521

Black, E. (2015, March 17). IBM’s Role in the Holocaust -- What the New Documents Reveal. HuffPost. Retrieved from https://www.huffpost.com/entry/ibm-holocaust_b_1301691

Black, E. (2001). IBM & The Holocaust: The Strategic Alliance between Nazi Germany and America's Most Powerful Corporation. New York: Crown Books.

Blais, A. (2000). To Vote or Not to Vote: The Merits and Limits of Rational Choice Theory. Pittsburgh: University of Pittsburgh Press. https://doi.org/10.2307/j.ctt5hjrrf

Brown, J. D., Heggeness, M. L., Dorinski, S., Warren, L., & Yi, M. (2018). Understanding the Quality of Alternative Citizenship Data Sources for the 2020 Census [Discussion Paper No. 18-38]. Washington, DC: Center for Economic Studies. Retrieved from https://www2.census.gov/ces/wp/2018/CES-WP-18-38.pdf

Cantoni, E., & Pons, V. (2017). Do Interactions with Candidates Increase Voter Support and Participation? Experimental Evidence from Italy [Working Paper No. 16-080]. Boston: Harvard Business School. Retrieved from https://www.hbs.edu/faculty/Publication%20Files/16-080_43ffcfcb-74c2-4713-a587-ebde30e27b64.pdf

Davies, P. (2019, June 10). A New Crystal Ball to Predict Consumer and Investor Behavior. The Wall Street Journal. Retrieved from https://www.wsj.com/articles/a-new-crystal-ball-to-predict-consumer-and-investor-behavior-11560218820?mod=rsswn

Foos, F., & John, P. (2018). Parties Are No Civic Charities: Voter Contact and the Changing Partisan Composition of the Electorate. Political Science Research and Methods, 6(2), 283–298. https://doi.org/10.1017/psrm.2016.48

Gerber, A. S., & Rogers, T. (2009). Descriptive Social Norms and Motivation to Vote: Everybody’s Voting and so Should You. The Journal of Politics, 71(1), 178–191. https://doi.org/10.1017/S0022381608090117

Giné, X. & Mansuri, G. (2018). Together We Will: Experimental Evidence on Female Voting Behavior in Pakistan. American Economic Journal: Applied Economics, 10(1), 207–235. https://doi.org/10.1257/app.20130480

Green, D.P., McGrath, M. C. & Aronow, P. M. (2013). Field Experiments and the Study of Voter Turnout. Journal of Elections, Public Opinion and Parties, 23(1), 27–48. https://doi.org/10.1080/17457289.2012.728223

Guan, M. & Green, D. P. (2006). Noncoercive Mobilization in State-Controlled Elections: An Experimental Study in Beijing. Comparative Political Studies, 39(10), 1175–1193. https://doi.org/10.1177/0010414005284377

Hacking, I. (1999). Making Up People. In M. Biagioli (Ed.), The Science Studies Reader (pp. 161–171). New York: Routledge. Retrieved from http://www.icesi.edu.co/blogs/antro_conocimiento/files/2012/02/Hacking_making-up-people.pdf

John, P., & Brannan, T. (2008). How Different Are Telephoning and Canvassing? Results from a ‘Get Out the Vote’ Field Experiment in the British 2005 General Election. British Journal of Political Science, 38(3), 565–574. https://doi.org/10.1017/S0007123408000288

Kreiss, D., & McGregor, S. C. (2017). Technology Firms Shape Political Communication: The Work of Microsoft, Facebook, Twitter, and Google With Campaigns During the 2016 U.S. Presidential Cycle, Political Communication, 35(2), 155–77. https://doi.org/10.1080/10584609.2017.1364814

Matland, R. (2016). These Eyes: A Rejoinder to Panagopoulos on Eyespots and Voter Mobilization. Political Psychology, 37(4), 559–563. https://doi.org/10.1111/pops.12282

Matland, R. E. & Murray, G. R. (2013). An Experimental Test for ‘Backlash’ Against Social Pressure Techniques Used to Mobilize Voters, American Politics Research, 41(3), 359–386. https://doi.org/10.1177/1532673X12463423

Matland, R. E., & Murray, G. R. (2016). I Only Have Eyes for You: Does Implicit Social Pressure Increase Voter Turnout? Political Psychology, 37(4), 533–550. https://doi.org/10.1111/pops.12275

Panagopoulos, C. (2015). A Closer Look at Eyespot Effects on Voter Turnout: Reply to Matland and Murray, Political Psychology, 37(4). https://doi.org/10.1111/pops.12281

Panagopoulos, C. & van der Linden, S. (2016). Conformity to Implicit Social Pressure: The Role of Political Identity, Social Influence, 11(3), 177–184. https://doi.org/10.1080/15534510.2016.1216009

Pons, V. (2018). Will a Five-Minute Discussion Change Your Mind? A Countrywide Experiment on Voter Choice in France, American Economic Review, 108(6), 1322–1363. https://doi.org/10.1257/aer.20160524

Pons, V., & Liegey, G. (2019). Increasing the Electoral Participation of Immigrants: Experimental Evidence from France. The Economic Journal, 129(617), 481–508. https://doi.org/10.1111/ecoj.12584

Rosenberg, M., Confessore, N., & Cadwalladr, C. (2018, March 17). How Trump Consultants Exploited the Facebook Data of Millions, The New York Times. Retrieved from https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html

Schechner, S., & Peker, E. (2018, October 24). Apple CEO Condemns ‘Data-Industrial Complex’. The Wall Street Journal.

Turow, J., Delli Carpini, M. X., Draper, N. A., & Howard-Williams, R. (2012). Americans Roundly Reject Tailored Political Advertising [Departmental Paper No. 7-2012]. Annenberg School for Communication, University of Pennsylvania. Retrieved from http://repository.upenn.edu/asc_papers/398


Big data and democracy: a regulator’s perspective


This commentary is part of Data-driven elections, a special issue of Internet Policy Review guest-edited by Colin J. Bennett and David Lyon.

Introduction: all roads lead to Victoria, British Columbia

As the Information and Privacy Commissioner for British Columbia, I am entrusted with enforcing the province’s two pieces of privacy legislation – BC’s Freedom of Information and Protection of Privacy Act (FIPPA) and the Personal Information Protection Act (PIPA). When these laws came into force, “Big Data” was not a term in public discourse. All of that, of course, has changed irrevocably.

In late summer 2017, I left the Office of the Information and Privacy Commissioner for BC (OIPC) to take on an assignment with the UK Information Commissioner’s Office (ICO), under the former BC Commissioner, Elizabeth Denham. I had temporarily stepped aside from my role as Deputy Commissioner at the OIPC to help lead the ICO’s investigation of how the UK’s political parties collected and used the personal information of voters (Information Commissioner's Office, United Kingdom, 2018). Their enquiry came on the heels of media reports concerning the potential misuse of data during the country’s European Union referendum (Doward, 2017). At the time, I had no idea that, two years later, I would find myself having come full circle to the world’s most notorious data breach: the Facebook/Cambridge Analytica scandal, which affected more than 80 million users worldwide (Badshah, 2018).

Soon after my arrival, I interviewed the key data strategists of the UK’s two largest parties. With their significant resources, these parties were able to gather volumes of voter data and make predictions about voting intentions. They also had the means to target specific classes of voters in pursuit of their support. Those party representatives were very nervous about sharing the mechanics of their work. This reluctance intersects with one of modern democracy’s great challenges, and it was why the ICO launched its investigation: citizens know very little about what information political parties collect about them – and how that information is being used.

The public was concerned about the opacity of political campaign systems even before the ICO began its work. But their concern was soon to grow exponentially. In early 2018, UK Information Commissioner Elizabeth Denham and I met a young man in a lawyer’s office in London. He was from, of all places, Victoria, BC, and his name was Christopher Wylie.

We were the first regulator or law enforcement agency to talk with Wylie, and his story was sweeping and shocking in its breadth. Many weeks later, the rest of the world would learn the details of how Cambridge Analytica extracted psychological profiles of millions of Facebook users for the purposes of weaponising targeted political messages. Many of those revelations were reported exclusively by The Guardian journalist Carole Cadwalladr, who wrote extensively about the whistleblower beginning in March 2018 (Cadwalladr, 2018).

Suddenly the whole world was paying attention to the explosive mix of new technologies and personal information and how it was affecting political campaigns. The paired names of Cambridge Analytica and Facebook became seared into the public consciousness, providing a cautionary tale about what can go wrong when people’s personal information is abused in such a nefarious manner (Meredith, 2018). The Facebook/Cambridge Analytica breach has, without question, shaken the public’s confidence in our democratic political campaigning system.

It is no doubt purely coincidental that so many storylines of this scandal trace their way to Victoria, BC. Adding to the regulatory connection and the whistleblower Christopher Wylie is the Victoria-based company AggregateIQ Data Services (AIQ), which analysed the data on behalf of Cambridge Analytica’s parent company, SCL Elections. Victoria is also home to Dr Colin Bennett, who has long been a leading global authority on these matters, work that has now taken on an even greater urgency. For this reason, the OIPC teamed up with the Big Data Surveillance project, coordinated by the Surveillance Studies Centre at Queen’s University and headed by Dr David Lyon. Our office was pleased to host the workshop in April 2019 on “Data-Driven Elections: Implications and Challenges for Democratic Societies,” from which the papers in this collection originated.

Privacy regulators, along with electoral commissioners, are on the frontline of these questions about the integrity of our democratic institutions. However, in some jurisdictions, regulators have very few means to address them, especially as it concerns political parties, whose appetites for the personal information of voters are seemingly insatiable. How then does a regulator convince politicians to regulate themselves?

Home, and another Facebook/Cambridge Analytica investigation

Following the execution of the warrant on Cambridge Analytica’s office in London, I returned home to accept my appointment as BC’s fourth Information and Privacy Commissioner. However, there was no escaping the fallout of the issues I investigated in the UK and their connections to Canada.

As it turned out, the personal information of more than 600,000 Canadian Facebook users had been vacuumed up by Cambridge Analytica (Braga, 2018). But this wasn’t the only Canadian connection to the breach. After acquiring that personal information, Cambridge Analytica (CA) and its parent company SCL Elections needed a way to make the data practically usable for CA’s potential clients. That requirement would eventually be filled by AIQ.

With a BC and a Canadian connection to this story it became clear that coordinated regulatory action would be required. The Privacy Commissioner of Canada, Daniel Therrien, and I decided to join forces to look at both the Facebook/CA breach and the activities of AIQ (OIPC news release, 2018).

This joint investigation found that Facebook did little to ensure its users’ data was properly protected. Its privacy protection programme was, as my colleague Daniel Therrien called it, an “empty shell.” We recommended, among other things, that Facebook properly audit all of the apps that were allowed to collect their users’ data (OIPC news release, 2019b). Facebook brazenly rejected our findings and recommendations, which of course underscores another huge obstacle.

How can society hold global giants like Facebook to account? Many data protection authorities, like my office, lack enforcement tools commensurate with the challenges these companies pose to the public interest. Moreover, my office and that of the federal commissioner have far fewer powers than those available to our European counterparts. I have order-making power, but I cannot levy fines. My federal counterpart does not even possess order-making power; he investigates in response to complaints, or on his own initiative, and makes recommendations. The only real vehicle he has at his disposal to seek a remedy is an unwieldy court application process, which is ongoing as I write. So one can understand why we look with some envy to the European DPAs, which now have the power to impose administrative fines of up to 20 million euros, or 4% of a company’s worldwide annual revenue.

British Columbia’s political parties and privacy regulation

Responsibility for privacy legislation in Canada is divided between the federal government and the provinces (OPC, 2018). The federal regulator, the Office of the Privacy Commissioner of Canada, has no authority to hold political parties to account. Among the provinces that have their own privacy legislation, only one has regulatory oversight over political parties: British Columbia. Given all that was going on at home and around the world concerning political parties, we decided to exercise that authority and investigate how BC’s political parties were collecting and using voter information (OIPC news release, 2019a).

To varying degrees, the province’s three main political parties expressed concerns about how BC’s private sector privacy legislation, the Personal Information Protection Act (PIPA) (BC PIPA, 2019), might impact their ability to communicate with voters. Some argued that voter participation rates were in decline and that it was already difficult enough to reach out to voters. Anything that further impaired methods of connecting with voters, like privacy regulation, would only make the problem worse, they said. My answer was this: can anyone seriously maintain that the Facebook/CA scandal has generated an increased desire on the part of citizens to participate in the electoral process? It is only when voters trust political parties to handle their data with integrity, and in a manner consistent with privacy law, that they will feel truly confident in engaging robustly in the political campaign system.

After some initial trepidation, these political parties, each with representatives in the legislative assembly, cooperated fully with my office’s investigation. It is important to stress we did not find abuses of personal data, of the kind exhibited in the Facebook/CA scandal. Nor did we find the sophisticated level of data collection and analytics associated with heavily funded US political campaigns. We did find, however, that the parties were collecting and using a lot of information about voters and had a clear appetite to do much more. So, our work was timely, and hopefully it will result in short-circuiting the worst excesses seen in other jurisdictions.

BC’s private sector privacy legislation is principle-based, and the predominant principle is consent. Consent was therefore the lens through which we assessed the parties’ actions. By that measure, many of their practices contravened our law and many others were at least legally questionable.

As in many jurisdictions, BC’s political parties are entitled by law to receive a voters’ list of names and addresses from the Chief Electoral Officer (Elections BC, 2019). This information forms the basic building block upon which parties compile comprehensive voter profiles. What parties add to the voters’ list is sometimes added with consent, but in many cases without. Door-to-door canvassing, the oldest and most basic method of gathering voter intelligence, is an example of this two-sided coin. The transparent element of this contact occurs when a voter voluntarily expresses support and provides a phone number or email for contact purposes. During the same visit, however, the canvasser might record, without permission, the voter’s ethnicity (or at least the canvasser’s best guess about it). We found many instances of this type of information being entered into a party’s database.

We also found that parties used voter contact information in ways that went well beyond voters’ expectations. A voter could expect to be called or emailed with a reminder to vote on election day. They would not expect, and did not consent to, the party disclosing their personal information to Facebook. There is little question that Facebook has become the newest and best friend of almost all political parties. The company offers parties a rich gateway to reach their supporters and potential supporters.

The problem is that neither the parties nor Facebook do very much to explain this to voters.

It starts with the fact that many, if not most, voters are Facebook users. The parties disclose their voters’ contact information to Facebook in the hope of matching them with their Facebook profiles. If successful, Facebook offers the party two valuable things. The first is the ability to advertise to these individuals in their Facebook newsfeed. Facebook gains revenue from this and is implicitly given the opportunity to understand the political leanings of its users. The second use for matched voter contact information is Facebook’s analysis of the uploaded profiles to find common characteristics among them. When complete, it offers the party, for a price, the opportunity to advertise to other Facebook users who “look like” the party’s supporters. This tool, which is also used by commercial businesses, provides an extremely effective means for political campaigns to reach an audience of potentially persuadable voters.
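The matching step described above is typically implemented by normalising contact details and comparing hashed digests rather than raw emails; SHA-256 hashing of trimmed, lowercased addresses is common industry practice for this kind of audience matching. A minimal sketch of the logic (all names and data here are hypothetical, and the exact scheme is an assumption, not a description of Facebook’s internal systems):

```python
import hashlib

def normalize_and_hash(email: str) -> str:
    """Trim, lowercase, and SHA-256 hash an email address so records can be
    compared as digests rather than exchanged as raw contact details."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

def match_audience(party_emails, platform_hashes):
    """Return the hashed contacts from the party's upload that the platform
    also holds -- the matched audience that can then be advertised to."""
    uploaded = {normalize_and_hash(e) for e in party_emails}
    return uploaded & platform_hashes

# Hypothetical data: two contacts from a party database, one of which
# corresponds to an account known to the platform.
party_list = ["Voter.One@example.com ", "voter.two@example.com"]
platform = {normalize_and_hash("voter.one@example.com")}
matched = match_audience(party_list, platform)  # one contact matches
```

Note that hashing here protects the data in transit, not the voter: once a digest matches, the platform knows exactly which of its users the party holds contact details for.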

Reduced to its basics, what many parties do is gather voters’ contact information supposedly for direct communication purposes but instead disclose it to a social media giant for advertising and analytic purposes. It would understate things to say that these interactions with voters lack transparency.

All kinds of other data are also added and combined with basic voter information. Postal zone demographics and polling research for example are commonly deployed as parties attempt to attribute characteristics to voters with a view to targeting those they judge to be likely supporters. Most parties “score” voters on the likelihood of support.
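Reduced to pseudocode, such “scoring” amounts to combining voter-file entries and appended estimates into a single support-likelihood number. A hypothetical sketch follows; the field names and weights are invented for illustration, since real party models are proprietary:

```python
# Hypothetical voter "scoring": blending direct signals (donations, canvass
# responses) with appended, probabilistic data (postal-zone demographics)
# into one support-likelihood number between 0 and 1.
def support_score(voter: dict) -> float:
    score = 0.0
    if voter.get("past_donor"):
        score += 0.4  # direct, voter-provided signal
    if voter.get("canvass_response") == "supportive":
        score += 0.4  # recorded at the door
    # appended estimate attributed to everyone in the voter's postal zone
    score += 0.2 * voter.get("postal_zone_support_rate", 0.0)
    return min(score, 1.0)

voters = [
    {"past_donor": True, "canvass_response": "supportive",
     "postal_zone_support_rate": 0.5},
    {"past_donor": False, "postal_zone_support_rate": 0.3},
]
scores = [support_score(v) for v in voters]  # ranked for targeting
```

The sketch makes the privacy issue concrete: the second voter never volunteered anything, yet still receives a score derived from data about their neighbourhood.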

Whether using these data sources to score voters is permitted by privacy law is a matter likely to be tested in the near future. What is clear, however, is that parties should be far more transparent about their actions, for no other reason than voters have a right to know what information parties have about them.

Political parties in BC and the UK have been slow to recognise this obligation. Parties in both jurisdictions told me that prediction data about a voter, for example their “persuadability score”, was not, in fact, that voter’s personal information. In another instance, I was told that this score was a commercial secret that could be withheld from a voter. Such a stance does not breed public confidence and is contrary to privacy law in BC and most other jurisdictions.

What then does the future hold? Even the most cursory reflection on this question suggests the answers will come from multiple places. For my office, the first and most obvious ally in protecting the public interest is the province’s Chief Electoral Officer. He is not only the keeper of the voter list, he also tackles other immeasurably complex matters like election interference and disinformation campaigns. The need for us to work together is critical.

We have already embarked on a joint venture to develop a code of conduct for political parties which we hope BC political parties will adopt. Unlike the UK, which has a mechanism for the imposition of such codes, political parties in BC will have to voluntarily sign on. The benefit to parties is that everyone ends up playing by the same set of well-understood standards. It also means the public will have far greater confidence in their interactions with the parties, which hopefully will result in a far more robust campaign system. Thus far, the parties have accepted my investigation report’s recommendations and are working cooperatively with me and with the BC Chief Electoral Officer on developing the code.

The investigation into AIQ

Facebook is but one of the companies political campaigns turn to. This brings us back to Victoria, BC, home base for AIQ (AggregateIQ, 2019). Among other things, AIQ developed “Project Ripon,” the architecture designed to make usable all of the data ingested by Cambridge Analytica. AIQ justified the non-consensual targeting of US voters on the basis that its American clients, who collected the personal information in the first instance, had no legal obligation to seek consent.

My joint report on AIQ with the Office of the Privacy Commissioner of Canada (McEvoy & Therrien, 2019) determined that this was no legal answer. The fact is, they were a Canadian company operating in BC and were obligated to comply with BC law. This meant that AIQ had to exercise due diligence in seeking assurance from their clients that consent was employed to collect the personal information they intended to use. They obviously didn’t.

Subsequent events also undermined AIQ’s claim that the US data it worked with was lawfully obtained. The Federal Trade Commission found in late 2019 that Cambridge Analytica, working with app developer Aleksandr Kogan, deceived users by telling them their personal information would not be collected (Agreement Containing Consent Order as to Respondent Aleksandr Kogan, 2019). The message to Canadian companies operating globally is that they must observe the rules of the places in which they work as well as those of their home territory.

In the end, AIQ agreed to follow the recommendations of our joint report, cleaning up its practices to ensure, going forward, that they secure consent for the personal information used in client projects as well as improving security measures for safeguarding that information.

Conclusion

In the two years that have taken me from Victoria to the UK and back, the privacy landscape has changed dramatically. The public’s understanding of the privacy challenges we face as a society has been seismically altered. In the past, it was not uncommon for people to ask me at events, “Maybe I share a bit too much of my information on Facebook, but what could possibly go wrong with that?” Facebook/Cambridge Analytica graphically demonstrated exactly what could go wrong. The idea that enormous numbers of people could be psychologically profiled for the purposes of political message targeting without their knowledge shocked people. The CanTrust Index (CanTrust Index, 2019), which tracks Canadians’ trust in major brands, found in its most recent survey that Facebook’s reputation took a sharp nosedive between 2017 and 2019. In 2017, 51 per cent of Canadians trusted Facebook. Today, just 28 per cent say the same.

The underpinnings of the entire economic model now driving the internet and its social media platforms have been put on full public display. While few people can describe the detailed workings of real-time bidding or a cookie’s inner mechanics, most comprehend that their daily activities across the web are tracked in meticulous detail.

While public awareness and concern have shifted markedly, action by legislators to address those concerns has in many jurisdictions struggled to keep pace. It is true that the General Data Protection Regulation has set a new standard in Europe, but even there the more exacting ePrivacy Regulation has stalled (Bannerman, 2019). Canadian legislators have tried to be proactive in responding to privacy’s changing landscape. However, the Privacy Commissioner of Canada, as noted, is without direct order-making power. Neither of our offices has the authority to issue administrative penalties. It is little wonder citizens are left to ask “Who has my back?” when organisations violate data protection laws.

The road to reform will not be an easy one. There is considerable bureaucratic and corporate resistance to a stronger regulatory regime. Working together, regulators, academics, and civil society must continue to urge for legislative reform. Our efforts are strongly supported by public sentiment. The OPC’s 2019 survey on privacy (OPC, 2019) revealed that a substantial number of Canadians would be far more willing to transact with a business that was under an enhanced regulatory regime that included financial penalties for wrongdoers. That should be a signal to organisations, including political parties, that data protection is good for their business and that they too should support strengthened regulatory frameworks.

References

AggregateIQ. (2019, December 18). Discover what we can do for you. Retrieved from https://aggregateiq.com/

Badshah, N. (2018, April 8). Facebook to contact 87 million users affected by data breach. The Guardian. Retrieved from https://www.theguardian.com/technology/2018/apr/08/facebook-to-contact-the-87-million-users-affected-by-data-breach

Bannerman, N. (2019, November 26). EU countries fail to agree on OTT ePrivacy regulation. Capacity Media. Retrieved from https://www.capacitymedia.com/articles/3824568/eu-countries-fail-to-agree-on-ott-eprivacy-regulation

British Columbia, Personal Information Protection Act (PIPA). (2019, November 27). Retrieved from http://www.bclaws.ca/civix/document/id/complete/statreg/03063_01

Braga, M. (2018, April 4). Facebook says more than 600,000 Canadians may have had data shared with Cambridge Analytica. CBC News. Retrieved from https://www.cbc.ca/news/technology/facebook-cambridge-analytica-600-thousand-canadians-1.4605097

Cadwalladr, C. (2018, March 17). ‘I made Steve Bannon’s psychological warfare tool’: Meet the data war whistleblower. The Guardian. Retrieved from https://www.theguardian.com/news/2018/mar/17/data-war-whistleblower-christopher-wylie-faceook-nix-bannon-trump

CanTrust Index. (2019, April 25). Retrieved from https://www.getproof.com/thinking/the-proof-cantrust-index/

Doward, J. (2017, March 4). Watchdog to launch inquiry into misuse of data in politics. The Guardian. Retrieved from https://www.theguardian.com/technology/2017/mar/04/cambridge-analytics-data-brexit-trump

Elections BC. (2019). What we do. Retrieved from https://elections.bc.ca/about/what-we-do/

Information Commissioner's Office (ICO). (2018, November 6). Investigation into the use of data analytics in political campaigns [Report]. London: Information Commissioner’s Office. Retrieved from https://ico.org.uk/media/action-weve-taken/2260271/investigation-into-the-use-of-data-analytics-in-political-campaigns-final-20181105.pdf

McEvoy, M., & Therrien, D. (2019). AggregateIQ Data Services Ltd [Investigation Report No. P19-03 PIPEDA-035913]. Victoria; Gatineau: Office of the Information & Privacy Commissioner for British Columbia; Office of the Privacy Commissioner of Canada. Retrieved from https://www.oipc.bc.ca/investigation-reports/2363

Meredith, S. (2018, April 10). Facebook-Cambridge Analytica: A timeline of the data hijacking scandal. CNBC. Retrieved from https://www.cnbc.com/2018/04/10/facebook-cambridge-analytica-a-timeline-of-the-data-hijacking-scandal.html

Office of Information and Privacy Commissioner for BC (OIPC). (2018, April 5). BC, federal commissioners initiate joint investigations into Aggregate IQ, Facebook [News release]. Retrieved from https://www.oipc.bc.ca/news-releases/2144

Office of Information and Privacy Commissioner for BC (OIPC). (2019a, February 6). BC Political Parties aren’t doing enough to explain how much personal information they collect and why [News release]. Retrieved from https://www.oipc.bc.ca/news-releases/2279

Office of Information and Privacy Commissioner for BC (OIPC). (2019b, April 25). Facebook refuses to address serious privacy deficiencies despite public apologies for breach of trust [News release]. Retrieved from https://www.oipc.bc.ca/news-releases/2308

Office of the Privacy Commissioner of Canada (OPC). (2018, January 1). Summary of privacy laws in Canada. Retrieved from https://www.priv.gc.ca/en/privacy-topics/privacy-laws-in-canada/02_05_d_15/

Office of the Privacy Commissioner of Canada (OPC). (2019, May 9). 2018-19 Survey of Canadians on Privacy [Report No. POR 055-18]. Retrieved from https://www.priv.gc.ca/en/opc-actions-and-decisions/research/explore-privacy-research/2019/por_2019_ca/

United States, Federal Trade Commission (FTC). (2019). Agreement Containing Consent Order as to Respondent Aleksandr Kogan. Retrieved from https://www.ftc.gov/system/files/documents/cases/182_3106_kogan_do.pdf


On the edge of glory (…or catastrophe): regulation, transparency and party democracy in data-driven campaigning in Québec

This paper is part of Data-driven elections, a special issue of Internet Policy Review guest-edited by Colin J. Bennett and David Lyon.

Introduction

For the last 50 years, Québec politics has been characterised by a lasting two-party system structured around the divide between the Yes and No options on political independence from the rest of Canada for the 8.4 million people of Canada’s predominantly Francophone jurisdiction (Pelletier, 1989). Following the failure of the 1995 referendum, the erosion of this divide opened up the partisan system and brought four parties into the Québec National Assembly (Dufresne et al., 2019; Langlois, 2018). With a new party elected to government for the first time since 1976, the 2018 election was one of realignment. The Coalition avenir Québec (CAQ) elected 74 Members of the National Assembly (MNAs). With 31 seats, the outgoing government, the Québec Liberal Party (QLP), received its worst result in 150 years and formed the official opposition. With 10 MNAs each, Québec solidaire (QS), a left-wing party, and the Parti québécois (PQ), the historic vehicle for independence, occupied the remaining opposition seats.

Beyond these election results, the 2018 Québec election also marks an organisational change. For the first time, the major parties all massively adopted what are often referred to as “US” data-campaigning practices. However, when it comes to the use of digital technologies for electoral purposes, the US case is the exception rather than the rule (Enli and Moe, 2013; Gibson, 2015; Vaccari, 2013, p. ix). Indeed, data campaigning, like other techniques of political communication, is conducted in specific contexts that affect what is accessible, possible and viable (Bennett, 2016; Dobber et al., 2017; Ehrhard et al., 2019; Flanagan, 2010, p. 156).

Not unlike other Canadian jurisdictions, Québec is therefore an interesting case for studying the effects of these practices on parties that operate in a parliamentary system while not being subject to privacy protection rules. Moreover, to our knowledge, studies on this subject in a sub-national context are few. In Canada, the majority of the work focuses on federal parties (see for example Bennett, 2018; McKelvey and Piebiak, 2018; Munroe and Munroe, 2018; Patten, 2015, 2017; Thomas, 2015), leaving the provincial and municipal levels behind (with the notable exceptions of Carlile, 2017; Yawney, 2018; and Giasson et al., 2019). Thus, the French-speaking jurisdiction represents, as Giasson et al. (2019, p. 3) argue, one of those relevant but “less obvious” cases to study in order to better understand the similarities and differences in why and how political parties adopt or resist technological innovations. This type of case study also makes it possible to explore the gap between emerging opportunities and the campaigns actually deployed by the parties, beyond the “rhetoric of data-driven campaigning” (see Baldwin-Philippi, 2017, p. 627).

Many factors influence technological innovation in campaigns (Kreiss, 2016). Furthermore, as Hersh (2015) indicates, cultural and legal contexts influence political actors’ behaviour because the types of data made available to campaigns shape their perceptions of voters, and therefore their communication practices. According to Munroe and Munroe (2018), political parties may use data, generated in many ways, as a resource to guide strategic and tactical decisions. Because parties set up integrated platforms in which personal data on voters are stored and analysed, ethical and political issues emerge (Bennett, 2013, 2015). In most Canadian provinces, including Québec, and at the federal level, parties are not subject to privacy laws regarding the use and protection of personal data. This absence of a regulatory framework also leads to inadequate self-regulation (Bennett, 2018; Howard and Kreiss, 2010).

As was the case in many other jurisdictions around the globe, Québec parties faced a transparency deficit following the March 2018 revelations of the Cambridge Analytica affair (Bashyakarla et al., 2019; Cadwalladr and Graham-Harrison, 2018). Within hours of the scandal becoming public, political reporters in Québec turned to party leaders to get a better sense of the scope and use of the digital data they were collecting, why they collected them, and what this all meant for the upcoming fall elections as well as for citizens’ privacy (Bélair-Cirino, 2018). Most claimed that their data collection and analysis practices were ethical and respectful of citizens’ privacy. However, none of them agreed to fully disclose the scope of the data they collected or the exact purpose of these databases.

Research objectives and methodology

This article examines the increasing pressure to regulate the uses of digital personal data by Québec’s political parties. First, it illustrates the central role now played by voters’ personal data in Québec politics. Second, it presents the current (and weak) legislative framework and how the issue of the protection of personal data came onto the agenda in Québec. At first, many saw this shift as a positive evolution in which Québec’s parties “caught up” with current digital marketing practices. However, following the Cambridge Analytica affair and revelations about the lack of proper regulation of voter data use, public discourse started casting these technological advancements as democratic catastrophes waiting to happen.

We use three types of data to investigate this context. First, in order to assess the growth in party use of digital voter data, we rely on 40 semi-structured interviews conducted for a broader research project with party organisers, elected officials, activists and advisors of all the main political parties operating in Québec 1. The interviews, each lasting from 45 minutes to one hour, were conducted in French just a few weeks before the launch of the 2018 provincial election campaign. Citations presented in this article are therefore translations. The interviewees were selected for their political representativeness, but also for their high level of electoral involvement. In this article, we only use those responses that relate to digital campaigning and the use of personal information. The citations selected here represent viewpoints shared by at least three interviewees. They illustrate shared perceptions of the evolution of the strategic use of voters’ personal data in Québec electioneering.

Second, we also analysed the legislative framework as well as the self-regulatory practices of political parties in Québec in order to measure the levels of regulation and transparency surrounding their use of personal data. To do this, we studied the websites of the four main parties in order to compare their practices.

Finally, we conducted an analysis of media coverage of parties’ engagement in digital marketing. We ran a keyword search on the Eureka.cc database to retrieve all texts about digital data issues related to Québec politics published in the four main French-language daily newspapers in Québec (La Presse, Le Devoir, Le Soleil and Le Journal de Montréal), in the public affairs magazine L’Actualité, and on the Radio-Canada website. The time period runs from 1 January 2012 to 1 March 2019 and covers three general (2012, 2014 and 2018) and two municipal (2013 and 2017) elections. Our search returned 223 news articles.

What we find is a perfect storm: parties massively adopting data marketing at the very moment regulatory bodies were expressing concerns about their lack of supervision. In the background, an international scandal made headlines and changed the prevailing discourse surrounding these technological innovations.

New digital tools, a new political reality

The increased use of digital technologies and data for electioneering can be traced back to the 2012 provincial election (see Giasson et al., 2019). Québec political parties were then faced with a changing electorate, and data collection helped them adapt to this new context. Most of them also experienced greater difficulty in rallying electors ideologically. In Québec, activist, partisan politics was giving way to data-driven political marketing (Del Duchetto, 2016).

In 2018, Québec’s four main political parties integrated digital technologies at the core of their electoral organisations. In doing so, they aimed to close the technological gap with Canadian parties at the federal level (Marland et al., 2012; Delacourt, 2013). Thus, the CAQ developed the Coaliste, its own tool for processing and analysing data. The application centralises information collected on voters in a database and targets them according to their profile. Developed at a cost of 1 million Canadian dollars, the tool was said by a party strategist to help carry a campaign "with 3 or 4 times less" money than before (Blais and Robillard, 2017).

For its part, QS created a mobilisation platform called Mouvement. The tool was inspired by the "popular campaigns of Bernie Sanders and La France Insoumise in France."2 Decentralised in nature, the platform aimed to facilitate event organisation, networking between sympathisers, to create local discussion activities, as well as to facilitate voter identification.

The PQ has also developed its own tool: Force bleue. At its official launch, a party organiser insisted on its strategic role in tight races. It would include “an intelligent mapping system to crisscross constituencies, villages, neighbourhoods to maximise the time spent by local teams and candidates by targeting the highest paying places in votes and simplify your vote turnout” (Bergeron, 2018).

Finally, the QLP outsourced its digital marketing, building on the experience of the federal Liberal Party of Canada as well as Emmanuel Macron’s movement in France. For the 2018 election campaign, the party contracted Data Sciences, a private firm which “collects information from data of all kinds, statistics among others, on trends or expectations of targeted citizens or groups” (Salvet, 2018).

Our interviews with political strategists help better understand the scope of this digital shift that Québec’s parties completed in 2018. They also put into perspective the effects of these changes and the questions they raise within the parties themselves.

Why change?

Party organisers interviewed for this article who advocate for the development of new tools stress two phenomena: on the one hand, the Québec electorate is more volatile; on the other, it is much more difficult to communicate with electors than before. A former MNA notes that today: “The campaign counts. It’s very volatile and identifying who votes for you early in the campaign doesn’t work anymore.”

With social media, Québec party officials see citizens as more segmented than before. An organiser attributes the evolution of this electoral behaviour to social media: “Today, the big change is that the speed and accessibility of information means that you do not need a membership card to be connected. It circulates freely. It’s on Facebook. It’s on Twitter”.

He notes that "it is much more difficult to attract someone in a political party by saying that if you become a member you will have privileged access to a certain amount of information or to a certain quality of information". A rival organiser also confirms that people's behaviour has changed: "It's not just generational, they buy a product". He adds that this has implications on the level of volunteering and on voters’ motivation:

When we look at the beginning of the 1970s, we had a lot of people. People were willing to go door-to-door to meet voters. We had people on the ground, they needed to touch each other. The communications were person-to-person. (…) Today, we do marketing.

In sum, "people seek a product and are less loyal" which means that parties must rely on voters’ profiling and targeting.

Increased use of digital technology in 2018

The IT turn in Québec partisan organisations is real. One organiser goes so far as to say that most of the volunteer work that was central in the past is now done digitally. According to him, "any young voter who uses Facebook, is now as important, if not more, than a party activist". This comment reinforces the notion that any communication with an elector must now be personalised:

Now we need competent people in computer science, because we use platforms, email lists. When I send a message reminding to newly registered voters that it will be the first time they will vote, I am speaking directly to them.

To achieve this micro-targeting, party databases are updated constantly. An organiser states that: "Our job is to feed this database with all the tools like surveys, etc... In short, we must bomb the population with all kinds of things, to acquire as much data as possible". For example, Québec solidaire and the Coalition avenir Québec broadly used partisan e-petitions to feed their database (Bélair-Cirino, 2017). There are neither rules nor legislation that currently limit the collection and use of this personal information if it is collected through a partisan online petition or website.

Old political objectives - new digital techniques

In accordance with the current literature on the hybridisation of electoral campaigns (Chadwick, 2013; Giasson et al., 2019), many respondents indicate that the integration of digital tools associated with data marketing has changed the way things are done. This also had an effect on the internal party organisation, as well as on the tasks given to members on the ground. An organiser explains how this evolution took place in just a few years:

Before, we had a field organisation sector, with people on the phones, distributors, all that. We had communication people, we had people distributing content. (...) Right now, we have to work with people that are not there physically and with something that I will not necessarily control.

An organiser from another political party is more nuanced: "We always need people to help us find phone numbers, we always need people to make calls". He confirms, however, that communication tactics have changed radically:

The way to target voters in a riding has changed. The way to start a campaign, to canvass, has changed. The technological tools at our disposal mean that we need more people who are able to use them and who have the skills and knowledge to use the new technological means we have to reach the electorate.

Another organiser adds that it is now important to train activists properly for their canvassing work. According to her: "We need to give activists digital tools and highly technological support tools that make their lives easier". She adds that: "Everything is chained with intelligent algorithms that will always target the best customer, always first, no matter what...".

New digital technologies and tools are therefore used to maximise efficiency and resources. The tasks entrusted to activists also change. For another organiser, mobilisation evolves with technology: "We used to rely on lots of people to reach electors". He now sees that people are reached via the internet and that this new reality is not without challenges: "we are witnessing a revolution where new clients do not live in the real world…". It then becomes difficult to meet them in real life, offline.

Another organiser confirms having "a different canvassing technique using social media and other tools". According to him:

Big data is already outdated. We are talking about smart data. These data are used efficiently and intelligently. How do we collect this data? (...) We used to do a lot of tallying door-to-door or by phone. Now we do a lot of capture. Emails are what interest me. I am no longer interested in phone numbers, except cell phones.

An experienced organiser observes that "this has completely changed the game. Before, we only had one IT person; now I have three programmers". He adds that "liaison officers have become press officers". This change is also reflected in the allocation of resources and the integration of new employee profiles for data management. It has brought a new set of digital strategists into war rooms. These new data analysts have knowledge of data management, applied mathematics, computer science and software engineering. They work alongside traditional field organisers, sometimes even replacing them at the decision table.

Second thoughts

Organisers themselves raise democratic and ethical concerns related to the digital evolution of their work. One of them points out that they face ethical challenges. He openly wonders about the consequences of this gathering of personal information: "It's not because we can do something that we have to do it. With the list of electors, there are many things that can be done. Is it ethical to do it? At some point, you have to ask that question". He points out that new technologies are changing at a rapid pace and that with "each technology comes a communication opportunity". The question is now "how can we appropriate this technology, this communication opportunity, and make good use of it".

Reflecting upon the lack of regulation on the use of personal data by parties in Québec, an organiser added that: "We have the right to do that, but people do not like it". For him, this issue is "more than a question of law, there could be a question of what is socially acceptable".

Another organiser points out that the digital shift could also undermine intra-party democracy. Speaking about the role of activists, he is concerned that "they feel more like they are being given information that has been chewed over by a small number of people than having it collected by more people in each constituency". He notes that the technological divide is also accompanied by a generational divide within the activist base:

The activist who is older, we will probably have less need of him. The younger activist is likely to be needed, but in smaller numbers. (...) Because of the technological gap, it's a bit of a vicious circle, that is also virtuous. The more we try to find technological means that will be effective, the less we need people.

Still, from a democratic standpoint, the line between mobilisation and manipulation can be very thin. Reflecting on a not-so-distant future, this organiser spoke of the many possibilities data collection could offer parties:

These changes bring us into a dynamic that the Americans call ‘activation fields’. (...) From the moment we have contact with someone, what do we do with this person, where does she go? (...) This gives incredible arborescence, but also incredible opportunities.

He concludes: "Today, the world does not realise how all the data is piling up on people and that this is how elections are won now". Is there a limit to the information a party could collect on an elector? This senior staffer does not believe so. He adds: "If I could know everything you were consuming, it would be so useful to me and help mobilise you".

Québec’s main political parties completed their digital shift in preparation for the 2018 election. Our interviews show that this change was significant. From an internal democracy perspective, digital technologies and data marketing practices help respond to the decline in activism and membership levels observed in most Québec parties (Montigny, 2015). This can also lead to frustration among older party activists, who may feel less involved. From a data protection perspective, on the other hand, we note that in the absence of a rigorous regulatory framework, parties in Québec can do almost anything. As a result, they collect a significant amount of unprotected personal data. The pace at which this change is taking place, and the risks it represents for data security, even leads some political organisers to question their own practices. As the next section indicates, Québec is lagging behind in bringing the data marketing practices of political parties up to contemporary privacy standards.

The protection of personal information over time

The data contained in the Québec list of electors have been the cornerstone of political parties’ electioneering efforts for many years and now form the basis of their respective databases of voter information. It is from this list that parties are able, with the addition of other information collected or purchased, to profile, segment and target voters. An overview of the legislative amendments concerning the disclosure of the information contained in the list of electors reveals two things: (1) its relatively recent private nature, and (2) the fact that political parties’ ability to collect and use personal data about voters never really seems to have been questioned until recently. Parties have mostly reacted by insisting on self-regulation (Élections Québec, 2019).

With regard to the public or private nature of the list of electors, we should note that prior to 1979 it was displayed in public places. Up to 2001, the list of electors of a polling division was even distributed to all voters in that division. The list was thus perceived as a public document, serving as a safeguard against electoral fraud: citizens were able to identify potential errors and irregularities.

Since 1972, the list has been sent to political parties. With the introduction of a permanent list of electors in 1995, political parties and MNAs were granted, in 1997, the right to receive annual copies of the list for verification purposes. Since 2006, parties have received an updated version of the list three times a year, which facilitates the updating of their computerised voter databases. It should also be noted that during election periods, all registered candidates are granted access to the list and its content.

Thus, while public access to the list of electors has been considerably reduced, political parties’ access has increased in recent years. Following legislative changes, some information has been removed from the list, such as the elector’s age and profession. Yet the Québec list remains the most exhaustive of any Canadian jurisdiction in terms of the quantity of voter information it contains, indicating the name, full address, gender and date of birth of each elector (Élections Québec, 2019, p. 34).

From a legal perspective, Québec parties are not subject to the "two general laws that govern the protection of personal information, namely the Act respecting access to documents held by public bodies and the protection of personal information, which applies in particular to information held by a public body, and the Act respecting the protection of personal information in the private sector, which concerns personal information held by a person carrying on a business within the meaning of section 1525 of the Civil Code of Québec" (Élections Québec, 2019, p. 27). Indirectly, however, the private sector act would apply when a political party chooses to outsource some of its marketing, data collection or digital activities to a private sector firm.

Moreover, the Election Act does not specifically define which uses of data taken from the list of electors are permitted. It merely provides some general provisions. Therefore, parties cannot use or communicate a voter’s information for purposes other than those provided under the Act. It is also illegal to communicate or allow this information to be disclosed to any person who is not lawfully entitled to it.

Instead of strengthening the law, parties represented in the National Assembly first chose to adopt their own privacy and confidentiality policies. This form of self-regulation, however, has its limits. Although these policies appear on party websites, they are usually not easy to find, and there is no way to confirm that parties effectively enforce them. Only the Coalition avenir Québec and the Québec Liberal Party offer a clear link on their homepage. We analysed each policy according to five indicators: the presence of (1) a definition of what constitutes personal information; (2) a reference to how data are used and shared; (3) the methods of data collection; (4) the privacy and security measures taken; and (5) the possibility for an individual to withdraw his or her consent and contact the party in connection with his or her personal information.

Table 1: Summary of personal information processing policies of parties represented at the National Assembly of Québec

Definition of personal information

- CAQ: information that identifies a person (contact information, name, address and phone number).
- PLQ: information that identifies a natural person (name, date of birth, email address and mailing address, if the person decides to provide them).
- QS: information about an identifiable individual, excluding business contact information (name, date of birth, personal email address and credit card).
- PQ: no definition provided.

Strategic use and sharing of data

- CAQ: to provide news and information about the party; may engage third parties to perform certain tasks (processing donations, making phone calls and providing technical services for the website); written contracts include clauses to protect personal information.
- PLQ: to contact electors, including by newsletter, about party news and events; to provide a personalised navigation experience on the website, with information targeted by interests and region.
- QS: may disclose personal information to third parties for purposes related to the management of party activities (administration, maintenance or internal management of data, organisation of an event); will not sell, trade, lend or voluntarily disclose to third parties the personal information transmitted.
- PQ: to improve the content of the website and for statistical purposes.

Data collection method

- CAQ: following contact by email; following subscription to a communication; after an information request form or any other form on a party page is filled out, including polls, petitions and party applications; the party reserves the right to use cookies on its site.
- PLQ: collected only from an online form provided for this purpose.
- QS: not specified.
- PQ: not specified.

Privacy and security of data

- CAQ: personal information is not used for other purposes without first obtaining the consent of the data provider; personal information may be shared internally between the party’s head office and its constituency associations.
- PLQ: commits to respecting the confidentiality and protection of the personal information collected and used; only people assigned to subscription management or communications with subscribers have access to the information; information is protected against unauthorised access attempts on a server kept in a safe and secure place.
- QS: commits to respecting the privacy and confidentiality of personal information; personal details will not be published or posted on the internet except at the explicit request of the person concerned; information is sent as an encrypted email message guaranteeing confidentiality; no guarantee that information disclosed over the internet will not be intercepted by a third party; the site strives to use appropriate technological measures, procedures and storage devices to prevent unauthorised use or disclosure of personal information.
- PQ: no information identifying an individual is used unless that person provided it for this purpose; takes reasonable steps to protect the confidentiality of this information; information automatically transmitted between computers does not personally identify an individual; access to collected information is limited to persons authorised by the party or by law.

Withdrawal of consent and information

- CAQ: any person registered on a mailing list can unsubscribe at any time; invitation to share questions, comments and suggestions.
- PLQ: ability to ask at any time to no longer receive party information.
- QS: ability to withdraw consent at any time on reasonable notice.
- PQ: not specified.

In general, we find that three of the four parties offer similar definitions of personal information: the Coalition avenir Québec, the Liberal Party of Québec and Québec solidaire. Beyond this indicator, the information available varies from one party to another, and voters are given little information on how their personal data may be used. Moreover, only the Coalition avenir Québec and Québec solidaire indicate that they may rely on a third party to process data, without having to state the purpose of this processing to the people who provide the data. The Coalition avenir Québec is the only party that specifies its data collection methods in any detail. Similarly, Québec solidaire is more specific with respect to the measures taken to protect the privacy and security of the data it collects. Finally, the Parti québécois does not specify the mechanism by which electors could withdraw their consent.

Cambridge Analytica as a turning point

Our analysis of media coverage of the partisan and electoral use of voter data in Québec reveals three main conclusions. First, even though Québec political parties, at both the provincial and municipal levels, began collecting, storing and using personal data on voters several years ago, news media attention to these practices is relatively new. Second, the dominant media frame on the issue seems to have changed over the years: at first rather anecdotal, the treatment of the issue grew in importance and became more suspicious. Finally, the Cambridge Analytica scandal appears as a turning point in news coverage. It is this affair that forced parties and their strategists to explain their practices publicly for the first time (Bélair-Cirino, 2018), put pressure on the government to react, and brought to the fore the concerns and demands of other organisations such as Élections Québec and the Commission d’accès à l’information du Québec, the administrative tribunal and oversight body responsible for the protection of personal information in provincial public agencies and private enterprises.

Interest in ethical and security issues related to data campaigning built up slowly in Québec’s political news coverage. Already in 2012, parties used technological means to feed their databases and target the electorate (Giasson et al., 2019). However, it is in the context of the municipal elections in the Fall of 2013 that the issue of the collection and processing of personal data on voters was first covered in a news report. It was only shortly after the 2014 Québec elections that we found a news item dealing specifically with the protection of personal data of Québec voters. The Montréal-based newspaper Le Devoir reported that the list of electors was made available online by a genealogy institute. It was even possible to get it for a fee. The Drouin Institute - which released the list - estimated that about 20,000 people had accessed the data (Fortier, 2014).

Paradoxically, the following year, the media reported that investigators working for Élections Québec could not access the data of the electoral list for the purpose of their inquiry (Lajoie, 2015a). That same year, another anecdotal event made headlines: a Liberal MNA was asked by Élections Québec to stop using the voters list data to call his constituents to... wish them a happy birthday (Lajoie, 2015b). In the 2017 municipal elections, and even more so after the revelations regarding Cambridge Analytica in 2018, the media in Québec seemed to have paid more attention to data-driven electoral party strategies than to the protection of personal data by the parties.

For instance, in the hours following the revelation of the Cambridge Analytica scandal, political reporters covering the National Assembly in Québec quickly turned their attention to the leadership of political parties, asking them to report on their respective organisations’ digital practices and about the regulations in place to frame them. Simultaneously, Élections Québec, which had been calling for stronger control of personal data use by political parties since 2013, expressed its concerns publicly and fully joined the public debate. As a way to mark its willingness to act on the issue, the liberal government introduced a bill at the end of the parliamentary session, the last of this parliament. The bill was therefore never adopted by the House, which was dissolved a few days later, in preparation for the next provincial election.

Political reporters in Québec have since paid sustained attention to partisan practices regarding the collection and use of personal information. In their coverage of the 2018 election campaign, they widely discussed the use of data by leaders and their political parties. Thus, while the Cambridge Analytica affair did not directly involve Québec political parties, it nevertheless appears as a turning point in the media coverage of the use of personal data for political purposes.

Media framing of the issue also evolved over the studied period, becoming more critical and suspicious of partisan data marketing with time. Before the Cambridge Analytica case, coverage rarely focused on the democratic consequences or privacy and security issues associated with the use of personal data for political purposes. Initial coverage seems to have been largely dominated by the story depicting how parties were innovating in electioneering and on how digital technologies could improve electoral communication. Journalists mostly cited the official discourse of political leaders, their strategists or of the digital entrepreneurs from tech companies who worked with them.

An illustrative example of this type of coverage can be found in an article published in September 2013, during municipal elections held in Québec. It presented a portrait of two Montréal-based data analysis companies – Democratik and Vote Rapide – offering technological services to political parties (Champagne, 2013). Their tools were depicted as simple databases fed by volunteers, mainly intended to identify sympathisers and facilitate get-out-the-vote (GOTV) operations. The article emphasised the affordability and widespread use of these programmes by parties, and even indicated that one of them had been developed with the support of the Civic Action League, a non-profit organisation that helps fight political corruption.

However, as the years passed, a change of tone began to permeate the coverage, especially in the months leading up to the 2018 general election. A critical frame became more obvious in reporting, even drawing on Orwellian references to data campaigning in headlines such as "Political parties are spying on you" (Castonguay, 2015), "They all have a file on you" (Joncas, 2018), "What parties know about you" (Croteau, 2018), or "Political parties exchange your personal details" (Robichaud, 2018). In a short period of time, data campaigning had gone from cool to dangerous.

Conclusion

Québec political parties began their digital shift a few years later than their Canadian federal counterparts. However, they have adapted their digital marketing practices rapidly; much faster in fact than the regulatory framework. For the 2018 election, all major parties invested a great deal of resources to be up to date on data-driven campaigning.

To maximise the return on their investment in technology, they must now "feed the beast" with more data. Benefiting from weak regulation of data marketing, parties will be able to gather even more personal information in the years to come, without having to explain to voters what their data are used for or how they are protected. In addition, parties now involve an increasing number of volunteers in the field to collect digital personal information, which also increases the risk of data leaks or misuse.

They have, so far, implemented this change with very limited transparency. Up to now, research in Canada has not been able to identify precisely what kind of information is collected or how it is managed and protected. Canadian political strategists have been somewhat forthcoming in explaining how parties collect personal data and why they use it for electoral purposes (see for instance Giasson et al., 2019; Giasson and Small, 2017; Flanagan, 2014; Marland, 2016). However, they remain silent on the topics of regulation and data protection.

This lack of transparency is particularly problematic in Canada because party leaders who win elections hold much more internal power in British-style parliamentary systems than in the US presidential system. They control the executive and legislative branches as well as the administration of the party. This means there is no firewall, nor any real restriction, on the use of data collected by a party during an election once that party is in office. In that regard, it was revealed that the Office of the Prime Minister of Canada, Justin Trudeau, used its party’s database to vet judicial nominations (Bergeron, 2019). The same risks apply to Québec.

It is in this context that Élections Québec and the Access to Information Commission of Québec have initiated a broad reflection on the electoral use of personal data by parties. In 2018, following a leak of personal data from donors of a Montréal-based municipal party, the commission contacted the campaign to "examine the measures taken to minimise risks". The commission took the opportunity to "emphasise the importance of political parties being clearly subject to privacy rules, as is the case in British Columbia" (Commission d’accès à l’information du Québec, 2018).

In a report published in February 2019, the Chief Electoral Officer of Québec presented recommendations that parties should follow in their voter data collection and analysis procedures (Élections Québec, 2019). It suggested that provincial and municipal political parties be made subject to a general legislative framework for the protection of personal information. Heeding these calls for change, Québec’s new Minister of Justice and Democratic Reform announced, in November 2019, plans for an overhaul of the province’s regulatory framework on personal data and privacy, which would impose stronger regulations on data protection and use and would grant increased investigation powers to the head of the Commission d’accès à l’information. All businesses, organisations, governments and public administrations operating in Québec and collecting personal data would be covered under these new provisions and could be subject to massive fines for any form of data breach in their systems. Aimed at ensuring better control, transparency and consent of citizens over their data, these measures, to be part of a bill introduced to the National Assembly in 2020, were said to also apply to political parties (Croteau, 2019). However, as this article goes to print, the specific details of the provisions aimed at political parties remain unknown.

This new will to regulate political parties is the result of a perfect storm in which three factors came into play at the same time: the rapid integration of new data collection technologies by Québec’s main political parties, increased pressure from regulatory agencies, and an international scandal that changed the media framing of the political use of personal data.

Well beyond the issue of privacy, data collection and analysis for electoral purposes also change some features of our democracy. Technology replacing activists translates into major intra-party changes. In a parliamentary system, this could increase the centralisation of power around party leaders, who now rely less on party members to get elected. This would likely be the case in Québec and in Canada.

Some elements also fuel resistance to change within parties, such as dependence on digital technologies to the detriment of human contact, fears regarding the reliability of systems or data, and the high costs of developing and maintaining databases. For some, party culture also plays a role. A former political strategist who worked closely with former Québec Premier Pauline Marois declared in the media: "You know, in some parties, we value the activist work done by old ladies who come to make calls and talk to each voter, one by one" (Radio-Canada, 2017).

As some of our respondents mentioned, parties may move from ‘big data’ to ‘smart data’ in the coming years, as they adapt to or adopt novel technological tools. In an era of partisan flexibility, data marketing seems to have helped some parties find and reach their voters. A move towards ‘smart data’ may now also help them modify those voters’ beliefs with even more targeted digital strategies. What might this mean for democracy in Québec? Will its voters be mobilised or manipulated when parties use their data in upcoming campaigns? Are political parties on the edge of glory or of catastrophe? These questions should be central to the study of data-driven campaigning.

References

Baldwin-Philippi, J. (2017). The Myths of Data-Driven Campaigning. Political Communication, 34(7), 627–633. https://doi.org/10.1080/10584609.2017.1372999

Bashyakarla, V., Hankey, S., Macintyre, S., Rennó, R., & Wright, G. (2019). Personal Data: Political Persuasion. Inside the Influence Industry. How it works. Berlin: Tactical Tech. Retrieved from https://cdn.ttc.io/s/tacticaltech.org/Personal-Data-Political-Persuasion-How-it-works_print-friendly.pdf

Bélair-Cirino, M. (2018). Inquiétude à Québec sur les banques de données politiques [Concern in Quebec City about Political Databanks]. Le Devoir. Retrieved from https://www.ledevoir.com/societe/523240/donnees-personnelles-inquietude-a-quebec

Bélair-Cirino, M. (2017, April 15). Vie privée – Connaître les électeurs grâce aux petitions [Privacy - Getting to know voters through petitions]. Le Devoir. Retrieved from https://www.ledevoir.com/politique/quebec/496477/vie-privee-connaitre-les-electeurs-grace-aux-petitions

Bergeron, P. (2018, May 26). Le Parti québécois se dote d'une «Force bleue» pour gagner les élections [The Parti Québécois has a "Force Bleue" to win elections]. La Presse. Retrieved from https://www.lapresse.ca/actualites/politique/politique-quebecoise/201805/26/01-5183364-le-parti-quebecois-se-dote-dune-force-bleue-pour-gagner-les-elections.php

Bergeron, É. (2019, April 24). Vérification politiques sur de potentiels juges: l’opposition crie au scandale [Political checks on potential judges: Opposition cries out for scandal]. TVA Nouvelles. Retrieved from https://www.tvanouvelles.ca/2019/04/24/verification-politiques-sur-de-potentiels-juges-lopposition-crie-au-scandale

Bennett, C. J. (2018). Data-driven elections and political parties in Canada: privacy implications, privacy policies and privacy obligations. Canadian Journal of Law and Technology, 16(2), 195–226. Available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3146964

Bennett, C. J. (2016). Voter databases, micro-targeting, and data protection law: can political parties campaign in Europe as they do in North America? International Data Privacy Law, 6(4), 261–275. https://doi.org/10.1093/idpl/ipw021

Bennett, C. J. (2015). Trends in voter surveillance in Western societies: privacy intrusions and democratic implications. Surveillance & Society, 13(3-4), 370–384. https://doi.org/10.24908/ss.v13i3/4.5373

Bennett, C. J. (2013). The politics of privacy and the privacy of politics: Parties, elections and voter surveillance in Western democracies. First Monday, 18(8). https://doi.org/10.5210/fm.v18i8.4789

Blais, A., & Robillard, A. (2017, October 4). 1 million $ pour un logiciel électoral [1 million for election software]. Le Journal de Montréal. Retrieved from https://www.journaldemontreal.com/2017/10/04/1-million--pour-un-logiciel-electoral

Cadwalladr, C., & Graham-Harrison, E. (2018, March 17). Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. The Guardian. Retrieved from https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election

Carlile, C. N. (2017). Data and Targeting in Canadian Politics: Are Provincial Parties Taking Advantage of the Latest Political Technology? [Master Thesis, University of Calgary]. Calgary: University of Calgary. https://doi.org/10.11575/PRISM/5226

Castonguay, A. (2015, September 14). Les partis politiques vous espionnent [The political parties are spying on you]. L’Actualité. Retrieved from https://lactualite.com/societe/les-partis-politiques-vous-espionnent/

Champagne, V. (2013, September 25). Des logiciels de la Rive-Nord pour gagner les élections [Rive-Nord software to win elections]. Ici Radio-Canada.

Commission d’accès à l’information du Québec. (2018, April 3). La Commission d’accès à l’information examinera les faits sur la fuite de données personnelles de donateurs du parti Équipe Denis Coderre [The Commission d'accès à l'information will examine the facts on the leak of personal data of Team Denis Coderre donors]. Retrieved from http://www.cai.gouv.qc.ca/la-commission-dacces-a-linformation-examinera-les-faits-sur-la-fuite-de-donnees-personnelles-de-donateurs-du-parti-equipe-denis-coderre/

Croteau, M. (2018, August 20). Ce que les partis savent sur vous [What the parties know about you]. La Presse+. Retrieved from http://mi.lapresse.ca/screens/8a829cee-9623-4a4c-93cf-3146a9c5f4cc__7C___0.html

Croteau, M. (2019, November 22). Données personnelles: un chien de garde plus. Imposant [Personal data: one guard dog more. Imposing]. La Presse+. Retrieved from https://www.lapresse.ca/actualites/politique/201911/22/01-5250741-donnees-personnelles-un-chien-de-garde-plus-imposant.php

Del Duchetto, J.-C. (2016). Le marketing politique chez les partis politiques québécois lors des élections de 2012 et de 2014 [Political marketing by Quebec political parties in the 2012 and 2014 elections] [Master’s thesis, University of Montréal]). Retrieved from http://hdl.handle.net/1866/19404

Delacourt, S. (2013). Shopping for votes. How politicians choose us and we choose them. Madeira Park: Douglas & McIntyre.

Dobber, T., Trilling, D., Helberger, N. & de Vreese, C. H. (2017). Two crates of beer and 40 pizzas: the adoption of innovative political behavioural targeting techniques. Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.777

Dufresne, Y., Tessier, C., & Montigny, E. (2019). Generational and Life-Cycle Effects on Support for Quebec Independence. French politics, 17(1), 50–63. https://doi.org/10.1057/s41253-019-00083-9

Ehrhard, T., Bambade, A., & Colin, S. (2019). Digital campaigning in France, a Wide Wild Web? Emergence and evolution of the market and Its players. In A. M. G. Solo (Ed.), Handbook of Research on Politics in the Computer Age (pp. 113-126). Hershey (PA), USA: IGI Global. https://doi.org/10.4018/978-1-7998-0377-5.ch007

Élections Québec. (2019). Partis politiques et protection des renseignements personnels: exposé de la situation québécoise, perspectives comparées et recommandations [Political Parties and the Protection of Personal Information: Presentation of the Quebec Situation, Comparative Perspectives and Recommendations]. Retrieved from https://www.pes.electionsquebec.qc.ca/services/set0005.extranet.formulaire.gestion/ouvrir_fichier.php?d=2002

Enli, G. & Moe, H. (2013). Social media and election campaigns – key tendencies and ways forward. Information, Communication & Society, 16(5), 637–645. https://doi.org/10.1080/1369118x.2013.784795

Flanagan, T. (2014). Winning power. Canadian campaigning in the 21st century. Montréal; Kingston: McGill-Queen’s University Press.

Flanagan, T. (2010). Campaign strategy: triage and the concentration of resources. In H. MacIvor(Ed.), Election (pp. 155-172). Toronto: Emond Montgomery Publications.

Fortier, M. (2014, May 29). La liste électorale du Québec vendue sur Internet [Quebec's list of electors sold on the Internet]. Le Devoir. Retrieved from https://www.ledevoir.com/societe/409526/la-liste-electorale-du-quebec-vendue-sur-internet

Giasson, T., & Small, T. A. (2017). Online, all the time: the strategic objectives of Canadian opposition parties. In A. Marland, T. Giasson, & A. L. Esselment (Eds.), Permanent campaigning in Canada (pp. 109-126). Vancouver: University of British Columbia Press.

Giasson, T., Le Bars, G. & Dubois, P. (2019). Is Social Media Transforming Canadian Electioneering? Hybridity and Online Partisan Strategies in the 2012 Québec Election. Canadian Journal of Political Science, 52(2), 323–341. https://doi.org/10.1017/s0008423918000902

Gibson, R. K. (2015). Party change, social media and the rise of ‘citizen-initiated’ campaigning. Party Politics, 21(2), 183-197. https://doi.org/10.1177/1354068812472575

Hersh, E. D. (2015). Hacking the electorate: how campaigns perceive voters. Cambridge: Cambridge University Press. https://doi.org/10.1017/cbo9781316212783

Howard, P. N, & D. Kreiss. (2010). Political parties and voter privacy: Australia, Canada, the United Kingdom, and United States in comparative perspective. First Monday, 15(12). https://doi.org/10.5210/fm.v15i12.2975

Joncas, H. (2018, July 28). Partis politiques : ils vous ont tous fichés [Political parties: they've got you all on file…]. Journal de Montréal. Retrieved from https://www.journaldemontreal.com/2018/07/28/partis-politiques-ils-vous-ont-tous-fiches

Karpf, D., Kreiss, D. Nielsen, R. K., & Powers, M. (2015). The role of qualitative methods in political communication research: past, present, and future. International Journal of Communication, 9(1), 1888–1906. Retrieved from https://ijoc.org/index.php/ijoc/article/view/4153

Kreiss, D. (2016). Prototype politics. Technology-intensive campaigning and the data of democracy. Oxford, UK: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199350247.001.0001

Lajoie, G. (2015a, December 3). Les enquêteurs du DGEQ privés des informations contenues dans la liste électorale [DGEQ investigators deprived of the information contained in the list of electors]. Le Journal de Montréal. Retrieved from https://www.journaldemontreal.com/2015/12/03/le-dge-prive-ses-propres-enqueteurs-des-informations

Lajoie, G. (2015b, November 27). André Drolet ne peut plus souhaiter bonne fête à ses électeurs [André Drolet can no longer wish his constituents a happy birthday]. Le Journal de Québec. Retrieved from https://www.journaldequebec.com/2015/11/27/interdit-de-souhaiter-bon-anniversaire-a-ses-electeurs

Langlois, S. (2018). Évolution de l'appui à l'indépendance du Québec de 1995 à 2015 [Evolution of Support for Quebec Independence from 1995 to 2015]. In A. Binette and P. Taillon (Eds.), La démocratie référendaire dans les ensembles plurinationaux (pp. 55-84). Québec: Presses de l'Université Laval.

Marland, A. (2016). Brand command: Canadian politics and democracy in the age of message control. Vancouver: University of British Columbia Press.

Marland, A., Giasson, T., & Lees-Marshment, J. (2012). Political marketing in Canada. Vancouver: University of British Columbia Press.

McKelvey, F., & Piebiak, J. (2018). Porting the political campaign: The NationBuilder platform and the global flows of political technology. New Media & Society, 20(3), 901–918. https://doi.org/10.1177/1461444816675439

Montigny, E. (2015). The decline of activism in political parties: adaptation strategies and new technologies. In G. Lachapelle & P. J. Maarek (Eds.), Political parties in the digital age. The Impact of new technologies in politics (pp. 61-72). Berlin: De Gruyter. https://doi.org/10.1515/9783110413816-004

Munroe, K. B & Munroe, H. D. (2018). Constituency campaigning in the age of data. Canadian Journal of Political Science,51(1), 135–154. https://doi.org/10.1017/S0008423917001135

Patten, S. (2017). Databases, microtargeting, and the permanent campaign: a threat to democracy. In A. Marland, T. Giasson, & A. Esselment. (Eds.), Permanent campaigning in Canada (pp. 47-64). Vancouver: University of British Columbia Press.

Patten, S. (2015). Data-driven microtargeting in the 2015 general election. In A. Marland and T. Giasson (Eds.), 2015 Canadian election analysis. Communication, strategy, and democracy. Vancouver: University of British Columbia Press. Retrieved from http://www.ubcpress.ca/asset/1712/election-analysis2015-final-v3-web-copy.pdf

Pelletier, R. (1989). Partis politiques et société québécoise [Political parties and Quebec society]. Montréal: Québec Amérique.

Radio-Canada. (2017, October 1). Episode of Sunday, October 1, 2017[Television Series Episode] in Les Coulisses du Pouvoir [Behind the scenes of power]. ICI RD. Retrieved from https://ici.radio-canada.ca/tele/les-coulisses-du-pouvoir/site/episodes/391120/joly-charest-sondages

Robichaud, O. (2018, August 20). Les partis politiques s'échangent vos coordonnées personnelles [Political parties exchange your personal contact information]. Huffpost Québec. Retrieved from https://quebec.huffingtonpost.ca/entry/les-partis-politiques-sechangent-vos-coordonnees-personnelles_qc_5cccc8ece4b089f526c6f070

Salvet, J.-M. (2018, January 31). Entente entre le PLQ et Data Sciences: «Tous les partis politiques font ça», dit Couillard [Agreement between the QLP and Data Sciences: "All political parties do that," says Couillard]. Le Soleil. Retrieved from https://www.lesoleil.com/actualite/politique/entente-entre-le-plq-et-data-sciences-tous-les-partis-politiques-font-ca-dit-couillard-21f9b1b2703cdba5cd95e32e7ccc574f

Thomas, P. G. (2015). Political parties, campaigns, data, and privacy. In A. Marland and T. Giasson (Eds.), 2015 Canadian election analysis. Communication, strategy, and democracy (pp. 16-17). Vancouver: University of British Columbia Press. Retrieved from http://www.ubcpress.ca/asset/1712/election-analysis2015-final-v3-web-copy.pdf

Vaccari, C. (2013). Digital politics in western democracies: a comparative study. Baltimore: Johns Hopkins University Press.

Yawney, L. (2018). Understanding the “micro” in micro-targeting: an analysis of the 2018 Ontario provincial election [Master’s thesis, University of Victoria]. Retrieved from https://dspace.library.uvic.ca//handle/1828/10437


Unpacking the “European approach” to tackling challenges of disinformation and political manipulation


This paper is part of Data-driven elections, a special issue of Internet Policy Review guest-edited by Colin J. Bennett and David Lyon.

Introduction

In recent years, the spread of disinformation on online platforms and micro-targeted, data-driven political advertising have become a serious concern in many countries around the world, in particular as regards the impact these practices may have on informed citizenship and democratic systems. In April 2019, for the first time in the country’s modern history, Switzerland’s supreme court overturned a nationwide referendum on the grounds that the voters were not given complete information and that it "violated the freedom of the vote”. While in this case it was the government that had failed to provide correct information, the decision still comes as another warning about the conditions under which elections are held nowadays and as a confirmation of the role that accurate information plays in this process. There is limited and sometimes even conflicting scholarly evidence as to whether people today are exposed to more diverse political information or trapped in echo chambers, and whether they are more vulnerable to political disinformation and propaganda than before (see, for example: Bruns, 2017, and Dubois & Blank, 2018). Yet, many claim so, and cases of misuse of technological affordances and personal data for political goals have been reported globally.

The decision of Switzerland’s supreme court has particularly resonated in Brexit Britain, where the campaign ahead of the European Union (EU) membership referendum left too many people feeling “ill-informed” (Brett, 2016, p. 8). Even before the Brexit referendum took place, the House of Commons Treasury Select Committee complained about “the absence of ‘facts’ about the case for and against the UK’s membership on which the electorate can base their vote” (2016, p. 3). On this account, voters in the United Kingdom were not receiving complete or even truthful information, and there are also concerns that they might have been manipulated by the use of bots (Howard & Kollanyi, 2016) and by the unlawful processing of personal data (ICO, 2018a, 2018b).

The same concerns were raised in the United States during and after the presidential elections in 2016. Several studies have shown evidence of the exposure of US citizens to social media disinformation in the period around the elections (see: Guess et al., 2018, and Allcott & Gentzkow, 2017). In other parts of the world, such as Brazil and several Asian countries, the means and platforms for the transmission of disinformation were somewhat different, but the associated risks have been deemed even higher. Prominent international media, fact-checkers and researchers systematically reported on the scope and spread of disinformation on the Facebook-owned and widely used messaging application WhatsApp in the 2018 presidential elections in Brazil. Freedom House warned that elections in some Asian countries, such as India, Indonesia, and Thailand, were also afflicted by falsified content.

Clearly, online disinformation and unlawful political micro-targeting represent a threat to elections around the globe. The extent to which certain societies are more resilient or more vulnerable to the impact of these phenomena depends on different factors, including, among other things, the status of journalism and legacy media, levels of media literacy, the political context and legal safeguards (CMPF, forthcoming). Different political and regulatory traditions play a role in shaping the responses to online disinformation and data-driven political manipulation. Accordingly, these range from doing nothing to criminalising the spread of disinformation, as is the case with Singapore’s law, which came into effect in October 2019. While there seems to be growing agreement that regulatory intervention is needed to protect democracy, concerns over the negative impact of inadequate or overly restrictive regulation on freedom of expression remain. In his recent reports (2018, 2019), the UN Special Rapporteur on Freedom of Expression, David Kaye, warned against regulation that entrusts platforms with even more power to decide on content removals within very short time frames and without public oversight. Whether certain content is illegal or problematic on other grounds is not always a straightforward decision and often depends on the context in which it is presented. Therefore, as highlighted by the Transatlantic High Level Working Group on Content Moderation Online and Freedom of Expression (2019), requiring platforms to make these content moderation decisions in an automated way, without built-in transparency, and without notice or timely recourse for appeal carries risks for freedom of expression.

The European Commission (EC) has recognised the exposure of citizens to large-scale online disinformation (2018a) and the micro-targeting of voters based on the unlawful processing of personal data (2018b) as major challenges for European democracies. In response to these challenges, and to ensure citizens’ access to a variety of credible information and sources, the EC has put in place several measures which aim to create an overarching “European approach”. This paper analyses this approach to identify the key principles upon which it builds and to assess to what extent, if at all, they differ from the principles of “traditional” political advertising and media campaign regulation during the electoral period. The analysis further looks at how these principles are elaborated and whether they reflect the complexity of the challenges identified. The focus is on the EU as it is “articulating a more interventionist approach” in its relations with the online platform companies (Flew et al., 2019, p. 45). Furthermore, due to the size of the European market, any relevant regulation can set the global standard, as is the case with the General Data Protection Regulation (GDPR) in the area of data protection and privacy (Flew et al., 2019).

The role of (social) media in elections

The paper starts from the notion that a healthy democracy is dependent on pluralism and that the role of (social) media in elections and the transparency of data-driven political advertising are among the crucial components of any assessment of the state of pluralism in a given country. In this view, pluralism “implies all measures that ensure citizens' access to a variety of information sources, opinion, voices etc. in order to form their opinion without the undue influence of one dominant opinion forming power” (EC, 2007, p. 5; Valcke et al., 2009, p. 2). Furthermore, it implies the relevance of citizens' access to truthful and accurate information.

The media have long played a crucial role in election periods: serving, on the one hand, as wide-reaching platforms for parties and candidates to deliver their messages and, on the other, helping voters to make informed choices. They set the agenda by prioritising certain issues over others and by deciding on the time and space to be given to candidates; they frame their reporting within a certain field of meaning, taking into account the characteristics of different types of media; and, if the law allows, they sell time and space for political advertising (Kelley, 1963). Democracy requires the protection of media freedom and editorial autonomy, but asks that the media be socially responsible. This responsibility implies respect for fundamental standards of journalism, such as impartiality and providing citizens with complete and accurate information. As highlighted on several occasions by the European Commission for Democracy through Law (the so-called Venice Commission) of the Council of Europe (2013, paras. 48, 49): “The failure of the media to provide impartial information about the election campaign and the candidates is one of the most frequent shortcomings that arise during elections”.

Access to the media has been seen as “one of the main resources sought by parties in the campaign period” and, to ensure a level playing field, “legislation regarding access of parties and candidates to the public media should be non-discriminatory and provide for equal treatment” (Venice Commission, 2010, para. 148). The key principles of media regulation during the electoral period are therefore media impartiality and equality of opportunity for contenders. Public service media are required to abide by higher standards of impartiality than private outlets, and audiovisual media are more broadly bound by rules than the printed press and online media. These stricter rules for audiovisual media are justified by their perceived stronger effects on voters (Schoenbach & Lauf, 2004) and by the fact that television channels benefit from the public and limited resource of the radio frequency spectrum (Venice Commission, 2009, paras. 24-28, 58).

In the Media Pluralism Monitor (MPM), a research tool supported by the European Commission and designed to assess risks to media pluralism in EU member states, the role of media in the democratic electoral process is one of 20 key indicators. It is seen as an aspect of political pluralism, and the variables against which the risks are assessed have been elaborated in accordance with the above-mentioned principles. The indicator assesses the existence and implementation of a regulatory and self-regulatory framework for the fair representation of different political actors and viewpoints on public service media and private channels, especially during election campaigns. The indicator also takes into consideration the regulation of political advertising – whether restrictions are imposed to allow equal opportunities for all political parties and candidates.

The MPM results (Brogi et al., 2018) showed that rules to ensure the fair representation of political viewpoints in news and informative programmes on public service media channels and services are imposed by law in all EU countries. It is, however, less common for such regulation and/or self-regulatory measures to exist for private channels. A similar approach is observed in relation to political advertising rules, which are more often and more strictly defined for public service than for commercial media. Most countries in the EU have a law or another statutory measure that imposes restrictions on political advertising during election campaigns to allow equal opportunities for all candidates. Even though political advertising is “considered as a legitimate instrument for candidates and parties to promote themselves” (Holtz-Bacha & Just, 2017, p. 5), some countries do not allow it at all. Where there is a complete ban on political advertising, public service media provide free airtime on principles of equal or proportionate access. Where paid political advertising is allowed, it is often restricted to the campaign period, and regulation seeks to set limits on, for example, campaign resources and spending, the amount of airtime that can be purchased and the timeframe in which political advertising can be broadcast. In most countries there is a transparency requirement: how much was spent on advertising during the campaign, broken down by spending on different types of media. For traditional media, the regulatory framework requires that political advertising (like any other advertising) be properly identified and labelled as such.

Television remains the main source of news for citizens in the EU (Eurobarometer, 2018a, 2017). However, the continuous rise of online sources and platforms as resources for (political) news and views (Eurobarometer, 2018a), and as channels for more direct and personalised political communication, calls for a deeper examination of the related practices and the potential risks to be addressed. The ways people find and interact with (political) news, and the ways political messages are shaped and delivered to people, have been changing significantly with the global rise and popularity of online platforms and the features they offer. An increasing number of people, and especially young populations, are using them as doors to news (Newman et al., 2018, p. 15; Shearer, 2018). Politicians are increasingly using the same doors to reach potential voters, and the online platforms have become relevant, if not central, to different stages of the whole process. This means that platforms are now increasingly performing functions long attributed to the media, and much more, through, for example, filtering and prioritising certain content offered to users and selling time and space for political advertising based on data-driven micro-targeting. At the same time, a majority of EU countries still do not have specific requirements that would ensure transparency and fair play in campaigning, including political advertising, in the online environment. According to the available MPM data (Brogi et al., 2018; and preliminary data collected in 2019), only 11 countries (Belgium, Bulgaria, Denmark, Finland, France, Germany, Italy, Latvia, Lithuania, Portugal and Sweden) have legislation or guidelines requiring transparency of online political advertisements. In all cases, it is the general law on political advertising during the electoral period that also applies to the online dimension.

Political advertising and political communication more broadly take on different forms in the environment of online platforms, which may hold both promises and risks for democracy (see, for example, Valeriani & Vaccari, 2016; and Zuiderveen Borgesius et al., 2018). There is still limited evidence on the reach of online disinformation in Europe, but a study conducted by Fletcher et al. (2018) suggests that even if the overall reach of publishers of false news is not high, they achieve significant levels of interaction on social media platforms. Disinformation online comes in many different forms, including false context, imposter, manipulated, fabricated or extreme partisan content (Wardle & Derakhshan, 2017), but always with an intention to deceive (Kumar & Shah, 2018). There are also different motivations for the spread of disinformation, including financial and political ones (Morgan, 2018), and different platform affordances affect whether disinformation spreads better as organic content or as paid-for advertising. Vosoughi et al. (2018) have shown that disinformation on Twitter organically travels faster and further than true information, due to technological possibilities but also due to human nature, which is more inclined to spread something surprising and emotional, as disinformation often is. On Facebook, on the other hand, the successful spread of disinformation may be significantly attributed to advertising, as Chiou and Tucker (2018) claim. Accordingly, platforms have put in place different policies towards disinformation. Twitter has recently announced a ban on political advertising, while Facebook continues to run it and exempts politicians’ speech and political advertising from third-party fact-checking programmes.

Further to the different types of disinformation, and the different affordances of platforms and their policies, there are “many different actors involved and we’re learning much more about the different tactics that are being used to manipulate the online public sphere, particularly around elections”, warns Susan Morgan (2018, p. 40). Young Mie Kim and others (2018) investigated the groups behind divisive issue campaigns on Facebook in the weeks before the 2016 US elections and found that most of these campaigns were run by groups that did not file reports with the Federal Election Commission. These groups, clustered by the authors as non-profits, astroturf/movement groups, and unidentifiable “suspicious” groups, sponsored four times more ads than those that did file reports with the Commission. In addition to the variety of groups playing a role in political advertising and political communication on social media today, a new set of tactics is emerging, including the use of automated accounts, so-called bots, and data-driven micro-targeting of voters (Morgan, 2018).

Bradshaw and Howard (2018) have found that governments and political parties in an increasing number of countries of different political regimes are investing significant resources in using social media to manipulate public opinion. Political bots, as they note, are used to promote or attack particular politicians, to promote certain topics, to fake a follower base, or to get opponents’ accounts and content removed by reporting it on a large scale. Micro-targeting, as another tactic, is commonly defined as a political advertising strategy that makes use of data analytics to build individual or small group voter models and to address them with tailored political messages (Bodó et al., 2017). These messages can be drafted with the intention to deceive certain groups and to influence their behaviour, which is particularly problematic in the election period when the decisions of high importance for democracy are made, the tensions are high and the time for correction or reaction is scarce.

The main fuel of contemporary political micro-targeting is data gathered from citizens’ online presentation and behaviour, including from their social media use. Social media has also been used as a channel for the distribution of micro-targeted campaign messages. This political advertising tactic came into the spotlight with the Cambridge Analytica case reported by journalist Carole Cadwalladr in 2018. Her investigation, based on information from whistleblower Christopher Wylie, revealed that the data analytics firm Cambridge Analytica, which worked with Donald Trump’s election team and the winning Brexit campaign, harvested the personal data from millions of people’s Facebook profiles without their knowledge and consent, and used it for political advertising purposes (Cadwalladr, 2018). In the EU, the role of social media in elections came high on the agenda of political institutions after the Brexit referendum in 2016. The focus has been in particular on the issue of ‘fake news’ or disinformation. The reform of the EU’s data protection rules, which resulted in the GDPR, started in 2012. The Regulation was adopted on 14 April 2016, and its scheduled time of enforcement, 25 May 2018, collided with the outbreak of the Cambridge Analytica case.

Perspective and methodology

Although European elections are primarily the responsibility of national governments, the EU has taken several steps to tackle the issue of online disinformation. In its Communication of 26 April 2018, the EC called these steps a “European approach” (EC, 2018a), with one of its key deliverables being the Code of Practice on Disinformation (2018), presented as a self-regulatory instrument that should encourage online platforms to be proactive in ensuring the transparency of political advertising and in restricting the automated spread of disinformation. The Commission’s follow-up Communication of September 2018, focused on securing free and fair European elections (EC, 2018f), suggests that, in the context of elections, the principles set out in the European approach for tackling online disinformation (EC, 2018a) should be seen as complementary to the GDPR (Regulation, 2016/679). The Commission also prepared specific guidance on the application of the GDPR in the electoral context (EC, 2018d). It further suggested considering the Recommendation on election cooperation networks (EC, 2018e), and the transparency of political parties, foundations and campaign organisations regarding financing and practices (Regulation, 2018, p. 673). This paper provides an analysis of the listed legal and policy instruments that form and complement the EU’s approach to tackling disinformation and suspicious tactics of political advertising on online platforms. The Commission’s initiatives in the area of combating disinformation also contain a cybersecurity aspect. However, this subject is technically and politically too complex to be included in this specific analysis.

The EC considers online platforms as covering a wide range of activities, but the European approach to tackling disinformation is concerned primarily with “online platforms that distribute content, particularly social media, video-sharing services and search engines” (EC, 2018a). This paper employs the same focus and hence the same narrow definition of online platforms. The main research questions are: which are the key principles upon which the European approach to tackling disinformation and political manipulation builds; and to what extent, if at all, do they differ from the principles of “traditional” political advertising and media campaign regulation in the electoral period? The analysis further seeks to understand how these principles are elaborated and whether they reflect the complexity of the challenges identified. For this purpose, the ‘European approach’ is understood in a broad sense (EC, 2018f). Looking through the lens of pluralism, this analysis uses a generic inductive approach, a qualitative research approach that allows findings to emerge from the data without having pre-defined coding categories (Liu, 2016). This methodological decision was made as this exploratory research sought not only to analyse the content of the above listed documents, but also the context in which they came into existence and how they relate to one another.

Two birds with one stone: the European approach in creating fair and plural campaigning online

The actions currently contained in the EU’s approach to tackling online disinformation and political manipulation derive from regulation (the GDPR), EC-initiated self-regulation of platforms (the Code of Practice on Disinformation), and the Commission’s non-binding communications and recommendations to the member states. While some of the measures, such as data protection, have a long tradition and have only been evolving, others represent a new attempt to develop solutions to the problem of platforms (self-regulation). In general, the current European approach can be seen as primarily designed towards (i) preventing unlawful micro-targeting of voters by protecting personal data; and (ii) combating disinformation by increasing the transparency of political and issue-based advertising on online platforms.

Protecting personal data

The elections of May 2019 were the first European Parliament (EP) elections after major concerns about the legality and legitimacy of the vote in the US presidential election and the UK's Brexit referendum. The May 2019 elections were also the first elections for the EP held under the GDPR, which became directly applicable across the EU as of 25 May 2018. As a regulation, the GDPR is directly binding, but it does provide flexibility for certain aspects to be adjusted by individual member states. For example, to balance the right to data protection with the right to freedom of expression, article 85 of the GDPR provides for the exemption of, or derogation for, the processing of data for “journalistic purposes or the purpose of academic, artistic or literary expression”, which should be clearly defined by each member state. While the GDPR provides the tools necessary to address instances of unlawful use of personal data, including in the electoral context, its scope is still not fully and properly understood. Since this was the very first time the GDPR was applied in the European electoral context, the European Commission published the Guidance on the application of Union data protection law in the electoral context in September 2018 (EC, 2018d).

The data protection regime in the EU is not new, 3 even though it has not been well harmonised and the data protection authorities (DPAs) have had limited enforcement powers. The GDPR aims to address these shortcomings, as it gives DPAs powers to investigate, to correct behaviour and to impose fines of up to 20 million euros or, in the case of a company, up to 4% of its worldwide turnover. In its Communication, the EC (2018d) particularly emphasises the strengthened powers of authorities and calls on them to use these sanctioning powers, especially in cases of infringement in the electoral context. This is an important shift, as the European DPAs have historically been very reluctant to regulate political parties. The GDPR further aims at achieving cooperation between the national DPAs and harmonisation of the Regulation’s interpretation by establishing the European Data Protection Board (EDPB). The EDPB is made up of the heads of the national data protection authorities and of the European Data Protection Supervisor (EDPS) or their representatives. The role of the EDPS is to ensure that EU institutions and bodies respect people's right to privacy when processing their personal data. In March 2018, the EDPS published an Opinion on online manipulation and personal data, confirming the growing impact of micro-targeting in the electoral context and a significant shortfall in transparency and in the provision of fair processing information (EDPS, 2019).

The Commission guidance on the application of the GDPR in the electoral context (EC, 2018d) underlines that it “applies to all actors active in the electoral context”, including European and national political parties, European and national political foundations, platforms, data analytics companies and public authorities responsible for the electoral process. Any data processing should comply with the GDPR principles, such as fairness and transparency, and serve specified purposes only. The guidance provides relevant actors with additional explanation of the notions of “personal data” and “sensitive data”, whether collected or inferred. Sensitive data may include political opinions, ethnic origin, sexual orientation and the like, and the processing of such data is generally prohibited unless one of the specific justifications provided for by the GDPR applies. This may be the case where the data subject has given explicit, specific, fully informed consent to the processing; where this information is manifestly made public by the data subject; where the data relate to “the members or to former members of the body or to persons who have regular contact with” it; or where processing “is necessary for reasons of substantial public interest” (GDPR, Art. 9, para. 2). In a statement adopted in March 2019, the EDPB points out that derogations for special categories of data should be interpreted narrowly. In particular, the derogation for cases where a person makes his or her ‘political opinion’ public cannot be used to legitimise inferred data. Bennett (2016) also warns that the vagueness of several terms used to describe exceptions from the application of Article 9(1) might lead to confusion or inconsistencies in interpretation, as the processing of ‘political opinions’ becomes increasingly relevant for contemporary political campaigning.

The principles of fairness and transparency require that individuals (data subjects) be informed of the existence of the processing operation and its purposes (GDPR, Art. 5). The Commission’s guidance clearly states that data controllers (those who determine the purposes and means of processing, such as political parties or foundations) have to inform individuals about key aspects of the processing of their personal data, including why they receive personalised messages from different organisations; what the source of the data is when not collected directly from the person; how data from different sources are combined and used; and whether automated decision-making has been applied in the processing.

Despite the strengthened powers and an explicit call to act more in the political realm (EC, 2018d), to date we have not seen many investigations by DPAs into political parties under the GDPR. An exception is UK Information Commissioner Elizabeth Denham. In May 2017, she announced the launch of a formal investigation into the use of data analytics for political purposes, following the wrongdoings exposed by journalists, in particular Carole Cadwalladr, during the EU referendum, involving parties, platforms and data analytics companies such as Cambridge Analytica. The report of November 2018 concludes:

that there are risks in relation to the processing of personal data by many political parties. Particular concerns include the purchasing of marketing lists and lifestyle information from data brokers without sufficient due diligence, a lack of fair processing and the use of third-party data analytics companies, with insufficient checks around consent (ICO, 2018a, p. 8).

As a result of the investigation, the ICO sent 11 letters to parties with formal warnings about their practices. Overall, it became the largest investigation conducted by a DPA on this matter, encompassing not only political parties but also social media platforms, data brokers and analytics companies.

Several cases have been reported where the national adaptation of the GDPR does not fully meet the requirements of recital 56 of the GDPR, which establishes that personal data on people’s political opinions may be processed “for reasons of public interest” if “the operation of the democratic system in a member state requires that political parties compile” such personal data, and “provided that appropriate safeguards are established”. In November 2018 a question was raised in the European Parliament on the data protection law adapting Spanish legislation to the GDPR, which allows “political parties to use citizens’ personal data that has been obtained from web pages and other publicly accessible sources when conducting political activities during election campaigns”. As the member of the European Parliament who posed the question, Sophia in 't Veld, highlighted: “Citizens can opt out if they do not wish their data to be processed. However, even if citizens do object to receiving political messages, they could still be profiled on the basis of their political opinions, philosophical beliefs or other special categories of personal data that fall under the GDPR”. The European Commission was also urged to investigate the Romanian GDPR implementation over similar concerns. Further to the reported challenges with national adaptations of the GDPR, in November 2019 the EDPS issued its first ever reprimand to an EU institution. The ongoing investigation into the European Parliament was prompted by the Parliament’s use of a US-based political campaigning company, NationBuilder, to process personal data as part of its activities relating to the 2019 EU elections.

Combating disinformation

In contrast to the GDPR, which is sometimes praised as “the most consequential regulatory development in information policy in a generation” (Hoofnagle et al., 2019, p. 66), the EC has decided to tackle fake news and disinformation through self-regulation, at least in the first round. The European Council, a body composed of the leaders of the EU member states, first recognised the threat of online disinformation campaigns in 2015, when it asked the High Representative of the Union for Foreign Affairs and Security Policy to address the disinformation campaigns by Russia (EC, 2018c). The Council is not one of the EU's legislating institutions, but it defines the Union’s overall political direction and priorities. It thus comes as no surprise that the issue of disinformation rose high on the agenda of the EU, in particular after the UK referendum and the US presidential election in 2016. In April 2018 the EC (2018a) adopted a Communication on Tackling online disinformation: a European Approach. This is the central document that set the tone for future actions in this field. In the process of its drafting, the EC carried out consultations with experts and stakeholders, and used citizens’ opinions gathered through polling. The consultations included the establishment of a High-Level Expert Group on Fake News and Online Disinformation (HLEG) in early 2018, which two months later produced a Report (HLEG, 2018) advising the EC against simplistic solutions. Broader public consultations and dialogues with relevant stakeholders were also held, and a specific Eurobarometer (2018b) poll was conducted via telephone interviews in all EU member states. The findings indicated a high level of concern among respondents about the spread of online disinformation in their country (85%), which they also saw as a risk to democracy in general (83%).
This spurred the EC to act, and the Communication on tackling online disinformation became the starting point and the key document for understanding the European approach to these pressing challenges. The Communication is built around four overarching principles and objectives: transparency, diversity of information, credibility of information, and cooperation (EC, 2018a).

Transparency, in this view, means that it should be clear to users where the information comes from, who the author is and why they see certain content when an automated recommendation system is employed. Furthermore, a clearer distinction between sponsored and informative content should be made, and it should be clearly indicated who paid for an advertisement. The diversity principle is strongly related to strengthening so-called quality journalism, 4 to rebalancing the disproportionate power relations between media and social media platforms, and to increasing media literacy levels. Credibility, according to the EC, is to be achieved by entrusting platforms to design and implement a system that would provide an indication of source and information trustworthiness. The fourth principle emphasises cooperation between authorities at national and transnational levels, and cooperation among a broad range of stakeholders in proposing solutions to the emerging challenges. With the exception of emphasising media literacy and promoting cooperation networks of authorities, the Communication largely recommends that platforms design solutions which would reduce the reach of manipulative content and disinformation, and increase the visibility of trustworthy, diverse and credible content.

The key output of this Communication is the self-regulatory Code of Practice on Online Disinformation (CoP). The document was drafted by a working group composed of online platforms, advertisers and the advertising industry, and was reviewed by a Sounding Board composed of academics, media and civil society organisations. The CoP was agreed by the online platforms Facebook, Google, Twitter and Mozilla, and by advertisers and the advertising industry, and was presented to the EC in October 2018. The Sounding Board (2018), however, presented a critical view of its content and the commitments laid out by the platforms, stating that it “contains no clear and meaningful commitments, no measurable objectives” and “no compliance or enforcement tool”. The CoP, as explained by the Commission, represents a transitional measure whereby private actors are entrusted to increase the transparency and credibility of the online information environment. Depending on the evaluation of their performance in the first 12 months, the EC is to determine further steps, including the possibility of self-regulation being replaced with regulation (EC, 2018c). The overall assessment of the Code’s effectiveness is expected to be presented in early 2020.

The CoP builds on the principles expressed in the Commission’s Communication (2018a) through the actions listed in Table 1. For the purpose of this paper the actions are not presented in the same way as in the CoP. They are instead slightly reorganised under the following three categories: disinformation, political advertising, and issue-based advertising.

Table 1: Commitments of the signatories of the Code of Practice on Online Disinformation selected and grouped under three categories: disinformation, political advertising, issue-based advertising. Source: composed by the author based on the Code of Practice on Online Disinformation

Disinformation:

- To disrupt advertising and monetisation incentives for accounts and websites which consistently misrepresent information about themselves
- Limiting the abuse of platforms by unauthentic users (misuse of automated bots)
- Implementing rating systems (on trustworthiness) and report systems (on false content)
- To invest in technology to prioritise “relevant, authentic and authoritative information” in search, feeds and other ranked channels
- Resources for users on how to recognise and limit the spread of false news

Political advertising:

- To clearly label paid-for communication as such
- To publicly disclose political advertising, including actual sponsor and amounts spent
- Enabling users to understand why they have been targeted by a given advertisement

Issue-based advertising:

- To publicly disclose issue-based advertising, conditioned on developing a working definition of “issue-based advertising” which does not limit freedom of expression and excludes commercial advertising

In the statement on the first annual self-assessment reports by the signatories of the CoP, the Commission acknowledged that some progress had been achieved, but warned that it “varies a lot between signatories and the reports provide little insight on the actual impact of the self-regulatory measures taken over the past year as well as mechanisms for independent scrutiny”. The European Regulators Group for Audiovisual Media Services (ERGA) has been supporting the EC in monitoring the implementation of the commitments made by Google, Facebook and Twitter under the CoP, particularly in the area of political and issue-based advertising. In June 2019 ERGA released an interim Report as a result of the monitoring activities carried out in 13 EU countries, based on the information reported by the platforms and on the data available in their online archives of political advertising. While it stated that “Google, Twitter and Facebook made evident progress in the implementation of the Code’s commitments by creating an ad hoc procedure for the identification of political ads and of their sponsors and by making their online repository of relevant ads publicly available”, it also emphasised that the platforms had not met a request to provide access to the overall database of advertising for the monitored period, which “was a significant constraint on the monitoring process and emerging conclusions” (ERGA, 2019, p. 3). Furthermore, based on the analysis of the information provided in the platforms’ repositories of political advertising (e.g., Ad Library), ERGA found that the information was “not complete and that not all the political advertising carried on the platforms was correctly labelled as such” (ERGA, 2019, p. 3).

The EC has yet to provide a comprehensive assessment of the implementation of the commitments under the CoP after the initial 12-month period. However, it is already clear that the lack of transparency of the platforms’ internal operations and decision-making processes remains an issue and represents a risk. If platforms are not amenable to thorough public auditing, adequate assessment of the effectiveness of self-regulation becomes impossible. The ERGA Report (2019) further warns that at this point it is not clear what micro-targeting options were offered to political advertisers, nor whether all options are disclosed in the publicly available repositories of political advertising.

Further to the commitments laid down in the CoP, and relying on social media platforms to increase the transparency of political advertising online, the Commission Recommendation of 9 September 2018 (EC, 2018e) “encourages”, and asks member states to “encourage”, further transparency commitments by European and national political parties and foundations, in particular:

“information on the political party, political campaign or political support group behind paid online political advertisements and communications” [...] “information on any targeting criteria used in the dissemination of such advertisements and communications” [...] “make available on their websites information on their expenditure for online activities, including paid online political advertisements and communications” (EC, 2018e, p. 8).

The Recommendation (EC, 2018e) further advises member states to set up a national election network, involving national authorities with competence for electoral matters, including data protection commissioners, electoral authorities and audio-visual media regulators. This recommendation is further elaborated in the Action plan (EC, 2018c) but, because of practical obstacles, national cooperation between authorities has not yet become a reality in many EU countries.

Key principles and shortcomings of the European approach

This analysis has shown that the principles contained in the above-mentioned instruments, which form the basis of the European approach to combating disinformation and political manipulation, are: data protection; transparency; cooperation; mobilising the private sector; promoting diversity and credibility of information; raising awareness; and empowering the research community.

Data protection and transparency principles related to personal data collection, processing and use are contained in the GDPR. The requirement to increase the transparency of political and issue-based advertising and of automated communication is currently directed primarily towards platforms, which have committed themselves to label and publicly disclose the sponsors and content of political and issue-based advertising, as well as to identify and label automated accounts. Unlike in traditional media landscapes, where media in a given territory generally broadcast the same political advertising and messages to all their audiences, in the digital information environment political messages are targeted at, and shown only to, specific profiles of voters, with limited ability to track which messages were targeted at whom. Increasing transparency at this level would require platforms to provide a user-friendly repository of political ads, including searchable information on actual sponsors and amounts spent. At the moment, they struggle with how to identify political and issue-based ads, to distinguish them from other types of advertising, and to verify ad buyers’ identities (Leerssen et al., 2019).

Furthermore, the European approach fails to impose similar transparency requirements on political parties to provide searchable and easy-to-navigate repositories of the campaign materials they use. A research project monitoring campaigns during the 2019 European elections showed that the parties, groups and candidates participating in the elections were largely not transparent about their campaign materials: materials were not readily available on their websites or social media accounts, nor did they respond to direct requests from researchers (Simunjak et al., 2019). This suggests that while it is relevant to require platforms to provide more transparency on political advertising, it is perhaps even more relevant to demand this transparency directly from political parties and candidates in elections.

Within the framework of transparency, the European approach also fails to emphasise the need for political parties to officially declare to the authorities, under a specific category, the amounts spent on digital (including social media) campaigning. At present, in some EU countries (for example Croatia, see: Klaric, 2019), authorities with competences in electoral matters do not consider social media to be media and accordingly do not apply the requirements to report spending on social media and other digital platforms in a transparent manner. This represents a risk, as the monitoring of the latest EP elections has clearly shown that the parties spent both extensive time and resources on their social media accounts (Novelli & Johansson, 2019).

The diversity and credibility principles stipulated in the Communication on tackling online disinformation and in the Action plan require platforms to indicate the trustworthiness of information, to label automated accounts, to close down fake accounts, and to prioritise quality journalism. At the same time, no clear definition of, or instructions on, the criteria for determining whether information or a source is trustworthy, or whether it represents quality journalism, are provided. Entrusting platforms with making these choices without the possibility of auditing their algorithms and decision-making processes represents a potential risk to freedom of expression.

The signatories of the CoP have committed themselves to disrupt advertising and monetisation incentives for accounts and websites which consistently misrepresent information about themselves. But what about accounts that provide accurate information about themselves yet occasionally engage in campaigns which might also contain disinformation? For example, a political party may use data to profile and target individual voters or a small group of voters with messages that are not completely false but are exaggerated, taken out of context or framed with an intention to deceive and influence voters’ behaviour. As already noted, disinformation comes in many different forms, including false context, imposter, manipulated or fabricated content (Wardle & Derakhshan, 2017). While the work of fact-checkers and the flagging of false content are not useless here, in the current state of play they are far from sufficient to tackle the problems of disinformation, including in political advertising and especially in dark ads 5. The efficiency of online micro-targeting depends largely on data and profiling. Therefore, if effectively implemented, the GDPR should be of use here by preventing the unlawful processing of personal data.

Another important aspect of the European approach is stronger sanctions in cases where the rules are not respected. This entails increased powers for authorities, first and foremost the DPAs, and increased fines under the GDPR. Data protection in the electoral context is difficult to ensure if cooperation between the different authorities with competence for electoral matters (such as data protection commissioners, electoral authorities and audio-visual media regulators) is not established and operational. While the European approach strongly recommends cooperation, it is not easily achievable at member state level, as it requires significant investment in capacity building and in providing channels for cooperation. In some cases, it may even require amendments to the legislative framework. Cooperation between regulators of the same type at the EU level is sometimes hampered by the fact that their competences differ across member states.

The CoP also contains a commitment to “empowering the research community”. This means that the CoP signatories commit themselves to support research on disinformation and political advertising by providing researchers with access to data sets, or by collaborating with academics and civil society organisations in other ways. However, the CoP does not specify how this cooperation should work, the procedures for granting access and to what kind of data, or which measures researchers should put in place to ensure appropriate data storage, security and protection. In their reflection on the platforms’ progress under the Code, three Commissioners warned that the “access to data provided so far still does not correspond to the needs of independent researchers”.

Conclusions

This paper has given an overview of the developing European approach to combating disinformation and political manipulation during an electoral period. It provided an analysis of the key instruments contained in the approach and drew out the key principles upon which it builds: data protection; transparency; cooperation; mobilising the private sector; promoting diversity and credibility of information; raising awareness; empowering the research community.

The principles of legacy media regulation in the electoral period are impartiality and equality of opportunity for contenders. This entails balanced and non-partisan reporting, as well as equal or proportionate access to media for political parties (be it free or paid-for). If political advertising is allowed, it is usually subject to transparency and equal-conditions requirements: campaign spending on advertising must be broken down by type of media and reported to the competent authorities. The regulatory framework also requires that political advertising be properly labelled as such.

In the online environment, the principles applied to legacy media require further elaboration as the problem of electoral disinformation cuts across a number of different policy areas, involving a range of public and private actors. Political disinformation is not a problem that can easily be compartmentalised into existing legal and policy categories. It is a complex and multi-layered issue that requires a more comprehensive and collaborative approach when designing potential solutions. The emerging EU approach reflects the necessity for that overall policy coordination.

The main fuel of online political campaigning is data. Therefore, the protection of personal data and especially of “sensitive” data from abuse becomes a priority of any action that aims to ensure free, fair and plural elections. The European approach further highlights the importance of transparency. It calls on platforms to clearly identify political advertisements and who paid for them, but it fails to emphasise the importance of having a repository of all the material used in the campaign provided by candidates and political parties. Furthermore, a stronger requirement for political parties to report on the amounts spent on different types of communication channels (including legacy, digital and social media) is lacking in this approach, as well as the requirement for platforms to provide more comprehensive and workable data on sponsors and spending in political advertising.

The European Commission’s communication of the European approach claims that it addresses all actors active in the electoral context, including European and national political parties and foundations, online platforms, data analytics companies and public authorities responsible for the electoral process. However, it seems that the current focus is primarily on the platforms, and in a way that enables them to shape the future direction of actions in the fight against disinformation and political manipulation.

As regards the principle of cooperation, many obstacles, such as differences in the competences and capacities of the relevant national authorities, have not been fully taken into account. Elections are primarily a national matter, so the protection of the electoral process, as well as the protection of media pluralism, falls primarily within the competence of member states. Yet, if the approach to tackling disinformation and political manipulation is to be truly European, there should be more harmonisation between authorities and the approaches taken at national levels.

While being a significant step in the creation of a common EU answer to the challenges of disinformation and political manipulation, especially during elections, the European approach requires further elaboration, primarily to include additional layers of transparency. This entails transparency of political parties and of other actors on their actions in the election campaigns, as well as more transparency about internal processes and decision-making by platforms especially on actions of relevance to pluralism, elections and democracy. Furthermore, the attempt to propose solutions and relevant actions at the European level faces two constraints. On the one hand, it faces the power of global platforms shaped in the US tradition, which to a significant extent differs from the European approach in balancing freedom of expression and data protection. On the other hand, the EU approach confronts the resilience of national political traditions in member states, in particular if the measures are based on recommendations and other soft instruments.

References

Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211–236. https://doi.org/10.1257/jep.31.2.211

Bennett, C. J. (2016). Voter databases, micro-targeting, and data protection law: can political parties campaign in Europe as they do in North America? International Data Privacy Law, 6(4), 261–275. https://doi.org/10.1093/idpl/ipw021

Bodó, B., Helberger, N., & de Vreese, C. H. (2017). Political micro-targeting: a Manchurian candidate or just a dark horse? Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.776

Bradshaw, S. & Howard, P. N. (2018). Challenging Truth and Trust: A Global Inventory of Organised Social Media Manipulation [Report]. Computational Propaganda Research Project, Oxford Internet Institute. Retrieved from https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2018/07/ct2018.pdf

Brett, W. (2016). It’s Good to Talk: Doing Referendums Differently. The Electoral Reform Society’s report. Retrieved from https://www.electoral-reform.org.uk/wp-content/uploads/2017/06/2016-EU-Referendum-its-good-to-talk.pdf

Brogi, E., Nenadic, I., Parcu, P. L., & Viola de Azevedo Cunha, M. (2018). Monitoring Media Pluralism in Europe: Application of the Media Pluralism Monitor 2017 in the European Union, FYROM, Serbia and Turkey [Report]. Centre for Media Pluralism and Media Freedom, European University Institute. Retrieved from https://cmpf.eui.eu/wp-content/uploads/2018/12/Media-Pluralism-Monitor_CMPF-report_MPM2017_A.pdf

Bruns, A. (2017, September 15). Echo chamber? What echo chamber? Reviewing the evidence. 6th Biennial Future of Journalism Conference (FOJ17), Cardiff, UK. Retrieved from https://eprints.qut.edu.au/113937/1/Echo%20Chamber.pdf

Cadwalladr, C. & Graham-Harrison, E. (2018, March 17) Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. The Guardian. Retrieved from https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election

Chiou, L., & Tucker, C. E. (2018). Fake News and Advertising on Social Media: A Study of the Anti-Vaccination Movement [Working Paper No. 25223]. Cambridge, MA: The National Bureau of Economic Research. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3209929. https://doi.org/10.3386/w25223

Centre for Media Pluralism and Media Freedom (CMPF). (forthcoming, 2020). Independent Study on Indicators to Assess Risks to Information Pluralism in the Digital Age. Florence: Media Pluralism Monitor Project.

Code of Practice on Disinformation (September 2018). Retrieved from https://ec.europa.eu/digital-single-market/en/news/code-practice-disinformation

Council Decision (EU, Euratom) 2018/994 of 13 July 2018 amending the Act concerning the election of the members of the European Parliament by direct universal suffrage, annexed to Council Decision 76/787/ECSC, EEC, Euratom of 20 September 1976. Retrieved from https://eur-lex.europa.eu/legal-content/en/TXT/?uri=CELEX:32018D0994&qid=1531826494620

Commission Recommendation (EU) 2018/234 of 14 February 2018 on enhancing the European nature and efficient conduct of the 2019 elections to the European Parliament (OJ L 45, 17.2.2018, p. 40)

Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications) (OJ L 201, 31.7.2002, p. 37)

Dubois, E., & Blank, G. (2018). The echo chamber is overstated: the moderating effect of political interest and diverse media. Information, Communication & Society, 21(5), 729–745. https://doi.org/10.1080/1369118X.2018.1428656

Eurobarometer (2018a). Standard 90: Media use in the EU. Retrieved from https://ec.europa.eu/commfrontoffice/publicopinion/index.cfm/Survey/getSurveyDetail/instruments/STANDARD/surveyKy/2215

Eurobarometer (2018b). Flash 464: Fake news and disinformation online. Retrieved from https://ec.europa.eu/commfrontoffice/publicopinion/index.cfm/survey/getsurveydetail/instruments/flash/surveyky/2183

Eurobarometer (2017). Standard 88: Media use in the EU. Retrieved from https://ec.europa.eu/commfrontoffice/publicopinion/index.cfm/Survey/getSurveyDetail/instruments/STANDARD/surveyKy/2143

European Commission (EC). (2018a). Tackling online disinformation: a European Approach, Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions. COM/2018/236. Retrieved from https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52018DC0236&from=EN

European Commission (EC). (2018b). Free and fair European elections – Factsheet, State of the Union. Retrieved from https://ec.europa.eu/commission/presscorner/detail/en/IP_18_5681

European Commission (EC). (2018c, December 5). Action Plan against Disinformation. European Commission contribution to the European Council (5 December). Retrieved from https://ec.europa.eu/commission/sites/beta-political/files/eu-communication-disinformation-euco-05122018_en.pdf

European Commission (EC). (2018d, September 12). Commission guidance on the application of Union data protection law in the electoral context: A contribution from the European Commission to the Leaders' meeting in Salzburg on 19-20 September 2018. Retrieved from https://ec.europa.eu/commission/sites/beta-political/files/soteu2018-data-protection-law-electoral-guidance-638_en.pdf

European Commission (EC). (2018e, September 12). Recommendation on election cooperation networks, online transparency, protection against cybersecurity incidents and fighting disinformation campaigns in the context of elections to the European Parliament. Retrieved from https://ec.europa.eu/commission/sites/beta-political/files/soteu2018-cybersecurity-elections-recommendation-5949_en.pdf

European Commission (EC). (2018f). Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the
Committee of the Regions: Securing free and fair European elections. COM(2018)637. Retrieved from https://ec.europa.eu/commission/sites/beta-political/files/soteu2018-free-fair-elections-communication-637_en.pdf

European Commission (EC). (2007). Media pluralism in the Member States of the European Union [Commission Staff Working Document No. SEC(2007)32]. Retrieved from https://ec.europa.eu/information_society/media_taskforce/doc/pluralism/media_pluralism_swp_en.pdf

European Data Protection Board (EDPB). (2019). Statement 2/2019 on the use of personal data in the course of political campaigns. Retrieved from https://edpb.europa.eu/our-work-tools/our-documents/ostalo/statement-22019-use-personal-data-course-political-campaigns_en

European Data Protection Supervisor (EDPS). (2018). Opinion 3/2018 on online manipulation and personal data. Retrieved from https://edps.europa.eu/sites/edp/files/publication/18-03-19_online_manipulation_en.pdf

European Regulators Group for Audiovisual Media Services (ERGA). (2019, June). Report of the activities carried out to assist the European Commission in the intermediate monitoring of the Code of practice on disinformation [Report]. Slovakia: European Regulators Group for Audiovisual Media Services. Retrieved from http://erga-online.eu/wp-content/uploads/2019/06/ERGA-2019-06_Report-intermediate-monitoring-Code-of-Practice-on-disinformation.pdf

Fletcher, R., Cornia, A., Graves, L., & Nielsen, R. K. (2018). Measuring the reach of “fake news” and online disinformation in Europe. Retrieved from https://www.press.is/static/files/frettamyndir/reuterfake.pdf

Flew, T., Martin, F., & Suzor, N. P. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media and Policy, 10(1), 33–50. https://doi.org/10.1386/jdtv.10.1.33_1

Guess, A., Nyhan, B., & Reifler, J. (2018). Selective exposure to misinformation: evidence from the consumption of fake news during the 2016 US presidential campaign [Working Paper]. Retrieved from https://www.dartmouth.edu/~nyhan/fake-news-2016.pdf

High Level Expert Group on Fake News and Online Disinformation (HLEG). (2018). Final report [Report]. Retrieved from https://ec.europa.eu/digital-single-market/en/news/final-report-high-level-expert-group-fake-news-and-online-disinformation

Hoofnagle, C. J., van der Sloot, B., & Zuiderveen Borgesius, F. J. (2019). The European Union general data protection regulation: what it is and what it means. Information & Communications Technology Law, 28(1), 65–98. https://doi.org/10.1080/13600834.2019.1573501

Holtz-Bacha, C. & Just, M. R. (Eds.). (2018). Routledge Handbook of Political Advertising. New York: Routledge.

House of Commons Treasury Committee. (2016, May 27). The economic and financial costs and benefits of the UK’s EU membership. First Report of Session 2016–17. Retrieved from https://publications.parliament.uk/pa/cm201617/cmselect/cmtreasy/122/122.pdf

Howard, P. N. & Kollanyi, B. (2016). Bots, #StrongerIn, and #Brexit: Computational Propaganda during the UK-EU Referendum. arXiv:1606.06356. Retrieved from https://arxiv.org/abs/1606.06356

Information Commissioner’s Office (ICO). (2018a, July 11). Investigation into the use of data analytics in political campaigns [Report to Parliament]. Retrieved from https://ico.org.uk/media/action-weve-taken/2260271/investigation-into-the-use-of-data-analytics-in-political-campaigns-final-20181105.pdf

Information Commissioner’s Office (ICO). (2018b, July 11). Democracy disrupted? Personal information and political influence. Retrieved from https://ico.org.uk/media/action-weve-taken/2259369/democracy-disrupted-110718.pdf

Kelley, S. Jr. (1962). Elections and the Mass Media. Law and Contemporary Problems, 27(2), 307–326. Retrieved from https://scholarship.law.duke.edu/cgi/viewcontent.cgi?article=2926&context=lcp

Kim, Y. M., Hsu, J., Neiman, D., Kou, C., Bankston, L., Kim, S. Y., Heinrich, R., Baragwanath, R., & Raskutti, G. (2018). The Stealth Media? Groups and Targets behind Divisive Issue Campaigns on Facebook. Political Communication, 35(4), 515–541. https://doi.org/10.1080/10584609.2018.1476425

Klaric, J. (2019, March 28). Ovo je Hrvatska 2019.: za Državno izborno povjerenstvo teletekst je medij, Facebook nije [This is Croatia in 2019: for the State Election Commission, teletext is a medium, Facebook is not]. Telegram. Retrieved from https://www.telegram.hr/politika-kriminal/ovo-je-hrvatska-2019-za-drzavno-izborno-povjerenstvo-teletekst-je-medij-facebook-nije/

Kreiss, D., & McGregor, S. C. (2018). Technology Firms Shape Political Communication: The Work of Microsoft, Facebook, Twitter, and Google with Campaigns During the 2016 U.S. Presidential Cycle. Political Communication, 35(2), 155–177. https://doi.org/10.1080/10584609.2017.1364814

Valcke, P., Lefever, K., Kerremans, R., Kuczerawy, A., Sükosd, M., Gálik, M., … Füg, O. (2009). Independent Study on Indicators for Media Pluralism in the Member States – Towards a Risk-Based Approach [Report]. ICRI, K.U. Leuven; CMCS, Central European University, MMTC, Jönköping Business School; Ernst & Young Consultancy Belgium. Retrieved from https://ec.europa.eu/information_society/media_taskforce/doc/pluralism/pfr_report.pdf

Kumar, S., & Shah, N. (2018, April). False information on web and social media: A survey. arXiv:1804.08559 [cs]. Retrieved from https://arxiv.org/pdf/1804.08559.pdf

Leerssen, P., Ausloos, J., Zarouali, B., Helberger, N., & de Vreese, C. H. (2019). Platform ad archives: promises and pitfalls. Internet Policy Review, 8(4). https://doi.org/10.14763/2019.4.1421

Liu, L. (2016). Using Generic Inductive Approach in Qualitative Educational Research: A Case Study Analysis. Journal of Education and Learning, 5(2), 129–135. https://doi.org/10.5539/jel.v5n2p129

Morgan, S. (2018). Fake news, disinformation, manipulation and online tactics to undermine democracy. Journal of Cyber Policy, 3(1), 39–43. https://doi.org/10.1080/23738871.2018.1462395

Newman, N., Fletcher, R., Kalogeropoulos, A., Levy, D. A. L., & Nielsen, R. K. (2018). Digital News Report 2018. Oxford: Reuters Institute for the Study of Journalism. Retrieved from https://reutersinstitute.politics.ox.ac.uk/sites/default/files/digital-news-report-2018.pdf

Novelli, E. & Johansson, B. (Eds.). (2019). 2019 European Elections Campaign: Images, Topics, Media in the 28 Member States [Research Report]. Directorate-General of Communication of the European Parliament. Retrieved from https://op.europa.eu/hr/publication-detail/-/publication/e6767a95-a386-11e9-9d01-01aa75ed71a1/language-en

Regulation (EU, Euratom) 2018/673 amending Regulation (EU, Euratom) No 1141/2014 on the statute and funding of European political parties and European political foundations. Retrieved from https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32018R0673

Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (OJ L 119, 4.5.2016, p. 1)

Regulation (EU, Euratom) No 1141/2014 of the European Parliament and of the Council of 22 October 2014 on the statute and funding of European political parties and European political foundations (OJ L 317, 4.11.2014, p. 1)

Report of the Special Rapporteur to the General Assembly on online hate speech. (2019). (A/74/486). Retrieved from https://www.ohchr.org/Documents/Issues/Opinion/A_74_486.pdf

Report of the Special Rapporteur to the Human Rights Council on online content regulation. (2018). (A/HRC/38/35). Retrieved from https://documents-dds-ny.un.org/doc/UNDOC/GEN/G18/096/72/PDF/G1809672.pdf?OpenElement

Schoenbach, K., & Lauf, E. (2004). Another Look at the ‘Trap’ Effect of Television—and Beyond. International Journal of Public Opinion Research, 16(2), 169–182. https://doi.org/10.1093/ijpor/16.2.169

Shearer, E. (2018, December 10). Social media outpaces print newspapers in the U.S. as a news source. Pew Research Center. Retrieved from https://www.pewresearch.org/fact-tank/2018/12/10/social-media-outpaces-print-newspapers-in-the-u-s-as-a-news-source/

Šimunjak, M., Nenadić, I., & Žuvela, L. (2019). National report: Croatia. In E. Novelli & B. Johansson (Eds.), 2019 European Elections Campaign: Images, topics, media in the 28 Member States (pp. 59–66). Brussels: European Parliament.

Sounding Board. (2018). The Sounding Board’s Unanimous Final Opinion on the so-called Code of Practice on 24 September 2018. Retrieved from: https://ec.europa.eu/digital-single-market/en/news/code-practice-disinformation

The Transatlantic High Level Working Group on Content Moderation Online and Freedom of Expression. (2019). How governments and platforms have fallen short in trying to moderate content online (Co-Chairs Report No. 1 and Working Papers). Retrieved from https://www.ivir.nl/publicaties/download/TWG_Ditchley_intro_and_papers_June_2019.pdf

Valeriani, A., & Vaccari, C. (2016). Accidental exposure to politics on social media as online participation equalizer in Germany, Italy, and the United Kingdom. New Media & Society, 18(9). https://doi.org/10.1177/1461444815616223

Venice Commission. (2013). CDL-AD(2013)021 Opinion on the electoral legislation of Mexico, adopted by the Council for Democratic Elections at its 45th meeting (Venice, 13 June 2013) and by the Venice Commission at its 95th Plenary Session (Venice, 14-15 June 2013).

Venice Commission. (2010). CDL-AD(2010)024 Guidelines on political party regulation, by the OSCE/ODIHR and the Venice Commission, adopted by the Venice Commission at its 84th Plenary Session (Venice, 15-16 October 2010).

Venice Commission. (2009). CDL-AD(2009)031 Guidelines on media analysis during election observation missions, by the OSCE Office for Democratic Institutions and Human Rights (OSCE/ODIHR) and the Venice Commission, adopted by the Council for Democratic Elections at its 29th meeting (Venice, 11 June 2009) and the Venice Commission at its 79th Plenary Session (Venice, 12- 13 June 2009).

Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151. https://doi.org/10.1126/science.aap9559

Wakefield, J. (2019, February 18). Facebook needs regulation as Zuckerberg 'fails' - UK MPs. BBC. Retrieved from https://www.bbc.com/news/technology-47255380

Wardle, C., & Derakhshan, H. (2017). Information disorder: Toward an interdisciplinary framework for research and policymaking [Report No. DGI(2017)09]. Strasbourg: Council of Europe. Retrieved from https://firstdraftnews.org/wp-content/uploads/2017/11/PREMS-162317-GBR-2018-Report-de%CC%81sinformation-1.pdf?x56713

Zuiderveen Borgesius, F. J., Möller, J., Kruikemeier, S., Ó Fathaigh, R., Irion, K., Dobber, T., Bodo, B., & de Vreese, C. H. (2018). Online Political Microtargeting: Promises and Threats for Democracy. Utrecht Law Review, 14(1), 82–96. https://doi.org/10.18352/ulr.420

Footnotes

1. The so-called ‘fake news’ law was passed in May 2019, allowing ministers to issue orders to platforms like Facebook to put up warnings next to disputed posts or, in extreme cases, to take the content down. The law also allows for fines of up to SG$ 1 million (665,000 €) for companies that fail to comply, and individual offenders could face up to ten years in prison. Many have raised their voices against this law, including the International Political Science Association (IPSA), but it came into effect and is being used.

2. To which the author is affiliated.

3. The GDPR supplanted the Data Protection Directive (Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data).

4. The Council of Europe also uses the term ‘quality journalism’, but it is not fully clear what ‘quality’ entails and who decides what ‘quality journalism’ is, and what is not. The aim could be (and most likely is) to distinguish journalism that respects professional standards from less reliable, less structured forms of content production and delivery that are not bound by ethical and professional standards. Many argue that journalism already implies quality, so the adjective is unnecessary and, in fact, may be problematic.

5. Dark advertising is a type of online advertising visible only to the advert's publisher and the intended target group.

As European eyes turn to India's fake news lockdown, Argentina's human rights response should be evaluated


The influx of false and misleading information during the COVID-19 pandemic has once again raised anxieties about potential regulatory intervention to curb this ever-growing digital problem. As this is a global issue, it is instructive to observe developments elsewhere when considering how Europe could respond. As the European institution tasked with legislative initiative in this area, the European Commission has highlighted how the dissemination of false and harmful content has further strained the public's ability to distinguish genuine and reliable news from medical falsities.

In response to the proliferation of inaccuracies during the current crisis, the Commission has advised citizens to refrain from sharing "unverified information coming from dubious sources", while simultaneously "encouraging" online platforms to amplify authoritative sources and "demote" low-credibility content. This reflects the self-regulatory legal framework that the Commission has adopted in response to this technological problem, which can be traced back to the resolution on "Online Platforms and the Digital Single Market" in June 2017. On foot of this resolution, the Commission established a High Level Group (HLG) to discuss and ultimately "advise on policy initiatives to counter fake news and the spread of disinformation online". This led to the Code of Practice on Disinformation in September 2018 and, subsequently, the "Action Plan Against Disinformation" in December 2018.

While these provide useful guidance for online platforms in addressing this problem, the voluntary and non-binding nature of the Code left scope for further regulatory intervention, as the Commission itself acknowledged. The COVID-19 pandemic represents the most urgent situation in relation to both disinformation and misinformation since the establishment of the Code, meaning that pressure to reassess and potentially change the regulatory framework in this area will likely grow.

The Indian Supreme Court's response to fake news panic

It is important not to analyse the European position in isolation, and to draw attention to important global legal developments that have been spurred by an influx of rumours and lies online. In particular, the Indian Supreme Court decision of 31 March in Alakh Alok Srivastava v Union of India provides useful insight into the legal perils associated with fake news regulation.

For the EU, the practical implications of this case are not as germane as recent legislative developments in Hungary that introduced criminal sanctions for fake news dissemination. However, the case is notable in two respects. Firstly, it represents an extremely rare (if not unprecedented) instance of the popularised term "fake news" being invoked in a Supreme Court setting. Secondly, it provides a clear example of the dangers that fake news regulation can pose to fundamental rights in the wake of a global panic.

The background of the Supreme Court's advice to the central government of India highlights a deeply troubling reality. In the midst of India's response to the COVID-19 outbreak, misleading rumours had spread online that the lockdown could last for over three months. This prompted a number of migrant workers to return home for fear of being caught in a prolonged lockdown. As a result, the court noted that this "panic"-driven exodus was heavily linked to "fake and/or misleading news" on digital platforms. According to the court, this signified a continuation of "deliberate or inadvertent fake news" as the "single most unmanageable hindrance" in the central government's response to COVID-19. This is particularly noteworthy in light of the court's characterisation of India's general response as "proactive, pre-emptive and graded".

In light of this panic-driven mass exodus, the Supreme Court issued advice to the central government in order to contain the spread of false and misleading information during the current pandemic. The government was advised that "no electronic/print media/web portal or social media shall print/publish or telecast anything without first ascertaining the true factual position from the separate mechanism provided by the central government."

This "mechanism" would consist of a web portal established by the central government to provide accurate and recognised official information on ongoing responses to COVID-19. Accordingly, information and news in circulation would first have to be checked against this governmental verification mechanism.

An urgent gap: human rights compatibility

In assessing the Indian Supreme Court's decision, a continuing anxiety emerges when approaching this growing problem from a legal perspective: it is far easier to identify the scope of the fake news problem than to prescribe effective and sufficiently measured solutions. The need for balanced responses becomes especially pressing when analysing these developments in the context of international human rights. In issuing guidance, the Court correctly acknowledged the link between the dissemination of false information and the "potential of causing panic in large sections of society". Moreover, it noted that a continuing deluge of misleading claims fuelled by social media can impede otherwise effective governmental responses, which are of particular importance during a public health crisis. Given the obligation to protect public health under numerous international human rights instruments, including the European Convention on Human Rights (ECHR) and the Charter of Fundamental Rights of the European Union (CFR), proportionate legal responses must take this factor into account during pandemics such as COVID-19.

In spite of this, the decision, and any related directions that might seek to mirror it, raises a number of fundamental rights concerns. While the government is tasked with reducing the spread of false information, the position of the centralised information and news "portal" raises questions. Both during the current crisis and in its aftermath, a balanced human rights response to fake news must mediate between the necessity of a pluralistic and free press environment and the protection of the public from misleading, manipulative, and ultimately harmful information. It is questionable whether the Court's decision could ultimately have a chilling effect on the many legitimate media publications that provide valuable and accurate coverage of the coronavirus.

The need for media institutions to hold power to account is particularly relevant during a pandemic, when the government's responses must be scrutinised accurately and honestly. Considering in particular the potential for governments to use such powers to suppress unfavourable but verified information from press outlets, a state-sponsored information portal could create a dangerous precedent. While fake news can exacerbate public health concerns and related panic, the ability of journalists to hold power accountable for harmful governmental responses (or unacceptable inaction) is also critical from a public health and fundamental rights perspective. Responses such as these must also clearly specify when restrictions will be lifted. While it is difficult, if not impossible, to ascertain when the severity of COVID-19 will subside, instructions such as these should indicate that they will last only while broader governmental restrictions remain in place, or as otherwise decided.

These factors underscore the need for Europe to enshrine future regulatory developments for disinformation and misinformation with adequate protections for the free press. In particular when considering the right for the public to "receive and impart information" under Article 10 of the ECHR, the need for legitimate journalists and news outlets to be protected in their coverage of pandemics is crucial.

A promising development in Argentina

Going forward, global developments in response to this issue should be closely monitored for their compatibility with fundamental rights protections. Numerous jurisdictions have yielded promising developments, especially when furnished with human rights protections. In Argentina, for example, the legislature recently proposed legislation that floated a "Commission" to verify the authenticity of news. The legislation, a response to the rapid "speed of creation, propagation and distribution of false news", would exist under the umbrella framework of the national electoral commission and would be tasked with detecting, labelling, and curtailing the spread of false information. Applying to both online and offline content, a notable and promising feature of this development is that it is grounded in a recognition of fundamental rights principles. The proposal of the Commission for the Verification of Fake News (CVNF) recognises the "right to access the Internet as a human right, based on the integral respect for human dignity, freedom, equality and diversity in all its expressions." It also acknowledges the need to protect "personal data", and would appoint journalists from both the "Graphic Media Associations" and the "Associations of Audiovisual Companies".

Proposals such as these represent a promising blueprint for fake news regulatory responses to achieve compatibility with international human rights principles. While the precise direction of future legal developments is far from certain, it is increasingly likely that the current pandemic could launch significant regulatory changes.

Disinformation optimised: gaming search engine algorithms to amplify junk news


This paper is part of Data-driven elections, a special issue of Internet Policy Review guest-edited by Colin J. Bennett and David Lyon.

Introduction

Did the Holocaust really happen? In December 2016, Google’s search engine algorithm determined that the most authoritative source to answer this question was a neo-Nazi website peddling Holocaust denialism (Cadwalladr, 2016b). For any inquisitive user typing this question into Google, the first website recommended by Search linked to an article entitled: “Top 10 reasons why the Holocaust didn’t happen”. The third article, “The Holocaust Hoax; IT NEVER HAPPENED”, was published by another neo-Nazi website, while the fifth, seventh, and ninth recommendations linked to similar racist propaganda pages (Cadwalladr, 2016b). Up until Google started demoting websites committed to spreading anti-Semitic messages, anyone asking whether the Holocaust actually happened would have been directed to consult neo-Nazi websites, rather than one of the many credible sources about the Holocaust and the tragedy of World War II.

Google’s role in shaping the information environment and enabling political advertising has made it a “de facto infrastructure” for democratic processes (Barrett & Kreiss, 2019). How its search engine algorithm determines authoritative sources directly shapes the online information environment for more than 89 percent of the world’s internet users who trust Google Search to quickly and accurately find answers to their questions. Unlike social media platforms that tailor content based on “algorithmically curated newsfeeds” (Golebiewski & boyd, 2019), the logic of search engines is “mutually shaped” by algorithms — that shape access — and users — who shape the information being sought (Schroeder, 2014). By facilitating information access and discovery, search engines hold a unique position in the information ecosystem. But, like other digital platforms, the digital affordances of Google Search have proved to be fertile ground for media manipulation.

Previous research has demonstrated how large volumes of mis- and disinformation were spread on social media platforms in the lead-up to elections around the world (Hedman et al., 2018; Howard, Kollanyi, Bradshaw, & Neudert, 2017; Machado et al., 2018). Some of this disinformation was micro-targeted towards specific communities or individuals based on their personal data. While data-driven campaigning has become a powerful tool for political parties to mobilise and fundraise (Fowler et al., 2019; Baldwin-Philippi, 2017), the connection between online advertisements and disinformation, foreign election interference, polarisation, and non-transparent campaign practices has caused growing anxieties about its impact on democracy.

Since the 2016 presidential election in the United States, public attention and scrutiny have largely focused on the role of Facebook in profiting from and amplifying the spread of disinformation via digital advertisements. However, less attention has been paid to Google, which, together with Facebook, commands more than 60 percent of the digital advertising market. At the same time, a multi-billion-dollar search engine optimisation (SEO) industry has been built around understanding how technical systems rank, sort, and prioritise information (Hoffmann, Taylor, & Bradshaw, 2019). The purveyors of disinformation have learned to exploit social media platforms to engineer content discovery and drive “pseudo-organic engagement”. 1 These websites — which do not employ professional journalistic standards, report on conspiracy theory, counterfeit professional news brands, and mask partisan commentary as news — have been referred to as “junk news” domains (Bradshaw, Howard, Kollanyi, & Neudert, 2019).

Together, the role of political advertising and the matured SEO industry make Google Search an interesting and largely underexplored case to analyse. Considering the importance of Google Search in connecting individuals to news and information about politics, this paper examines how junk news websites generate discoverability via Google Search. It asks: (1) How do junk news domains optimise content, through both paid and SEO strategies, to increase discoverability and grow their website value? (2) What strategies are effective at growing discoverability and/or website value? (3) What are the implications of these findings for ongoing discussions about the regulation of social media platforms?

To answer these questions, I analysed 29 junk news domains and their advertising and search engine optimisation strategies between January 2016 and March 2019. First, junk news domains use a variety of SEO keyword strategies to game Search, drive pseudo-organic clicks, and grow their website value. The keywords that generated the highest placements on Google Search focused on (1) navigational searches for known brand names (such as searches for “breitbart.com”) and (2) carefully curated keyword combinations that fill so-called “data voids” (Golebiewski & boyd, 2018), or gaps in search engine queries (such as searches for “Obama illegal alien”). Second, there was a clear correlation between the number of clicks a website receives and the estimated value of the junk news domain. The most profitable timeframes coincided with important political events in the United States (such as the 2016 presidential election and the 2018 midterm elections), and the value of a domain increased based on SEO-optimised — rather than paid — clicks. Third, junk news domains were relatively successful at generating top placements on Google Search before and after the 2016 US presidential election. However, their discoverability declined abruptly beginning in August 2017, following major announcements from Google about changes to its search engine algorithms and other initiatives to combat the spread of junk news in search results. This suggests that Google can, and has, measurably impacted the discoverability of junk news on Search.
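As an illustration of the second finding, the sketch below computes Pearson correlations between monthly click counts and estimated domain value. All figures and the `pearson_r` helper are hypothetical, invented purely for illustration; the paper's own analysis draws on commercial SEO analytics data, not this toy calculation.

```python
# Hedged sketch: check which click source (SEO vs. paid) tracks a junk
# news domain's estimated value more closely. All numbers are invented.

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical monthly figures for a single domain
seo_clicks   = [12_000, 18_000, 45_000, 80_000, 60_000, 30_000]
paid_clicks  = [5_000, 4_800, 6_200, 5_500, 5_900, 5_100]
domain_value = [1_100, 1_600, 4_000, 7_300, 5_400, 2_800]  # USD estimates

print(round(pearson_r(seo_clicks, domain_value), 2))   # strong positive correlation
print(round(pearson_r(paid_clicks, domain_value), 2))  # noticeably weaker
```

A coefficient near 1 means domain value rises and falls almost in lockstep with that click source; in this invented series, SEO-driven clicks track value far more closely than paid clicks, mirroring the pattern reported above.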

This paper proceeds as follows: The first section provides background on the vocabulary of disinformation and ongoing debates about so-called fake news, situating the terminology of “junk news” used in this paper in the scholarly literature. The second section discusses the logic and politics of search, describing how search engines work and reviewing the existing literature on Google Search and the spread of disinformation. The third section outlines the methodology of the paper. The fourth section analyses 29 prominent junk news domains to learn about their SEO and advertising strategies, as well as their impact on content discoverability and revenue generation. This paper concludes with a discussion of the findings and implications for future policymaking and private self-regulation.

The vocabulary of political communication in the 21st century

“Fake news” gained significant attention from scholarship and mainstream media during the 2016 presidential election in the United States as viral stories pushing outrageous headlines — such as Hillary Clinton’s alleged involvement in a paedophile ring in the basement of a DC pizzeria — were prominently displayed across search and social media news feeds (Silverman, 2016). Although “fake news” is not a new phenomenon, the spread of these stories—which are both enhanced and constrained by the unique affordances of internet and social networking technologies — has reinvigorated an entire research agenda around digital news consumption and democratic outcomes. Scholars from diverse disciplinary backgrounds — including psychology, sociology and ethnography, economics, political science, law, computer science, journalism, and communication studies — have launched investigations into circulation of so-called “fake news” stories (Allcott & Gentzkow, 2017; Lazer et al., 2018), their role in agenda-setting (Guo & Vargo, 2018; Vargo, Guo, & Amazeen, 2018), and their impact on democratic outcomes and political polarisation (Persily, 2017; Tucker et al., 2018).

However, scholars at the forefront of this research agenda have continually identified several epistemological and methodological challenges around the study of so-called “fake news”. A commonly identified concern is the ambiguity of the term itself, as “fake news” has come to be an umbrella term for all kinds of problematic content online, including political satire, fabrication, manipulation, propaganda, and advertising (Tandoc, Lim, & Ling, 2018; Wardle, 2017). The European High-Level Expert Group on Fake News and Disinformation recently acknowledged the definitional difficulties around the term, recognising it “encompasses a spectrum of information types…includ[ing] low risk forms such as honest mistakes made by reporters…to high risk forms such as foreign states or domestic groups that would try to undermine the political process” (European Commission, 2018). And even when the term “fake news” is simply used to describe news and information that is factually inaccurate, the binary distinction between what is true and what is false has been criticised for not adequately capturing the complexity of the kinds of information being shared and consumed in today’s digital media environment (Wardle & Derakhshan, 2017).

Beyond the ambiguities surrounding the vocabulary of “fake news”, there is growing concern that the term has begun to be appropriated by politicians to restrict freedom of the press. A wide range of political actors have used the term “fake news” to discredit, attack, and delegitimise political opponents and mainstream media (Farkas & Schou, 2018). Donald Trump’s (in)famous use of the term “fake news” often serves to “deflect” criticism and to erode the credibility of established media and journalistic organisations (Lakoff, 2018). And many authoritarian regimes have followed suit, adopting the term into a common lexicon to legitimise further censorship and restrictions on media within their own borders (Bradshaw, Neudert, & Howard, 2018). Given that most citizens perceive “fake news” to describe “partisan debate and poor journalism”, rather than as a discursive tool used to undermine trust and legitimacy in media institutions, there is general scholarly consensus that the term is highly problematic (Nielsen & Graves, 2017).

Rather than chasing a definition of what has come to be known as “fake news”, researchers at the Oxford Internet Institute have produced a grounded typology of what users actually share on social media (Bradshaw et al., 2019). Drawing on Twitter and Facebook data from elections in Europe and North America, researchers developed a grounded typology of online political communication (Bradshaw et al., 2019; Neudert, Howard, & Kollanyi, 2019). They identified a growing prevalence of “junk news” domains, which publish a variety of hyper-partisan, conspiracy theory or click-bait content that was designed to look like real news about politics. During the 2016 presidential election in the United States, social media users on Twitter shared as much “junk news” as professionally produced news about politics (Howard, Bolsover, Kollanyi, Bradshaw, & Neudert, 2017; Howard, Kollanyi, et al., 2017). And voters in swing-states tended to share more junk news than their counterparts in uncontested ones (Howard, Kollanyi, et al., 2017). In countries throughout Europe — in France, Germany, the United Kingdom and Sweden — junk news inflamed political debates around immigration and amplified populist voices across the continent (Desiguad, Howard, Kollanyi, & Bradshaw, 2017; Kaminska, Galacher, Kollanyi, Yasseri, & Howard, 2017; Neudert, Howard, & Kollanyi, 2017).

According to researchers on the Computational Propaganda Project, junk news is defined as having at least three out of five elements: (1) professionalism, where sources do not employ the standards and best practices of professional journalism, including information about real authors, editors, and owners; (2) style, where emotionally driven language, ad hominem attacks, mobilising memes and misleading headlines are used; (3) credibility, where sources rely on false information or conspiracy theories, and do not post corrections; (4) bias, where sources are highly biased, ideologically skewed and publish opinion pieces as news; and (5) counterfeit, where sources mimic established news reporting, including fonts, branding and content strategies (Bradshaw et al., 2019).

In a complex ecosystem of political news and information, junk news provides a useful point of analysis because rather than focusing on individual stories that may contain honest mistakes, it examines the domain as a whole and looks for various elements of deception, which underscores the definition of disinformation. The concept of junk news is also not tied to a particular producer of disinformation, such as foreign operatives, hyper-partisan media, or hate groups, who, despite their diverse goals, deploy the same strategies to generate discoverability. Given that the literature on disinformation is often siloed around one particular actor and rarely crosses platforms or integrates a variety of media sources (Tucker et al., 2018), the junk news framework can be useful for taking a broader look at the ecosystem as a whole and the digital techniques producers use to game search engine algorithms. Throughout this paper, I use the term “junk news” to describe the wide range of politically and economically motivated disinformation being shared about politics.

The logic and politics of search

Search engines play a fundamental role in the modern information environment by sorting, organising, and making visible content on the internet. Before the search engine, anyone who wished to find content online would have to navigate “cluttered portals, garish ads and spam galore” (Pasquale, 2015). This did not matter in the early days of the web, when it remained small and easy to navigate. During this time, web directories were built and maintained by humans who often categorised pages according to their characteristics (Metaxas, 2010). By the mid-1990s it became clear that the human classification system would not be able to scale. The search engine “brought order to chaos by offering a clean and seamless interface to deliver content to users” (Hoffmann, Taylor, & Bradshaw, 2019).

Simplistically speaking, search engines work by crawling the web to gather information about online webpages. Data about the words on a webpage, links, images, videos, or the pages they link to are organised into an index by an algorithm, analogous to an index found at the end of a book. When a user types a query into Google Search, machine learning algorithms apply complex statistical models in order to deliver the most “relevant” and “important” information to a user (Gillespie, 2012). These models are based on a combination of “signals” including the words used in a specific query, the relevance and usability of webpages, the expertise of sources, and other information about context, such as a user’s geographic location and settings (Google, 2019).
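The crawl-index-rank pipeline described above can be illustrated with a toy inverted index. This is a deliberately minimal sketch: the page contents, URLs, and the single term-frequency "relevance" signal are invented for illustration, whereas Google's production systems combine hundreds of signals.

```python
from collections import defaultdict

# Toy corpus standing in for crawled web pages (hypothetical content).
pages = {
    "example.com/a": "election news coverage midterm election results",
    "example.com/b": "election polling data and results",
    "example.com/c": "recipe for apple pie",
}

# Index step: map each word to the pages containing it, with term counts.
index = defaultdict(dict)
for url, text in pages.items():
    for word in text.split():
        index[word][url] = index[word].get(url, 0) + 1

def search(query):
    """Score pages by summed term frequency of the query words (one crude
    relevance signal; real engines weigh many more, plus link structure)."""
    scores = defaultdict(int)
    for word in query.split():
        for url, count in index.get(word, {}).items():
            scores[url] += count
    return sorted(scores, key=scores.get, reverse=True)

results = search("election results")
```

Calling `search("election results")` ranks the page mentioning the query terms most often first, mirroring how an index maps query words to candidate pages before a ranking model orders them.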

Google’s search rankings are also influenced by AdWords, which allow individuals or companies to promote their websites by purchasing “paid placement” for specific keyword searches. Paid placement is conducted through a bidding system, where rankings and the number of times the advertisement is displayed are prioritised by the amount of money spent by the advertiser. For example, a company that sells jeans might purchase AdWords for keywords such as “jeans”, “pants”, or “trousers”, so when an individual queries Google using these terms, a “sponsored post” will be placed at the top of the search results. 2 AdWords also make use of personalisation, which allows advertisers to target more granular audiences based on factors such as age, gender, and location. Thus, a local company selling jeans for women can specify local female audiences — individuals who are more likely to purchase their products.
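The paid-placement logic can be sketched as a toy auction. All advertiser names, bids, and quality scores below are invented; note also that the real AdWords auction ranks ads by a combination of bid and ad quality (a generalised second-price auction), not by bid amount alone.

```python
# Hypothetical advertiser bids (USD) on the keyword "jeans".
bids = {
    "denim-shop.example": 1.20,
    "jeans-direct.example": 0.90,
    "trousers-r-us.example": 0.50,
}

# Illustrative quality scores (0-1): a stand-in for the ad-quality
# component the real auction combines with the bid.
quality = {
    "denim-shop.example": 0.6,
    "jeans-direct.example": 0.9,
    "trousers-r-us.example": 0.7,
}

# Ad rank = bid x quality; the highest rank wins the top sponsored slot.
ranked = sorted(bids, key=lambda ad: bids[ad] * quality[ad], reverse=True)
```

In this sketch the highest bidder does not win the top slot, which is one reason the amount spent only partially determines where an advertisement appears.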

The way in which Google structures, organises, and presents information and advertisements to users is important because these technical and policy decisions embed a wide range of political issues (Granka, 2010; Introna & Nissenbaum, 2000; Vaidhyanathan, 2011). Several public and academic investigations auditing Google’s algorithms have documented various examples of bias in Search or problems with the autocomplete function (Cadwalladr, 2016a; Pasquale, 2015). Biases inherently designed into algorithms have been shown to disproportionately marginalise minority communities, women, and the poor (Noble, 2018).

At the same time, political advertisements have become a contentious political issue. While digital advertising can generate significant benefits for democracy, by democratising political finance and assisting in political mobilisation (Fowler et al., 2019; Baldwin-Philippi, 2017), it can also be used to selectively spread disinformation and messages of demobilisation (Burkell & Regan, 2019; Evangelista & Bruno, 2019; Howard, Ganesh, Liotsiou, Kelly, & Francois, 2018). Indeed, Russian AdWord purchases in the lead-up to the 2016 US election demonstrate how foreign states actors can exploit Google Search to spread propaganda (Mueller, 2019). But the general lack of regulation around political advertising has also raised concerns about domestic actors and the ways in which legitimate politicians campaign in increasingly opaque and unaccountable ways (Chester & Montgomery, 2017; Tufekci, 2014). These concerns are underscored by the rise of the “influence industry” and the commercialisation of political technologies who sell various ‘psychographic profiling’ technologies to craft, target, and tailor messages of persuasion and demobilisation (Chester & Montgomery, 2019; McKelvey, 2019; Bashyakarla, 2019). For example, during the 2016 US election, Cambridge Analytica worked with the Trump campaign to implement “persuasion search advertising”, where AdWords were bought to strategically push pro-Trump and anti-Clinton information to voters (Lewis & Hilder, 2018).

Given growing concerns over the spread of disinformation online, scholars are beginning to study the ways in which Google Search might amplify junk news and disinformation. One study by Metaxa-Kakavouli and Torres-Echeverry examined the top ten results from Google searches about congressional candidates over a 26-week period in the lead-up to the 2016 presidential election. Of the URLs recommended by Google, only 1.5% came from domains that were flagged by PolitiFact as being “fake news” domains (2017). Metaxa-Kakavouli and Torres-Echeverry suggest that the low levels of “fake news” are the result of Google’s “long history” combatting spammers on its platform (2017). Another research paper by Golebiewski and boyd looks at how gaps in search engine results lead to strategic “data voids” that optimisers exploit to amplify their content (2018). Golebiewski and boyd argue that there are many search terms where data is “limited, non-existent or deeply problematic” (2018). Although these searches are rare, if a user types these search terms into a search engine, “it might not give a user what they are looking for because of limited data and/or limited lessons learned through previous searches” (Golebiewski & boyd, 2018).

The existence of biases, disinformation, or gaps in authoritative information on Google Search matters because Google directly impacts what people consume as news and information. Most of the time, people do not look past the top ten results returned by the search engine (Metaxas, 2010). Indeed, eye-tracking experiments have demonstrated that the order in which Google results are presented to users matters more than the actual relevance of the page abstracts (Pan et al., 2007). However, it is important to note that the logic of higher placements does not necessarily translate to search engine advertising listings, where users are less likely to click on advertisements if they are familiar with the brand or product they are searching for (Narayanan & Kalyanam, 2015).

Nevertheless, the significance of the top ten placement has given rise to the SEO industry, whereby optimisers use digital keyword strategies to move webpages higher in Google’s rankings and thereby generate higher traffic flows. There is a long history of SEO dating back to the 1990s when the first search engine algorithms emerged (Metaxas, 2010). Since then, hundreds of SEO pages have published guesses about the different ranking factors these algorithms consider (Dean, 2019). However, the specific signals that inform Google’s search engine algorithms are dynamic and constantly adapting to the information environment. Google makes hundreds of changes to its algorithm every year to adjust the weight and importance of various signals. While most of these changes are minor updates designed to improve the speed and performance of Search, sometimes Google makes more significant changes to its algorithm to elude optimisers trying to game the system.

Google has taken several steps to combat people seeking to manipulate Search for political or economic gain (Taylor, Walsh, & Bradshaw, 2019). This involves several algorithmic changes to demote sources of disinformation as well as changes to their advertising policies to limit the extent to which users can be micro-targeted with political advertisements. In one study, researchers interviewed SEO strategists to audit how Facebook and Google’s algorithmic changes impacted their optimisation strategies (Hoffmann, Taylor, & Bradshaw, 2019). Since the purveyors of disinformation often rely on the same digital marketing strategies used by legitimate political candidates, news organisations, and businesses, the SEO industry can offer unique, but heuristic, insight into the impact of algorithmic changes. Hoffmann, Taylor and Bradshaw (2019) found that despite more than 125 announcements over a three-year period, the algorithmic changes made by the platforms did not significantly alter digital marketing strategies.

This paper hopes to contribute to the growing body of work examining the effect of Search on the spread of disinformation and junk news by empirically analysing the strategies — paid and optimised — employed by junk news domains. By performing an audit of the keywords junk news websites use to generate discoverability, this paper evaluates the effectiveness of Google in combatting the spread of disinformation on Search.

Methodology

Conceptual Framework: The Techno-Commercial Infrastructure of Junk News

The starting place for this inquiry into the SEO infrastructure of junk news domains is grounded conceptually in the field of science and technology studies (STS), which provides a rich literature on how infrastructure design, implementation, and use embeds politics (Winner, 1980). Digital infrastructure — such as physical hardware, cables, virtual protocols, and code — operates invisibly in the background, which can make it difficult to trace the politics embedded in technical coding and design (Star & Ruhleder, 1994). As a result, calls to study internet infrastructure have engendered digital research methods that shed light on the less-visible areas of technology. One growing and relevant body of research has focused on the infrastructure of social media platforms and the algorithms and advertising infrastructure that invisibly operate to amplify or spread junk news to users, or to micro-target political advertisements (Kim et al., 2018; Tambini, Anstead, & Magalhães, 2017). Certainly, the affordances of technology — both real and imagined — mutually shape social media algorithms and their potential for manipulation (Nagy & Neff, 2015; Neff & Nagy, 2016). However, the proprietary nature of platform architecture has made it difficult to operationalise studies in this field. Because junk news domains operate in a digital ecosystem built on search engine optimisation, page ranks, and advertising, there is an opportunity to analyse the infrastructure that supports the discoverability of junk news content, which could provide insights into how producers reach audiences, grow visibility, and generate domain value.

Junk news data set

The first step of my methodology involved identifying a list of junk news domains to analyse. I used the Computational Propaganda Project’s (COMPROP) data set on junk news domains in order to analyse websites that spread disinformation about politics. To develop this list, researchers on the COMPROP project built a typology of junk news based on URLs shared on Twitter and Facebook relating to the 2016 US presidential election, the 2017 US State of the Union Address, and 2018 US midterm elections. 3 A team of five rigorously trained coders labelled the domains contained in tweets and on Facebook pages based on a grounded typology of junk news that has been tested and refined over several elections around the world between 2016 and 2018. 4 A domain was labelled as junk news when it failed on three of the five criteria of the typology (style, bias, credibility, professionalism, and counterfeit, as described in section one). For this analysis, I used the most recent 2018 midterm election junk news list, which comprises the 29 most-shared domains that were labelled as junk news by researchers. This list was selected because all 29 domains were active during the 2016 US presidential election in November 2016 and the 2017 US State of the Union Address, which provides an opportunity to comparatively assess how both the advertising and optimisation strategies, as well as their performance, changed over time.

SpyFu data and API queries

The second step of my methodology involved collecting data about the advertising and optimisation strategies used by junk news websites. I worked with SpyFu, a competitive keyword research tool used by digital marketers to increase website traffic and improve keyword rankings on Google (SpyFu, 2019). SpyFu collects, analyses and tracks various data about the search optimisation strategies used by websites, such as organic ranks, paid keywords bought on Google AdWords, and advertisement trends.

To shed light onto the optimisation strategies used by junk news domains on Google, SpyFu provided me with: (1) a list of historical keywords and keyword combinations used by the top-29 junk news domains that led to the domain appearing in Google Search results; and (2) the position the domain appeared in Google as a result of the keywords. The historical keywords were provided from January 2016 until March 2019. Only keywords that led to the junk news domains appearing in the top-50 positions on Google were included in the data set.

In order to determine the effectiveness of the optimisation and advertising strategies used by junk news domains to either grow their website value and/or successfully appear in the top positions on Google Search, I wrote a simple python script to connect to the SpyFu API service. This python script collected and parsed the following data from SpyFu for each of the top-29 junk news domains in the sample: (1) the number of keywords that show up organically on Google searches; (2) the estimated sum of clicks a domain receives based on factors including organic keywords, the rank of keyword, and the search volume of the keyword; (3) the estimated organic value of a domain based on factors including organic keywords, the rank of keywords, and the search volume of the keyword; (4) the number of paid advertisements a domain purchased through Google AdWords; and (5) the number of paid clicks a domain received from the advertisements it purchased from Google AdWords.
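A minimal sketch of such a collection script is below. The endpoint URL, query parameter, and response field names are assumptions for illustration and do not reflect SpyFu's documented API; the fetching step is injectable so the parsing logic can be exercised without network access.

```python
import json
from urllib.request import urlopen

# NOTE: the endpoint URL, query parameter, and response field names below
# are illustrative assumptions, not SpyFu's documented API.
API_BASE = "https://api.spyfu.example/v1/domain_stats"

# The five metrics collected for each of the top-29 junk news domains.
FIELDS = ["organic_keywords", "est_organic_clicks", "est_organic_value",
          "paid_ads", "paid_clicks"]

def parse_stats(raw, domain):
    """Pull the five metrics used in the analysis out of one API payload."""
    record = json.loads(raw)
    row = {field: record.get(field) for field in FIELDS}
    row["domain"] = domain
    return row

def collect(domains, fetch=None):
    """Fetch and parse stats for each domain; `fetch` is injectable so
    the parsing can be tested without touching the network."""
    if fetch is None:
        fetch = lambda d: urlopen(f"{API_BASE}?domain={d}").read()
    return [parse_stats(fetch(d), d) for d in domains]
```

Missing fields come back as `None`, so partial payloads do not break the aggregation step downstream.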

Data and methodology limitations

There are several data and methodology limitations that must be noted. First, the junk news domains identified by the Computational Propaganda Project represent only a small sample of the wide variety of websites that peddle disinformation about politics. The researchers also do not differentiate between the different actors behind the junk news websites — such as foreign states or hyper-partisan media — nor do they differentiate between the political leaning of the junk news outlet — such as left-or-right-leaning domains. Thus, the outcomes of these findings cannot be described in terms of the strategies of different actors. Further, given that the majority of junk news domains in the top-29 sample lean politically to the right and far right, these findings might not be applicable to the hyper-partisan left and their optimisation strategies. Finally, the junk news domains identified in the sample were shared on social media in the lead-up to important political events in the United States. A further research question could examine the SEO strategies of domains operating in other country contexts.

When it comes to working with the data provided by SpyFu (and other SEO optimisation tools), there are two limitations that should be noted. First, the historical keywords collected by SpyFu are only collected when they appear in the top-50 Google Search results. This is an important limitation to note because news and information producers are constantly adapting keywords based on the content they are creating. Keywords may be modified by the source website dynamically to match news trends. Low performing keywords might be changed or altered in order to make content more visible via Search. Thus, the SpyFu data might not capture all of the keywords used by junk news domains. However, the collection strategy will have captured many of the most popular keywords used by junk news domains to get their content appearing in Google Search. Second, because SpyFu is a company there are proprietary factors that go into measuring a domain’s SEO performance (in particular, the data points collected via the API on the estimated sum of clicks and the estimated organic value). Nevertheless, considering that Google Search is a prominent avenue for news and information discovery, and that few studies have systematically analysed the effect of search engine optimisation strategies on the spread of disinformation, this study provides an interesting starting point for future research questions about the impact SEO can have on the spread and monetisation of disinformation via Search.

Analysis: optimizing disinformation through keywords and advertising

Junk news advertising strategies on Google

Junk news domains rarely advertise on Google. Only two out of the 29 junk news domains (infowars.com and cnsnews.com) purchased Google advertisements (See Figure 1: Advertisements purchased vs. paid clicks). The advertisements purchased by infowars.com were all made prior to the 2016 election in the United States (from the period of May 2015 to March 2016). cnsnews.com made several advertisement purchases over the three-year time period.

Figure 1: Advertisements purchased vs. paid clicks received: infowars.com and cnsnews.com (May 2015-March 2019)

Looking at the total number of paid clicks received, junk news domains generated only a small amount of traffic using paid advertisements. Infowars received, on average, about 2,000 clicks as a result of its paid advertisements. cnsnews.com peaked at approximately 1,800 clicks, but on average generated only about 600 clicks per month over the course of three years. Comparing paid clicks with those generated through SEO keyword optimisation reveals a significant difference: during the same time period, cnsnews.com and infowars.com were generating on average 146,000 and 964,000 organic clicks respectively (see Figure 2: Organic vs. paid clicks (cnsnews.com and infowars.com)). Although it is hard to make generalisations about how junk news websites advertise on Google based on a sample of two, the lack of data suggests that advertising on Google Search might not be as popular as advertising on other social media platforms. Second, the return on investment (i.e., paid clicks generated as a result of Google advertisements) was very low compared to the organic clicks these junk news domains received for free. Factors other than advertising seem to drive the discoverability of junk news on Google Search.

Figure 2: Organic vs. paid clicks (cnsnews.com and infowars.com)

Junk news keyword optimisation strategies

In order to assess the keyword optimisation strategies used by junk news websites, I worked with SpyFu, which provided historical keyword data for the 29 junk news domains, when those keywords made it to the top-50 results in Google between January 2016 and March 2019. In total, there were 88,662 unique keywords in the data set. Given the importance of placement on Google, I looked specifically at keywords that indexed junk news websites on the first — and most authoritative — position. Junk news domains had different aptitudes for generating placement in the first position (See Table 1: Junk news domains and number of keywords found in the first position on Google). Breitbart, DailyCaller and ZeroHedge had the most successful SEO strategies, respectively having 1006, 957 and 807 keywords lead to top placements on Google Search over the 39-month period. In contrast, six domains (committedconservative.com, davidharrisjr.com, reverbpress.news, thedailydigest.org, thefederalist.com, thepoliticalinsider.com) had no keywords reach the first position on Google. The remaining 20 domains each had between 1 and 253 keywords reach the first position on Google Search over the same timeframe.

Table 1: Junk news domains and number of keywords found in the first position on Google

Domain                        Keywords reaching position 1
breitbart.com                 1006
dailycaller.com                957
zerohedge.com                  807
infowars.com                   253
cnsnews.com                    228
dailywire.com                  214
thefederalist.com              200
rawstory.com                   199
lifenews.com                   156
pjmedia.com                    140
americanthinker.com            133
thepoliticalinsider.com        111
thegatewaypundit.com           105
barenakedislam.com              48
michaelsavage.com               15
theblacksphere.net               9
truepundit.com                   8
100percentfedup.com              5
bigleaguepolitics.com            3
libertyheadlines.com             2
ussanews.com                     2
gellerreport.com                 1
truthfeednews.com                1
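The per-domain counts reported in Table 1 can be reproduced with a simple aggregation over the keyword-position records. The rows below are hypothetical stand-ins for the SpyFu data, not actual entries from the data set.

```python
from collections import Counter

# Hypothetical records from the keyword data set:
# (domain, keyword, best position reached on Google Search).
rows = [
    ("breitbart.com", "breitbart", 1),
    ("breitbart.com", "obama illegal alien", 1),
    ("dailycaller.com", "daily caller", 1),
    ("reverbpress.news", "some keyword", 14),
]

# Count, per domain, how many keywords ever reached position 1.
top_counts = Counter(domain for domain, _, pos in rows if pos == 1)
```

Running this over the full 88,662-keyword data set would yield the column of counts shown in Table 1.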

Different keywords also generated different kinds of placement over the 39-month period. Table 2 (see Appendix) provides a sample list of up to ten keywords from each junk news domain in the sample when the keyword reached the first position.

First, many junk news domains appear in the first position on Google Search as a result of “navigational searches”, whereby a user enters a query with the intent of finding a website. A search for a specific brand of junk news could happen naturally for many users, since the Google Search function is built into the address bar in Chrome, and sometimes set as the default search engine for other browsers. In particular, terms like “infowars”, “breitbart”, “cnsnews”, and “rawstory” were navigational keywords users typed into Google Search. The performance of brand searches over time consistently places junk news webpages in the number one position (see Figure 3: Brand-related keywords over time). This suggests that brand recognition plays an important role in driving traffic to junk news domains.

Figure 3: The performance of brand-related keywords over time: top-5 junk news websites (January 2016-March 2019)

There is one outlier in this analysis: keyword searches for “breitbart” dropped to position two in January 2017 and September 2017. This drop could have been a result of mainstream media coverage of Steve Bannon assuming (and eventually leaving) his position as the White House Chief Strategist during those respective months. The fact that navigational searches are one of the main drivers behind generating a top ten placement on Search suggests that junk news websites rely heavily on developing a recognisable brand and a dedicated readership that actively seeks out content from these websites. However, this also demonstrates that a complicated set of factors go into determining what keywords from what websites make the top placement in Google Search, and that coverage of news events from mainstream professional news outlets can alter the discoverability of junk news via Search.

Second, many keywords that made it to the top position in Google Search results are what Golebiewski and boyd (2018) would call terms that filled “data voids”, or gaps in search engine queries where there is limited authoritative information about a particular issue. These keywords tended to focus on conspiratorial information, especially around President Barack Obama (“Obama homosexual” or “stop Barack Obama”), gun rights (“gun control myths”), pro-life narratives (“anti-abortion quotes” or “fetus after abortion”), and xenophobic or racist content (“against Islam” or “Mexicans suck”). Unlike brand-related keywords, problematic search terms did not achieve a consistently high placement on Google Search over the 39-month period. Keywords that ranked number one for more than 30 months include: “vz58 vs. ak47”, “feminizing uranium”, “successful people with down syndrome”, “google ddrive”, and “westboro[sic] Baptist church tires slashed”. This suggests that, for the most part, data voids are either being filled by more authoritative sources, or Google Search has been able to demote websites attempting to generate pseudo-organic engagement via SEO.

The performance of junk news domains on Google Search

After analysing which keywords put junk news websites in the number one position, the second half of my analysis looks at larger trends in SEO strategies over time. What is the relationship between organic clicks and the value of a junk news website? How has the effectiveness of SEO keywords changed over the past 48 months? And have changes made by Google to combat the spread of junk news on Search had an impact on its discoverability?

Junk news, organic clicks, and the value of the domain

There is a close relationship between the number of clicks a domain receives and the estimated value of that domain. Comparing figures 4 and 5 shows that the more clicks a website receives, the higher its estimated value. Often, a domain is considered more valuable when it generates large amounts of traffic, because advertisers see an opportunity to reach more people. Thus, the higher the value of a domain, the more likely it is to generate revenue for the operator. The median estimated value of the top-29 most popular junk news domains was $5,160 USD during the month of the 2016 presidential election, $1,666.65 USD during the 2018 State of the Union, and $3,906.90 USD during the 2018 midterm elections. Infowars.com and breitbart.com were the two highest-performing junk news domains in terms of clicks and domain value. While breitbart.com maintained a more stable readership, especially around the 2016 US presidential election and the 2018 US State of the Union Address, its estimated organic click rate has steadily decreased since early 2018. In contrast, infowars.com has a more volatile readership. The spikes in clicks to infowars.com could be explained by media coverage of the website, including the defamation case filed in April 2018 against Alex Jones, who claimed the shooting at Sandy Hook Elementary School was “completely fake” and a “giant hoax”. Since then, several internet companies — including Apple, Twitter, Facebook, Spotify, and YouTube — have banned Infowars from their platforms, and the domain has not been able to regain its clicks or value since. This demonstrates the powerful role platforms play in not only making content visible to users, but also controlling who can grow their website value — and ultimately generate revenue — from the content they produce and share online.

Figure 4: Estimated organic value for the top 29 junk news domains (May 2015 – March 2019)
Figure 5: Estimated organic clicks for the top 29 junk news domains (May 2015-April 2019)

Junk news domains, search discoverability and Google’s response to disinformation

Figure 6 shows the estimated organic results of the top 29 junk news domains over time. The estimated organic results measure the number of keywords for which a domain appears organically in Google searches. Since August 2017, there has been a sharp decline in the number of keywords for which these domains appear in Google. The four top-performing junk news websites (infowars.com, zerohedge.com, dailycaller.com, and breitbart.com) all appeared less frequently in top positions on Google Search based on the keywords they were optimising for. This finding suggests that the changes Google made to its search algorithm did indeed have an impact on the discoverability of junk news domains after August 2017. In comparison, professional news sources (washingtonpost.com, nytimes.com, foxnews.com, nbcnews.com, bloomberg.com, bbc.co.uk, wsj.com, and cnn.com) did not see substantial drops in their search visibility during this timeframe (see Figure 7). In fact, since August 2017 there has been a gradual increase in the organic results of mainstream news media.

Figure 6: Estimated organic results for the top 29 junk news domains (May 2015- April 2019)
Figure 7: Estimated organic results for mainstream media websites in the United States (May 2015-April 2019)
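The organic-results measure behind these comparisons amounts to counting, per month and per domain, the keywords that rank well in Google. The sketch below illustrates that counting step; the record shape, field names, and cut-off are assumptions for illustration rather than SpyFu's actual export format.

```python
from collections import defaultdict

# Hypothetical monthly rank records: (month, domain, keyword, position).
records = [
    ("2017-07", "infowars.com", "info wars", 1),
    ("2017-07", "infowars.com", "war info", 3),
    ("2017-07", "zerohedge.com", "zero hedge", 1),
    ("2017-09", "infowars.com", "info wars", 1),
    ("2017-09", "zerohedge.com", "zero hedge", 12),
]

TOP_N = 10  # count a keyword only if it ranks on the first results page

# Count, for each (month, domain), the keywords ranking in the top N.
organic_results = defaultdict(int)
for month, domain, keyword, position in records:
    if position <= TOP_N:
        organic_results[(month, domain)] += 1

for (month, domain), n in sorted(organic_results.items()):
    print(month, domain, n)
```

Plotting these per-month counts per domain produces trend lines of the kind shown in figures 6 and 7, making a post-August-2017 drop visible as a falling keyword count.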

After almost a year, the top-performing junk news websites have regained some of their organic results, but the levels are not nearly as high as they were leading up to and during the 2016 presidential election. This demonstrates the power of Google’s algorithmic changes in limiting the discoverability of junk news on Search. But it also shows how junk news producers learn to adapt their strategies in order to extend the visibility of their content. To be effective at limiting the visibility of bad information via search, Google must continue to monitor the keywords and optimisation strategies these domains deploy — especially in the lead-up to elections — when more people will naturally be searching for news and information about politics.

Conclusion

The spread of junk news on the internet and its impact on democracy has become a growing field of academic inquiry. This paper has examined a small subset of this phenomenon: the role of Google Search in the discoverability and monetisation of junk news domains. By looking at the techno-commercial infrastructure that junk news producers use to optimise their websites for paid and pseudo-organic clicks, I found:

  1. Junk news domains do not rely on Google advertisements to grow their audiences and instead focus their efforts on optimisation and keyword strategies;
  2. Navigational searches drive the most traffic to junk news websites, and data voids are used to grow the discoverability of junk news content to mostly small, but varying, degrees;
  3. Many junk news producers place advertisements on their websites and grow their value particularly around important political events; and
  4. Over time, the SEO strategies used by junk news domains have decreased in their ability to generate top placements in Google Search.

For millions of people around the world, the information Google Search recommends directly shapes how ideas and opinions about politics are formulated. The powerful role of Google as an information gatekeeper has meant that bad actors have tried to subvert these technical systems for political or economic gain. For quite some time, Google’s algorithms have come under attack by spammers and other malign actors who wish to spread disinformation, conspiracy theories, spam, and hate speech to unsuspecting users. The rise of “computational propaganda” and the variety of bad actors exploiting technology to influence political outcomes has also led to the manipulation of Search. Google’s response to the optimisation strategies used by junk news domains has had a positive effect on limiting the discoverability of these domains over time. However, the findings of this paper also show an upward trend, as junk news producers find new ways to optimise their content for higher search rankings. This game of cat and mouse is one that will continue for the foreseeable future.

While it is hard to reduce the visibility of junk news domains when individuals actively search for them, more can be done to limit the ways in which bad actors might try to optimise content to generate pseudo-organic engagement, especially around disinformation. Google can certainly do more to tweak its algorithms in order to demote known disinformation sources, as well as identify and limit the discoverability of content seeking to exploit data voids. However, there is no straightforward technical patch that Google can implement to stop various actors from trying to game their systems. By co-opting the technical infrastructure and policies that enable search, the producers of junk news are able to spread disinformation — albeit to small audiences who might use obscure search terms to learn about a particular topic.

There has also been growing pressure on regulators to force social media platforms to take greater action to limit the spread of disinformation online. But the findings of this paper hold two important lessons for policymakers. First, the disinformation problem — through both optimisation and advertising — on Google Search is not as dramatic as it is sometimes portrayed. Most of the traffic to junk news websites is generated by users performing navigational searches to find specific, well-known brands. Only a limited number of placements — as well as clicks — to junk news domains come from pseudo-organic engagement generated by data voids and other problematic keyword searches. Thus, requiring Google to take a heavy-handed approach to content moderation could do more harm than good, and might not reflect the severity of the problem. Second, the reasons disinformation spreads on Google reflect deeper systemic problems within democracies: growing levels of polarisation and distrust in the mainstream media are pushing citizens to fringe and highly partisan sources of news and information. Any solution to the spread of disinformation on Google Search will require thinking about media and digital literacy, as well as programmes to strengthen, support, and sustain professional journalism.

References

Allcott, H., & Gentzkow, M. (2017). Social Media and Fake News in the 2016 Election. Journal of Economic Perspectives, 31(2), 211–236. https://doi.org/10.1257/jep.31.2.211

Barrett, B., & Kreiss, D. (2019). Platform transience: Changes in Facebook’s policies, procedures and affordances in global electoral politics. Internet Policy Review, 8(4). https://doi.org/10.14763/2019.4.1446

Bradshaw, S., Howard, P., Kollanyi, B., & Neudert, L.-M. (2019). Sourcing and Automation of Political News and Information over Social Media in the United States, 2016-2018. Political Communication. https://doi.org/10.1080/10584609.2019.1663322

Bradshaw, S., & Howard, P. N. (2018). Why does Junk News Spread So Quickly Across Social Media? Algorithms, Advertising and Exposure in Public Life [Working Paper]. Miami: Knight Foundation. Retrieved from https://kf-site-production.s3.amazonaws.com/media_elements/files/000/000/142/original/Topos_KF_White-Paper_Howard_V1_ado.pdf

Bradshaw, S., Neudert, L.-M., & Howard, P. (2018). Government Responses to the Malicious Use of Social Media. Riga: NATO Strategic Communications Centre of Excellence.

Burkell, J., & Regan, P. (2019). Voting Public: Leveraging Personal Information to Construct Voter Preference. In N. Witzleb, M. Paterson, & J. Richardson (Eds.), Big Data, Privacy and the Political Process. London: Routledge.

Cadwalladr, C. (2016a, December 4). Google, democracy and the truth about internet search. The Observer. Retrieved from https://www.theguardian.com/technology/2016/dec/04/google-democracy-truth-internet-search-facebook

Cadwalladr, C. (2016b, December 11). Google is not ‘just’ a platform. It frames, shapes and distorts how we see the world. The Guardian. Retrieved from https://www.theguardian.com/commentisfree/2016/dec/11/google-frames-shapes-and-distorts-how-we-see-world

Chester, J. & Montgomery, K. (2019). The digital commercialisation of US politics—2020 and beyond. Internet Policy Review, 8(4). https://doi.org/10.14763/2019.4.1443

Dean, B. (2019). Google’s 200 Ranking Factors: The Complete List (2019). Retrieved April 18, 2019, from Backlinko website: https://backlinko.com/google-ranking-factors

Desiguad, C., Howard, P. N., Kollanyi, B., & Bradshaw, S. (2017). Junk News and Bots during the French Presidential Election: What are French Voters Sharing Over Twitter In Round Two? [Data Memo No. 2017.4]. Oxford: Project on Computational Propaganda, Oxford University. Retrieved May 19, 2017, from http://comprop.oii.ox.ac.uk/wp-content/uploads/sites/89/2017/05/What-Are-French-Voters-Sharing-Over-Twitter-Between-the-Two-Rounds-v7.pdf

European Commission. (2018). A multi-dimensional approach to disinformation: report of the independent high-level group on fake news and online disinformation. Luxembourg: European Commission.

Evangelista, R., & Bruno, F. (2019). WhatsApp and political instability in Brazil: Targeted messages and political radicalization. Internet Policy Review, 8(4). https://doi.org/10.14763/2019.4.1435

Farkas, J., & Schou, J. (2018). Fake News as a Floating Signifier: Hegemony, Antagonism and the Politics of Falsehood. Journal of the European Institute for Communication and Culture, 25(3), 298–314. https://doi.org/10.1080/13183222.2018.1463047

Gillespie, T. (2012). The Relevance of Algorithms. In T. Gillespie, P. J. Boczkowski, & K. Foot (Eds.), Media Technologies: Essays on Communication, Materiality and Society (pp. 167–193). Cambridge, MA: The MIT Press. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.692.3942&rep=rep1&type=pdf

Golebiewski, M., & Boyd, D. (2018). Data voids: where missing data can be easily exploited. Retrieved from Data & Society website: https://datasociety.net/wp-content/uploads/2018/05/Data_Society_Data_Voids_Final_3.pdf

Google. (2019). How Google Search works: Search algorithms. Retrieved April 17, 2019, from https://www.google.com/intl/en/search/howsearchworks/algorithms/

Granka, L. A. (2010). The Politics of Search: A Decade Retrospective. The Information Society, 26(5), 364–374. https://doi.org/10.1080/01972243.2010.511560

Guo, L., & Vargo, C. (2018). “Fake News” and Emerging Online Media Ecosystem: An Integrated Intermedia Agenda-Setting Analysis of the 2016 U.S. Presidential Election. Communication Research. https://doi.org/10.1177/0093650218777177

Hedman, F., Sivnert, F., Kollanyi, B., Narayanan, V., Neudert, L. M., & Howard, P. N. (2018, September 6). News and Political Information Consumption in Sweden: Mapping the 2018 Swedish General Election on Twitter [Data Memo No. 2018.3]. Oxford: Project on Computational Propaganda, Oxford University. Retrieved from https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2018/09/Hedman-et-al-2018.pdf

Hoffmann, S., Taylor, E., & Bradshaw, S. (2019, October). The Market of Disinformation. [Report]. Oxford: Oxford Information Labs; Oxford Technology & Elections Commission, University of Oxford. Retrieved from https://oxtec.oii.ox.ac.uk/wp-content/uploads/sites/115/2019/10/OxTEC-The-Market-of-Disinformation.pdf

Howard, P., Ganesh, B., Liotsiou, D., Kelly, J., & Francois, C. (2018). The IRA and Political Polarization in the United States, 2012-2018 [Working Paper No. 2018.2]. Oxford: Project on Computational Propaganda, Oxford University. Retrieved from https://comprop.oii.ox.ac.uk/research/ira-political-polarization/

Howard, P. N., Bolsover, G., Kollanyi, B., Bradshaw, S., & Neudert, L.-M. (2017). Junk News and Bots during the U.S. Election: What Were Michigan Voters Sharing Over Twitter? [Data Memo No. 2017.1]. Oxford: Project on Computational Propaganda, Oxford University. Retrieved from http://comprop.oii.ox.ac.uk/2017/03/26/junk-news-and-bots-during-the-u-s-election-what-were-michigan-voters-sharing-over-twitter/

Howard, P. N., Kollanyi, B., Bradshaw, S., & Neudert, L.-M. (2017). Social Media, News and Political Information during the US Election: Was Polarizing Content Concentrated in Swing States? [Data Memo No. 2017.8]. Oxford: Project on Computational Propaganda, Oxford University. Retrieved from http://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2017/09/Polarizing-Content-and-Swing-States.pdf

Introna, L., & Nissenbaum, H. (2000). Shaping the Web: Why the Politics of Search Engines Matters. The Information Society, 16(3), 169–185. https://doi.org/10.1080/01972240050133634

Kaminska, M., Galacher, J. D., Kollanyi, B., Yasseri, T., & Howard, P. N. (2017). Social Media and News Sources during the 2017 UK General Election. [Data Memo No. 2017.6]. Oxford: Project on Computational Propaganda, Oxford University. Retrieved from https://www.oii.ox.ac.uk/blog/social-media-and-news-sources-during-the-2017-uk-general-election/

Kim, Y. M., Hsu, J., Neiman, D., Kou, C., Bankston, L., Kim, S. Y., … Raskutti, G. (2018). The Stealth Media? Groups and Targets behind Divisive Issue Campaigns on Facebook. Political Communication, 35(4), 515–541. https://doi.org/10.1080/10584609.2018.1476425

Lakoff, G. (2018, January 2). Trump uses social media as a weapon to control the news cycle. Retrieved from https://twitter.com/GeorgeLakoff/status/948424436058791937

Lazer, D. M. J., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., Zittrain, J. L. (2018). The science of fake news. Science, 359(6380), 1094–1096. https://doi.org/10.1126/science.aao2998

Lewis, P. & Hilder, P. (2018, March 23). Leaked: Cambridge Analytica’s Blueprint for Trump Victory. The Guardian. Retrieved from: https://www.theguardian.com/uk-news/2018/mar/23/leaked-cambridge-analyticas-blueprint-for-trump-victory

Machado, C., Kira, B., Hirsch, G., Marchal, N., Kollanyi, B., Howard, Philip N., … Barash, V. (2018). News and Political Information Consumption in Brazil: Mapping the First Round of the 2018 Brazilian Presidential Election on Twitter [Data Memo No. 2018.4]. Oxford: Project on Computational Propaganda, Oxford University. Retrieved from http://blogs.oii.ox.ac.uk/comprop/wp-content/uploads/sites/93/2018/10/machado_et_al.pdf

McKelvey, F. (2019). Cranks, clickbaits and cons: On the acceptable use of political engagement platforms. Internet Policy Review, 8(4). https://doi.org/10.14763/2019.4.1439

Metaxa-Kakavouli, D., & Torres-Echeverry, N. (2017). Google’s Role in Spreading Fake News and Misinformation. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3062984

Metaxas, P. T. (2010). Web Spam, Social Propaganda and the Evolution of Search Engine Rankings. In J. Cordeiro & J. Filipe (Eds.), Web Information Systems and Technologies (Vol. 45, pp. 170–182). https://doi.org/10.1007/978-3-642-12436-5_13

Nagy, P., & Neff, G. (2015). Imagined Affordance: Reconstructing a Keyword for Communication Theory. Social Media + Society, 1(2). https://doi.org/10.1177/2056305115603385

Narayanan S., & Kalyanam K. (2015). Position Effects in Search Advertising and their Moderators: A Regression Discontinuity Approach. Marketing Science, 34(3), 388–407. https://doi.org/10.1287/mksc.2014.0893

Neff, G., & Nagy, P. (2016). Talking to Bots: Symbiotic Agency and the Case of Tay. International Journal of Communication,10, 4915–4931. Retrieved from https://ijoc.org/index.php/ijoc/article/view/6277

Neudert, L.-M., Howard, P., & Kollanyi, B. (2017). Junk News and Bots during the German Federal Presidency Election: What Were German Voters Sharing Over Twitter? [Data Memo 2 No. 2017.2]. Oxford: Project on Computational Propaganda, Oxford University. Retrieved from http://comprop.oii.ox.ac.uk/wp-content/uploads/sites/89/2017/03/What-Were-German-Voters-Sharing-Over-Twitter-v6-1.pdf

Nielsen, R. K., & Graves, L. (2017). “News you don’t believe”: Audience perspectives on fake news. Oxford: Reuters Institute for the Study of Journalism, University of Oxford. Retrieved from https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2017-10/Nielsen&Graves_factsheet_1710v3_FINAL_download.pdf

Noble, S. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press.

Pan, B., Hembrooke, H., Joachims, T., Lorigo, L., Gay, G., & Granka, L. (2007). In Google We Trust: Users’ Decisions on Rank, Position, and Relevance. Journal of Computer-Mediated Communication, 12(3), 801–823. https://doi.org/10.1111/j.1083-6101.2007.00351.x

Pasquale, F. (2015). The Black Box Society. Cambridge: Harvard University Press.

Persily, N. (2017). The 2016 U.S. Election: Can Democracy Survive the Internet? Journal of Democracy, 28(2), 63–76. https://doi.org/10.1353/jod.2017.0025

Schroeder, R. (2014). Does Google shape what we know? Prometheus, 32(2), 145–160. https://doi.org/10.1080/08109028.2014.984469

Silverman, C. (2016, November 16). This Analysis Shows How Viral Fake Election News Stories Outperformed Real News On Facebook. Buzzfeed. Retrieved July 25, 2017 from https://www.buzzfeed.com/craigsilverman/viral-fake-election-news-outperformed-real-news-on-facebook

SpyFu. (2019). SpyFu - Competitor Keyword Research Tools for AdWords PPC & SEO. Retrieved April 19, 2019, from https://www.spyfu.com/

Star, S. L., & Ruhleder, K. (1994). Steps Towards an Ecology of Infrastructure: Complex Problems in Design and Access for Large-scale Collaborative Systems. Proceedings of the 1994 ACM Conference on Computer Supported Cooperative Work, 253–264. New York: ACM.

Tambini, D., Anstead, N., & Magalhães, J. C. (2017, June 6). Labour’s advertising campaign on Facebook (or “Don’t Mention the War”) [Blog Post]. Retrieved April 11, 2019, from Media Policy Blog website: http://blogs.lse.ac.uk/mediapolicyproject/

Tandoc, E. C., Lim, Z. W., & Ling, R. (2018). Defining “Fake News”: A typology of scholarly definitions. Digital Journalism, 6(2). https://doi.org/10.1080/21670811.2017.1360143

Tucker, J. A., Guess, A., Barberá, P., Vaccari, C., Siegel, A., Sanovich, S., Stukal, D., & Nyhan, B. (2018, March). Social Media, Political Polarization, and Political Disinformation: A Review of the Scientific Literature [Report]. Menlo Park: William and Flora Hewlett Foundation. Retrieved from https://eprints.lse.ac.uk/87402/1/Social-Media-Political-Polarization-and-Political-Disinformation-Literature-Review.pdf

Vaidhyanathan, S. (2011). The Googlization of Everything (And Why We Should Worry). Berkeley: University of California Press.

Vargo, C. J., Guo, L., & Amazeen, M. A. (2018). The agenda-setting power of fake news: A big data analysis of the online media landscape from 2014 to 2016. New Media & Society, 20(5), 2028–2049. https://doi.org/10.1177/1461444817712086

Bashyakarla, V. (2019). Towards a holistic perspective on personal data and the data-driven election paradigm. Internet Policy Review, 8(4). Retrieved from https://policyreview.info/articles/news/towards-holistic-perspective-personal-data-and-data-driven-election-paradigm/1445

Wardle, C. (2017, February 16). Fake news. It’s complicated. First Draft News. Retrieved July 20, 2017, from https://firstdraftnews.com:443/fake-news-complicated/

Wardle, C., & Derakhshan, H. (2017). Information Disorder: Toward an interdisciplinary framework for research and policy making [Report No. DGI(2017)09]. Strasbourg: Council of Europe. Retrieved from https://rm.coe.int/information-disorder-report-november-2017/1680764666

Winner, L. (1980). Do Artifacts Have Politics? Daedalus, 109(1), 121–136. Retrieved from http://www.jstor.org/stable/20024652

Appendix 1

Junk news seed list (Computational Propaganda Project’s top-29 junk news domains from the 2018 US midterm elections).

www.americanthinker.com, www.barenakedislam.com, www.breitbart.com, www.cnsnews.com, www.dailywire.com, www.infowars.com, www.libertyheadlines.com, www.lifenews.com, www.rawstory.com, www.thegatewaypundit.com, www.truepundit.com, www.zerohedge.com, 100percentfedup.com, bigleaguepolitics.com, committedconservative.com, dailycaller.com, davidharrisjr.com, gellerreport.com, michaelsavage.com, newrightnetwork.com, pjmedia.com, reverbpress.news, theblacksphere.net, thedailydigest.org, thefederalist.com, ussanews.com, theoldschoolpatriot.com, thepoliticalinsider.com, truthfeednews.com.

Appendix 2

Table 2: A sample list of up to ten keywords from each junk news domain in the sample when the keyword reached the first position.

100percentfedup.com: gruesome videos (6), snopes exposed (5), gruesome video (4), teendreamers (2), bush cheney inauguration (2)

americanthinker.com: medienkritic (23), problem with taxes (22), janet levy (19), article on environmental protection (18), maya angelou criticism (18), supply and demand articles 2011 (17), ezekiel emanuel complete lives system (16), articles on suicide (12), American Thinker Coupons (11), truth about obama (10)

barenakedislam.com: berg beheading video (11), against islam (11), beheadings (10), iraquis beheaded (10), muslim headgear (8), torture clips (7), los angeles islam pictures (7), beheaded clips (7), berg video (7), hostages beheaded (6)

bigleaguepolitics.com: habermans (1), fbi whistleblower (1), ron paul supporters (1)

breitbart.com: big journalism (39), big government breitbart (39), breitbart blog (39), www.breitbart.com (39), big hollywood (39), breitbart hollywood (39), breitbart.com (39), big hollywood blog (39), big government blog (39), breitbart big hollywood (39)

cnsnews.com: cns news (39), cnsnews (39), conservative news service (39), christian news service (21), cns (20), major corporations (20), billy graham daughter (18), taxing the internet (17), pashtun sexuality (15), record tax (15)

dailycaller.com: the daily caller (37), vz 58 vs ak 47 (33), condition black (28), patriot act changes (26), 12 hour school (25), common core stories (25), courtroom transcript (23), why marijuana shouldnt be legal (22), why we shouldnt legalize weed (22), why shouldnt marijuana be legalized (22)

dailywire.com: states bankrupt (22), ms 13 portland oregon (15), the gadsen flag (12), f word on tv (12), against gun control facts (10), end of america 90 (9), racist blacks (8), associates clinton (8), diebold voting machine (8), diebold machines (8)

gellerreport.com: geller report (1)

infowars.com: www infowars (39), infowars com (39), info wars (39), infowars (39), www infowars com (39), al-qaeda 100 pentagon run (38), info war today (35), war info (34), infowars moneybomb (34), feminizing uranium (33)

libertyheadlines.com: accusers dod (2), liberty security guard bucks country (1)

lifenews.com: successful people with down syndrome (39), life news (35), lifenews.com (35), fetus after abortion (26), anti abortion quotes (21), pro life court cases (17), rescuing hug (16), process of aborting a baby (15), different ways to abort a baby (14), adoption waiting list statistics (14)

michaelsavage.com: www michaelsavage com (19), michaelsavage com (19), michaelsavage (18), michael savage com (18), michaelsavage radio (17), michael savage (17), savage nation (15), michael savage nation (14), michael savage savage nation (13), the savage nation (12)

pjmedia.com: belmont club (39), belmont club blog (39), pajamas media (39), dr helen (38), instapundit blog (38), instapundit (33), pj media (33), instapundit. (32), google ddrive (28), instapundits (27)

rawstory.com: the raw story (39), raw story (39), rawstory (39), rawstory.com (39), westboro baptist church tires slashed (35), the raw (25), mormons in porn (22), norm colemans teeth (19), xe services sold (18), duggers (17)

theblacksphere.net: black sphere (28), dwayne johnson gay (10), george soros private security (1), bombshell barack (1), madame secretary (1), head in vagina (1), mexicans suck (1), obama homosexual (1), comments this (1)

thefederalist.com: the federalist (39), federalist (30), gun control myths (26), considering homeschooling (23), why wont it work technology (22), debate iraq war (21), lesbian children (20), why homeschooling (19), home economics course (18), iraq war debate (17)

thegatewaypundit.com: thegatewaypundit.com (39), civilian national security force (10), safe school czar (8), hillary clinton weight gain 2011 (8), RSS Pundit (7), hillary clinton weight gain (7), all perhaps hillary (4), hillary clinton gained weight (4), london serendip i tea camp (4), whoa it (4)

thepoliticalinsider.com: obama blames (19), michael moore sucks (14), marco rubio gay (11), weapons mass destruction iraq (10), weapons of mass destruction found (10), wmd iraq (10), obama s plan (9), chuck norris gay (9), how old is bill clinton (8), stop barack obama (7)

truepundit.com: john kerrys daughter (8), john kerrys daughters (5), sex email (2), poverty warrior (2), john kerry daughter (1), RSS Pundit (1), whistle new (1), pay to who (1)

truthfeednews.com: nfl.comm (5)

ussanews.com: imigration expert (2), meabolic syndrome (1)

zerohedge.com: zero hedge (33), unempolyment california (24), hayman capital letter (24), dennis gartman performance (24), the real barack obama (23), meredith whitney blog (22), weaight watchers (22), 0hedge (22), doug kass predictions (19), usa hyperinflation (17)

Footnotes

1. Organic engagement is used to describe authentic user engagement, where an individual might click a website or link without being prompted. This is different from “transactional engagement”, where a user engages with content through prompting via paid advertising. In contrast, I use the term “pseudo-organic engagement” to capture the idea that SEO practitioners are generating clicks through the manipulation of keywords that move websites closer to the top of search engine rankings. An important aspect of pseudo-organic engagement is that these results are indistinguishable from those that have “earnt” their search ranking, meaning users may be more likely to treat the source as authoritative despite the fact that its ranking has been manipulated.

2. It is important to note that AdWord purchases can also be displayed on affiliate websites. These “display ads” appear on websites and generate revenue for the website operator.

3. For the US presidential election, 19.53 million tweets were collected between 1 November 2016, and 9 November 2016; for the State of the Union Address 2.26 million tweets were collected between 24 January 2018, and 30 January 2018; and for the 2018 US midterm elections 2.5 million tweets were collected between 21-30 September 2018 and 6,986 Facebook groups between 29 September 2018 and 29 October 2018. For more information see Bradshaw et al., 2019.

4. Elections include: 2016 United States presidential election, 2017 French presidential election, 2017 German federal election, 2017 Mexican presidential election, 2018 Brazilian presidential election, and the 2018 Swedish general election.

Towards a holistic perspective on personal data and the data-driven election paradigm


This commentary is part of Data-driven elections, a special issue of Internet Policy Review guest-edited by Colin J. Bennett and David Lyon.

Politics is an art and not a science, and what is required for its mastery is not the rationality of the engineer but the wisdom and the moral strength of the statesman. - Hans Morgenthau, Scientific Man versus Power Politics

Voters, industry representatives, and lawmakers – and not infrequently, journalists and academics as well – have asked one question more than any other when presented with evidence of how personal data is changing modern-day politicking: “Does it work?” As my colleagues and I have detailed in our report, Personal Data: Political Persuasion, the convergence of politics and commercial data brokering has transformed personal data into a political asset, a means for political intelligence, and an instrument for political influence. The practices we document are varied and global: an official campaign app requesting camera and microphone permissions in India, experimentation to select slogans designed to trigger emotional responses from Brexit voters, a robocalling-driven voter suppression campaign in Canada, attack ads used to control voters’ first impressions on search engines in Kenya, and many more.

Asking “Does it work?” is understandable for many reasons, including to address any real or perceived damage to the integrity of an election, to observe shifts in attitudes or voting behaviour, or perhaps to ascertain and harness the democratic benefits of the technology in question. However, discourse fixated on the efficacy of data-intensive tools is fraught with abstraction and reflects a shortsighted appreciation for the full political implications of data-driven elections.

“Does it work?”

The question “Does it work?” is very difficult to answer with any degree of confidence regardless of the technology in question: personality profiling of voters to influence votes, natural language processing applied to the Twitter pipeline to glean information about voters’ political leanings, political ads delivered in geofences, or a myriad of others.

First, the question is too general, glossing over important details. The technologies themselves are a heterogeneous mix, and their real-world implementations are manifold. Furthermore, questions of efficacy are often divorced from context, and a technology’s usefulness to a campaign very likely depends on the sociopolitical context in which it lives. Finally, the question of effectiveness continues to be studied extensively. Predictably, the conclusions of peer-reviewed research vary.

As one example, the effectiveness of implicit social pressure in direct mail in the United States evidently remains inconclusive. The motivation for this research is the observation that voting is a social norm responsive to others’ impressions (Blais, 2000; Gerber & Rogers, 2009). However, some evidence suggests that explicit social pressure to mobilise voters (e.g., by disclosing their vote histories) may seem invasive and can backfire (Matland & Murray, 2013). In an attempt to preserve the benefits of social pressure while mitigating its drawbacks, researchers have explored whether implicit social pressure in direct mail (i.e., mailers with an image of eyes, reminding recipients of their social responsibility) boosts turnout on election day. Of their evaluation of implicit social pressure, which had apparently been regarded as effective, political scientists Richard Matland and Gregg Murray concluded that “The effects are substantively and statistically weak at best and inconsistent with previous findings” (Matland & Murray, 2016). In response to similar, repeated findings from Matland and Murray, Costas Panagopoulos wrote that their work in fact “supports the notion that eyespots likely stimulate voting, especially when taken together with previous findings” (Panagopoulos, 2015). Panagopoulos soon thereafter authored a paper arguing that the true impact of implicit social pressure actually varies with political identity, claiming that the effect is pronounced for Republicans but not for Democrats or Independents, while Matland maintained that the effect is “fairly weak” (Panagopoulos & van der Linden, 2016; Matland, 2016).

Similarly, studies on the effects of door-to-door canvassing lack consensus (Bhatti et al., 2019). Donald Green, Mary McGrath, and Peter Aronow published a review of seventy-one canvassing experiments and found their average impact to be robust and credible (Green, McGrath, & Aronow, 2013). A number of other experiments have demonstrated that canvassing can boost voter turnout outside the American-heavy literature: among students in Beijing in 2003, with British voters in 2005, and for women in rural Pakistan in 2008 (Guan & Green, 2006; John & Brannan, 2008; Giné & Mansuri, 2018). Studies from Europe, however, call into question the generalisability of these findings. Two studies on campaigns in 2010 and 2012 in France both produced ambiguous results, as the true effect of canvassing was not credibly positive (Pons, 2018; Pons & Liegey, 2019). Experiments conducted during the 2013 Danish municipal elections observed no definitive effect of canvassing, while Enrico Cantoni and Vincent Pons found that visits by campaign volunteers in Italy helped increase turnout, but those by the candidates themselves did not (Bhatti et al., 2019; Cantoni & Pons, 2017). In some cases, the effect of door-to-door canvassing was neither positive nor ambiguous but distinctly counterproductive. Florian Foos and Peter John observed that voters contacted by canvassers and given leaflets for the 2014 British European Parliament elections were 3.7 percentage points less likely to vote than those in the control group (Foos & John, 2018). Taken together, the effects of canvassing still seem positive in Europe, but they are less pronounced than in the US. These findings have led some scholars to note that “practitioners should be cautious about assuming that lessons from a US-dominated field can be transferred to their own countries’ contexts” (Bhatti et al., 2019).
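For readers unfamiliar with how such turnout effects are reported, the sketch below shows how an effect in percentage points and its confidence interval are computed from a simple two-group field experiment. The turnout figures are invented for illustration and are not taken from any of the cited studies.

```python
from math import sqrt

# Illustrative (invented) turnout figures: treated households were
# canvassed, control households were not.
treated_voters, treated_n = 1_930, 5_000   # 38.6% turnout
control_voters, control_n = 2_115, 5_000   # 42.3% turnout

p1 = treated_voters / treated_n
p0 = control_voters / control_n
effect = (p1 - p0) * 100  # difference in percentage points

# Standard error of the difference between two independent proportions,
# used for an approximate 95% confidence interval.
se = sqrt(p1 * (1 - p1) / treated_n + p0 * (1 - p0) / control_n)
ci_low = (p1 - p0 - 1.96 * se) * 100
ci_high = (p1 - p0 + 1.96 * se) * 100

print(f"Estimated effect: {effect:.1f} pp (95% CI {ci_low:.1f} to {ci_high:.1f})")
```

With these invented numbers the estimated effect is -3.7 percentage points, the same magnitude as the demobilising effect Foos and John report; the confidence interval shows why single experiments of this size leave room for disagreement.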

A cursory glance at a selection of literature related to these two cases alone – implicit social pressure and canvassing – illustrates how tricky answering “Does it work?” is. Although many of the technologies in use today are personal data-supercharged analogues of these antecedents (e.g., canvassing apps with customised scripts and talking points based on data about each household’s occupants instead of generic, door-to-door knocking), I have no reason to suspect that analyses of data-powered technologies would be any different. The short answer to “Does it work?” is that it depends. It depends on baseline voter turnout rates, print vs. digital media, online vs. offline vs. both combined, targeting young people vs. older people, reaching members of a minority group vs. a majority group, partisan vs. nonpartisan messages, cultural differences, the importance of the election, local history, and more. Indeed, factors like the electoral setup may alter the effectiveness of a technology altogether. A tool for political persuasion might work in a first-past-the-post contest in the United States but not in a European system of proportional representation in which winner-take-all stakes may be tempered. This is not to suggest that asking “Does it work?” is a futile endeavour – indeed there are potential democratic benefits to doing so – but rather that it is both limited in scope and rather abstract given the multitude of factors and conditions at play in practice.

Political calculus and algorithmic contagion

With this in mind, I submit that a more useful approach to appreciating the full impacts of data-driven elections may be a consideration of the preconditions that allow data-intensive practices to thrive and an examination of their consequences, rather than a preoccupation with the efficacy of the practices themselves.

In a piece published in 1986, philosopher Ian Hacking coined the term ‘semantic contagion’ to describe the process of ascribing linguistic and cultural currency to a phenomenon by naming it and thereby also contributing to its spread (Hacking, 1999). I propose that the prevailing political calculus, spurred on by the commercial success of “big data” and “AI”, appears overtaken by an ‘algorithmic contagion’ of sorts. On one level, algorithmic contagion speaks to the widespread logic of quantification. For example, understanding an individual is difficult, so data brokers instead measure people along a number of dimensions like level of education, occupation, credit score, and others. On another level, algorithmic contagion in this context describes an interest in modelling anything that could be valuable to political decision-making, as Market Predict’s political page suggests. It presumes that complex phenomena, like an individual’s political whims, can be predicted and known within the structures of formalised algorithmic process, and that they ought to be. According to the Wall Street Journal, a company executive claimed that Market Predict’s “agent-based modelling allows the company to test the impact on voters of events like news stories, political rallies, security scares or even the weather” (Davies, 2019).
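
The "logic of quantification" described above can be made concrete with a small sketch. The fields and values below are invented for illustration: the point is that a data broker does not model a person so much as reduce them to a vector of measurable dimensions that downstream scoring and targeting systems can consume.

```python
# Hypothetical illustration of quantification: an individual reduced to
# the kinds of dimensions named in the text (education, occupation,
# credit score, turnout history). All names and values are invented.

from dataclasses import dataclass

@dataclass
class VoterProfile:
    education_level: int        # e.g., ordinal code 1-5
    occupation_code: int        # e.g., a standardised occupation code
    credit_score: int
    turnout_history: list       # 1 = voted, 0 = did not, per past election

    def as_vector(self):
        """Flatten the profile into the numeric form that downstream
        models (scoring, segmentation, targeting) actually consume."""
        return [self.education_level, self.occupation_code,
                self.credit_score, sum(self.turnout_history)]

profile = VoterProfile(education_level=4, occupation_code=27,
                       credit_score=710, turnout_history=[1, 0, 1, 1])
print(profile.as_vector())  # [4, 27, 710, 3]
```

Everything about the individual that does not fit one of these columns simply disappears from the model, which is precisely the reduction the notion of algorithmic contagion is meant to capture.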

Algorithmic contagion also encompasses a predetermined set of boundaries. Thinking within the capabilities of algorithmic methods prescribes a framework to interpret phenomena within bounds that enable the application of algorithms to those phenomena. In this respect, algorithmic contagion can influence not only what is thought but also how. This conceptualisation of algorithmic contagion encompasses the ontological (through efforts to identify and delineate components that structure a system, like an individual’s set of beliefs), the epistemological (through the iterative learning process and distinction drawn between approximation and truth), and the rhetorical (through authority justified by appeals to quantification).

Figure 1: The political landing page of Market Predict, a marketing optimisation firm for brand and political advertisers, that explains its voter simulation technology. It claims to, among other things, “Account for the irrationality of human decision-making”. Hundreds of companies offer related services. Source: Market Predict Political Advertising

This algorithmic contagion-informed formulation of politics bears some connection to the initial “Does it work?” query but expands the domain in question to not only the applications themselves but also to the components of the system in which they operate – a shift that an honest analysis of data-driven elections, and not merely ad-based micro-targeting, demands. It explains why and how a candidate for mayor in Taipei in 2014 launched a viral social media sensation by going to a tattoo parlour. He did not visit the parlour to get a tattoo, to chat with an artist about possible designs, or out of a genuine interest in meeting the people there. He went because a digital listening company that mines troves of data and services campaigns across southeast Asia generated a list of actions for his campaign that would generate the most buzz online, and visiting a tattoo parlour was at the top of the list.

Figure 2: A still from a video documenting Dr Ko-Wen Je’s visit to a tattoo parlour, prompting a social media sensation. His campaign uploaded the video a few days before municipal elections in which he was elected mayor of Taipei in 2014. The post on Facebook has 15,000 likes, and the video on YouTube has 153,000 views. Against a backdrop of creeping voter surveillance, Dr Ko-Wen Je’s visit to this tattoo parlour raises questions about the authenticity of political leaders. (Image brightened for clarity) Sources: Facebook and YouTube

As politics continues to evolve in response to algorithmic contagion and to the data industrial complex governing the commercial (and now also political) zeitgeist, it is increasingly concerned with efficiency and speed (Schechner & Peker, 2018). Which influencer voters must we win over, and whom can we afford to ignore? Who is both the most likely to turn out to vote and also the most persuadable? How can our limited resources be allocated as efficiently as possible to maximise the probability of winning? In this nascent approach to politics as a practice to be optimised, who is deciding what is optimal? Relatedly, as the infrastructure of politics changes, who owns the infrastructure upon which more and more democratic contests are waged, and to what incentives do they respond?
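
The optimisation questions above have a simple mechanical core. The following sketch, with invented scores and names, shows the kind of ranking a campaign tool might perform: multiply a modelled turnout probability by a modelled persuadability, contact the top of the list, and "afford to ignore" everyone below the budget cutoff.

```python
# A minimal, hypothetical sketch of campaign resource optimisation:
# rank voters by expected payoff P(turnout) * P(persuadable) and
# contact only as many as the budget allows. Scores are invented.

def allocate_contacts(voters, budget):
    """Return the ids of the `budget` highest-value voters;
    everyone else is left off the contact list entirely."""
    ranked = sorted(voters,
                    key=lambda v: v["p_turnout"] * v["p_persuade"],
                    reverse=True)
    return [v["id"] for v in ranked[:budget]]

voters = [
    {"id": "A", "p_turnout": 0.9, "p_persuade": 0.1},  # loyal, low value
    {"id": "B", "p_turnout": 0.6, "p_persuade": 0.7},  # prime target
    {"id": "C", "p_turnout": 0.2, "p_persuade": 0.8},  # unlikely voter
    {"id": "D", "p_turnout": 0.7, "p_persuade": 0.5},  # secondary target
]

print(allocate_contacts(voters, budget=2))  # ['B', 'D']
```

The sketch makes the political stakes visible: whoever sets the scoring function and the budget decides, in effect, which citizens a campaign ever speaks to.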

Voters are increasingly treated as consumers – measured, ranked, and sorted by a logic imported from commerce. Instead of being sold shoes, plane tickets, and lifestyles, voters are being sold political leaders, and structural similarities to other kinds of business are emerging. One challenge posed by data-driven election operations is the manner in which responsibilities have effectively been transferred. Voters expect their interests to be protected by lawmakers while indiscriminately clicking “I Agree” to free services online. Efforts to curtail problems through laws are proving to be slow, mired in legalese, and vulnerable to technological circumvention. Based on my conversations with them, venture capitalists are reluctant to champion a transformation of the whole industry by imposing unprecedented privacy standards on their budding portfolio companies, which claim to be merely responding to the demands of users. The result is an externalised cost shouldered by the public. In this case, however, the externality is not an environmental or a financial cost but a democratic one. The manifestations of these failures include the disintegration of the public sphere and a shared understanding of facts, polarised electorates embroiled in 365-day-a-year campaign cycles online, and open campaign finance and conflict of interest loopholes introduced by data-intensive campaigning, all of which are exacerbated by a growing revolving door between the tech industry and politics (Kreiss & McGregor, 2017).

Personal data and political expediency

One response to Cambridge Analytica is “Does psychometric profiling of voters work?” (Rosenberg et al., 2018). A better response examines what the use of psychometric profiling reveals about the intentions of those attempting to acquire political power. It asks what it means that a political campaign was apparently willing to invest the time and money into building personality profiles of every single adult in the United States in order to win an election, regardless of the accuracy of those profiles, even when surveys of Americans indicate that they do not want political advertising tailored to their personal data (Turow et al., 2012). And it explores the ubiquity of services that may lack Cambridge Analytica’s sensationalised scandal but share the company’s practice of collecting and using data in opaque ways for clearly political purposes.

The ‘Influence Industry’ underlying this evolution has evangelised the value of personal data, but to whatever extent personal data is an asset, it is also a liability. What risks do the collection and use of personal data expose? In the language of the European Union’s General Data Protection Regulation (GDPR), who are the data controllers, and who are the data subjects in matters of political data which is, increasingly, all data? In short, who gains control, and who loses it?

As a member of a practitioner-oriented group based in Germany with a grounding in human rights, I worry about data-intensive practices in elections and the larger political sphere going awry, especially as much of our collective concern seems focused on questions of efficacy while companies race to capitalise on the market opportunity. Even by the historical standards of its time, the Holocaust was a ruthlessly data-driven, calculated, and efficient undertaking fuelled by vast amounts of personal data. As Edwin Black documents in IBM & The Holocaust: The Strategic Alliance between Nazi Germany and America's Most Powerful Corporation, personal data managed by IBM was an indispensable resource for the Nazi regime. IBM’s President at the time, Thomas J. Watson Sr., the namesake of today’s IBM Watson, went to great lengths to profit from dealings between IBM’s German subsidiary and the Nazi party. The firm was such an important ally that Hitler awarded Watson an Order of the German Eagle award for his invaluable service to the Third Reich. IBM aided the Nazis’ record-keeping across several phases of the Holocaust, including identification of Jews, ghettoisation, deportation, and extermination (Black, 2015). Black writes that “Prisoners were identified by descriptive Hollerith cards, each with columns and punched holes detailing nationality, date of birth, marital status, number of children, reason for incarceration, physical characteristics, and work skills” (Black, 2001). These Hollerith cards were sorted in machines physically housed in concentration camps.

The precursors to these Hollerith cards were originally developed to track personal details for the first American census. The next American census, to be held in 2020, has already been a highly politicised affair with respect to the addition of a citizenship question (Ballhaus & Kendall, 2019). President Trump recently abandoned an effort to formally add a citizenship question to the census, vowing to seek this information elsewhere, and the US Census Bureau has already published work investigating the quality of alternate citizenship data sources for the 2020 Census (Brown et al., 2018). For stakeholders interested in upholding democratic ideals, focusing on the accuracy of this alternate citizenship data is myopic; that an alternate source of data is being investigated to potentially advance an overtly political goal is the crux of the matter.

Figure 3: A card showing the personal data of Symcho Dymant, a prisoner at Buchenwald Concentration Camp. The card includes many pieces of personal data, including name, birth date, condition, number of children, place of residence, religion, citizenship, residence of relatives, height, eye colour, description of his nose, mouth, ears, teeth, and hair. Source: US Holocaust Memorial Museum

This prospect may seem far-fetched and alarmist to some, but I do not think so. If the political tide were to turn, the same personal data used for a benign digital campaign could be employed in insidious and downright unscrupulous ways if it were ever expedient to do so. What if a door-to-door canvassing app instructed volunteers walking down a street to skip your home and not remind your family to vote because a campaign profiled you as supporters of the opposition candidate? What if a data broker classified you as Muslim, or if an algorithmic analysis of your internet browsing history suggested that you are prone to dissent? Possibilities like these are precisely why a fixation on efficacy is parochial. Given the breadth and depth of personal data used for political purposes, the line between consulting data to inform a political decision and appealing to data – given the rhetorical persuasiveness it enjoys today – in order to weaponise a political idea is extremely thin.

A holistic appreciation of data-driven elections’ democratic effects demands more than simply measurement, and answering “Does it work?” is merely one component of grasping how campaigning transformed by technology and personal data is influencing our political processes and the societies they engender. As digital technologies continue to rank, prioritise, and exclude individuals even when – indeed, especially when – inaccurate, we ought to consider the larger context in which technological practices shape political outcomes in the name of efficiency. The infrastructure of politics is changing, charged with an algorithmic contagion, and a well-rounded perspective requires that we ask not only how these changes are affecting our ideas of who can participate in our democracies and how they do so, but also who derives value from this infrastructure and how they are incentivised, especially when benefits are enjoyed privately but costs sustained democratically. The quantitative tools underlying the ‘datafication’ of politics are neither infallible nor safe from exploitation, and issues of accuracy grow moot when data-intensive tactics are enlisted as pawns in political agendas. A new political paradigm is emerging whether or not it works.

References

Ballhaus, R., & Kendall, B. (2019, July 11). Trump Drops Effort to Put Citizenship Question on Census, The Wall Street Journal. Retrieved from https://www.wsj.com/articles/trump-to-hold-news-conference-on-census-citizenship-question-11562845502

Bhatti, Y., Olav Dahlgaard, J., Hedegaard Hansen, J., & Hansen, K. M. (2019). Is Door-to-Door Canvassing Effective in Europe? Evidence from a Meta-Study across Six European Countries. British Journal of Political Science, 49(1), 279–290. https://doi.org/10.1017/S0007123416000521

Black, E. (2015, March 17). IBM’s Role in the Holocaust -- What the New Documents Reveal. HuffPost. Retrieved from https://www.huffpost.com/entry/ibm-holocaust_b_1301691

Black, E. (2001). IBM & The Holocaust: The Strategic Alliance between Nazi Germany and America's Most Powerful Corporation. New York: Crown Books.

Blais, A. (2000). To Vote or Not to Vote: The Merits and Limits of Rational Choice Theory. Pittsburgh: University of Pittsburgh Press. https://doi.org/10.2307/j.ctt5hjrrf

Brown, J. D., Heggeness, M. L., Dorinski, S., Warren, L., & Yi, M. (2018). Understanding the Quality of Alternative Citizenship Data Sources for the 2020 Census [Discussion Paper No. 18-38]. Washington, DC: Center for Economic Studies. Retrieved from https://www2.census.gov/ces/wp/2018/CES-WP-18-38.pdf

Cantoni, E., & Pons, V. (2017). Do Interactions with Candidates Increase Voter Support and Participation? Experimental Evidence from Italy [Working Paper No. 16-080]. Boston: Harvard Business School. Retrieved from https://www.hbs.edu/faculty/Publication%20Files/16-080_43ffcfcb-74c2-4713-a587-ebde30e27b64.pdf

Davies, P. (2019, June 10). A New Crystal Ball to Predict Consumer and Investor Behavior. The Wall Street Journal. Retrieved from https://www.wsj.com/articles/a-new-crystal-ball-to-predict-consumer-and-investor-behavior-11560218820?mod=rsswn

Foos, F., & John, P. (2018). Parties Are No Civic Charities: Voter Contact and the Changing Partisan Composition of the Electorate. Political Science Research and Methods, 6(2), 283–298. https://doi.org/10.1017/psrm.2016.48

Gerber, A. S., & Rogers, T. (2009). Descriptive Social Norms and Motivation to Vote: Everybody’s Voting and so Should You. The Journal of Politics, 71(1), 178–191. https://doi.org/10.1017/S0022381608090117

Giné, X. & Mansuri, G. (2018). Together We Will: Experimental Evidence on Female Voting Behavior in Pakistan. American Economic Journal: Applied Economics, 10(1), 207–235. https://doi.org/10.1257/app.20130480

Green, D.P., McGrath, M. C. & Aronow, P. M. (2013). Field Experiments and the Study of Voter Turnout. Journal of Elections, Public Opinion and Parties, 23(1), 27–48. https://doi.org/10.1080/17457289.2012.728223

Guan, M. & Green, D. P. (2006). Noncoercive Mobilization in State-Controlled Elections: An Experimental Study in Beijing. Comparative Political Studies, 39(10), 1175–1193. https://doi.org/10.1177/0010414005284377

Hacking, I. (1999). Making Up People. In M. Biagioli (Ed.), The Science Studies Reader (pp. 161–171). New York: Routledge. Retrieved from http://www.icesi.edu.co/blogs/antro_conocimiento/files/2012/02/Hacking_making-up-people.pdf

John, P., & Brannan, T. (2008). How Different Are Telephoning and Canvassing? Results from a ‘Get Out the Vote’ Field Experiment in the British 2005 General Election. British Journal of Political Science, 38(3), 565–574. https://doi.org/10.1017/S0007123408000288

Kreiss, D., & McGregor, S. C. (2017). Technology Firms Shape Political Communication: The Work of Microsoft, Facebook, Twitter, and Google With Campaigns During the 2016 U.S. Presidential Cycle, Political Communication, 35(2), 155–77. https://doi.org/10.1080/10584609.2017.1364814

Matland, R. (2016). These Eyes: A Rejoinder to Panagopoulos on Eyespots and Voter Mobilization. Political Psychology, 37(4), 559–563. https://doi.org/10.1111/pops.12282

Matland, R. E. & Murray, G. R. (2013). An Experimental Test for ‘Backlash’ Against Social Pressure Techniques Used to Mobilize Voters, American Politics Research, 41(3), 359–386. https://doi.org/10.1177/1532673X12463423

Matland, R. E., & Murray, G. R. (2016). I Only Have Eyes for You: Does Implicit Social Pressure Increase Voter Turnout? Political Psychology, 37(4), 533–550. https://doi.org/10.1111/pops.12275

Panagopoulos, C. (2015). A Closer Look at Eyespot Effects on Voter Turnout: Reply to Matland and Murray. Political Psychology, 37(4). https://doi.org/10.1111/pops.12281

Panagopoulos, C. & van der Linden, S. (2016). Conformity to Implicit Social Pressure: The Role of Political Identity, Social Influence, 11(3), 177–184. https://doi.org/10.1080/15534510.2016.1216009

Pons, V. (2018). Will a Five-Minute Discussion Change Your Mind? A Countrywide Experiment on Voter Choice in France, American Economic Review, 108(6), 1322–1363. https://doi.org/10.1257/aer.20160524

Pons, V., & Liegey, G. (2019). Increasing the Electoral Participation of Immigrants: Experimental Evidence from France. The Economic Journal, 129(617), 481–508. https://doi.org/10.1111/ecoj.12584

Rosenberg, M., Confessore, N., & Cadwalladr, C. (2018, March 17). How Trump Consultants Exploited the Facebook Data of Millions, The New York Times. Retrieved from https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html

Schechner, S., & Peker, E. (2018, October 24). Apple CEO Condemns ‘Data-Industrial Complex’. The Wall Street Journal.

Turow, J., Delli Carpini, M. X., Draper, N. A., & Howard-Williams, R. (2012). Americans Roundly Reject Tailored Political Advertising [Departmental Paper No. 7-2012]. Annenberg School for Communication, University of Pennsylvania. Retrieved from http://repository.upenn.edu/asc_papers/398

Cranks, clickbait and cons: on the acceptable use of political engagement platforms


This paper is part of Data-driven elections, a special issue of Internet Policy Review guest-edited by Colin J. Bennett and David Lyon.

Introduction

Shortly after Donald Trump won the US presidency, Jim Gilliam (2016), the late president of start-up 3DNA, posted a message on its blog titled “Choosing to Lead”. Gilliam congratulated the “three thousand NationBuilder customers who were on the ballot last week”. These customers subscribed to 3DNA’s NationBuilder service that provides a political engagement platform connecting voters, politicians, volunteers and staffers in an integrated online service. The post continues:

Many of you – including President-elect Donald Trump and all three of the other non-establishment presidential candidates – were outsiders. And that’s why this election was so important. Not just for people in the United States, but for people all over the world. This election unequivocally proves that we are in a new era. One where anyone can run and win. (Gilliam, 2016)

Like many posts from NationBuilder, Gilliam celebrated the company’s mission to democratise access to new political technology, bringing in these outsiders.

Gilliam’s post demonstrates faith that being open is a corporate value as well as a business model. As its mission states today, NationBuilder sells “to everyone regardless of race, age, class, religion, educational background, ideology, gender, sexual orientation or party”. Their mission encapsulates a corporate belief in the democratic potential of their product, one available to anyone, much to the frustration of partisans and other political insiders on both sides who tend to guard access to their innovative technologies (Karpf, 2016b).

Gilliam’s optimism matters globally. Political parties worldwide use NationBuilder as a third-party solution to manage their voter data, outreach, websites, communications and volunteer management. As of 3 December 2019, NationBuilder reported that in 2018 it was used to send 1,600,000,000 emails, host 341,000 events and raise $401,000,000 USD across 80 countries. The firm has also raised over $14 million US dollars in venture capital partially based on the promise that it will democratise access to political engagement platforms. Unlike most of its competitors, NationBuilder is a nonpartisan political engagement platform. NationBuilder is one of the few services actively developed and promoted as nonpartisan and cross-sectoral. Conservative, liberal and social democratic parties across the globe use NationBuilder, as the company emphasises in its corporate materials (McKelvey and Piebiak, 2018).

By letting outsiders access political technology, might NationBuilder harm politics in its attempts to democratise it? Now is the time to doubt the promise of political technologies. Platform service providers like NationBuilder are the object of significant democratic anxieties globally, rightly or wrongly (see Adams et al., 2019, for a good review of current research). The political technology industry has been pulled into a broad set of issues including, according to Colin Bennett and Smith Oduro-Marfo: “the role of voter analytics in modern elections; the democratic responsibilities of powerful social media platforms; the accountability and transparency for targeted political ads; cyberthreats to the electoral process through malicious actors and automated bots” (2019, pp. 1-2). Following the disclosures of malpractice by Cambridge Analytica and AggregateIQ, these public scandals have pushed historic concerns about voter surveillance, non-consensual data collection and poor oversight of the industry to the fore (Bennett, 2015; Howard, 2006; White, 1961).

My paper questions NationBuilder’s corporate belief that better access to political technology improves politics. In doing so, I add acceptable use of political technology to the list of concerns about elections and campaigns in the digital age. Even though campaigns are, as Daniel Kreiss (2016) argues, technologically intensive, there have been no systematic studies of how a political technology is used, particularly internationally. My paper reviews the uses of NationBuilder worldwide. It offers empirical research to understand the real world of a contentious political technology and offers grounded examples of problematic or questionable uses of a political technology. NationBuilder is a significant example, as I discuss, of a nonpartisan political technology firm as opposed to its partisan rivals.

The paper uses a mixed method approach to analyse NationBuilder’s use. Methods included document analysis, content analysis and a novel use of web analytics. To first understand real world use, the study collected a list of 6,435 domains using NationBuilder as of October 2017. The study coded the 125 most popular domains by industry and compared results to corporate promotional materials, looking for how actual use differed from its promoted uses. The goal was to find questionable uses of NationBuilder. Questionable, through induction, came to mean uses that might violate liberal democratic norms. By looking at NationBuilder’s various uses, the review found cases at odds with normative and institutional constraints that allow for ‘friendly rivalry’ or ‘agonism’ in liberal democratic politics (Rosenblum, 2008; Mouffe, 2005). These constraints include a free press, individual rights such as privacy as well as a commitment to shared human dignity.

My limited study finds that NationBuilder can be used to undermine privacy rights and journalistic standards while also promoting hatred. The scan identified three problematic uses: (1) a mobilisation tool for hate groups targeting cultural or ethnic identities; (2) a profiling tool for deceptive advertising or stealth media; and (3) a fundraising tool for entrepreneurial journalism. These findings raise issues about acceptable use and liberal democracy. For example, I looked for cases of NationBuilder being used by known hate groups inspired by recent concerns about the rise of the extreme right (Eatwell and Mudde, 2004) as well as the use of NationBuilder by news websites reflecting the changing media system (Ananny, 2018).

My findings suggest that NationBuilder may be a democratic technology, without being a liberal one. The traditions of liberalism and democracy are separate and a source of tension according to democratic theorist Chantal Mouffe. “By constantly challenging the relations of inclusion implied by the political constitution of 'the people' - required by the exercise of democracy”, Mouffe writes, “the liberal discourse of universal human rights plays an important role in maintaining the democratic contestation alive” (2009, p. 10). NationBuilder’s democratic mission of being open to outsiders then is at odds with a liberal tradition that pushes fraud, violence and hatred outside respectable politics.

While the paper identifies problems, it does not offer much in the way of solutions. Remedies are difficult and certainly not at NationBuilder’s global scale. As I discuss later, NationBuilder is not responsible for how it is used. The most immediate remedies might be based on corporate social responsibility. To this end, this paper provides three recommendations for revisions to 3DNA’s acceptable use policy to address these questionable uses: (1) reconcile its mission statement with its prohibited uses; (2) require disclosure on customers’ websites; and (3) clarify its relation to domestic privacy law as part of a corporate mission to improve global privacy and data standards. These reforms suggest that NationBuilder’s commitment to non-partisanship needs clarification and that the acceptable use of political technology is fraught – a dilemma that should become a central debate. Political technology firms – NationBuilder and its competitors – must understand that liberal democratic technologies are part of what Bennett and Oduro-Marfo describe as “the political campaigning network”. They continue, “contemporary political campaigning is complex, opaque and involves a shifting ecosystem of actors and organisations, which can vary considerably from society to society” (2019, p. 54). Companies ultimately must consider their obligations to liberal democracy, a political system made possible by technologies like the press and the internet (albeit imperfectly).

The acceptable use of politicised, partisan and nonpartisan technology

The political technology industry is central to the era of technology-intensive campaigning found in the United States and across many Western democracies (Baldwin-Philippi, 2015; Karpf, 2016a; Kreiss, 2016). The industry itself has been a staple of political consultancy throughout modern campaigning. From laser letters for direct mail to apps for canvassing, political technology firms promise to bring efficiency to an otherwise messy campaign (D. W. Johnson, 2016; Kreiss and Jasinski, 2016). NationBuilder itself provides a good summary of this industry in a marketing slide reproduced in Figure 1.

Figure 1: Political technology firms according to NationBuilder

The figure illustrates the numerous practices and sectors drawn into politics as well as the migration of practices. These services help campaigns analyse data and make strategic decisions, principally around advertising buys. Many of these firms position themselves as the primary medium of a campaign, creating a platform connecting voters, politicians, volunteers and staff (Baldwin-Philippi, 2017; McKelvey and Piebiak, 2018). Political technology providers blur the boundaries between nonprofit management, political campaigning and advocacy as well as illustrating the taken-for-grantedness of marketing as a political logic (Marland, 2016).

Political technology firms may be divided between: politicised firms, partisan firms, and nonpartisan firms. Politicised firms sell software or services not explicitly designed for politics put to political ends. These can include payment processors like PayPal or Stripe, web hosting companies like Cloudflare and social media platforms that allow political advertising and political mobilisation. NationBuilder’s slide reproduced in Figure 1 includes some further examples of politicised firms providing social media management software, email marketing software and website content management systems. Technologies like NationBuilder are purpose-built for politics, listed as Political Software in Figure 1. These firms can be split further between partisan firms that work only for conservative, liberal or progressive campaigns and nonpartisan firms. In a market dominated by partisan affiliation, NationBuilder and other nonpartisan companies like Aristotle International and ActionKit are significant. They attempt to be apolitical political technologies.

Political technologies raise added concerns in respect to liberal democratic norms. Who should have access to these services, and how should these services be used? New technologies afford campaigns new repertoires of action that may undermine campaign spending limits, norms around targeting or the privacy rights of voters. Cambridge Analytica, for example, has rekindled longstanding debates about the democratic consequences of political technologies especially micro-targeting (Bodó, Helberger, and de Vreese, 2017; Kreiss, 2017) as well as stoking conjecture about the feasibility of psycho-demographics and its mythic promise of a new hypodermic needle (Stark, 2018).

Acceptable use is largely determined by partisan identity due to the limited scope of regulations on digital campaigning. Regulation of political technology is lacking (Bennett, 2015; Howard and Kreiss, 2010) and likely does not apply to a service provider like NationBuilder in the first place. Instead, partisanship has so far served as the key mechanism regulating the use of political technology. Most firms are partisan, working with only one party, and acceptable use is largely judged by its conformity to partisan values. As David Karpf explains, “political technology yields partisan benefits, and the market for political technologies is made up of partisans” (2016b, p. 209). Such partisanship functions as a professional norm about acceptable use, restricting access on partisan lines: fellow partisans are acceptable users and, in what Karpf calls the zero-sum game of politics, rivals are not. Indeed, partisanship is an important corporate asset. The major firm Aristotle International sued its competitor NGP VAN for falsely claiming it only sold to Democratic and progressive campaigns when it licensed its technologies to Republican firms as well. NGP VAN, the case alleged, was not as strictly partisan as it claimed. The courts eventually dismissed the case (D’Aprile, 2011).

The tensions between partisan firms on one side and nonpartisan and politicised companies on the other implicitly reveal a split in the values guiding acceptable use. On one side are firms committed to creating technology to advance their political values; on the other are firms trying to be neutral and to sell to anyone. In what might be seen as an act of community governance, progressive partisans argued that such software should not be sold to non-progressive campaigns (Karpf, 2016a).

The lack of an expressed political agenda has left politicised firms, in particular, mired in public scandals that raise questions about liberal democratic norms. A ProPublica investigation found that numerous technology firms supported known extremist groups, prompting PayPal and Plasso to stop serving the identified groups within days (Angwin, Larson, Varner, and Kirchner, 2017a). That investigation only scratches the surface. A partial list of recent media controversies includes politicised firms being accused of spreading misinformation, aiding hate groups and easing foreign propaganda:

  • Facebook’s handling of the Kremlin-affiliated Internet Research Agency’s misinformation campaigns during the 2016 US presidential election
  • Hosting service Cloudflare removing Stormfront (Price, 2017)
  • GoFundMe allowing a fraudulent campaign to build a US-Mexico border wall (Holcombe, 2019)
  • GoFundMe removing anti-vaccine fundraising campaigns (Liao, 2019)
  • YouTube’s handling of far-right videos and the circulation of the livestream of the Christchurch terrorist attack

In the academic literature, McGregor and Kreiss (2018) question the willingness of politicised firms to assist American presidential campaigns’ advertising strategies, examining how these companies understood their influence. Braun and Eklund (2019), meanwhile, explore the digital advertiser's dilemma of trying to demonetise misinformation and imposter journalism. 1 The Citizen Lab has addressed the responsibility of international cybersecurity firms in democratic politics, particularly the use of exploits to target dissidents. 2 Tusikov (2019) most directly explores the question of acceptable use by analysing how financial third parties, like PayPal, have developed their own internal policies to not serve hate groups.

For these reasons, NationBuilder is an important test case for the acceptable uses of political technology. As discussed above, it exemplifies the neutral position of many firms: trying to be in politics without being political. NationBuilder thus illustrates the problem facing both politicised and nonpartisan firms that let their commitments to openness and neutrality supersede their responsibility to be political and to understand their obligations to liberal democratic norms.

Why NationBuilder?

NationBuilder is an intriguing case because it encapsulates a particular American belief in the revolutionary promise of computing for politics that has driven the development and regulation of many major technology firms (Gillespie, 2018; Mosco, 2004; Roberts, 2019). NationBuilder is a venture capital–funded company promising to disrupt politics by democratising access to innovation. According to investor Ben Horowitz (2012), “NationBuilder is that rarest of products that not only has the potential to change its market, but to change the world”. He made these remarks in a 2012 post in which Horowitz’s firm announced $6.25 million USD in Series A funding for NationBuilder’s parent company 3DNA. NationBuilder’s late founder Jim Gilliam exemplifies the “romantic individualism” that Tom Streeter associates with a faith in the thrilling, revolutionary effect of computing. Gilliam was a fundamentalist Christian who found community through BBSs and eventually told his coming-of-age story in a viral video entitled “The Internet Is My Religion”. He later self-published a book co-authored with the company’s current president, Lea Endres. When generalised and situated as part of NationBuilder’s mission, Gilliam’s story exemplifies Streeter’s observation that “the libertarian’s notion of individuality is proudly abstracted from history, from social differences, and from bodies; all that is supposed not to matter. Both the utilitarian and romantic individualist forms of selfhood rely on creation-from-nowhere assumptions, from structures of understanding that are systematically blind to the collective and historical conditions underlying new ideas, new technologies, and new wealth” (Streeter, 2011, p. 24). NationBuilder still links to this video on its corporate philosophy page as of 3 December 2019.

Figure 2: NationBuilder’s philosophy page captured on 8 January 2020

NationBuilder’s mission synthesises its belief in for-profit social change and romantic individualism. According to NationBuilder’s mission page as of 7 January 2020, it wants to “build the infrastructure for a world of creators by helping leaders develop and organise thriving communities”. This includes the belief that “[t]he tools of leadership should be available to everyone. NationBuilder does not discriminate. It is arrogant, even absurd, for us to decide which leaders are ‘better’ or ‘right’” (NationBuilder, n.d.).

This mission resembles Streeter’s discussion of the libertarian abstract sense of freedom, which, in NationBuilder’s case, equates egalitarian access to a commercial service with a viable means of democratic reform. Whether nonpartisan or libertarian, NationBuilder has remained committed to this belief, defending its openness from critics, such as in Gilliam’s post from the introduction. In doing so, NationBuilder is at odds with former progressive clients and other political technology firms (Karpf, 2016b).

Methodology

My research combines document analysis, web analytics and content analysis to understand NationBuilder usage. The research team reviewed the company’s 2016, 2017 and 2018 annual reports and archived content from the NationBuilder website using the Wayback Machine. The team also turned to the web services tool BuiltWith, which scans the million most-popular sites on the internet to detect what technologies they use. 3 BuiltWith generated a list of 6,435 web domains using NationBuilder on 10 October 2017. The research team analysed BuiltWith’s data through two scans:

  1. Coding the top 125 websites (as ranked by Alexa, an Amazon company that estimates traffic on major websites) by industry and comparing the results with the publicised use cases in NationBuilder’s annual reports.
  2. Searching the full list of BuiltWith results for websites classified as extremist by ProPublica, itself informed by the Anti-Defamation League and the Southern Poverty Law Center (Angwin, Larson, Varner, and Kirchner, 2017b).
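The second scan boils down to cross-referencing the BuiltWith domain list against the ProPublica-derived list of extremist sites. A minimal sketch of that matching step in Python; the sample domains and the normalisation rules here are illustrative assumptions, not the study’s actual data:

```python
from urllib.parse import urlparse

def normalise(domain: str) -> str:
    """Lower-case a domain, stripping any scheme, path and leading 'www.'."""
    domain = domain.strip().lower()
    if "//" in domain:                       # full URL rather than bare domain
        domain = urlparse(domain).netloc or domain
    domain = domain.split("/")[0]
    return domain[4:] if domain.startswith("www.") else domain

def cross_reference(builtwith_domains, extremist_domains):
    """Return domains from the BuiltWith list that appear on the watch list."""
    flagged = {normalise(d) for d in extremist_domains}
    return sorted(d for d in map(normalise, builtwith_domains) if d in flagged)

# Illustrative inputs only: 'example-hate.org' stands in for a listed site.
sample = ["https://www.example-hate.org/join", "heart.org", "lacity.org"]
watchlist = ["example-hate.org"]
print(cross_reference(sample, watchlist))  # ['example-hate.org']
```

In the study this comparison ran over 6,435 domains; the set-membership check is the same regardless of scale.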

These methods admittedly offer a limited window into the use of NationBuilder. Rather than provide a complete scan of the NationBuilder ecosystem or track trends over time, this project sought to determine whether NationBuilder has uses other than those advertised and, if so, whether these applications raise acceptability questions.

The coding schema classified uses of NationBuilder by industry. The schema developed out of a review of prior literature classifying websites (Elmer, Langlois, and McKelvey, 2012) as well as inductive coding developed by visiting the top fifty websites, paying special attention to self-descriptions, such as mission statements and “about us” sections, as well as other clues to a site’s legal status (as a non-profit or a political action committee) or its overt political party affiliation and stated political positions. In the end, each website in the sample was assigned one of ten codes:

  1. College or university: a higher education institution
  2. Cultural production: a site promoting a book, movie, etc.
  3. Educational organisation: a high school or below
  4. Government initiative: sites operated by incumbent political actors or elected officials that are explicitly tied to their work in government (i.e., not used for a re-election campaign)
  5. Media organisation: sites whose primary purpose is to publish or aggregate media content
  6. NGO: (non-governmental organisation) sites for organisations whose activities can reasonably be considered non-political; these are usually but not exclusively non-profits
  7. Other: sites that are unclassifiable (an individual’s blog, for example)
  8. Political advocacy group: organisations that are not directly associated with an official political party or campaign but nonetheless seek to actively affect the political process
  9. Political party or campaign: sites operated by a political party or dedicated to an individual politician’s electoral campaign
  10. Union: sites run by a labour union

Two independent coders classified the 125-website sample. Intercoder reliability was 88 percent agreement with a Krippendorff’s alpha of 0.8425 (Freelon, 2010). The analysis below resolved inconsistencies through consensus coding.
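Reliability figures of this kind (percent agreement and nominal Krippendorff’s alpha for two coders) can be computed directly from the paired codings via a coincidence matrix. A sketch, with toy codings invented for illustration rather than the study’s data:

```python
from collections import Counter
from itertools import permutations

def percent_agreement(a, b):
    """Share of units on which the two coders assigned the same code."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def krippendorff_alpha_nominal(a, b):
    """Krippendorff's alpha for nominal data, two coders, no missing values."""
    units = list(zip(a, b))
    o = Counter()    # coincidence matrix of ordered value pairs within units
    n_c = Counter()  # marginal frequency of each code
    for unit in units:
        for v1, v2 in permutations(unit, 2):
            o[(v1, v2)] += 1          # weight 1/(m-1) with m = 2 coders
        for v in unit:
            n_c[v] += 1
    n = sum(n_c.values())
    d_obs = sum(v for (c, k), v in o.items() if c != k) / n
    d_exp = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n * (n - 1))
    return 1.0 - d_obs / d_exp

coder_a = ["NGO", "NGO", "Media", "Party", "Party", "Union"]
coder_b = ["NGO", "NGO", "Media", "Party", "NGO", "Union"]
print(round(percent_agreement(coder_a, coder_b), 3))         # 0.833
print(round(krippendorff_alpha_nominal(coder_a, coder_b), 3))  # 0.784
```

Alpha corrects raw agreement for the agreement expected by chance given each code’s frequency, which is why it sits below the percent-agreement figure.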

Findings

NationBuilder has applications that are not well represented in its corporate materials and that raise acceptability issues. NationBuilder has been used as:

  1. a mobilisation tool for hate or groups targeting cultural or ethnic identities;
  2. a profiling tool for deceptive advertising or stealth media; and,
  3. a fundraising tool for entrepreneurial journalism.

None of these uses violate the official terms of use or acceptable use policy, a problem discussed later in the analysis, but they do provoke questions that may help improve its acceptable usage policies.

Results of scan 1: top industries found in most popular sites in the sample

The first scan, coding top domains by industry, found uses that differed from the corporate reporting. NationBuilder emphasises certain use cases in its annual reports and marketing, signalling the authorised channels of circulation for the product as well as its popular applications. Its reporting, however, has been inconsistent; the best available data comes from 2016. The 2016 Annual Report lists the following uses: political (40.80%), advocacy (24.60%), nonprofit (11.80%), higher education (11%), business (8.30%), association (2%) and government (1.50%). 4 NationBuilder also profiles “stand-out leaders” in all its annual reports; politicians, advocacy groups and nonprofits mostly fill the list. The 2017 list features six politicians out of ten slots, including the party of French President Emmanuel Macron, New Zealand's Prime Minister Jacinda Ardern and the leader of Canada's New Democratic Party, Jagmeet Singh. Their successful campaigns resonate with NationBuilder's brand of political inclusion. In a new twist on the politics of marketing, NationBuilder also profiles businesses as stand-outs: AllSaints, a British fashion retailer, uses NationBuilder to connect with fans of the brand, especially to announce the opening of new stores.

Figure 3: Sites using NationBuilder by industry

Media outlets are more prominent in the findings than in 3DNA’s corporate materials. Two media outlets rank in the top ten domains in our sample sorted by popularity, in third and fourth place, as seen in Table 1. Faith Family America is a right-of-centre news outlet describing itself as “a real-time, social media community of Americans who are passionate about faith, family, and freedom”. The Rebel is a Canadian far-right news outlet, comparable to Breitbart in the US. Seven other media organisations appear in the sample, for nine in total, as seen in Table 2.

Table 1: The top ten websites in BuiltWith data set, according to Alexa ranking (the lower the number, the more popular the website).

Name | Domain | Industry Code | Country | Alexa Rank
American Heart Foundation | heart.org | NGO | US | 10,525
NationBuilder | nationbuilder.com | Cultural production | US | 20,791
City of Los Angeles | lacity.org | Government initiative | US | 33,419
Faith Family America | faithfamilyamerica.com | Media organisation | US | 65,980
The Rebel | therebel.media | Media organisation | CA | 71,126
Party of Wales | partyof.wales | Political party or campaign | GB | 89,996
Lambeth Council | lambeth.gov.uk | Government initiative | GB | 107,745
NALEO Education Fund | naleo.org | Political advocacy group | US | 112,071
Labour Party of New Zealand | labour.org.nz | Political party or campaign | NZ | 115,253
In Utero (film) | inuterofilm.com | Cultural production | US | 120,394

Two of the questionable uses of NationBuilder relate to its move into journalism, or at least a simulacrum of journalism. Through these media outlets, NationBuilder becomes entangled in the ethics of entrepreneurial journalism, a term referring to the “embrace of entrepreneurialism by the world of journalism” (Rafter, 2016, p. 141).

Table 2: Top media outlets using NationBuilder, according to Alexa ranking (the lower the number, the more popular the website).

Name | Domain | Alexa Rank
Faith Family America | faithfamilyamerica.com | 65,980
The Rebel | therebel.media | 71,126
Thug Kitchen | thugkitchen.com | 192,082
New Civil Rights Movement | thenewcivilrightsmovement.com | 224,004
All Cute All the Time | allcuteallthetime.com | 266,126
Inspiring Day | inspiringday.com | 330,692
Newshounds | newshounds.us | 432,266
Brave New Films | bravenewfilms.org | 703,101
Mark Latham Outsiders | marklathamsoutsiders.com | 763,959

Otherwise, the findings resembled the data from the 2016 annual report. Political, advocacy and nonprofit customers accounted for 77.2% of NationBuilder’s customers in the annual report, whereas non-governmental organisations, political advocacy groups, political parties or campaigns, and unions comprised 83.2% of the sample. Unlike the annual reports, the sample included nine media-based organisations out of the 125 sites, representing 7.2% of the findings. Other users were marginal. There was a curious absence of brand ambassadors, even though NationBuilder highlights these applications prominently in its annual reports and describes 1% of its customers as such in its 2017 report.
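The aggregation behind this comparison is simple arithmetic, using the percentages reported above:

```python
# Annual report (2016): political + advocacy + nonprofit shares
report_share = round(40.80 + 24.60 + 11.80, 1)
print(report_share)  # 77.2

# Sample: nine media organisations out of the 125 coded sites
media_share = round(9 / 125 * 100, 1)
print(media_share)   # 7.2
```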

Results of scan 2: extremists or hate groups using NationBuilder

The second scan found one use case by a known hate group as defined by the Southern Poverty Law Center: Act for America (ranked 72nd in the sample). The Southern Poverty Law Center describes the group as the “largest anti-Muslim group in America”. Act for America used NationBuilder until August 2018, when it switched to open-source equivalents, Drupal and CiviCRM (cf. McKelvey, 2011). Act for America did not state the reason for the switch or reply to questions.

Covert political organising?

Three media outlets stood out in the sample: Faith Family America, Inspiring Day and All Cute All the Time. Each site used attention-grabbing headlines (also known as clickbait) to present curated news, updates about the British monarchy, and celebrity news that was, respectively, conservative, religious and innocuous (rather than cute). None of these sites listed staff in a masthead or provided many details about their reporting; instead, they encouraged users to join the community and promoted their Facebook groups.

Figure 4: Faith Family America’s front page, capture 23 April 2019

All three outlets were owned by the company Strategic Media 21 (SM21), a fact only apparent from the sites’ identical privacy policies. Now offline, SM21 was based in San Jose, California. It appears to have been a digital marketing firm with two different web presences: one for content marketing and one for digital strategy. Neither site disclosed much information about the company, but its business strategy seems to have been manufacturing audiences for political advertisers. SM21 identified demographics, then created specific outlets, like Faith Family America for conservative voters, in the hope of building a dedicated audience for advertising. The data broker L2 blogged about its 2016 partnership with SM21 on a targeted Facebook political advertising campaign. In this case, SM21 was acting in its digital strategy role, working with clients “on messaging, creative, plans out the buy and launches the campaign using your targeted list” (Westcott, 2016). These services have proved valuable: SM21 has received $2,418,592 USD in political expenditures since 2014, according to OpenSecrets. Its biggest clients were the conservative Super PACs (political action committees) Vote to Reduce Debt and Future in America.

Strategic Media 21 raises suspicions that NationBuilder’s data analytics might be used covertly, a kind of native advertising without the journalism. This might be an application of what Daniels calls cloaked websites, those “published by individuals or groups that conceal authorship or feign legitimacy in order to deliberately disguise a hidden political agenda” (2009, p. 661). Kim et al. describe similar tactics as stealth media, “a system that enables the deliberate operations of political campaigns with undisclosed sponsors/sources, furtive messaging of divisive issues, and imperceptible targeting” (2018, p. 2). By building these niche websites and corresponding Facebook groups that crosspost their content, SM21 created a political advertising business. NationBuilder features might assist in this business: its Match feature connects email addresses with other social media accounts, and its Political Capital feature monitors these feeds for certain activities.

Suspicions that Strategic Media 21 used NationBuilder for its data mining features appear well founded. According to emails released as part of a suit filed against Facebook by the Office of the Attorney General for the District of Columbia, Facebook employees discussed Cambridge Analytica, NationBuilder and SM21 as all being in violation of its data sharing arrangements (Wong, 2019). As one internal document dated 22 September 2015 explains,

One vendor offering beyond [Cambridge Analytica] we're concerned with (given their prominence in the industry ) is NationBuilder’s “Social Matching,” on which they've pitched our clients and their website simply says “Automatically link the emails in your database to Facebook, Twitter, Linkedin and Klout profiles, and pull in social engagement activity.” I'm not sure what that means, and don't want to incorrectly tell folks to avoid it, but it is definitely being conflated in the market with other less above board services. Can you help clarify what they're actually doing?

Employees worried that “these apps’ data-scraping activity [were] likely non-compliant”, according to a reply dated 30 September 2015, and the thread actively debated the matter for months. Facebook employees singled out SM21 in a comment on 20 October 2015. It begins,

thanks for confirming this seems in violation. [REDACTED] mentioned there is a lot of confusion in the political space about how people use Facebook to connect with other offline sets of data. In particular, Strategic Media 21 has been exerting a good deal of pressure on one of our clients to take advantage of this type of appending.

These concerns ensued even as Facebook employees reacted to a Guardian article on 11 December 2015 entitled “Ted Cruz using firm that harvested data on millions of unwitting Facebook users” – one of the first stories to develop in the ongoing scandal involving Cambridge Analytica and Facebook data sharing (Davies, 2015). What ultimately happened to NationBuilder and Strategic Media 21 has not been disclosed to date. NationBuilder still advertises its social matching features. SM21, on the other hand, has gone offline, with its website available for purchase as of September 2019.

This evidence raises our first problem of acceptable use: should NationBuilder be used by covert or stealth media to enable the deceptive or non-consensual collection of data? Strategic Media 21 parallels Cambridge Analytica, where users unwittingly trained its profiles by filling out quizzes on Facebook (Cadwalladr and Graham-Harrison, 2018). Visiting websites run by Strategic Media 21 and joining related groups might unwittingly feed advertising profiles harvested through NationBuilder. This is a serious privacy harm noted by the UK Information Commissioner’s Office (2018) and the Information and Privacy Commissioner for British Columbia (2019), both of which raised the issue of social matching in their reports on NationBuilder.

Advocacy, journalism or outrage?

NationBuilder has become entangled in the ethics of entrepreneurial journalism and the boundaries between editorial and fundraising through The Rebel, its Australian affiliate Mark Latham’s Outsiders and, to a lesser extent, Newshounds (Hunter, 2016; Porlezza and Splendore, 2016). All three sites rely on crowdfunding, reminding their readers that they need financial support. Newshounds.us is a media watchdog blog covering Fox News that asks its visitors to donate to support its coverage. The Rebel is a Canadian news start-up, established after the closure of Sun News Network, the channel dubbed “Fox News North”. Though start-ups, these outlets position themselves as journalism outlets: Newshounds mentions its editor’s journalism degree, and The Rebel asks its visitors to subscribe and to help support its journalism.

The line between fundraising and journalism is a clear ethical concern for journalism. As Porlezza and Splendore note in a thoughtful review of accountability and transparency issues in entrepreneurial journalism, the industry has to deal with a challenge “that touches the ethical core of journalism: are journalists in start-ups able to distinguish between their different and overlapping goals of publisher, fundraiser and journalist?” (2016, p. 197). Crowdfunding challenges ethical practice by requiring journalists to pitch and report their stories to the public. At its most extreme, fundraising may tip journalism into what Berry and Sobieraj call outrage public opinion media, “recognisable by the rhetoric that defines it, with its hallmark venom, vilification of opponents, and hyperbolic reinterpretations of current events” (2016, p. 5). Reporting, in this case, becomes a means to outrage its audiences and channel that emotion into donations.

The Rebel, for example, blurred the line between financing a movement and running a news outlet. In a now-deleted post on the NationBuilder blog, Torch Agency, the creative agency for The Rebel, explained NationBuilder’s role in launching what it called “Canada’s premier source of conservative news, opinion and activism”. The post continues,

In 36 hours, we built a fully-functional NationBuilder site complete with a database and communication headquarters... The result: through compelling content and top-notch digital tools, The Rebel raised over $100,000 CAD in less than twelve hours providing crucial early funding for its continuation.

The Rebel promised to use NationBuilder to better engage news audiences. It has repeatedly asserted its status as a journalism outlet against claims to the contrary, enlisting the support of national press organisations, PEN Canada and Canadian Journalists for Free Expression, after being denied press credentials for a UN climate conference on the grounds that it practised “advocacy journalism” (Drinkwater, 2016). In the Canadian province of Alberta, The Rebel successfully protested being removed from the media gallery for not being a “journalist source” (Edmiston, 2016).

The Rebel's response to a Canadian terrorist attack best frames the problem of distinguishing between advocacy, fundraising and journalism as well as NationBuilder's challenges in defining acceptable use. On 29 January 2017, a man entered a mosque in Québec City with an AK-47, killing six, seriously wounding five and injuring twelve people (Saminather, 2018). The Rebel launched the website QuebecTerror.com the next day. The initial page urged visitors to donate to send a Rebel reporter to cover the aftermath. The site, days after its claims had been discredited by other outlets, described the killing as inter-mosque violence based on a mistranslation of a public YouTube video. Rather than presenting itself as a journalistic report, the QuebecTerror website appeared as a conventional email fundraising pitch, depicting a dire reality – in this case a “truth” the mainstream media would not report – solvable through donations.

The language and subject matter of The Rebel’s reporting on the Québec terror attack resemble the tactics of outrage media: inflammatory rhetoric complemented, in this case, by a service to mobilise those emotions (Berry and Sobieraj, 2014). The Rebel’s response thus raises a different problem than journalists being uncomfortable asking for money, as Hunter (2016) notes in a review of crowdfunding in journalism. Here fundraising overtakes reporting; stories are optimised for outrage. The problem is not new, but rather a consequence of the movement of practices between separate fields. Using the news to solicit funds is a known email marketing tactic: emails that reacted to the news had the highest open rates, according to an analysis of Hillary Clinton’s email campaigning (Detrow, 2015). NationBuilder may streamline outrage tactics by channelling user engagement. Its path feature, called a funnel or a ladder in marketing, tries to nudge user behaviour toward certain goals. Taken together, NationBuilder might ease this questionable form of crowdfunding in entrepreneurial journalism and encourage outrage tactics.

These concerns raise a second question: should NationBuilder be used in journalism, especially on hyper-partisan sites or outrage media already blurring the line between reporting, advocacy and fundraising? For its part, fundraising ethics did cause turmoil at The Rebel. The site suffered a scandal when a former correspondent accused it of misusing funds, pointing to a disclaimer on the website that stated, “surplus funds raised for specific initiatives will be used for other costs associated with that particular project, such as website development, website hosting, mail, and other such expenses” (Gordon and Goldsbie, 2017). Seemingly, any campaign fed a general pool of revenue, adding to concerns that certain stories might be juiced to bring in more money.

These first two cases situate NationBuilder as part of the networked press. Ananny (2018) introduced the concept of the networked press to argue that journalism exists within larger sociotechnical systems, of which NationBuilder is a part. Changes or disruption in these systems, evidenced by the rapid uptake of large social networking sites, do not necessarily imply increased press freedom; instead, they require journalists’ practices to acknowledge and adapt to broader infrastructural changes. Just as outlets and journalists need to consider these changes, so too does NationBuilder in understanding how its technology participates in the infrastructure of the networked press. As seen above, NationBuilder already participates in these ethical quandaries, and its emphasis on mobilisation and fundraising may be ill-suited to journalistic outlets. NationBuilder might enable data collection and profiling without sufficient audience consent. It might also tip the balance from journalism to outrage media by being a better tool for fundraising than for publishing stories. How does a firm like NationBuilder recognise its role in facilitating these transfers, particularly the expansion of marketing as the ubiquitous logic of cultural production? Should it ultimately be part of press infrastructure? Does using a political engagement platform ultimately improve journalistic practice? These matters require a more hands-on approach than NationBuilder presently offers.

Illiberal uses of political technology

Act for America engages in identity-based political advocacy targeting American Muslims. Its mission includes immigration reform and combating terrorism. According to the Southern Poverty Law Center, its leadership has questioned the right to citizenship of American Muslims, alluding to mass deportation. Politically, such statements seem at odds with the rules of what political theorist Nancy Rosenblum calls the “regulated rivalry” of liberal democracy. To protect itself, a militant democracy needs to ban parties that, if elected or capable of influencing government, “would implement discriminatory policies or worse: strip opposition religious or ethnic groups of civil or political rights, discriminate against minorities (or majorities), deport despised elements of the population” (Rosenblum, 2008, p. 434). Act for America seems to have engaged in such acts in targeting Muslim Americans.

Figure 5: Act for America website, captured 23 April 2019

NationBuilder then faces a third existential question: should groups that mobilise hate have access to its innovations? Other firms, like PayPal, stopped offering services to Act for America after ProPublica reported on their relationship (Angwin et al., 2017a). While defining hate might be more difficult for an American firm, given the absence of clear hate speech laws in the United States, NationBuilder operates in many countries with clear laws that could guide corporate policy. That these terms are left missing or undefined in 3DNA’s Acceptable Use Policy is troubling.

The more challenging question facing the larger industry is what responsibility service providers bear for the speech acts made on their services. As Whitney Phillips and Ryan Milner (2017) reflect, “it is difficult…to know how best – most effectively, most humanely, most democratically – to respond to online speech that antagonises, marginalises, or otherwise silences others. On one level, this is a logistic question about what can be done… The deeper and more vexing question is what should be done” (2017, p. 201). This vexing question is a lingering one, echoing the origins of modern broadcasting policy, which began with governments and media industries attempting to preserve free speech without propagating hate speech. The American National Association of Broadcasters established a code of conduct in 1939 in part to ban shows like Father Coughlin’s, which aired speeches “plainly calculated or likely to rouse religious or racial hatred and stir up strife” (Miller, 1938, as cited in Brown, 1980, p. 203). The decision did not solve the problem, but it established institutions to consider these normative matters.

NationBuilder is not merely a broadcaster or a communication channel, but a mobilisation tool. The use of NationBuilder by hate groups should trouble the wider political technology industry and the field of political communication. It is part of a tradition in democratic politics that media technology does not just inform publics, but cultivates them. As Sheila Jasanoff notes, American “laws conceived of citizens as being not necessarily knowing but knowledge-able–that is, capable at need of acquiring the knowledge needed for effective self-governance. This idea of an epistemically competent citizen runs through the American political thought from Thomas Jefferson to John Dewey and beyond” (Jasanoff, 2016, p. 239). Communication is about formation as much as information, of cultivating publics. NationBuilder punctuates an existential question for political technology: is it exceptional or mundane? Is it a glorified spreadsheet or a special class of technology? In short, if NationBuilder is an effective tool of political mobilisation, should it effectively mobilise hate?

From corporate social responsibility to liberal democratic responsibility

Finding solutions to the problematic cases above is part of an international debate about platform governance (DeNardis, 2012; Duguay, Burgess, and Suzor, 2018; Gillespie, 2018; Gorwa, 2019). Platform governance refers to the conduct of large information intermediaries and, by extension, the social impacts of publicly accessible and networked computer technology. Where human rights is one emerging value set for platform governance (Kaye, 2019), the international challenge now is to find the appropriate ‘web of influence’ that might address human rights concerns and the numerous regulatory challenges posed by large technology firms (Braithwaite and Drahos, 2000).

Options include external rules – such as fines and penalties through privacy, data protection or election law – and co-regulatory approaches, like codes of conduct and best practices, in addition to self-regulation, specifically corporate social responsibility and responsibilities bestowed for liability protection. Self-regulation dominates the status quo, at least in the US. The rules are largely self-written by platforms, in large part due to their public service obligations under the US Telecommunications Act (Gillespie, 2018). Companies like Facebook have acknowledged a need for change, publicly calling for government regulation (Zuckerberg, 2018). Today, platforms moderate users and conversations, in good faith, under acceptable use rules. Users might be banned, suspended, surveilled, deprioritised or demonetised under these policies (Myers West, 2018). The stakes now involve a debate about the public obligations of platforms and whether they should self-police or be deputised to enforce government rules (DeNardis, 2012; Tusikov, 2017).

Firms like NationBuilder pose even greater regulatory challenges, as the field has historically been free from much oversight or responsibility. Many western democracies did not consider political parties or political data to fall under the jurisdiction of privacy law. Enforcement was also lacking: even where political parties were regulated in Europe, regulators only took their responsibilities seriously after the Facebook/Cambridge Analytica scandal (Bennett, 2015; Howard and Kreiss, 2010). Even with new data protection laws, intermediaries still face limited liability, as enforcement tends to target the user rather than the service provider. Service providers are exempt from liability or penalties for misuse, except in certain cases such as copyright. For its own part, NationBuilder claims zero liability for interactions and hosted content according to its Terms of Service.

Political engagement platforms do face an uncertain global regulatory context. On one hand, they function as service providers largely exempt from liability under existing laws. On the other hand, international law is uneven and changing (for a recent review, see Bennett and Oduro-Marfo, 2019). Public inquiries in the United Kingdom and Canada have focused more closely on these companies, and their status may be changing. A joint investigation of AggregateIQ by the Privacy Commissioner of Canada and the Information and Privacy Commissioner for British Columbia found that the third-party service provider “had a legal responsibility to check that the third-party consent on which they were relying applied to the activities they subsequently performed with that data” (2019, p. 22). The implication is that AiQ had a corporate responsibility to abide by privacy laws in the provision of its services. The same likely holds for NationBuilder.

Amidst regulatory uncertainty, corporate social responsibility might be the most immediate remedy to questionable uses of NationBuilder. Its mission today might be read as a ‘functionalist business ethics’ that believes the product in and of itself is a social good and that more access, or more sales, improves the quality of elections. Other approaches to corporate social responsibility, by contrast, favour an integrative business ethics in which “a company’s responsibilities are not merely restricted in one way or another to the profit principle alone but to sound and critical ethical reasoning” (Busch and Shepherd, 2014, p. 297). Where future debates might require consideration of NationBuilder’s obligations to liberal democracy, the next section considers how NationBuilder’s mission and philosophy might be clarified through the company’s acceptable use policy. NationBuilder might not have to become partisan, but it cannot be neutral toward the institutions of liberal democracy, at least if it wants to continue to believe in its mission to revolutionise politics.

Revising the Acceptable Use Policy is possible and has happened before. Clearly stating the relationship between its mission and prohibited uses would reverse past amendments that narrowed corporate responsibilities. The Acceptable Use Policy as of August 2019, last updated 1 May 2018, is more open than prior iterations. Most bans concern computer security, prohibiting uses that overload infrastructure or access data without authorisation. The policy does prohibit “possessing or disseminating child pornography, facilitating sex trafficking, stalking, troll storming, threatening imminent violence, death or physical harm to any individual or group whose individual members can reasonably be identified, or inciting violence”. Until 2014, 3DNA covered acceptable use as part of its Terms of Service; afterwards it became a separate document. The Terms of Service agreement from 29 March 2011 banned specific user content, including “any information or content that we deem to be unlawful, harmful, abusive, racially or ethnically offensive, defamatory, infringing, invasive of personal privacy or publicity rights, harassing, humiliating to other people (publicly or otherwise), libellous, threatening, profane, or otherwise objectionable” as well as a ban on posting incorrect information. These clauses were removed in the 2014 update, which reduced prohibited uses to 15, and have slowly been added back since. The most recent Acceptable Use Policy, as of 1 May 2018, had 31 prohibited uses, restoring clauses regulating user activities.

Recommendation #1: Reconcile its mission statement with its prohibited uses

NationBuilder’s Mission is to connect anyone regardless of “race, age, class, religion, educational background, ideology, gender, sexual orientation or party”. By contrast, its Acceptable Use Policy does not consider the positive freedoms implied in this mission, which could conceivably prohibit campaigns aimed at excluding people from participating in politics. A revised Acceptable Use Policy should apply the implications of its corporate mission to its prohibited uses. Act for America, for example, targets its opponents by race and advocates for greater policing, terrorism laws and immigration enforcement that could disproportionately affect Muslim Americans, acting against NationBuilder’s vision of “a world where everyone has the freedom and opportunity to create what they are meant to create”. Revision might prohibit campaigns or parties targeting assigned identities like race, age, gender or sexual orientation, particularly when messages incite hate, while preserving customers’ right to campaign against ideology, party or other chosen or elective politicised issues. To achieve such a mission, NationBuilder may have to restrict access on political grounds (also called de-platforming) or restrict certain features. 5

Harmonising its position on political freedom may prompt industry-wide reflection on the function of political technology. How do these services protect the liberal democratic institutions they ostensibly promise to disrupt? In finding shared values, NationBuilder has to consider its place in a partisan field. Can it navigate between parties to describe ethical campaigning, or, alternatively, must it find other companies with shared nonpartisan or libertarian values? The likely outcome either way is a code of conduct for digital campaigning similar to the Alliance of Democracies Pledge for Election Integrity or the codes of conduct of the American Association of Political Consultants or European Association of Political Consultants that discourage campaigns based on intolerance and discrimination. In doing so, NationBuilder might force partisan firms to be more explicit about their professional ethics.

Recommendation #2: Require disclosure on customers’ websites

NationBuilder should disclose when it is used even if it cannot decide if it should be used. Two out of the three questionable uses might have benefitted from the organisations’ disclosing their use of the political engagement platform, especially when used in journalism. At a minimum, NationBuilder should require sites to disclose that they use NationBuilder, ideally through an icon or other disclosure in the page’s footer that might create the possibility of public awareness (Ezrahi, 1999). NationBuilder might also consider requiring users to disclose what tracking features, such as Match and Political Capital, are enabled on the website, not unlike the cookie notices required under Europe’s Cookie Law that disclose a site’s use of tracking tools.

NationBuilder might further standardise the reporting of uses found in its annual report and potentially release data in a separate report. Transparency reports have become an important, albeit imperfect, reporting tool in telecommunications and social media industries (Parsons, 2019). These reports, ideally, would continue the preliminary method used in this paper, breaking down NationBuilder’s use by industry over time and potentially expanding data collection to include other trends such as use by country, use by party and the popularity of features. Such proactive disclosure might also normalise greater transparency in a political technology industry known for its secrecy.

Recommendation #3: Clarify relationship to domestic privacy law

A revised acceptable use policy might define NationBuilder’s expectations for privacy rights, both to explain its normative vision for privacy and to improve its customers’ implementation of local privacy law. At present, the acceptable use policy prohibits applications that “infringe or violate the intellectual property rights (including copyrights), privacy rights or any other rights of anyone else (including 3DNA)”. The clause does not clarify the meaning of privacy rights or their jurisdiction. Elsewhere 3DNA states that all its policies “are governed by the internal substantive laws of the State of California, without respect to its conflict of laws principles”. Such ambiguity confuses a clear interpretation of the privacy rights, law and regulation mentioned in the policy. A revised clause should state NationBuilder’s position on privacy as a human right, in such a way that it provides some guidance as to whether local law meets its standards, and deny access in countries that do not meet its privacy expectations. Further, the acceptable use policy should clarify that NationBuilder expects customers to abide by local privacy law and, in major markets, whether it has any reporting obligations to privacy offices.

Clarifying its position on privacy rights recognises the important function NationBuilder plays in educating its customers on the law. NationBuilder may help implement the “proactive guidance on best campaigning practices” recommended by Bennett and Oduro-Marfo (2019, p. 54). For its GDPR compliance, NationBuilder has built a blog and offers many educational resources to help customers understand how to campaign online and respect the law. These posts clearly state that they are not legal advice, but they do help to interpret the law for practitioners. Similar posts could help clients understand whether they should disable certain features in NationBuilder, such as Match or Political Capital, to comply with their domestic privacy law. Revisions to its Acceptable Use Policy might be another avenue for NationBuilder to educate its customers.

Adding privacy to its corporate mission may be a further signal of NationBuilder’s corporate responsibility. NationBuilder has an altogether different relationship to customer privacy than advertising-based technology firms. Its revenues come from being a service provider and securing data. With growing pressure on political parties to improve their cyber-security, NationBuilder can help its clients better protect their voter data as well as call for better privacy protection in politics overall. Indeed, NationBuilder could advocate for privacy law to apply to its political clients, to both simplify its regulatory obligations and reduce risk. Improving privacy may lessen its institutional risk of being associated with major privacy violations as well as simplify the complex work of setting privacy rules on its own. As such, NationBuilder might become a global advocate for better privacy and data protection, a role that remains unfulfilled long after public controversy.

Conclusion

This paper has reported the results of empirical research on the acceptable use of a political technology. The results demonstrate that political technologies have questionable uses within politics. Specifically, when does a political movement exceed the limits of liberal democratic discourse? When are its uses in journalism and advertising unacceptable? The experiment demonstrates that harms to liberal democracy can be a reasonable way to judge technological risks. Liberal democratic norms are another factor to consider in the wider study of software and technological accountability (Johnson and Mulvey, 1995; Nissenbaum, 1994). These concerns have a long history. Norbert Wiener, who helped develop digital computing, warned against its misuse in Cold War America for the management of people (Wiener, 1966, p. 93). More recently, science and technology scholar Sheila Jasanoff (2016) questions whether the benefits of technological innovation outweigh the risks of global catastrophe, inequality, and harms to human dignity. While catastrophic global devastation is commonly seen as a questionable use of technology (unless it concerns the climate), there is less consensus about how technology might undermine democracy, of which liberal democracy is just one set of norms. Which democracy should be defended is debated, with fault lines drawn between representative, direct and deliberative democracy as well as between liberal and republican traditions (Karppinen, 2013). My method helps to clarify this debate by inductively finding uses that might challenge many theories of democracy. Further research could extend the analysis to focus on the particular concerns of different forms of democracy and democratic theories.

My specific recommendations for NationBuilder may improve the accountability of the political technology industry at large. Oversight is a major problem in the accountability of political platforms. My methods could easily be scaled to observe more companies and countries. No doubt privacy, information and election regulators could implement this approach as part of their situational awareness. The questionable uses identified here then offer signs to watch for:

  1. Does the technology facilitate or ease deceptive or non-consensual data collection?
  2. Does the technology undermine journalistic standards or neglect its role in the networked press?
  3. Does the technology facilitate the mobilisation of hate groups?

Where remedies to these challenges may be unclear, ongoing monitoring could at the very least identify potential harms sooner than academic research.
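The monitoring approach suggested above – identifying which organisations run a given political technology and tallying those uses over time – can be sketched in a few lines. The fingerprint strings and category labels below are purely illustrative assumptions for this sketch, not the paper’s actual detection method or any documented NationBuilder signature.

```python
# Hypothetical sketch: flag pages that appear to run on a platform by scanning
# their HTML for assumed fingerprints, then tally detections per analyst-assigned
# category (mirroring usage-by-industry reporting). Marker strings are invented
# for illustration only.

ASSUMED_MARKERS = (
    "nationbuilder.com",  # assumption: assets or links served from the platform domain
)

def appears_to_use_platform(html: str) -> bool:
    """Return True if any assumed fingerprint appears in the page HTML."""
    lowered = html.lower()
    return any(marker in lowered for marker in ASSUMED_MARKERS)

def tally_by_category(pages: list[tuple[str, str]]) -> dict[str, int]:
    """Count detected sites per category, given (category, html) pairs."""
    counts: dict[str, int] = {}
    for category, html in pages:
        if appears_to_use_platform(html):
            counts[category] = counts.get(category, 0) + 1
    return counts
```

A regulator scaling this up would of course need verified fingerprints, crawling infrastructure, and human coding of categories; the sketch only shows that the bookkeeping itself is trivial once those inputs exist.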

Questionable uses of NationBuilder should trouble the company as well as the larger political technology industry and the field of political communication. Faith in political technologies has changed campaign practice in many democracies and attracted ongoing international regulatory attention concerned with trust and fairness during elections. Technologies like NationBuilder are premised on the value of communications to political engagement. They are designed to increase engagement and improve efficiency. NationBuilder and its peers are a special class of political technology, and their obligations to liberal democratic values should thus be scrutinised. If 3DNA, a company seeking to better politics, suffers these abuses, then what will come from political firms with less idealism?

Acknowledgements

The author wishes to acknowledge Colin Bennett, the Surveillance Studies Centre, the Office of the Information and Privacy Commissioner for British Columbia, and Commissioner Michael McEvoy for organising the research workshop on data-driven elections. In addition, the author extends thanks to Mike Miller, the Social Science Research Council, Erika Franklin Fowler, Sarah Anne Ganter, Natali Helberger, Shannon McGregor, Rasmus Kleis Nielsen and especially Dave Karpf and Daniel Kreiss for organising the 2019 International Communication Association post-conference, “The Rise of Platforms”, where versions of this paper were presented and received helpful feedback. Sincere thanks to the anonymous reviewers, Frédéric Dubois, Robert Hunt, Tom Hackbarth and especially Colin Bennett for their feedback and suggestions.

References

Adams, K., Barrett, B., Miller, M., & Edick, C. (2019). The Rise of Platforms: Challenges, Tensions, and Critical Questions for Platform Governance [Report]. New York: Social Science Research Council. https://doi.org/10.35650/MD.2.1971.a.08.27.2019

Ananny, M. (2018). Networked press freedom: creating infrastructures for a public right to hear. Cambridge, MA: The MIT Press.

Angwin, J., Larson, J., Varner, M., & Kirchner, L. (2017a, August 19). Despite Disavowals, Leading Tech Companies Help Extremist Sites Monetize Hate. ProPublica. Retrieved from https://www.propublica.org/article/leading-tech-companies-help-extremist-sites-monetize-hate

Angwin, J., Larson, J., Varner, M., & Kirchner, L. (2017b, August 19). How We Investigated Technology Companies Supporting Hate Sites. ProPublica. Retrieved from https://www.propublica.org/article/how-we-investigated-technology-companies-supporting-hate-sites

Baldwin-Philippi, J. (2015). Using technology, building democracy: digital campaigning and the construction of citizenship. New York: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780190231910.001.0001

Baldwin-Philippi, J. (2017). The Myths of Data-Driven Campaigning. Political Communication, 34(4), 627–633. https://doi.org/10.1080/10584609.2017.1372999

Bennett, C. (2015). Trends in voter surveillance in western societies: privacy intrusions and democratic implications. Surveillance & Society, 13(3/4), 370–384. https://doi.org/10.24908/ss.v13i3/4.5373

Bennett, C. J., & Oduro-Marfo, S. (2019, October). Privacy, Voter Surveillance, and Democratic Engagement: Challenges for Data Protection Authorities. 2019 International Conference of Data Protection and Privacy Commissioners (ICDPPC), Greater Victoria. Retrieved from https://web.archive.org/web/20191112101932/https:/icdppc.org/wp-content/uploads/2019/10/Privacy-and-International-Democratic-Engagement_finalv2.pdf

Berry, J. M., & Sobieraj, S. (2016). The outrage industry: political opinion media and the new incivility. New York: Oxford University Press.

Bodó, B., Helberger, N., & de Vreese, C. H. (2017). Political micro-targeting: a Manchurian candidate or just a dark horse? Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.776

Braun, J. A., & Eklund, J. L. (2019). Fake News, Real Money: Ad Tech Platforms, Profit-Driven Hoaxes, and the Business of Journalism. Digital Journalism, 7(1), 1–21. https://doi.org/10.1080/21670811.2018.1556314

Brown, J. A. (1980). Selling airtime for controversy: NAB self‐regulation and Father Coughlin. Journal of Broadcasting, 24(2), 199–224. https://doi.org/10.1080/08838158009363979

Busch, T., & Shepherd, T. (2014). Doing well by doing good? Normative tensions underlying Twitter’s corporate social responsibility ethos. Convergence, 20(3), 293–315. https://doi.org/10.1177/1354856514531533

Cadwalladr, C., & Graham-Harrison, E. (2018, March 17) How Cambridge Analytica Turned Facebook ‘Likes’ into a Lucrative Political Tool. The Guardian. Retrieved from https://www.theguardian.com/technology/2018/mar/17/facebook-cambridge-analytica-kogan-data-algorithm.

Daniels, J. (2009). Cloaked websites: propaganda, cyber-racism and epistemology in the digital era. New Media & Society, 11(5), 659–683. https://doi.org/10.1177/1461444809105345

Davies, H. (2015, December 11). Ted Cruz campaign using firm that harvested data on millions of unwitting Facebook users. The Guardian. Retrieved from https://www.theguardian.com/us-news/2015/dec/11/senator-ted-cruz-president-campaign-facebook-user-data

D’Aprile, S. (2011, September 25). Judge Ends Aristotle Advertising Case. Campaigns & Elections. Retrieved from http://www.campaignsandelections.com/campaign-insider/259782/judge-ends-aristotle-advertising-case.thtml

DeNardis, L. (2012). Hidden Levers of Internet Control. Information, Communication & Society, 15(5), 720–738. https://doi.org/10.1080/1369118X.2012.659199

Detrow, S. (2015, December 15). “Bill Wants To Meet You”: Why Political Fundraising Emails Work. All Things Considered, NPR. Retrieved from https://www.npr.org/2015/12/15/459704216/bill-wants-to-meet-you-why-political-fundraising-emails-work

Drinkwater, R. (2016, October 17). Ezra Levant’s Rebel Media denied UN media accreditation. Macleans. Retrieved from https://www.macleans.ca/news/canada/ezra-levant-rebel-media-denied-un-media/

Duguay, S., Burgess, J., & Suzor, N. (2018). Queer women’s experiences of patchwork platform governance on Tinder, Instagram, and Vine: Convergence. https://doi.org/10.1177/1354856518781530

Eatwell, R., & Mudde, C. (Eds.). (2004). Western democracies and the new extreme right challenge. New York: Routledge.

Edmiston, J. (2016, February 17). Alberta NDP says ‘it’s clear we made a mistake’ in banning Ezra Levant’s The Rebel. National Post. Retrieved from https://nationalpost.com/news/politics/alberta-ndps-ban-on-rebel-reporters-to-stay-for-at-least-two-weeks-while-it-reviews-policy-government-says

Elmer, G., Langlois, G., & McKelvey, F. (2012). The Permanent Campaign: New Media, New Politics. New York: Peter Lang.

Ezrahi, Y. (1999). Dewey’s Critique of Democratic Visual Culture and Its Political Implications. In D. Kleinberg-Levin (Ed.), Sites of Vision: The Discursive Construction of Sight in the History of Philosophy (pp. 315–336). Cambridge, MA: The MIT Press.

Freelon, D. G. (2010). ReCal: intercoder reliability calculation as a Web service. International Journal of Internet Science, 5(1), 20–33. Retrieved from https://www.ijis.net/ijis5_1/ijis5_1_freelon.pdf

Gillespie, T. (2007). Wired Shut: Copyright and the Shape of Digital Culture. Cambridge, MA: The MIT Press.

Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. New Haven: Yale University Press.

Gilliam, J. (2016, November 17). Choosing to lead. Retrieved from https://nationbuilder.com/choosing_to_lead

Gordon, G., & Goldsbie, J. (2017, August 17). Ex-Rebel Contributor Makes Explosive Claims In YouTube Video. CANADALAND. Retrieved from https://www.canadalandshow.com/caolan-robertson-why-left-rebel/

Gorwa, R. (2019). What is platform governance? Information, Communication & Society, 22(6). https://doi.org/10.1080/1369118X.2019.1573914

Holcombe, M. (2019, January 13). GoFundMe to refund the $20 million USD raised for the border wall. CNN. Retrieved from https://www.cnn.com/2019/01/12/us/border-wall-gofundme-refund/index.html

Horowitz, B. (2012, March 8). How to Start a Movement [Blog post]. Retrieved from http://www.bhorowitz.com/how_to_start_a_movement

Howard, P. N. (2006). New Media Campaigns and the Managed Citizen. Cambridge: Cambridge University Press.

Howard, P. N., & Kreiss, D. (2010). Political parties and voter privacy: Australia, Canada, the United Kingdom, and United States in comparative perspective. First Monday, 15(12). https://doi.org/10.5210/fm.v15i12.2975

Hunter, A. (2016). “It’s Like Having a Second Full-Time Job”: Crowdfunding, journalism, and labour. Journalism Practice, 10(2), 217–232. https://doi.org/10.1080/17512786.2015.1123107

Information Commissioner’s Office. (2018). Democracy disrupted? Personal information and political influence. Information Commissioner’s Office. https://ico.org.uk/media/2259369/democracy-disrupted-110718.pdf

Jasanoff, S. (2016). The Ethics of Invention: Technology and the Human Future. New York: W.W. Norton & Company.

Johnson, D. G., & Mulvey, J. M. (1995). Accountability and computer decision systems. Communications of the ACM, 38(12), 58–64. https://doi.org/10.1145/219663.219682

Johnson, D. W. (2016). Democracy for Hire: A History of American Political Consulting. New York: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780190272692.001.0001

Karpf, D. (2016a). Analytic activism: digital listening and the new political strategy. New York: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780190266127.001.0001

Karpf, D. (2016b). The partisan technology gap. In E. Gordon & P. Mihailidis (Eds.), Civic media: technology, design, practice (pp. 199–216). Cambridge, MA; London: The MIT Press.

Karpf, D. (2018). The many faces of resistance media. In D. S. Meyer & S. Tarrow (Eds.), The Resistance: The Dawn of the Anti-Trump Opposition Movement (pp. 143–161). New York: Oxford University Press. https://doi.org/10.1093/oso/9780190886172.003.0008

Karppinen, K. (2013). Uses of democratic theory in media and communication studies. Observatorio, 7(3), 1–17. Retrieved from http://www.scielo.mec.pt/scielo.php?script=sci_arttext&pid=S1646-59542013000300001&lng=en&nrm=iso

Kaye, D. (2019). Speech police: The global struggle to govern the Internet. New York: Columbia Global Reports.

Kim, Y. M., Hsu, J., Neiman, D., Kou, C., Bankston, L., Kim, S. Y., … Raskutti, G. (2018). The Stealth Media? Groups and Targets behind Divisive Issue Campaigns on Facebook. Political Communication, 25(4), 515–541. https://doi.org/10.1080/10584609.2018.1476425

Kreiss, D. (2016). Prototype politics: technology-intense campaigning and the data of democracy. New York: Oxford University Press.

Kreiss, D. (2017). Micro-targeting, the quantified persuasion. Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.774

Kreiss, D., & Jasinski, C. (2016). The Tech Industry Meets Presidential Politics: Explaining the Democratic Party’s Technological Advantage in Electoral Campaigning, 2004–2012. Political Communication, 1–19. https://doi.org/10.1080/10584609.2015.1121941

Kreiss, D., & Mcgregor, S. C. (2018). Technology Firms Shape Political Communication: The Work of Microsoft, Facebook, Twitter, and Google With Campaigns During the 2016 U.S. Presidential Cycle. Political Communication, 35(2), 155–177. https://doi.org/10.1080/10584609.2017.1364814

Levy, S. (2002). Crypto: Secrecy and Privacy in the New Code War. London: Penguin.

Liao, S. (2019, March 22). GoFundMe pledges to remove anti-vax campaigns. The Verge. Retrieved from https://www.theverge.com/2019/3/22/18277367/gofundme-anti-vax-campaigns-remove-pledge

Marland, A. (2016). Brand Command: Canadian Politics and Democracy in the Age of Message Control. Vancouver: University of British Columbia Press.

McEvoy, M. (2019, February 6). Full Disclosure: Political parties, campaign data, and voter consent [Investigation Report No. P19-01]. Victoria: Office of the Information and Privacy Commissioner for British Columbia. Retrieved from https://www.oipc.bc.ca/investigation-reports/2278

McEvoy, M., & Therrien, D. (2019). AggregateIQ Data Services Ltd. [Investigation Report No. P19-03 PIPEDA-035913; p. 29]. Victoria; Gatineau: Office of the Information and Privacy Commissioner for British Columbia; Office of the Privacy Commissioner of Canada. https://www.oipc.bc.ca/investigation-reports/2363

McKelvey, F. (2011). A Programmable Platform? Drupal, Modularity, and the Future of the Web. The Fibreculture Journal, (18), 232–254. Retrieved from http://eighteen.fibreculturejournal.org/2011/10/09/fcj-128-programmable-platform-drupal-modularity-and-the-future-of-the-web/

McKelvey, F., & Piebiak, J. (2018). Porting the political campaign: The NationBuilder platform and the global flows of political technology. New Media & Society, 20(3), 901–918. https://doi.org/10.1177/1461444816675439

Mosco, V. (2004). The Digital Sublime: Myth, Power, and Cyberspace. Cambridge: The MIT Press.

Mouffe, C. (2005). The Return of the Political. New York: Verso.

Myers West, S. (2018). Censored, suspended, shadowbanned: User interpretations of content moderation on social media platforms. New Media & Society, 20(11), 4366–4383. https://doi.org/10.1177/1461444818773059

NationBuilder. (n.d.). NationBuilder mission and beliefs. NationBuilder. Retrieved January 7, 2020, from https://nationbuilder.com/mission

Nissenbaum, H. (1994). Computing and Accountability. Communications of the ACM, 37(1), 72–80. https://doi.org/10.1145/175222.175228

Parsons, C. (2019). The (In)effectiveness of Voluntarily Produced Transparency Reports. Business & Society, 58(1), 103–131. https://doi.org/10.1177/0007650317717957

Phillips, W., & Milner, R. M. (2017). The ambivalent Internet: mischief, oddity, and antagonism online. Malden: Polity.

Porlezza, C., & Splendore, S. (2016). Accountability and Transparency of Entrepreneurial Journalism. Journalism Practice, 10(2), 196–216. https://doi.org/10.1080/17512786.2015.1124731

Price, M. (2017, August 16). Why We Terminated Daily Stormer [Blog post]. Retrieved from https://blog.cloudflare.com/why-we-terminated-daily-stormer/

Rafter, K. (2016). Introduction: understanding where entrepreneurial journalism fits in. Journalism Practice, 10(2), 140–142. https://doi.org/10.1080/17512786.2015.1126014

Roberts, S. T. (2019). Behind the screen: content moderation in the shadows of social media. New Haven: Yale University Press.

Rosenblum, N. L. (2008). On the side of the angels: an appreciation of parties and partisanship. Princeton: Princeton University Press.

Saminather, N. (2018, August 10). Factbox: Canada’s biggest mass shootings in recent history. Reuters. Retrieved from https://www.reuters.com/article/us-canada-shooting-factbox-idUSKBN1KV2BO

Stark, L. (2018). Algorithmic psychometrics and the scalable subject. Social Studies of Science, 48(2), 204–231. https://doi.org/10.1177/0306312718772094

Streeter, T. (2011). The Net Effect: Romanticism, Capitalism, and the Internet. New York: New York University Press.

Tusikov, N. (2019). Defunding Hate: PayPal’s Regulation of Hate Groups. Surveillance & Society, 17(1/2), 46–53. https://doi.org/10.24908/ss.v17i1/2.12908

Westcott, P. (2016, September 23). Targeted Facebook advertising made possible from L2 and Strategic Media 21 [Blog post]. Retrieved from http://www.l2political.com/blog/2016/09/23/targeted-facebook-advertising-made-possible-from-l2-and-strategic-media-21/

White, H. B. (1961). The Processed Voter and the New Political Science. Social Research, 28(2), 127–150. https://www.jstor.org/stable/40969367

Wong, J. C. (2019, August 23). Document reveals how Facebook downplayed early Cambridge Analytica concerns. The Guardian. Retrieved from https://www.theguardian.com/technology/2019/aug/23/cambridge-analytica-facebook-response-internal-document

Footnotes

1. Promoting new media activism that shames companies for advertising on certain sites, a kind of corporate social responsibility for ad spending (Karpf, 2018).

2. The studies in ongoing reports can be found at: https://citizenlab.ca/2017/02/bittersweet-nso-mexico-spyware/

3. The company provides customers with this data for a fee. Most customers are web technology firms looking for information on who uses their competitors.

4. The 2017 annual report re-categorised its usage statistics using active verbs, such as win or engage, rather than industry. As a result, there is no way to determine usage trends over time. The 2017 annual report also includes a curious ‘Other’ category without much detail. The 2018 report abandoned reporting by industry altogether.

5. See Chapter 7 in Phillips and Milner, 2017 for a good summary of the challenge of public debate and moderation.


Data-driven political campaigns in practice: understanding and regulating diverse data-driven campaigns


This paper is part of Data-driven elections, a special issue of Internet Policy Review guest-edited by Colin J. Bennett and David Lyon.

Introduction

Data has become an important part of how we understand political campaigns. In reviewing coverage of elections – particularly in the US – the idea that political parties and campaigners now utilise data to deliver highly targeted, strategic and successful campaigns is readily found. In academic and non-academic literature, it has been argued that “[i]n countries around the world political parties have built better databases, integrated online and field data, and created more sophisticated analytic tools to make sense of these traces of the electorate” (Kreiss and Howard, 2010, p. 1; see also in t’Veld, 2017, pp. 2-3). These tools are reported to allow voters to “be monitored and targeted continuously and in depth, utilising methods intricately linked with and drawn from the commercial sector and the vast collection of personal and individual data” (Kerr Morrison, Naik, and Hankey, 2018, p. 11). The Trump campaign in 2016 is accordingly claimed to have “target[ed] 13.5 million persuadable voters in sixteen battleground states, discovering the hidden Trump voters, especially in the Midwest” (Persily, 2017, p. 65). On the basis of such accounts, it appears that data-driven campaigning is coming to define electoral practice – especially in the US – and is now key to understanding modern campaigns.

Yet, at the same time, important questions have been raised about the sophistication and uptake of data-driven campaign tools. As Baldwin-Philippi (2017) has argued, there are certain “myths” about data-driven campaigning. Studying campaigning practices Baldwin-Philippi has shown that “all but the most sophisticated digital and data-driven strategies are imprecise and not nearly as novel as the journalistic feature stories claim” (2017, p. 627). Hersh (2015) has also shown that the data that parties possess about voters is not fine-grained, and tends to be drawn from public records that contain certain standardised information. Moreover, Bennett has highlighted the significant incentive that campaign consultants and managers have to emphasise the sophistication and success of their strategies, suggesting that campaigners may not be offering an accurate account of current practices (2016, p. 261; Kreiss and McGregor, 2018).

These competing accounts raise questions about the nature of data-driven campaigning and the extent to which common practices in data use are found around the globe. These ideas are conceptually important for our understanding of developments in campaigning, but they also have significance for societal responses to the practice of data-driven campaigning. With organisations potentially adopting different data-driven campaigning practices, it is important to ask which forms of data use are seen to be democratically acceptable or problematic. 1 These questions are particularly important given the recent interest from international actors and politicians in understanding and responding to the use of data analytics (Information Commissioner's Office, 2018a), and specifically practices at Facebook (Kang et al., 2018). Despite growing pressure from these actors to curtail problematic data-driven campaigning practices, it is as yet unclear precisely what is unacceptable and how prevalent these practices are in different organisations and jurisdictions. For these reasons, there is a need to understand more about data-driven campaigning.

To generate this insight, in this article I pose the question: “what practices characterise data-driven campaigning?” and develop a comparative analytical framework that can be used to understand, map and consider responses to data-driven campaigning. Identifying three facets of this question, I argue that there can be variations in who is using data in campaigns, what the sources of data are, and how data informs communication in campaigns. Whilst not exhaustive, these questions and the categories they inspire are used to outline the diverse practices that constitute data-driven campaigning within single organisations and across different organisations and countries. It is argued that understanding who is using data, what data is being used, and how it informs communication is critical to debates around the democratic acceptability of data-driven campaigning and provides essential insight when contemplating a regulatory response.

This analysis and the frameworks it inspires have been developed following extensive analysis of the UK case. Drawing on a three-year project exploring the use of data-driven campaigning within political parties, the analysis discusses often overlooked variations in how data is used. In highlighting these origins I contend that these questions are not unique to the UK case, but can inspire analysis around the globe and in different organisations. Indeed, as I will discuss below, this form of inquiry is to be encouraged as comparative analysis makes it possible to explore how different legal, institutional and cultural contexts affect data-driven campaigning practices. Furthermore, analysis of different kinds of organisation makes it possible to understand the extent to which party practices are unique. Although this article is therefore inspired by a particular context and organisational type, the questions and frameworks it provides can be used to unpack and map the diversity of data-driven campaigning practices, providing conceptual clarity able to inform a possible regulatory response.

Data and election campaigns

The relationship between data and election campaigns is well established, particularly in the context of political parties. Describing the focus of party campaigning, Dalton, Farrell and McAllister (2013) outline the longstanding interest parties have in collecting data that can be analysed to (attempt to) achieve electoral success. In their account, “candidates and party workers meet with individual voters, and develop a list of people’s voting preferences. Then on election day a party worker knocks on the doors of prospective supporters at their homes to make sure they cast their ballot and often offers a ride to the polls if needed” (p. 56). Whilst parties in different contexts are subject to different regulations and norms that affect the data they can collect and use (Kreiss and Howard, 2010), it is common for them to be provided with information by the state about voters’ age, registered status and turnout history (Hersh, 2015). In addition, parties then tend to gather their own data about voter interests, voting preferences and degree of support, allowing them to build large data sets and email lists at national and local levels. Although regulated – most notably through the General Data Protection Regulation (GDPR), which outlines rules in Europe for how data can be collected, used and stored – parties’ use of data is often seen to be democratically permissible as it enables participation and promotes an informed citizenry.

In recent history, the use of data by parties is seen to have shifted significantly, making it unclear how campaigns are organised and whether they are engaging in practices that may not be democratically appropriate. In characterising these practices, two very different accounts of data use have emerged. On the one hand, scholars such as Gibson, Römmele and Williamson (2014) have argued that parties now adopt data-driven campaigns that “focus on mining social media platforms to improve their voter profiling efforts” (p. 127). From this perspective, parties are now often seen to be routinely using data to gain information, communicate and evaluate campaign actions.

In terms of information, it has been argued that data-driven campaigning draws on new sources of data (often from social media and online sources) to allow parties to search for patterns in citizens’ attitudes and behaviours. Aggregating data from many different sources at a level hitherto impossible, data-driven campaigning techniques are seen to allow parties to use techniques common in the commercial sector to “construct predictive models to make targeting campaign communications more efficient” (Nickerson and Rogers, 2014, p. 54; Castleman, 2016; Hersh, 2015, p. 28). Similarly, attention has been directed to the capacity to use algorithms to identify “look-alike audiences” (Tactical Tech, 2019, pp. 37-69), 2 allowing campaigners to find new supporters who possess the same attributes as those already pledged to a campaign (Kreiss, 2017, p. 5). Data-driven campaigning techniques are therefore seen to offer campaigns additional information with minimal investment of resources (as one data analyst becomes able to find as many target voters as an army of grassroots activists) (Dobber et al., 2017, p. 4).
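The predictive modelling and “look-alike audience” logic described above can be illustrated with a deliberately simplified sketch. This is not any platform's or vendor's actual algorithm, and all voter IDs and attribute vectors below are invented: unpledged voters are simply ranked by cosine similarity to the mean attribute profile of a “seed” audience of known supporters.

```python
# Toy look-alike audience sketch: rank non-supporters by similarity to the
# average attribute profile ("centroid") of a seed audience of known supporters.
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length attribute vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def lookalike_audience(seed, pool, k):
    """Return ids of the k pool members most similar to the seed centroid."""
    dims = len(seed[0])
    centroid = [sum(v[i] for v in seed) / len(seed) for i in range(dims)]
    ranked = sorted(pool.items(), key=lambda kv: cosine(kv[1], centroid), reverse=True)
    return [voter_id for voter_id, _ in ranked[:k]]

# Invented attribute vectors: [normalised age, turnout rate, issue interest]
seed = [[0.30, 0.90, 0.80], [0.35, 0.80, 0.90]]   # known supporters
pool = {"v1": [0.32, 0.85, 0.85],                  # unpledged voters to score
        "v2": [0.90, 0.10, 0.10],
        "v3": [0.40, 0.70, 0.80]}
print(lookalike_audience(seed, pool, k=2))  # ['v1', 'v3']
```

Commercial implementations use far richer proprietary data and models over hundreds of attributes, but the underlying idea is the same: find people who resemble those already pledged to the campaign.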

In addition, data-driven campaigning has facilitated targeted communication (Hersh, 2015, pp. 1-2), allowing particular messages to be conveyed to certain kinds of people. These capacities are seen to enable stratified campaign messaging, allowing personalised messages that can be delivered quickly through cheap and easy-to-use online (and offline) interfaces. Data-driven campaigning has therefore been reported to allow campaigners to “allocate their finite resources more efficiently” (Bennett, 2016, p. 265), “revolutioniz[ing] the process” of campaigning (International IDEA, 2018, p. 7; Chester and Montgomery, 2017).

It has also been claimed that data-driven campaigning enables parties to evaluate campaign actions and gather feedback in a way previously not possible. Utilising message-testing techniques such as A/B testing, and monitoring response rates and social media metrics, campaigners are seen to be able to use data to analyse – in real time – the impact of campaign actions. Whether monitoring the effect of an email title on the likelihood that it is opened by recipients (Nickerson and Rogers, 2014, p. 57), or testing the wording that makes a supporter most likely to donate funds, data can be gathered and analysed by campaigns seeking to test whether their interventions work (Kreiss and McGregor, 2018, pp. 173-4; Kerr Morrison et al., 2018, p. 12; Tactical Tech, 2019). 3
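The kind of message testing described here can be made concrete with a minimal sketch (all the open counts below are invented for illustration): a two-proportion z-test comparing the open rates of two email subject lines.

```python
# Minimal A/B test sketch: did subject line A produce a significantly higher
# open rate than subject line B? (Two-proportion z-test on invented figures.)
from math import sqrt, erf

def two_proportion_z(opens_a, sent_a, opens_b, sent_b):
    """Return (z statistic, two-sided p-value) for the difference in rates."""
    rate_a, rate_b = opens_a / sent_a, opens_b / sent_b
    pooled = (opens_a + opens_b) / (sent_a + sent_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (rate_a - rate_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_z(opens_a=450, sent_a=5000, opens_b=380, sent_b=5000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests the difference is real
```

In practice campaigns run many such tests continuously, which is what makes real-time evaluation of wording, imagery and timing possible at scale.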

These new capacities are often highlighted in modern accounts of campaigning and suggest that there has been significant and rapid change in the activities of campaigning organisations. Whilst prevalent, this idea has, however, been challenged by a small group of scholars who have offered a more sceptical account, arguing that “the rhetoric of data-driven campaigning and the realities of on-the-ground practices” are often misaligned (Baldwin-Philippi, 2017, p. 627).

The sceptical account

A number of scholars of campaign practice have questioned the idea that elections are characterised by data-driven campaigning and have highlighted a gulf between the rhetoric and reality of practices here. Nielsen, for example, has shown that whilst data-driven tools are available, campaigns continue to rely primarily on “mundane tools” (2010, p. 756) such as email to organise their activities. Hersh also found that, in practice, campaigns do not possess “accurate, detailed information about the preference and behaviours of voters” (2015, p. 11), but rely instead on relatively basic, publicly available data points. Similar observations led Baldwin-Philippi to conclude that the day-to-day reality of campaigning is “not nearly as novel as the journalistic feature stories claim” as “campaigns often do not execute analytic-based campaigning tactics as fully or rigorously as possible” (2017, p. 631). In part, the gulf between possible and actual practice has emerged because parties – especially at a grassroots level – lack the capacity and expertise to utilise data-driven campaigning techniques (Ibid., p. 631). There is accordingly little evidence that parties are routinely using data to gain more information about voters, to develop new forms of targeted communication or to evaluate campaign interventions. Indeed, in a study of the UK, Anstead et al. found no evidence “that campaigns were seeking to send highly targeted but contradictory messages to would-be supporters”, with their study of Facebook advertisements showing that parties placed adverts that reflected “the national campaigns parties were running” (unpublished, p. 3).

Other scholars have also questioned the scale of data use by highlighting the US-centric focus of much scholarship on political campaigns (Kruschinski and Haller, 2017; Dobber et al., 2017). Kreiss and Howard (2010) have highlighted important variations in campaign regulation that restrict the practices of data-driven campaigns (see also: Bennett, 2016). In this way, a study of German campaigning practices by Kruschinski and Haller (2017) highlights how regulation of data collection, consent and storage means that “German campaigners cannot build larger data-bases for micro-targeting” (p. 8). Elsewhere, Dobber et al. (2017, p. 6) have highlighted how different electoral systems, regulatory systems and democratic cultures can inform the uptake of data-driven campaigning tools. This reveals that, whilst often discussed in universal terms, there are important country- and party-level variations that reflect different political, social and institutional contexts. 4 These differences are not, however, often highlighted in existing accounts of data-driven campaigning.

Reflecting on reasons for this gulf in rhetoric and practice, some attention has been directed to the incentives certain actors have to “sell” the sophistication and success of data-driven campaigning practices. For Bennett, political and technical consultants “are eager to tout the benefits of micro-targeting and data-driven campaigning, and to sell a range of software applications, for both database and mobile environments” (2016, p. 261). Indeed, with over 250 companies operating worldwide that specialise in the use of individual data in political campaigns (Kerr Morrison, Naik, and Hankey, 2018, p. 20), there is a clear incentive for many actors to “oversell” the gains to be achieved through the use of data-targeting tools (a behaviour Cambridge Analytica has, for example, been accused of). Whatever the causes of these diverging narratives, it is clear that both our conceptual understanding of the nature of data-driven campaigning and our empirical understanding of how extensively different practices are found are underdeveloped. We therefore currently lack clear benchmarks against which to monitor the form and extent of data-driven campaigning.

These deficiencies in our current conceptualisation of data-driven campaigning are particularly important because there has been recent (and growing) attention paid to the need to regulate data use in campaigns. Indeed, around the globe calls for regulation have been made, citing concerns about the implications of data-driven campaigning for privacy, political debate, transparency and social fragmentation (Dobber et al., 2017, p. 2). In the UK context, for example, the Information Commissioner, Elizabeth Denham, launched an inquiry into the use of data analytics for political purposes by proclaiming:

[w]hat we're looking at here, and what the allegations have been about, is mashing up, scraping, using large amounts of personal data, online data, to micro target or personalise or segment the delivery of the messages without individuals' knowledge. I think the allegation is that fair practices and fair democracy is under threat if large data companies are processing data in ways that are invisible to the public (quoted in Haves, 2018, pp. 2-3).

Similar concerns have been raised by the Canadian Standing Committee on Access to Information, Privacy and Ethics, the US Senate Select Committee on Intelligence, and by international bodies such as the European Commission. These developments are particularly pertinent because the conceptual and empirical ambiguities highlighted above make it unclear which data-driven campaign practices are problematic, and how extensively they are in evidence.

It is against this backdrop that I argue there is a need to unpack the idea of data-driven campaigning by asking “what practices characterise data-driven campaigning?”. Posing three supplementary questions, in the remainder of the article I provide a series of conceptual frameworks that can be used to understand and map a diversity of data use practices that are currently obscured by the idea of data-driven campaigning. This intervention aims not only to clarify our conceptual understanding of data-driven campaigning practices, and to provide a template for future empirical research, but also to inform debate about the democratic acceptability of different practices and the form any regulatory response should take.

Navigating the practice of data-driven campaigns

Whilst often spoken about in uniform terms, data-driven campaigning practices come in a variety of different forms. To begin to understand the diversity of different practices, it is useful to pose three questions:

  1. Who is using data in campaigns?
  2. What are the sources of campaign data?
  3. How does data inform communication?

For each question, I argue that it is possible to identify a range of answers rather than single responses. Indeed, different actors, sources and communication strategies can be associated with data use within single as well as between different campaigns. Recognising this, I develop three analytical frameworks (one for each question) that can be used to identify, map and contemplate different practices.

These frameworks have been designed to enable comparative analysis between different countries and organisations, highlighting the many different ways in which data is used. Whilst not applied empirically within this article, the ideal type markers outlined below can be operationalised to map different practices. In doing so, it should be expected that a spectrum of different positions will be found within any single organisation. Whilst it is not within the scope of this paper to fully operationalise these frameworks, methods of inquiry are discussed to highlight how data may be gathered and used in future analysis. In the discussion below, I therefore offer these frameworks as a conceptual device that can be built upon and extended in the future to generate comparative empirical insights. This form of empirical analysis is vital because it is expected that answers to the three questions will vary depending on the specific geographic or organisational context being examined, highlighting differences in data-driven campaigning that need to be recognised by those considering regulation and reform.

Who is using data in campaigns?

When imagining the orchestrators of data-driven campaigning, the actors that come to mind are often data specialists who provide insights for party strategists about how best to campaign. Often working for an external company or hired exclusively for their data expertise, these actors have received much coverage in election campaigns. Ranging from the now notorious Cambridge Analytica to established companies such as BlueStateDigital and eXplain (formerly Liegey Muller Pons), there is often evidence that professional actors facilitate data-driven campaigns. Whilst the idea that parties utilise professional expertise is not new (Dalton et al., 2001, p. 55; Himmelweit et al., 1985, pp. 222-3), data professionals are seen to have gained particular importance because “[n]ew technologies require new technicians” (Farrell et al., 2001). This means that campaigners require external, professional support to utilise new techniques and tools (Kreiss and McGregor, 2018; Nickerson and Rogers, 2014, p. 70). Much commentary therefore gives the impression that data-driven campaigning is being facilitated by an elite group of professional individuals with data expertise. For those concerned about the misuse of data and the need to curtail practices seen to have negative democratic implications, this conception suggests that it is the actions of a very small group that are of concern. And yet, as the literature on campaigns demonstrates, parties are reliant on the activism of local volunteers (Jacobson, 2015), and often lack the funds to pay for costly data expertise (indeed, in many countries spending limits prevent campaigners from paying for such expertise). As a result, much data-driven campaigning is not conducted by expert data professionals.

In thinking through this point, it is useful to note that those conducting data-driven campaigning can have varying professional status and levels of expertise. These differences need to be recognised because they affect both who researchers study when they seek to examine data-driven campaigning, but also whose actions need to be regulated or overseen to uphold democratic norms. 5 Noting this, it is useful to draw two conceptual distinctions between professional and activist data users, and between data novices and experts. These categories interact, allowing four “ideal type” positions to be identified in Figure 1.

Figure 1: Who is using data in campaigns? 6

Looking beyond the “expert data professionals” who often spring to mind when discussing data-driven campaigning, Figure 1 demonstrates that there can be different actors using data in campaigns. It is therefore common to find “professionals without data expertise” who are employed by a party. Whilst often utilising or collecting data, these individuals do not possess the knowledge to analyse data or develop complex data-driven interventions. Interestingly, this group has been understudied in the context of campaigns, meaning the precise differences between external and internal professionals are not well understood.

In addition to professionals, Figure 1 also shows that data-driven campaigning is performed by activists who can vary in their degree of expertise. Some, described here as “expert data activists”, can possess specialist knowledge, often having many of the same skills as expert data professionals. Others, termed “activists without data expertise”, lack even basic understandings of digital technology (let alone data-analysis) (Nielsen, 2012). Some attention has been paid to activists’ digital skills in recent elections with, for example, coverage of digital expertise amongst Momentum activists in the UK (Zagoria and Schulkind, 2017) and Bernie Sanders activists in the US (Penney, 2017). And yet, other studies have suggested that such expertise is not common amongst activists (Nielsen, 2012).

These classifications therefore suggest that data-driven campaigning can and is being conducted by very different actors who vary in their relationship with the party, and in their expertise. Currently we have little insight into the extent to which these different actors dominate campaigns, making it difficult to determine who is using data, and hence whose activities (if any) are problematic. This indicates the need for future empirical analysis that sets out to determine the prevalence and relative power of these different actors within different organisations. Whilst space prevents a full elucidation of the markers that could be used for this analysis, it would be possible to map organisational structures and use surveys to gauge the extent of data-expertise present amongst professionals and activists. In turn, these insights could be mapped against practices to determine who was using data in problematic ways. It may, for example, be that whilst “expert data professionals” are engaging in practices that raise questions about the nature of democratic debate (such as micro-targeting), “activists without data expertise” may be using data in ways that raise concerns about data security and privacy.

Knowing who is using data, and how, is critical for thinking about where any response may be required, but also when considering how a response can be made. Far from being subject to the same forms of oversight, these different categories of actors are subject to different forms of control. Whilst professionals tend to be subject to codes of conduct that shape data use practices, or can be held accountable by the threat of losing their employment, the activities of volunteers can be harder to regulate. As shown by Nielsen (2012), even when provided with central guidance and protocols, local activists often diverge from central party instructions, reflecting a classic structure/agency dilemma. This suggests not only that the activities of different actors may require monitoring and regulation, but also that different responses may be required. The question “who is using data in campaigns?” therefore spotlights a range of practices and democratic challenges that are often overlooked, but which need to be appreciated in developing our understanding and any regulatory response.

What are the sources of campaign data?

Having looked at who is using data in campaigns, it is, second, important to ask what are the sources of campaign data? The presumption inherent in much coverage of data-driven campaigning is that campaigners possess complex databases that hold numerous pieces of data about each and every individual. The International Institute for Democracy and Electoral Assistance (IDEA), for example, has argued that parties “increasingly use big data on voters and aggregate them into datasets” which allow them to “achieve a highly detailed understanding of the behaviour, opinions and feelings of voters, allowing parties to cluster voters in complex groups” (2018, p. 7; p. 5). It therefore often appears that campaigns use large databases of information composed of data from different (and sometimes questionable) sources. However, as suggested above, the data that campaigns possess is often freely disclosed (Hersh, 2015), and many campaigners are currently subject to privacy laws around the kind of data they can collect and utilise (Bennett, 2016; Kruschinski and Haller, 2017).

To understand variations and guide responses, four more categories are identified. These are defined by two distinctions: the form of the data, differentiating between disclosed and inferred data; and the conditions under which data is made available, differentiating between data that is made available without charge and data that is purchased.

Figure 2: The sources of campaigning data

As described in Figure 2, much of the data that political parties use is provided to them without charge, but it can come in two forms. The first category “free data disclosed by individuals” refers to data divulged to a campaign without charge, either via official state records or directly by an individual to a campaign. The official data provided to campaigns varies from country to country (Dobber et al., 2017, p. 7; Kreiss and Howard, 2010, p. 5) but can include information on who is registered to vote, a voter’s date of birth, address and turnout record. In the US it can even include data on the registered partisan preference of a particular voter (Bennett, 2016, p. 265; Hersh, 2015). This information is freely available to official campaigners and citizens are often legally required to divulge it (indeed, in the UK it is compulsory to sign up to the Electoral Register). In addition, free data can also be more directly disclosed by individuals to campaigns through activities such as voter canvassing and surveys that gather data about individuals’ preferences and concerns (Aron, 2015, pp. 20-1; Nickerson and Rogers, 2014, p. 57). The second category “free inferred data” identifies data available without charge, but which is inferred rather than divulged. These deductions can occur through contact with a campaign. Indeed, research by the Office of the Information and Privacy Commissioner for British Columbia, Canada, describes how party canvassers often collect data about ethnicity, age, gender and the extent of party support by making inferences that the individual themselves is unaware of (2019, p. 22). It is similarly possible for data that campaigns already possess to be used to make inferences. Information gathered from a petition, for example, can be used to make suppositions about an individual’s broader interests and support levels. Much of the data campaigners use is therefore available without charge, but differs in form.

In addition, Figure 2 captures the possibility that campaigns purchase data. This data can be classified in two ways. The category “purchased data disclosed by individuals” describes instances in which parties buy data that was not disclosed directly to them, but was provided to other actors. This data can come in the form of social media data (which parties can buy access to rather than possess), or include data such as magazine subscription lists (Chester and Montgomery, 2017, pp. 3-4; Nickerson and Rogers, 2014, p. 57). Figure 2 also identifies “purchased inferred data”. This refers to modelled data whereby inferences are made about individual preferences on the basis of available data. This kind of modelling is frequently accomplished by external companies using polling data or commercially available insights, but it can also be done on social media platforms, with features such as look-alike audiences on Facebook selling access to inferred data about individuals’ views.
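The two distinctions drawn above – the form of the data (disclosed or inferred) and its availability (free or purchased) – define a 2x2 typology that can be expressed directly in code. The following sketch uses illustrative example data points drawn from the discussion; the category labels loosely mirror Figure 2 and are not an official taxonomy:

```python
# Sketch of the four data-source categories discussed above: each data point is
# placed in the 2x2 grid by two boolean properties (form and availability).
from dataclasses import dataclass

@dataclass
class DataPoint:
    name: str
    inferred: bool   # deduced by a model or canvasser, rather than disclosed
    purchased: bool  # bought, rather than available without charge

def classify(dp: DataPoint) -> str:
    form = "inferred" if dp.inferred else "disclosed"
    availability = "purchased" if dp.purchased else "free"
    return f"{availability} {form} data"

examples = [
    DataPoint("electoral register entry", inferred=False, purchased=False),
    DataPoint("canvasser's guess at party support", inferred=True, purchased=False),
    DataPoint("magazine subscription list", inferred=False, purchased=True),
    DataPoint("look-alike audience score", inferred=True, purchased=True),
]
for dp in examples:
    print(f"{dp.name}: {classify(dp)}")
```

Encoding the typology this way makes plain that the four categories are exhaustive and mutually exclusive: every data point a campaign holds falls into exactly one cell.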

Campaigns can therefore use different types of data. Whilst the existing literature has drawn attention to the importance of regulatory context in shaping the data parties in different countries are legally able to use (Kruschinski and Haller, 2017), there are remarkably few comparative studies of data use in different countries. This makes it difficult to determine not only how places vary in their regulatory tolerance of these different forms of data, but also how extensively parties actually use them. Such analysis is important because parties’ activities are not only shaped by laws, but can also be informed by variables such as resources or available expertise (Hersh, 2015, p. 170). This makes it important to map current practices and explore if and why data is used in different ways by parties around the world. In envisioning such empirical analysis, it is important to note that parties are likely to be sensitive to the disclosure of data sources. However, a mix of methods – including interviews with those using data within parties and data subject access requests – can be used to gain insights here.

In the context of debates around data-driven campaigning and democracy, these categories also prompt debate about the acceptability of different practices. Whilst the idea that certain forms of disclosed data should be available without charge is relatively established as an acceptable component of campaigns, it appears there are concerns over the purchase of data and the collection of inferred data. Indeed, in Canada the Office of the Information and Privacy Commissioner for British Columbia recommended that “[a]ll political parties should ensure door-to-door canvassers do not collect the personal information of voters, including but not limited to gender, religion, and ethnicity information unless that voter has consented to its collection” (2019, p. 41). By acknowledging the different sources of data used for data-driven campaigning it is therefore possible to not only clarify what is happening, but also to think about which forms of data can be acceptably used by campaigns.

How does data inform communication?

Finally, in thinking about data-driven campaigning much attention has been paid to micro-targeting and the possibility that data-driven campaigning allows parties to conduct personalised campaigns. IDEA has therefore argued that micro-targeting allows parties to “reach voters with customized information that is relevant to them…appealing to different segments of the electorate in different ways” with new degrees of precision (2018, p. 7). In the context of digital politics, micro-targeting is seen to have led parties to:

…try to find and send messages to their partisan audiences or intra-party supporters, linking the names in their databases to identities online or on social media platforms such as Facebook. Campaigns can also try to find additional partisans and supporters by starting with the online behaviours, lifestyles, or likes or dislikes of known audiences and then seeking out “look-alike audiences”, to use industry parlance (Kreiss, 2017, p. 5).

In particular, platforms such as Facebook are seen to provide parties with a "powerful 'identity-based' targeting paradigm", allowing them to access "more than 162 million US users and to target them individually by age, gender, congressional district, and interests" (Chester and Montgomery, 2017, p. 4). These developments have raised important questions about the inclusivity of campaign messaging and the degree to which it is acceptable to focus on specific segments of the population. Indeed, some have highlighted risks relating to mis-targeting (Hersh and Schaffner, 2013) and privacy concerns (Kim et al., 2018, p. 4). However, as detailed above, there are questions about the extent to which campaigns are sending highly targeted messages (Anstead et al., unpublished).

In order to understand different practices, Figure 3 differentiates between audience size, distinguishing between wide and narrow audiences, and message content, noting differences between generic and specialised messages.

Figure 3: How data informs communication

Much campaigning activity comprises generic messages, with content covering a broad range of topics and ideas. By using data (often generated through polling or in focus groups) parties can determine the form of messaging likely to win them support. The category "general message to all voters" describes instances in which a general message is broadcast to a wide audience, something that often occurs via party political TV broadcasts or political speeches (Williamson, Miller and Fallon, 2010, p. iii). In contrast, "generic message to specific voters" captures instances in which parties limit the audience but maintain a general message. Such practices often emerge in majoritarian electoral systems where campaigners want to appeal to certain voters who are electorally significant, rather than communicating with (and potentially mobilising) supporters of other campaigns (Dobber et al., 2017, p. 6). Parties therefore often gather data to identify known supporters or sympathisers, who are then sent communications that offer a general overview of the party's positions and goals.

Figure 3 also spotlights the potential for parties to offer more specialised messages, describing a campaign’s capacity to cover only certain issues or aspects of an issue (focusing, for example, on healthcare rather than all policy realms, or healthcare waiting lists rather than plans to privatise health services). These messages can, once again, be deployed to different audiences. The category “specialised message to all voters” describes instances in which parties use data to identify a favourable issue (Budge and Farlie, 1983) that is then emphasised in communications with all citizens. In the UK, for example, the Labour Party often communicates its position on the National Health Service, whereas the Conservative Party focuses on the economy (as these are issues which, respectively, the two parties are positively associated with). Finally, “specialised message to specific voters” captures the much discussed potential for data to be used to identify a particular audience that can then be contacted with a specific message. This means that parties can speak to different voters about different issues – an activity that Williamson, Miller and Fallon describe as “segmentation” (2010, p. 6).

These variations suggest that campaigners can use data to inform different communication practices. Whilst much attention has been paid to segmented micro-targeting (categorised here as “specialised messages to specific voters”), there is currently little data on the degree to which each approach characterises different campaigns (either in single countries or different nations). This makes it difficult to determine how extensive different practices are, and whether the messaging conducted under each heading is taking a problematic form. It may, for example, be that specialised messaging to specific voters is entirely innocuous, or it could be that campaigners are offering contradictory messages to different voters and hence potentially misleading people about the positions they will take (Kreiss, 2017, p. 5). Empirically, this form of analysis can be pursued in different ways. As above, interviews with campaign practitioners can be used to explore campaign strategies and targeting, but it is also important to look at the actual practices of campaigns. Resources such as online advertising libraries and leaflet repositories are therefore useful in monitoring the content and focus of campaign communications. Using these methods, a picture of how data informs communication can be developed.

Thinking about the democratic implications of these different practices, it should be noted that message variation by audience size and message scope is not new: campaigns have long varied their communication practices. And yet digital micro-targeting and voter segmentation have been widely greeted with alarm. This suggests the importance of thinking further about the precise cause of concern, determining which democratic norms are being violated, and whether this is only occurring in the digital realm. It may, for example, be that concerns do not only reflect digital practices, suggesting that regulation is needed both online and offline. These categories therefore help to facilitate debate about the democratic implications of different practices, raising questions about precisely what causes concern and where a response needs to be made.

Discussion

The above discussion has shown that data-driven campaigning is not a homogenous construct but something conducted by different actors, using different data, adopting different strategies. To date, much existing discussion of data-driven campaigning has focused on the extent to which this practice is found. In contrast, in this analysis I have explored the extent to which different data-driven campaigning practices can be identified. Highlighting variations in who is using data in campaigns, what the sources of campaign data are, and how data informs campaign communication, I argue that there is a diverse range of possible practices.

What is notable in posing these questions and offering these frameworks is that whilst there is evidence to support these different conceptual categories, at present there is little empirical data on the extent to which each practice exists in different organisations. As such, it is not clear what proportion of campaign activity is devoted to targeting specific voters with specific messages as opposed to all voters with a general message. Moreover, it is not clear to what extent parties rely on different actors for data-driven campaigning, nor how much power and scope these actors have within a single campaign. At present, therefore, there is considerable ambiguity about the type of data-driven campaigns that exist. This suggests the urgent need for new empirical analysis that explores the practice of data-driven campaigning in different organisations and different countries. By operationalising the categories proposed here and using methods including interviews, content analysis and data subject access requests, I argue that it is possible to build up a picture of who is using what data how.

Of particular interest is the potential to use these frameworks to generate comparative insights into data-driven campaigning practice. At present, studies of data use have tended to focus on a single country, but in order to understand the scope of data-driven campaigning it is necessary to map the presence of different practices. This is vital because, as previous comparative electoral research has revealed, the legal, cultural and institutional norms of different countries can have significant implications for campaigning practices. It would be expected, for example, that a country such as Germany, with a history of strong data protection law, would exhibit very different data-driven campaigning practices to a country such as Australia. Similarly, it would be expected that different institutional norms would lead a governmental organisation, charity or religious group to use data differently from parties. At present, however, the lack of comparative empirical data makes it difficult to determine what influences the form of data-driven campaigning and how different regulatory interventions affect campaigning practices. This framework therefore enables such comparative analysis, and opens the door to future empirical and theoretical work.

One particularly valuable aspect of this approach is the potential to use these questions and categories to contribute to existing debates around data-driven campaigning and democracy. Throughout the discussion, I have noted that many commentators have voiced concerns. These relate variously to privacy, the inclusivity of political debate, misinformation and disinformation, political finance, external influence and manipulation, transparency and social fragmentation (for more see Zuiderveen Borgesius et al., 2018, p. 92; Chester and Montgomery, 2017, p. 8; Dobber et al., 2017, p. 2; Hersh, 2015, p. 207; Kreiss and Howard, 2010, p. 11; International IDEA, 2018, p. 19). Such concerns have led to calls for regulation, and, as detailed above, many national governments, regulators and international organisations have moved to respond. And yet, before creating new regulations and laws, it is vital for these actors to possess accurate information about precisely how data-driven campaigning is being conducted, and to reflect on which democratic ideals these practices violate or uphold. Data-driven campaigning is not an inherently problematic activity; indeed, it is an established feature of democratic practice. However, our understanding of the acceptability of this practice will vary depending on our understanding of who, what and how data is being used (as whilst some practices will be viewed as permissible, others will not). This makes it important to reflect on what is happening and how prevalent these practices are in order to determine the nature and urgency of any regulatory response. Importantly, these insights need to be gathered in the specific regulatory context of interest to policy makers, as it should not be presumed that different countries or institutions will use data in the same way, or indeed have the same standards for acceptable democratic conduct.

The frameworks presented in this article therefore provide an important means by which to consider the nature, prevalence and implications of data-driven campaigning for democracy, and can be operationalised to produce vital empirical insights. Such data and conceptual clarification together can ensure that any reaction to data-driven campaigning takes a consistent, considered approach and reflects the practice (rather than the possibility) of this activity. Given that, as a report from Full Fact (2018, p. 31) makes clear, there is a danger of "government overreaction" based on limited information and self-evident assumptions (Ostrom, 2000) about how campaigning is occurring, it is vital that such insights are gathered and utilised in policy debates.

Conclusion

This article has explored the phenomenon of data-driven campaigning. Whilst receiving increased attention over recent years, existing debate has tended to focus on the extent to which this practice can be found. In this article, I present an alternative approach, seeking to map the diversity of data-driven campaigning practices to understand the different ways in which data can be and is being used. This has shown that, far from being uniform, data-driven campaigning practices can vary in a number of ways.

In classifying variations in who is using data in campaigns, what the sources of campaign data are, and how data informs campaign communication, I have argued that there are diverse practices that can be acceptable to different actors to different degrees. At an immediate level, there is a need to gain greater understanding of what is happening within single campaigns and how practices vary between different political parties around the globe. More widely, there is a need to reflect on the implications of these trends for democracy and the form that any regulatory response may need to take. As democratic norms are inherently contested, there is no single roadmap for how to make a response, but the nature of any response will likely be affected by our understanding of who, what and how data is being utilised. This suggests the need for new conceptual and empirical understanding of data-driven campaigning practices amongst both academics and regulators alike.

References

Anstead, N., et al. (2018). Facebook Advertising the 2017 United Kingdom General Election: The Uses and Limits of User-Generated Data. Unpublished manuscript. Retrieved from https://targetingelectoralcampaignsworkshop.files.wordpress.com/2018/02/anstead_et_al_who_targets_me.pdf

Aron, J. (2015, May 2). Mining for Every Vote. New Scientist, 226(3019), 20–21. https://doi.org/10.1016/S0262-4079(15)30251-7

Baldwin-Philippi, J. (2017). The Myths of Data Driven Campaigning. Political Communication, 34(4), 627–633. https://doi.org/10.1080/10584609.2017.1372999

Bennett, C. (2016). Voter Databases, micro-targeting, and data protection law: can political parties campaign in Europe as they do in North America?. International Data Privacy Law, 6(4), 261–275. https://doi.org/10.1093/idpl/ipw021

Budge, I., & Farlie, D. (1983). Explaining and Predicting Elections. London: Allen and Unwin.

Castleman, D. (2016). Essentials of Modelling and Microtargeting. In A. Therriault (Ed.), Data and Democracy: How Political Data Science is Shaping the 2016 Elections (pp. 1–6). Sebastopol, CA: O'Reilly Media. Retrieved from https://www.oreilly.com/ideas/data-and-democracy/page/2/essentials-of-modeling-and-microtargeting

Chester, J., & Montgomery, K.C. (2017). The role of digital marketing in political campaigns. Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.773

Dalton, R. J., Farrell, D. M., & McAllister, I. (2013). Political Parties and Democratic Linkage. Oxford: Oxford University Press. https://doi.org/10.1093/acprof:osobl/9780199599356.001.0001

Dobber, T., Trilling, D., Helberger, N., & de Vreese, C. H. (2017). Two Crates of Beer and 40 pizzas: The adoption of innovative political behavioral targeting techniques. Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.777

Dommett, K., & Temple, L. (2018). Digital Campaigning: The Rise of Facebook and Satellite Campaigns. Parliamentary Affairs, 71(1), 189–202. https://doi.org/10.1093/pa/gsx056

Farrell, D., Kolodny, R., & Medvic, S. (2001). Parties and Campaign Professionals in a Digital Age: Political Consultants in the United States and Their Counterparts Overseas. The International Journal of Press/Politics, 6(4), 11–30. https://doi.org/10.1177/108118001129172314

Full Fact. (2018). Tackling Misinformation in an Open Society [Report]. London: Full Fact. Retrieved from https://fullfact.org/blog/2018/oct/tackling-misinformation-open-society/

Gibson, R., Römmele, A., & Williamson, A. (2014). Chasing the Digital Wave: International Perspectives on the Growth of Online Campaigning. Journal of Information Technology & Politics, 11(2), 123–129. https://doi.org/10.1080/19331681.2014.903064

Haves, E. (2018). Personal Data, Social Media and Election Campaigns. House of Lords Library Briefing. London: The Stationery Office.

Hersh, E. (2015). Hacking the Electorate: How Campaigns Perceive Voters. Cambridge: Cambridge University Press.

Hersh, E. & Schaffner, B. (2013). Targeted Campaign Appeals and the Value of Ambiguity. The Journal of Politics, 75(2), 520–534. https://doi.org/10.1017/S0022381613000182

Himmelweit, H., Humphreys, P., & Jaeger, M. (1985). How Voters Decide. Open University Press.

in ‘t Veld, S. (2017). On Democracy. Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.779

Information Commissioner's Office. (2018a). Investigation into the use of data analytics in political campaigns. London: ICO.

Information Commissioner's Office. (2018b). Notice of Intent. Retrieved from https://ico.org.uk/media/2259363/emmas-diary-noi-redacted.pdf

International IDEA. (2018). Digital Microtargeting. IDEA: Stockholm.

Jacobson, G. (2015). How Do Campaigns Matter?. Annual Review of Political Science, 18(1), 31–47. https://doi.org/10.1146/annurev-polisci-072012-113556

Kang, C., Rosenberg, M., & Frenkel, S. (2018, July 2). Facebook Faces Broadened Federal Investigations Over Data and Privacy. New York Times. Retrieved from https://www.nytimes.com/2018/07/02/technology/facebook-federal-investigations.html?module=inline

Kerr Morrison, J., Naik, R., & Hankey, S. (2018). Data and Democracy in the Digital Age. London: The Constitution Society.

Kim, T., Barasz, K., & John, L. (2018). Why Am I Seeing this Ad? The Effect of Ad Transparency on Ad Effectiveness. Journal of Consumer Research, 45(5), 906–932. https://doi.org/10.1093/jcr/ucy039

Kreiss, D., & Howard, P. N. (2010). New challenges to political privacy: Lessons from the first US Presidential race in the Web 2.0 era. International Journal of Communication, 4(19), 1032–1050. Retrieved from https://ijoc.org/index.php/ijoc/article/view/870

Kreiss, D. (2017). Micro-targeting, the quantified persuasion. Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.774

Kreiss, D., & McGregor, S. (2018). Technology Firms Shape Political Communication: The Work of Microsoft, Facebook, Twitter and Google with Campaigns During the 2016 US Presidential Cycle. Political Communication, 35(2), 155–177. https://doi.org/10.1080/10584609.2017.1364814

Kruschinski, S., & Haller, A. (2017). Restrictions on data-driven political micro-targeting in Germany. Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.780

Nickerson, D., & Rogers, T. (2014). Political Campaigns and Big Data. Journal of Economic Perspectives, 28(2), 51–74. https://doi.org/10.1257/jep.28.2.51

Nielsen, R. (2010). Mundane internet tools, mobilizing practices, and the coproduction of citizenship in political campaigns. New Media and Society, 13(5), 755–771. https://doi.org/10.1177/1461444810380863

Nielsen, R. (2012). Ground Wars. Princeton: Princeton University Press.

Office of the Information and Privacy Commissioner for British Columbia. (2019). Investigation Report P19-01, Full Disclosure: Political Parties, Campaign Data and Voter Consent. Retrieved from https://www.oipc.bc.ca/investigation-reports/2278

Ostrom, E. (2000). The Danger of Self-Evident Truths. PS: Political Science and Politics, 33(1), 33–44. https://doi.org/10.2307/420774

Penney, J. (2017). Social Media and Citizen Participation in "Official" and "Unofficial" Electoral Promotion: A Structural Analysis of the 2016 Bernie Sanders Digital Campaign. Journal of Communication, 67(3), 402–423. https://doi.org/10.1111/jcom.12300

Persily, N. (2017). Can Democracy Survive the Internet?. Journal of Democracy, 28(2), 63–76. https://doi.org/10.1353/jod.2017.0025

Tactical Tech. (2019). Personal Data: Political Persuasion – Inside the Influence Industry. How it works. Berlin: Tactical Technology Collective.

Williamson, A., Miller, L., & Fallon, F. (2010). Behind the Digital Campaign: An Exploration of the Use, Impact and Regulation of Digital Campaigning. London: Hansard Society.

Zagoria, T., & Schulkind, R. (2017). How Labour Activists are already building a digital strategy to win the next election. New Statesman. Retrieved from https://www.newstatesman.com/politics/elections/2017/07/how-labour-activists-are-already-building-digital-strategy-win-next

Zuiderveen Borgesius, F., Möller, J., Kruikemeier, S., Ó Fathaigh, R., Irion, K., Dobber, T., Bodó, B., & de Vreese, C. H. (2018). Online Political Microtargeting: Promises and Threats for Democracy. Utrecht Law Review, 14(1), 82–96. https://doi.org/10.18352/ulr.420

Footnotes

1. This question is important because it is to be expected that universal responses to this question do not exist, and that different actors in different countries will view and judge practices in different ways (against different democratic standards).

2. See the report from Tactical Tech (2019), Personal Data, for a range of examples of how data can be used to gain "political intelligence" about voters.

3. Importantly, this data use is not guaranteed to persuade voters. Campaigns can identify the type of campaign material viewers are more likely to watch or engage with, but this does not necessarily mean that those same viewers are persuaded by that content.

4. Similarly there are likely to be variations between parties and other types of organisation such as campaign groups or state institutions.

5. It should be noted that these democratic norms are not universal, but are expected to vary depending on context and the perspective of the particular actor concerned.

6. For more on local expert activism in the UK see Dommett and Temple, 2018. In the US see Penney, 2017.

On the edge of glory (…or catastrophe): regulation, transparency and party democracy in data-driven campaigning in Québec


This paper is part of Data-driven elections, a special issue of Internet Policy Review guest-edited by Colin J. Bennett and David Lyon.

Introduction

For the last 50 years, Québec politics has been characterised by a lasting two-party system based on a dominant divide between the Yes and No options on the project of political independence from the rest of Canada for the 8.4 million people of Canada's predominantly Francophone jurisdiction (Pelletier, 1989). Following the failure of the 1995 referendum, the erosion of this divide opened up the party system and brought four parties into the Québec National Assembly (Dufresne et al., 2019; Langlois, 2018). With a new party elected to government for the first time since 1976, the 2018 election was one of realignment. The Coalition avenir Québec (CAQ) elected 74 Members of the National Assembly (MNAs). With 31 seats, the former government, the Québec Liberal Party (QLP), received its worst result in 150 years and formed the official opposition. With 10 MNAs each, Québec solidaire (QS), a left-wing party, and the Parti québécois (PQ), the historic vehicle for independence, occupied the remaining opposition seats.

Beyond these election results, the 2018 Québec election also marks an organisational change. For the first time, the major parties all massively adopted what is often referred to as "US" data-campaigning practices. However, when it comes to the use of digital technologies for electoral purposes, the US case is the exception rather than the rule (Enli and Moe, 2013; Gibson, 2015; Vaccari, 2013, p. ix). Indeed, data campaigning, like other techniques of political communication, is conducted in specific contexts that affect what is accessible, possible and viable (Bennett, 2016; Dobber et al., 2017; Ehrhard et al., 2019; Flanagan, 2010, p. 156).

Not unlike other Canadian jurisdictions, Québec is therefore an interesting case for studying the effects of these practices on parties operating in a parliamentary system while not being subject to privacy protection rules. Moreover, to our knowledge, studies on this subject in a sub-national context are few. In Canada, the majority of the work focuses on federal parties (see for example Bennett, 2018; McKelvey and Piebiak, 2018; Munroe and Munroe, 2018; Patten, 2015, 2017; Thomas, 2015), leaving the provincial and municipal levels behind (with the notable exceptions of Carlile, 2017; Yawney, 2018; and Giasson et al., 2019). Thus, the French-speaking jurisdiction represents, as Giasson et al. (2019, p. 3) argue, one of those relevant but "less obvious" cases to study in order to better understand the similarities and differences in why and how political parties adopt or resist technological innovations. The use of this type of case study also makes it possible to explore the gap between emerging opportunities and the campaigns actually deployed by the parties, beyond the "rhetoric of data-driven campaigning" (see Baldwin-Philippi, 2017, p. 627).

Many factors influence technological innovation in campaigns (Kreiss, 2016). Furthermore, as Hersh (2015) indicates, cultural and legal contexts influence political actors' behaviour because the types of data made available to campaigns shape their perceptions of voters, and therefore their communication practices. According to Munroe and Munroe (2018), political parties may treat data as a resource, generated in many ways, that can be used to guide strategic and tactical decisions. Because parties set up integrated platforms in which personal data on voters are stored and analysed, ethical and political issues emerge (Bennett, 2013, 2015). In most Canadian provinces, including Québec, and at the federal level, parties are not subject to privacy laws regarding the use and protection of personal data. This absence of a regulatory framework also leads to inadequate self-regulation (Bennett, 2018; Howard and Kreiss, 2010).

As was the case in many other jurisdictions around the globe, Québec parties were faced with a transparency deficit following the March 2018 revelations of the Cambridge Analytica affair (Bashyakarla et al., 2019; Cadwalladr and Graham-Harrison, 2018). Within hours of the scandal becoming public, political reporters in Québec turned to party leaders to get a better sense of the scope and use of the digital data they were collecting, why they collected them, and what this all meant for the upcoming fall elections as well as for citizens' privacy (Bélair-Cirino, 2018). Most claimed that their data collection and analysis practices were ethical and respectful of citizens' privacy. However, none of them agreed to fully disclose the scope of the data they collected or the exact purpose of these databases.

Research objectives and methodology

This article examines the increasing pressure to regulate uses of digital personal data by Québec's political parties. First, it illustrates the central role now played by voter personal data in Québec's politics. Second, it presents the current (and weak) legislative framework and how the issue of the protection of personal data came onto the agenda in Québec. At first, many saw this shift as a positive evolution in which Québec's parties "caught up" with current digital marketing practices. However, following the Cambridge Analytica affair and revelations about the lack of proper regulation of voter data use, public discourse started casting these technological advancements as democratic catastrophes waiting to happen.

We use three types of data to investigate this context. First, in order to assess the growth in party use of digital voter data, we rely on 40 semi-directed interviews conducted for a broader research project with party organisers, elected officials, activists and advisors of all the main political parties operating in Québec 1. The interviews, each lasting from 45 minutes to one hour, were conducted in French just a few weeks before the launch of the 2018 provincial election campaign. Citations presented in this article are therefore translations. The interviewees were selected according to their political representativeness, but also for their high level of electoral involvement. In this article, we only use those responses that relate to digital campaigning and the use of personal information. The citations selected here represent viewpoints shared by at least three interviewees. They illustrate shared perceptions of the evolution of the strategic use of voter personal data in Québec's electioneering.

Second, we also analysed the legislative framework as well as the self-regulatory practices of political parties in Québec in order to measure the levels of regulation and transparency surrounding their use of personal data. To do this, we studied the websites of the four main parties in order to compare their practices.

Finally, we also conducted a media coverage analysis on the issue of how parties engaged in digital marketing. We conducted a keyword search on the Eureka.cc database to retrieve all texts published in the four main daily newspapers published in French in Québec (La Presse, Le Devoir, Le Soleil and Le Journal de Montréal), in the public affairs magazine L’Actualité, as well as on the Radio-Canada website about digital data issues related to politics in Québec. The time period runs from 1 January 2012 to 1 March 2019 and covers three general (2012, 2014 and 2018) and two municipal (2013 and 2017) elections. Our search returned 223 news articles.

What we find is a perfect storm: parties massively adopted data marketing at the very moment that regulatory bodies expressed concerns about their lack of supervision. In the background, an international scandal made the headlines and changed the prevailing discourse surrounding these technological innovations.

New digital tools, a new political reality

The increased use of digital technologies and data for electioneering can be traced back to the 2012 provincial election (see Giasson et al., 2019). Québec political parties were then faced with a changing electorate, and data collection helped them adapt to this new context. Most of them also experienced greater difficulties in rallying electors ideologically. In Québec, activist partisan politics was giving way to political data-marketing (Del Duchetto, 2016).

In 2018, Québec’s four main political parties integrated digital technologies at the core of their electoral organisations. In doing so, they aimed to close the technological gap with Canadian parties at the federal level (Marland et al., 2012; Delacourt, 2013). Thus, the CAQ developed the Coaliste, its own tool for processing and analysing data. The application centralises information collected on voters in a database and targets them according to their profile. Developed at a cost of 1 million Canadian dollars, the tool was said by a party strategist to help carry a campaign "with 3 or 4 times less" money than before (Blais and Robillard, 2017).

For its part, QS created a mobilisation platform called Mouvement. The tool was inspired by the "popular campaigns of Bernie Sanders and La France Insoumise in France."2 Decentralised in nature, the platform aimed to facilitate event organisation and networking between sympathisers, to create local discussion activities, and to facilitate voter identification.

The PQ has also developed its own tool: Force bleue. At its official launch, a party organiser insisted on its strategic role in tight races. It would include “an intelligent mapping system to crisscross constituencies, villages, neighbourhoods to maximise the time spent by local teams and candidates by targeting the highest paying places in votes and simplify your vote turnout” (Bergeron, 2018).

Finally, the QLP outsourced its digital marketing and built on the experience of the federal Liberal Party of Canada as well as of Emmanuel Macron’s movement in France. For the 2018 election campaign, the party contracted Data Sciences, a private firm which "collects information from data of all kinds, statistics among others, on trends or expectations of targeted citizens or groups" (Salvet, 2018).

Our interviews with political strategists help us better understand the scope of the digital shift that Québec’s parties completed in 2018. They also put into perspective the effects of these changes and the questions they raise within the parties themselves.

Why change?

Party organisers interviewed for this article who advocate for the development of new tools stress two phenomena: on the one hand, the Québec electorate is more volatile; on the other, it is much more difficult to communicate with electors than before. A former MNA notes that today: "The campaign counts. It's very volatile and identifying who votes for you early in the campaign doesn’t work anymore."

Québec party officials see citizens as more segmented than before, an evolution in electoral behaviour that one organiser attributes to social media: "Today, the big change is that the speed and accessibility of information means that you do not need a membership card to be connected. It circulates freely. It's on Facebook. It’s on Twitter".

He notes that "it is much more difficult to attract someone into a political party by saying that if you become a member you will have privileged access to a certain amount of information or to a certain quality of information". A rival organiser also confirms that people's behaviour has changed: "It's not just generational, they buy a product". He adds that this has implications for the level of volunteering and for voters’ motivation:

When we look at the beginning of the 1970s, we had a lot of people. People were willing to go door-to-door to meet voters. We had people on the ground, they needed to touch each other. The communications were person-to-person. (…) Today, we do marketing.

In sum, "people seek a product and are less loyal", which means that parties must rely on voter profiling and targeting.

Increased use of digital technology in 2018

The IT turn in Québec partisan organisations is real. One organiser goes so far as to say that most of the volunteer work that was central in the past is now done digitally. According to him, "any young voter who uses Facebook is now as important, if not more, than a party activist". This comment reinforces the notion that any communication with an elector must now be personalised:

Now we need competent people in computer science, because we use platforms, email lists. When I send a message reminding newly registered voters that it will be the first time they vote, I am speaking directly to them.

To achieve this micro-targeting, party databases are updated constantly. An organiser states: "Our job is to feed this database with all the tools like surveys, etc. In short, we must bombard the population with all kinds of things, to acquire as much data as possible". For example, Québec solidaire and the Coalition avenir Québec made broad use of partisan e-petitions to feed their databases (Bélair-Cirino, 2017). No rules or legislation currently limit the collection and use of this personal information if it is collected through a partisan online petition or website.

Old political objectives - new digital techniques

In accordance with the current literature on the hybridisation of electoral campaigns (Chadwick, 2013; Giasson et al., 2019), many respondents indicate that the integration of digital tools associated with data marketing has changed the way things are done. This also had an effect on the internal party organisation, as well as on the tasks given to members on the ground. An organiser explains how this evolution took place in just a few years:

Before, we had a field organisation sector, with people on the phones, distributors, all that. We had communication people, we had people distributing content. (...) Right now, we have to work with people that are not there physically and with something that I will not necessarily control.

An organiser from another political party is more nuanced: "We always need people to help us find phone numbers, we always need people to make calls". He confirms, however, that communication tactics changed radically:

The way to target voters in a riding has changed. The way to start a campaign, to canvass, has changed. The technological tools at our disposal mean that we need more people who are able to use them and who have the skills and knowledge to use the new technological means we have to reach the electorate.

Another organiser adds that it is now important to train activists properly for their canvassing work. According to her: "We need to give activists digital tools and highly technological support tools that make their lives easier". She adds that: "Everything is chained with intelligent algorithms that will always target the best customer, always first, no matter what...".

New digital technologies and tools are therefore used to maximise efficiency and resources. The tasks entrusted to activists also change. For another organiser, mobilisation evolves with technology: "We used to rely on lots of people to reach electors". He now sees that people are reached via the internet and that this new reality is not without challenges: "we are witnessing a revolution where new clients do not live in the real world…". It then becomes difficult to meet them in real life, offline.

Another organiser confirms having "a different canvassing technique using social media and other tools". According to him:

Big data is already outdated. We are talking about smart data. These data are used efficiently and intelligently. How do we collect this data? (...) We used to do a lot of tallying door-to-door or by phone. Now we do a lot of capture. The emails are what interest me. I am not interested in phone numbers anymore, except cell phones.

An experienced organiser observes that "this has completely changed the game. Before, we only had one IT person, now I have three programmers". He adds that "liaison officers have become press officers". This change also translates into the allocation of resources and the integration of new profiles of employees for data management. It has brought a new set of digital strategists into war rooms. These new data analysts have knowledge of data management, applied mathematics, computer science and software engineering. They work alongside traditional field organisers, sometimes even replacing them at the decision table.

Second thoughts

Organisers themselves raise democratic and ethical concerns related to the digital evolution of their work. One of them points out that they face ethical challenges. He openly wonders about the consequences of this gathering of personal information: "It's not because we can do something that we have to do it. With the list of electors, there are many things that can be done. Is it ethical to do it? At some point, you have to ask that question". He points out that new technologies are changing at a rapid pace and that with "each technology comes a communication opportunity". The question is now "how can we appropriate this technology, this communication opportunity, and make good use of it".

Reflecting upon the lack of regulation on the use of personal data by parties in Québec, an organiser added that: "We have the right to do that, but people do not like it". For him, this issue is "more than a question of law, there could be a question of what is socially acceptable".

Another organiser points out that the digital shift could also undermine intra-party democracy. Speaking about the role of activists, he is concerned that "they feel more like they are being given information that has been chewed over by a small number of people than having it collected by more people in each constituency". He notes that the technological divide is also accompanied by a generational divide within the activist base:

The activist who is older, we will probably have less need of him. The younger activist is likely to be needed, but in smaller numbers. (...) Because of the technological gap, it's a bit of a vicious circle, that is also virtuous. The more we try to find technological means that will be effective, the less we need people.

Still, democratically, the line can be very thin between mobilisation and manipulation. Reflecting on a not-so-distant future, this organiser spoke of the many possibilities data collection could provide parties with:

These changes bring us into a dynamic that the Americans call ‘activation fields’. (...) From the moment we have contact with someone, what do we do with this person, where does she go? (...) This gives incredible arborescence, but also incredible opportunities.

He concludes that: "Today, the world does not realise how all the data is piling up on people and that this is how elections are won now". Is there a limit to the information a party could collect on an elector? This senior staffer does not believe so. He adds: “If I could know everything you were consuming, it would be so useful to me and help mobilise you".

Québec’s main political parties completed their digital shift in preparation for the 2018 election. Our interviews show that this change was significant. From an internal democracy perspective, digital technologies and data marketing practices help respond to the decline of activism and membership levels observed in most Québec parties (Montigny, 2015). This can also lead to frustration among older party activists who would feel less involved. On the other hand, from a data protection perspective we note that in the absence of a rigorous regulatory framework, parties in Québec can do almost anything. As a result, they collect a significant amount of unprotected personal data. The pace at which this change is taking place and the risks it represents for data security even lead some political organisers to question their own practices. As the next section indicates, Québec is lagging behind in adapting the data marketing practices of political parties to contemporary privacy standards.

The protection of personal information over time

The data contained in the Québec list of electors have been the cornerstone of all political parties’ electioneering efforts for many years and now form the basis of their respective databases of voter information. It is from this list that parties are able, with the addition of other information collected or purchased, to file, segment and target voters. An overview of the legislative amendments concerning the disclosure of the information contained in the list of electors reveals two things: (1) its relatively recent private nature, and (2) the fact that the ability of political parties to collect and use personal data about voters never really seems to have been questioned until recently. Parties mostly reacted by insisting on self-regulation (Élections Québec, 2019).

With regard to the public/private nature of the list of electors, we should note that prior to 1979 it was displayed in public places. Up to 2001, the list of electors of a polling division was even distributed to all voters in that division. The list was thus long perceived as a public document, a means of preventing electoral fraud: citizens were able to identify potential errors and irregularities.

Since 1972, the list has been sent to political parties. With the introduction of a permanent list of electors in 1995, political parties and MNAs were granted, in 1997, the right to receive annual copies of the list for verification purposes. Since 2006, parties have received an updated version of the list three times a year, which facilitates the updating of their computerised voter databases. It should also be noted that during election periods, all registered electoral candidates are granted access to the list and its content.

Thus, while public access to the list of electors has been considerably reduced, political parties’ access has increased in recent years. Following legislative changes, some information has been removed from the list, the age and profession of the elector for instance. Yet, the Québec list remains the most exhaustive of any Canadian jurisdiction in terms of the quantity of voter information it contains, indicating the name, full address, gender and date of birth of each elector (Élections Québec, 2019, p. 34).

From a legal perspective, Québec parties are not subject to the "two general laws that govern the protection of personal information, namely the Act respecting access to documents held by public bodies and the protection of personal information, which applies in particular to information held by a public body, and the Act respecting the protection of personal information in the private sector, which concerns personal information held by a person carrying on a business within the meaning of section 1525 of the Civil Code of Québec" (Élections Québec, 2019, p. 27). Indirectly, however, the private-sector act would apply when a political party chooses to outsource some of its marketing, data collection or digital activities to a private sector firm.

Moreover, the Election Act does not specifically define which uses of data taken from the list of electors are permitted. It merely provides some general provisions. Therefore, parties cannot use or communicate a voter’s information for purposes other than those provided under the Act. It is also illegal to communicate or allow this information to be disclosed to any person who is not lawfully entitled to it.

Instead of strengthening the law, parties represented in the National Assembly first chose to adopt their own privacy and confidentiality policies. This form of self-regulation, however, has its limits. Even if these policies appear on party websites, they are usually not easy to find and there is no way to confirm that they are effectively enforced. Only the Coalition avenir Québec and the Québec Liberal Party offer a clear link on their homepage.3 We analysed each of these policies according to five indicators: the presence of 1) a definition of what constitutes personal information; 2) a reference to the type of use and sharing of data; 3) methods of data collection; 4) privacy and security measures that are taken; and 5) the possibility for an individual to withdraw his or her consent and contact the party in connection with his or her personal information.

Table 1: Summary of personal information processing policies of parties represented at the National Assembly of Québec

Definition of personal information
- CAQ: Information that identifies a person (contact information, name, address and phone number).
- PLQ: Information that identifies a natural person (the name, date of birth, email address and mailing address of that person, if the person decides to provide them).
- QS: Information about an identifiable individual, excluding business contact information (name, date of birth, personal email address, and credit card).
- PQ: Not specified.

Strategic use and sharing of data
- CAQ: To provide news and information about the party; may engage third parties to perform certain tasks (processing donations, making phone calls and providing technical services for the website); written contracts include clauses to protect personal information.
- PLQ: To contact electors, including by newsletter, with news and events of the party; to provide a personalised navigation experience on the website, with information targeted according to interests and regions.
- QS: May disclose personal information to third parties for purposes related to the management of party activities (administration, maintenance or internal management of data, organisation of an event); does not sell, trade, lend or voluntarily disclose to third parties the personal information transmitted.
- PQ: To improve the content of the website and for statistical purposes.

Data collection method
- CAQ: Following contact by email; following subscription to a communication; after filling out an information request form or any other form on a party page, including polls, petitions and party applications; the party reserves the right to use cookies on its site.
- PLQ: Collected only from an online form provided for this purpose.
- QS: Not specified.
- PQ: Not specified.

Privacy and security of data
- CAQ: Personal information is not used for other purposes without first obtaining the consent of the data provider; personal information may be shared internally between the party's head office and its constituency associations.
- PLQ: Respects the confidentiality and protection of the personal information collected and used; only people assigned to subscription management or communications with subscribers have access to the information; information is protected against unauthorised access attempts by a server kept in a safe and secure place.
- QS: Respects the privacy and confidentiality of personal information; personal details will not be published or posted on the internet except at the explicit request of the person concerned; information is sent as an encrypted email message to guarantee confidentiality; no guarantee that information disclosed over the internet will not be intercepted by a third party; the site strives to use appropriate technological measures, procedures and storage devices to prevent unauthorised use or disclosure of personal information.
- PQ: No information identifying an individual is used unless the individual has provided it for that purpose; takes reasonable steps to protect the confidentiality of this information; information transmitted automatically between computers does not personally identify an individual; access to collected information is limited to persons authorised by the party or by law.

Withdrawal of consent and information
- CAQ: Any person registered on a mailing list can unsubscribe at any time; invitation to share questions, comments and suggestions.
- PLQ: Ability to apply at any time to no longer receive party information.
- QS: Ability to withdraw consent at any time on reasonable notice.
- PQ: Not specified.

In general, we find that three out of four parties offer similar definitions of the notion of personal information: the Coalition avenir Québec, the Liberal Party of Québec and Québec solidaire. Beyond this indicator, the information available varies from one party to another. Thus, voters have little information on the types of use of their personal data. Moreover, only the Coalition avenir Québec and Québec solidaire indicate that they can use a third party in the processing of data without having to state the purpose of this processing to the data providers. The Coalition avenir Québec is the only party that specifies its methods of data collection in more detail. Similarly, Québec solidaire is more specific with respect to the measures taken to protect the privacy and security of the data it collects. Finally, the Parti québécois does not specify the mechanism by which electors could withdraw their consent.

Cambridge Analytica as a turning point

Our analysis of media coverage of the partisan and electoral use of voter data in Québec reveals three main conclusions. First, even though Québec political parties, at both the provincial and municipal levels, began collecting, storing and using personal data on voters several years ago, news media attention to these practices is relatively new. Second, the dominant media frame on the issue seems to have changed over the years: at first rather anecdotal, the treatment of the issue grew in importance and became more suspicious. Finally, the Cambridge Analytica scandal appears as a turning point in news coverage. It is this affair that forced parties and their strategists to explain their practices publicly for the first time (Bélair-Cirino, 2018), put pressure on the government to react, and brought to the fore the concerns and demands of other organisations such as Élections Québec and the Commission d’accès à l’information du Québec, the administrative tribunal and oversight body responsible for the protection of personal information in provincial public agencies and private enterprises.

Interest in ethical and security issues related to data campaigning built up slowly in Québec’s political news coverage. Already in 2012, parties used technological means to feed their databases and target the electorate (Giasson et al., 2019). However, it was in the context of the municipal elections in the fall of 2013 that the issue of the collection and processing of personal data on voters was first covered in a news report. It was only shortly after the 2014 Québec elections that we found a news item dealing specifically with the protection of personal data of Québec voters. The Montréal-based newspaper Le Devoir reported that the list of electors had been made available online by a genealogy institute; it was even possible to obtain it for a fee. The Drouin Institute, which released the list, estimated that about 20,000 people had accessed the data (Fortier, 2014).

Paradoxically, the following year, the media reported that investigators working for Élections Québec could not access the data of the electoral list for the purpose of their inquiry (Lajoie, 2015a). That same year, another anecdotal event made headlines: a Liberal MNA was asked by Élections Québec to stop using the voters list data to call his constituents to... wish them a happy birthday (Lajoie, 2015b). In the 2017 municipal elections, and even more so after the revelations regarding Cambridge Analytica in 2018, the media in Québec seemed to have paid more attention to data-driven electoral party strategies than to the protection of personal data by the parties.

For instance, in the hours following the revelation of the Cambridge Analytica scandal, political reporters covering the National Assembly in Québec quickly turned their attention to the leadership of political parties, asking them to report on their respective organisations’ digital practices and on the regulations in place to frame them. Simultaneously, Élections Québec, which had been calling for stronger control of personal data use by political parties since 2013, expressed its concerns publicly and fully joined the public debate. As a way to mark its willingness to act on the issue, the Liberal government introduced a bill at the end of the parliamentary session, the last of that parliament. The bill was therefore never adopted by the House, which was dissolved a few days later, in preparation for the next provincial election.

Political reporters in Québec have since then paid sustained attention to partisan practices regarding the collection and use of personal information. In their coverage of the 2018 election campaign, they widely discussed the use of data by leaders and their political parties. Thus, while the Cambridge Analytica affair did not directly affect Québec political parties, it nevertheless appears as a shifting point in the media coverage of the use of personal data for political purposes.

Media framing of the issue also evolved over the studied period, becoming more critical and suspicious of partisan data marketing with time. Before the Cambridge Analytica case, coverage rarely focused on the democratic consequences or privacy and security issues associated with the use of personal data for political purposes. Initial coverage seems to have been largely dominated by the story depicting how parties were innovating in electioneering and on how digital technologies could improve electoral communication. Journalists mostly cited the official discourse of political leaders, their strategists or of the digital entrepreneurs from tech companies who worked with them.

An illustrative example of this type of coverage can be found in an article published in September 2013, during municipal elections held in Québec. It presented a portrait of two Montréal-based data analysis companies, Democratik and Vote Rapide, offering technological services to political parties (Champagne, 2013). Their tools were depicted as simple databases fed by volunteers, mainly intended for the identification of sympathisers to facilitate get-out-the-vote (GOTV) operations. The article emphasised the affordability and widespread use of these programmes by parties, and even indicated that one of them had been developed with the support of the Civic Action League, a non-profit organisation that helps fight political corruption.

However, as the years passed, a change of tone began to permeate the coverage, especially in the months leading up to the 2018 general election. A critical frame became more obvious in reporting, which even used Orwellian references to data campaigning in titles such as "Political parties are spying on you" (Castonguay, 2015), "They all have a file on you" (Joncas, 2018), "What parties know about you" (Croteau, 2018), or "Political parties exchange your personal details" (Robichaud, 2018). In a short period of time, data campaigning had gone from cool to dangerous.

Conclusion

Québec political parties began their digital shift a few years later than their Canadian federal counterparts. However, they have adapted their digital marketing practices rapidly; much faster in fact than the regulatory framework. For the 2018 election, all major parties invested a great deal of resources to be up to date on data-driven campaigning.

To maximise the return on their investment in technology, they must now “feed the beast” with more data. Benefiting from weak regulation over data marketing, this means that they will be able to gather even more personal information in the years to come, without having to explain to voters what their data are used for or how they are protected. In addition, parties are now involving an increasing number of volunteers in the field for the collection of digital personal information, which also increases the risk of data leakage or misuse.

They have, so far, implemented that change with very limited transparency. Up until now, research in Canada has not been able to identify precisely what kind of information is collected or how it is managed and protected. Canadian political strategists have been somewhat forthcoming in explaining how parties collect personal data and why they use it for electoral purposes (see for instance Giasson et al., 2019; Giasson and Small, 2017; Flanagan, 2014; Marland, 2016). They remain silent, however, on the topics of regulation and data protection.

This lack of transparency is problematic in Canada since party leaders who win elections have much more internal power in British-style parliamentary systems than in the US presidential system. They control the executive and legislative branches as well as the administration of the party. This means that there is no firewall, and no real restrictions on the use of data collected by a party during an election once it arrives in office. In that regard, it was revealed that the Office of the Prime Minister of Canada, Justin Trudeau, used the party’s database to vet judicial nominations (Bergeron, 2019). The same risks apply to Québec.

It is in this context that Élections Québec and the Access to Information Commission of Québec have initiated a broad reflection on the electoral use of personal data by parties. In 2018, following a leak of personal data from donors of a Montréal-based municipal party, the commission contacted the campaign to "examine the measures taken to minimise risks". The commission took the opportunity to "emphasise the importance of political parties being clearly subject to privacy rules, as is the case in British Columbia" (Commission d’accès à l’information du Québec, 2018).

In a report published in February 2019, the Chief Electoral Officer of Québec presented recommendations that parties should follow in their voter data collection and analysis procedures (Élections Québec, 2019). It suggested that provincial and municipal political parties be made subject to a general legislative framework for the protection of personal information. Heeding these calls for change, Québec’s new Minister of Justice and Democratic Reform announced, in November 2019, plans for an overhaul of the province’s regulatory framework on personal data and privacy, which would impose stronger regulations on data protection and use and grant increased investigation powers to the head of the Commission d’accès à l’information. All businesses, organisations, governments and public administrations operating in Québec and collecting personal data would be covered under these new provisions and could be subject to massive fines for any data breach in their systems. Aimed at ensuring better control, transparency and consent of citizens over their data, these measures, to be part of a bill introduced to the National Assembly in 2020, were said to also apply to political parties (Croteau, 2019). However, as this article goes to print, the specific details of the provisions aimed at political parties remain unknown.

This new will to regulate political parties is the result of a perfect storm where three factors came into play at the same time. Thus, in addition to the rapid integration of new data collection technologies by Québec’s main political parties, there was increased pressure from regulatory agencies and an international scandal that changed the media framing of the political use of personal data.

Well beyond the issue of privacy, data collection and analysis for electoral purposes also change some features of our democracy. Technology replacing activists translates into major intra-party changes. In a parliamentary system, this could increase the centralisation of power around party leaders, who now rely less on party members to get elected. This would likely be the case in Québec and in Canada.

Some elements also fuel resistance to change within parties, such as the dependence on digital technologies at the detriment of human contact, fears regarding the reliability of systems or data and the high costs generated by the development and maintenance of databases. For some, party culture also plays a role. A former political strategist who worked closely with former Québec Premier Pauline Marois declared in the media: "You know in some parties, we value the activist work done by old ladies who come to make calls and talk to each voter, one by one" (Radio-Canada, 2017).

As some of our respondents mentioned, parties may move from ‘big data’ to ‘smart data’ in the coming years, as they adapt to or adopt novel technological tools. In an era of partisan flexibility, data marketing seems to have helped some parties find and reach their voters. A move towards ‘smart data’ may now also help them modify those voters’ beliefs with even more targeted digital strategies. What might this mean for democracy in Québec? Will its voters be mobilised or manipulated when parties use their data in upcoming campaigns? Are political parties on the edge of glory or of catastrophe? These questions should be central to the study of data-driven campaigning.

References

Baldwin-Philippi, J. (2017). The Myths of Data-Driven Campaigning. Political Communication, 34(7), 627–633. https://doi.org/10.1080/10584609.2017.1372999

Bashyakarla, V., Hankey, S., Macintyre, S., Rennó, R., & Wright, G. (2019). Personal Data: Political Persuasion. Inside the Influence Industry. How it works. Berlin: Tactical Tech. Retrieved from https://cdn.ttc.io/s/tacticaltech.org/Personal-Data-Political-Persuasion-How-it-works_print-friendly.pdf

Bélair-Cirino, M. (2018). Inquiétude à Québec sur les banques de données politiques [Concern in Quebec City about Political Databanks]. Le Devoir. Retrieved from https://www.ledevoir.com/societe/523240/donnees-personnelles-inquietude-a-quebec

Bélair-Cirino, M. (2017, April 15). Vie privée – Connaître les électeurs grâce aux pétitions [Privacy – Getting to know voters through petitions]. Le Devoir. Retrieved from https://www.ledevoir.com/politique/quebec/496477/vie-privee-connaitre-les-electeurs-grace-aux-petitions

Bergeron, P. (2018, May 26). Le Parti québécois se dote d'une «Force bleue» pour gagner les élections [The Parti Québécois has a "Force Bleue" to win elections]. La Presse. Retrieved from https://www.lapresse.ca/actualites/politique/politique-quebecoise/201805/26/01-5183364-le-parti-quebecois-se-dote-dune-force-bleue-pour-gagner-les-elections.php

Bergeron, É. (2019, April 24). Vérification politiques sur de potentiels juges: l’opposition crie au scandale [Political checks on potential judges: the opposition cries scandal]. TVA Nouvelles. Retrieved from https://www.tvanouvelles.ca/2019/04/24/verification-politiques-sur-de-potentiels-juges-lopposition-crie-au-scandale

Bennett, C. J. (2018). Data-driven elections and political parties in Canada: privacy implications, privacy policies and privacy obligations. Canadian Journal of Law and Technology, 16(2), 195–226. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3146964

Bennett, C. J. (2016). Voter databases, micro-targeting, and data protection law: can political parties campaign in Europe as they do in North America? International Data Privacy Law, 6(4), 261–275. https://doi.org/10.1093/idpl/ipw021

Bennett, C. J. (2015). Trends in voter surveillance in Western societies: privacy intrusions and democratic implications. Surveillance & Society, 13(3-4), 370–384. https://doi.org/10.24908/ss.v13i3/4.5373

Bennett, C. J. (2013). The politics of privacy and the privacy of politics: Parties, elections and voter surveillance in Western democracies. First Monday, 18(8). https://doi.org/10.5210/fm.v18i8.4789

Blais, A., & Robillard, A. (2017, October 4). 1 Million $ pour un logiciel électoral [$1 million for election software]. Le Journal de Montréal. Retrieved from https://www.journaldemontreal.com/2017/10/04/1-million--pour-un-logiciel-electoral

Cadwalladr, C., & Graham-Harrison, E. (2018, March 17). Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. The Guardian. Retrieved from https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election

Carlile, C. N. (2017). Data and Targeting in Canadian Politics: Are Provincial Parties Taking Advantage of the Latest Political Technology? [Master’s thesis, University of Calgary]. Calgary: University of Calgary. https://doi.org/10.11575/PRISM/5226

Castonguay, A. (2015, September 14). Les partis politiques vous espionnent [The political parties are spying on you]. L’Actualité. Retrieved from https://lactualite.com/societe/les-partis-politiques-vous-espionnent/

Champagne, V. (2013, September 25). Des logiciels de la Rive-Nord pour gagner les élections [Rive-Nord software to win elections]. Ici Radio-Canada.

Commission d’accès à l’information du Québec. (2018, April 3). La Commission d’accès à l’information examinera les faits sur la fuite de données personnelles de donateurs du parti Équipe Denis Coderre [The Commission d'accès à l'information will examine the facts on the leak of personal data of Team Denis Coderre donors]. Retrieved from http://www.cai.gouv.qc.ca/la-commission-dacces-a-linformation-examinera-les-faits-sur-la-fuite-de-donnees-personnelles-de-donateurs-du-parti-equipe-denis-coderre/

Croteau, M. (2018, August 20). Ce que les partis savent sur vous [What the parties know about you]. La Presse+. Retrieved from http://mi.lapresse.ca/screens/8a829cee-9623-4a4c-93cf-3146a9c5f4cc__7C___0.html

Croteau, M. (2019, November 22). Données personnelles: un chien de garde plus imposant [Personal data: a more imposing watchdog]. La Presse+. Retrieved from https://www.lapresse.ca/actualites/politique/201911/22/01-5250741-donnees-personnelles-un-chien-de-garde-plus-imposant.php

Del Duchetto, J.-C. (2016). Le marketing politique chez les partis politiques québécois lors des élections de 2012 et de 2014 [Political marketing by Quebec political parties in the 2012 and 2014 elections] [Master’s thesis, University of Montréal]. Retrieved from http://hdl.handle.net/1866/19404

Delacourt, S. (2013). Shopping for votes. How politicians choose us and we choose them. Madeira Park: Douglas & McIntyre.

Dobber, T., Trilling, D., Helberger, N., & de Vreese, C. H. (2017). Two crates of beer and 40 pizzas: the adoption of innovative political behavioural targeting techniques. Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.777

Dufresne, Y., Tessier, C., & Montigny, E. (2019). Generational and Life-Cycle Effects on Support for Quebec Independence. French politics, 17(1), 50–63. https://doi.org/10.1057/s41253-019-00083-9

Ehrhard, T., Bambade, A., & Colin, S. (2019). Digital campaigning in France, a Wide Wild Web? Emergence and evolution of the market and Its players. In A. M. G. Solo (Ed.), Handbook of Research on Politics in the Computer Age (pp. 113-126). Hershey (PA), USA: IGI Global. https://doi.org/10.4018/978-1-7998-0377-5.ch007

Élections Québec. (2019). Partis politiques et protection des renseignements personnels: exposé de la situation québécoise, perspectives comparées et recommandations [Political Parties and the Protection of Personal Information: Presentation of the Quebec Situation, Comparative Perspectives and Recommendations]. Retrieved from https://www.pes.electionsquebec.qc.ca/services/set0005.extranet.formulaire.gestion/ouvrir_fichier.php?d=2002

Enli, G., & Moe, H. (2013). Social media and election campaigns – key tendencies and ways forward. Information, Communication & Society, 16(5), 637–645. https://doi.org/10.1080/1369118x.2013.784795

Flanagan, T. (2014). Winning power. Canadian campaigning in the 21st century. Montréal; Kingston: McGill-Queen’s University Press.

Flanagan, T. (2010). Campaign strategy: triage and the concentration of resources. In H. MacIvor (Ed.), Election (pp. 155-172). Toronto: Emond Montgomery Publications.

Fortier, M. (2014, May 29). La liste électorale du Québec vendue sur Internet [Quebec's list of electors sold on the Internet]. Le Devoir. Retrieved from https://www.ledevoir.com/societe/409526/la-liste-electorale-du-quebec-vendue-sur-internet

Giasson, T., & Small, T. A. (2017). Online, all the time: the strategic objectives of Canadian opposition parties. In A. Marland, T. Giasson, & A. L. Esselment (Eds.), Permanent campaigning in Canada (pp. 109-126). Vancouver: University of British Columbia Press.

Giasson, T., Le Bars, G. & Dubois, P. (2019). Is Social Media Transforming Canadian Electioneering? Hybridity and Online Partisan Strategies in the 2012 Québec Election. Canadian Journal of Political Science, 52(2), 323–341. https://doi.org/10.1017/s0008423918000902

Gibson, R. K. (2015). Party change, social media and the rise of ‘citizen-initiated’ campaigning. Party Politics, 21(2), 183-197. https://doi.org/10.1177/1354068812472575

Hersh, E. D. (2015). Hacking the electorate: how campaigns perceive voters. Cambridge: Cambridge University Press. https://doi.org/10.1017/cbo9781316212783

Howard, P. N., & Kreiss, D. (2010). Political parties and voter privacy: Australia, Canada, the United Kingdom, and United States in comparative perspective. First Monday, 15(12). https://doi.org/10.5210/fm.v15i12.2975

Joncas, H. (2018, July 28). Partis politiques : ils vous ont tous fichés [Political parties: they've got you all on file…]. Journal de Montréal. Retrieved from https://www.journaldemontreal.com/2018/07/28/partis-politiques-ils-vous-ont-tous-fiches

Karpf, D., Kreiss, D., Nielsen, R. K., & Powers, M. (2015). The role of qualitative methods in political communication research: past, present, and future. International Journal of Communication, 9(1), 1888–1906. Retrieved from https://ijoc.org/index.php/ijoc/article/view/4153

Kreiss, D. (2016). Prototype politics. Technology-intensive campaigning and the data of democracy. Oxford, UK: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199350247.001.0001

Lajoie, G. (2015a, December 3). Les enquêteurs du DGEQ privés des informations contenues dans la liste électorale [DGEQ investigators deprived of the information contained in the list of electors]. Le Journal de Montréal. Retrieved from https://www.journaldemontreal.com/2015/12/03/le-dge-prive-ses-propres-enqueteurs-des-informations

Lajoie, G. (2015b, November 27). André Drolet ne peut plus souhaiter bonne fête à ses électeurs [André Drolet can no longer wish his constituents a happy birthday]. Le Journal de Québec. Retrieved from https://www.journaldequebec.com/2015/11/27/interdit-de-souhaiter-bon-anniversaire-a-ses-electeurs

Langlois, S. (2018). Évolution de l'appui à l'indépendance du Québec de 1995 à 2015 [Evolution of Support for Quebec Independence from 1995 to 2015]. In A. Binette and P. Taillon (Eds.), La démocratie référendaire dans les ensembles plurinationaux (pp. 55-84). Québec: Presses de l'Université Laval.

Marland, A. (2016). Brand command: Canadian politics and democracy in the age of message control. Vancouver: University of British Columbia Press.

Marland, A., Giasson, T., & Lees-Marshment, J. (2012). Political marketing in Canada. Vancouver: University of British Columbia Press.

McKelvey, F., & Piebiak, J. (2018). Porting the political campaign: The NationBuilder platform and the global flows of political technology. New Media & Society, 20(3), 901–918. https://doi.org/10.1177/1461444816675439

Montigny, E. (2015). The decline of activism in political parties: adaptation strategies and new technologies. In G. Lachapelle & P. J. Maarek (Eds.), Political parties in the digital age. The Impact of new technologies in politics (pp. 61-72). Berlin: De Gruyter. https://doi.org/10.1515/9783110413816-004

Munroe, K. B., & Munroe, H. D. (2018). Constituency campaigning in the age of data. Canadian Journal of Political Science, 51(1), 135–154. https://doi.org/10.1017/S0008423917001135

Patten, S. (2017). Databases, microtargeting, and the permanent campaign: a threat to democracy. In A. Marland, T. Giasson, & A. Esselment (Eds.), Permanent campaigning in Canada (pp. 47-64). Vancouver: University of British Columbia Press.

Patten, S. (2015). Data-driven microtargeting in the 2015 general election. In A. Marland and T. Giasson (Eds.), 2015 Canadian election analysis. Communication, strategy, and democracy. Vancouver: University of British Columbia Press. Retrieved from http://www.ubcpress.ca/asset/1712/election-analysis2015-final-v3-web-copy.pdf

Pelletier, R. (1989). Partis politiques et société québécoise [Political parties and Quebec society]. Montréal: Québec Amérique.

Radio-Canada. (2017, October 1). Episode of Sunday, October 1, 2017 [Television series episode]. In Les Coulisses du Pouvoir [Behind the scenes of power]. ICI RDI. Retrieved from https://ici.radio-canada.ca/tele/les-coulisses-du-pouvoir/site/episodes/391120/joly-charest-sondages

Robichaud, O. (2018, August 20). Les partis politiques s'échangent vos coordonnées personnelles [Political parties exchange your personal contact information]. Huffpost Québec. Retrieved from https://quebec.huffingtonpost.ca/entry/les-partis-politiques-sechangent-vos-coordonnees-personnelles_qc_5cccc8ece4b089f526c6f070

Salvet, J.-M. (2018, January 31). Entente entre le PLQ et Data Sciences: «Tous les partis politiques font ça», dit Couillard [Agreement between the QLP and Data Sciences: "All political parties do that," says Couillard]. Le Soleil. Retrieved from https://www.lesoleil.com/actualite/politique/entente-entre-le-plq-et-data-sciences-tous-les-partis-politiques-font-ca-dit-couillard-21f9b1b2703cdba5cd95e32e7ccc574f

Thomas, P. G. (2015). Political parties, campaigns, data, and privacy. In A. Marland and T. Giasson (Eds.), 2015 Canadian election analysis. Communication, strategy, and democracy (pp. 16-17). Vancouver: University of British Columbia Press. Retrieved from http://www.ubcpress.ca/asset/1712/election-analysis2015-final-v3-web-copy.pdf

Vaccari, C. (2013). Digital politics in western democracies: a comparative study. Baltimore: Johns Hopkins University Press.

Yawney, L. (2018). Understanding the “micro” in micro-targeting: an analysis of the 2018 Ontario provincial election [Master’s thesis, University of Victoria]. Retrieved from https://dspace.library.uvic.ca//handle/1828/10437

Footnotes

1. Even though there are 22 officially registered political parties in Québec, all independent and autonomous from their counterparts at the federal level, only four are represented at the National Assembly: CAQ, QLP, QS and PQ. Since the Québec political system is based on the Westminster model, each MNA is elected in a given constituency by a first-past-the-post ballot.

2. According to the QS website (viewed July 2, 2019).

3. Websites viewed on 27 March 2019.

Towards a holistic perspective on personal data and the data-driven election paradigm

This commentary is part of Data-driven elections, a special issue of Internet Policy Review guest-edited by Colin J. Bennett and David Lyon.

Politics is an art and not a science, and what is required for its mastery is not the rationality of the engineer but the wisdom and the moral strength of the statesman. - Hans Morgenthau, Scientific Man versus Power Politics

Voters, industry representatives, and lawmakers – and not infrequently, journalists and academics as well – have asked one question more than any other when presented with evidence of how personal data is changing modern-day politicking: “Does it work?” As my colleagues and I have detailed in our report, Personal Data: Political Persuasion, the convergence of politics and commercial data brokering has transformed personal data into a political asset, a means for political intelligence, and an instrument for political influence. The practices we document are varied and global: an official campaign app requesting camera and microphone permissions in India, experimentation to select slogans designed to trigger emotional responses from Brexit voters, a robocalling-driven voter suppression campaign in Canada, attack ads used to control voters’ first impressions on search engines in Kenya, and many more.

Asking “Does it work?” is understandable for many reasons, including to address any real or perceived damage to the integrity of an election, to observe shifts in attitudes or voting behaviour, or perhaps to ascertain and harness the democratic benefits of the technology in question. However, discourse fixated on the efficacy of data-intensive tools is fraught with abstraction and reflects a shortsighted appreciation for the full political implications of data-driven elections.

“Does it work?”

The question “Does it work?” is very difficult to answer with any degree of confidence regardless of the technology in question: personality profiling of voters to influence votes, natural language processing applied to the Twitter pipeline to glean information about voters’ political leanings, political ads delivered in geofences, or a myriad of others.

First, the question is too general with respect to the details it glosses over. The technologies themselves are a heterogeneous mix, and their real-world implementations are manifold. Furthermore, questions of efficacy are often divorced from context, and a technology’s usefulness to a campaign very likely depends on the sociopolitical context in which it lives. Finally, the question of effectiveness continues to be studied extensively. Predictably, the conclusions of peer-reviewed research vary.

As one example, the effectiveness of implicit social pressure in direct mail in the United States evidently remains inconclusive. The motivation for this research is the observation that voting is a social norm responsive to others’ impressions (Blais, 2000; Gerber & Rogers, 2009). However, some evidence suggests that explicit social pressure to mobilise voters (e.g., by disclosing their vote histories) may seem invasive and can backfire (Matland & Murray, 2013). In an attempt to preserve the benefits of social pressure while mitigating its drawbacks, researchers have explored whether implicit social pressure in direct mail (i.e., mailers with an image of eyes, reminding recipients of their social responsibility) boosts turnout on election day. Of their evaluation of implicit social pressure, which had apparently been regarded as effective, political scientists Richard Matland and Gregg Murray concluded that “The effects are substantively and statistically weak at best and inconsistent with previous findings” (Matland & Murray, 2016). In response to similar, repeated findings from Matland and Murray, Costas Panagopoulos wrote that their work in fact “supports the notion that eyespots likely stimulate voting, especially when taken together with previous findings” (Panagopoulos, 2015). Panagopoulos soon thereafter authored a paper arguing that the true impact of implicit social pressure actually varies with political identity, claiming that the effect is pronounced for Republicans but not for Democrats or Independents, while Matland maintained that the effect is “fairly weak” (Panagopoulos & van der Linden, 2016; Matland, 2016).

Similarly, studies on the effects of door-to-door canvassing lack consensus (Bhatti et al., 2019). Donald Green, Mary McGrath, and Peter Aronow published a review of seventy-one canvassing experiments and found their average impact to be robust and credible (Green, McGrath, & Aronow, 2013). A number of other experiments have demonstrated that canvassing can boost voter turnout outside the American-heavy literature: among students in Beijing in 2003, with British voters in 2005, and for women in rural Pakistan in 2008 (Guan & Green, 2006; John & Brannan, 2008; Giné & Mansuri, 2018). Studies from Europe, however, call into question the generalisability of these findings. Two studies on campaigns in 2010 and 2012 in France both produced ambiguous results, as the true effect of canvassing was not credibly positive (Pons, 2018; Pons & Liegey, 2019). Experiments conducted during the 2013 Danish municipal elections observed no definitive effect of canvassing, while Enrico Cantoni and Vincent Pons found that visits by campaign volunteers in Italy helped increase turnout, but those by the candidates themselves did not (Bhatti et al., 2019; Cantoni & Pons, 2017). In some cases, the effect of door-to-door canvassing was neither positive nor ambiguous but distinctly counterproductive. Florian Foos and Peter John observed that voters contacted by canvassers and given leaflets for the 2014 British European Parliament elections were 3.7 percentage points less likely to vote than those in the control group (Foos & John, 2018). Taken together, the effects of canvassing still seem positive in Europe, but they are less pronounced than in the US. These findings have led some scholars to note that “practitioners should be cautious about assuming that lessons from a US-dominated field can be transferred to their own countries’ contexts” (Bhatti et al., 2019).
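The experiments surveyed above all rest on the same arithmetic: compare turnout between randomly assigned treatment and control groups and ask whether the difference in percentage points is distinguishable from noise. A minimal sketch of that comparison, with entirely hypothetical numbers (none drawn from the studies cited):

```python
import math

# Illustrative only: the two-proportion comparison underlying turnout
# field experiments. All figures below are hypothetical.
def turnout_effect(voted_treat, n_treat, voted_ctrl, n_ctrl):
    """Return (effect in percentage points, z statistic)."""
    p_t = voted_treat / n_treat   # turnout among canvassed voters
    p_c = voted_ctrl / n_ctrl     # turnout among the control group
    effect_pp = (p_t - p_c) * 100
    # Pooled standard error under the null hypothesis of no effect
    p_pool = (voted_treat + voted_ctrl) / (n_treat + n_ctrl)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_treat + 1 / n_ctrl))
    z = (p_t - p_c) / se
    return effect_pp, z

# Hypothetical experiment: 5,000 canvassed households, 5,000 controls
effect, z = turnout_effect(voted_treat=2150, n_treat=5000,
                           voted_ctrl=2000, n_ctrl=5000)
print(f"effect: {effect:.1f} pp, z = {z:.2f}")  # → effect: 3.0 pp, z = 3.04
```

The same arithmetic with a negative difference would yield the kind of counterproductive estimate Foos and John report; whether such an effect replicates across contexts is precisely what the conflicting studies above dispute.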

A cursory glance at a selection of literature related to these two cases alone – implicit social pressure and canvassing – illustrates how tricky answering “Does it work?” is. Although many of the technologies in use today are personal data-supercharged analogues of these antecedents (e.g., canvassing apps with customised scripts and talking points based on data about each household’s occupants instead of generic, door-to-door knocking), I have no reason to suspect that analyses of data-powered technologies would be any different. The short answer to “Does it work?” is that it depends. It depends on baseline voter turnout rates, print vs. digital media, online vs. offline vs. both combined, targeting young people vs. older people, reaching members of a minority group vs. a majority group, partisan vs. nonpartisan messages, cultural differences, the importance of the election, local history, and more. Indeed, factors like the electoral setup may alter the effectiveness of a technology altogether. A tool for political persuasion might work in a first-past-the-post contest in the United States but not in a European system of proportional representation in which winner-take-all stakes may be tempered. This is not to suggest that asking “Does it work?” is a futile endeavour – indeed there are potential democratic benefits to doing so – but rather that it is both limited in scope and rather abstract given the multitude of factors and conditions at play in practice.

Political calculus and algorithmic contagion

With this in mind, I submit that a more useful approach to appreciating the full impact of data-driven elections may be to consider the preconditions that allow data-intensive practices to thrive and to examine their consequences, rather than to remain preoccupied with the efficacy of the practices themselves.

In a piece published in 1986, philosopher Ian Hacking coined the term ‘semantic contagion’ to describe the process of ascribing linguistic and cultural currency to a phenomenon by naming it and thereby also contributing to its spread (Hacking, 1999). I propose that the prevailing political calculus, spurred on by the commercial success of “big data” and “AI”, appears overtaken by an ‘algorithmic contagion’ of sorts. On one level, algorithmic contagion speaks to the widespread logic of quantification. For example, understanding an individual is difficult, so data brokers instead measure people along a number of dimensions like level of education, occupation, credit score, and others. On another level, algorithmic contagion in this context describes an interest in modelling anything that could be valuable to political decision-making, as Market Predict’s political page suggests. It presumes that complex phenomena, like an individual’s political whims, can be predicted and known within the structures of formalised algorithmic process, and that they ought to be. According to the Wall Street Journal, a company executive claimed that Market Predict’s “agent-based modelling allows the company to test the impact on voters of events like news stories, political rallies, security scares or even the weather” (Davies, 2019).

Algorithmic contagion also encompasses a predetermined set of boundaries. Thinking within the capabilities of algorithmic methods prescribes a framework to interpret phenomena within bounds that enable the application of algorithms to those phenomena. In this respect, algorithmic contagion can influence not only what is thought but also how. This conceptualisation of algorithmic contagion encompasses the ontological (through efforts to identify and delineate components that structure a system, like an individual’s set of beliefs), the epistemological (through the iterative learning process and distinction drawn between approximation and truth), and the rhetorical (through authority justified by appeals to quantification).

Figure 1: The political landing page of Market Predict, a marketing optimisation firm for brand and political advertisers, that explains its voter simulation technology. It claims to, among other things, “Account for the irrationality of human decision-making”. Hundreds of companies offer related services. Source: Market Predict Political Advertising

This algorithmic contagion-informed formulation of politics bears some connection to the initial “Does it work?” query but expands the domain in question to not only the applications themselves but also the components of the system in which they operate – a shift that an honest analysis of data-driven elections, and not merely ad-based micro-targeting, demands. It explains why and how a candidate for mayor of Taipei in 2014 launched a viral social media sensation by going to a tattoo parlour. He did not visit the parlour to get a tattoo, to chat with an artist about possible designs, or out of a genuine interest in meeting the people there. He went because a digital listening company that mines troves of data and services campaigns across Southeast Asia had drawn up a list of actions that would generate the most online buzz for his campaign, and visiting a tattoo parlour was at the top of the list.

Figure 2: A still from a video documenting Dr Ko-Wen Je’s visit to a tattoo parlour, prompting a social media sensation. His campaign uploaded the video a few days before municipal elections in which he was elected mayor of Taipei in 2014. The post on Facebook has 15,000 likes, and the video on YouTube has 153,000 views. Against a backdrop of creeping voter surveillance, Dr Ko-Wen Je’s visit to this tattoo parlour raises questions about the authenticity of political leaders. (Image brightened for clarity) Sources: Facebook and YouTube

As politics continues to evolve in response to algorithmic contagion and to the data industrial complex governing the commercial (and now also political) zeitgeist, it is increasingly concerned with efficiency and speed (Schechner & Peker, 2018). Which influencer voters must we win over, and whom can we afford to ignore? Who is both the most likely to turn out to vote and also the most persuadable? How can our limited resources be allocated as efficiently as possible to maximise the probability of winning? In this nascent approach to politics as a practice to be optimised, who is deciding what is optimal? Relatedly, as the infrastructure of politics changes, who owns the infrastructure upon which more and more democratic contests are waged, and to what incentives do they respond?

Voters are increasingly treated as consumers – measured, ranked, and sorted by a logic imported from commerce. Instead of being sold shoes, plane tickets, and lifestyles, voters are being sold political leaders, and structural similarities to other kinds of business are emerging. One challenge posed by data-driven election operations is the manner in which responsibilities have effectively been transferred. Voters expect their interests to be protected by lawmakers while indiscriminately clicking “I Agree” to free services online. Efforts to curtail problems through laws are proving to be slow, mired in legalese, and vulnerable to technological circumvention. Based on my conversations with them, venture capitalists are reluctant to champion a transformation of the whole industry by imposing unprecedented privacy standards on their budding portfolio companies, which claim to be merely responding to the demands of users. The result is an externalised cost shouldered by the public. In this case, however, the externality is not an environmental or a financial cost but a democratic one. The manifestations of these failures include the disintegration of the public sphere and of a shared understanding of facts, polarised electorates embroiled in 365-day-a-year campaign cycles online, and open campaign finance and conflict of interest loopholes introduced by data-intensive campaigning, all of which are exacerbated by a growing revolving door between the tech industry and politics (Kreiss & McGregor, 2017).

Personal data and political expediency

One response to Cambridge Analytica is to ask, “Does psychometric profiling of voters work?” (Rosenberg et al., 2018). A better response examines what the use of psychometric profiling reveals about the intentions of those attempting to acquire political power. It asks what it means that a political campaign was apparently willing to invest the time and money to build personality profiles of every single adult in the United States in order to win an election, regardless of the accuracy of those profiles, even when surveys of Americans indicate that they do not want political advertising tailored to their personal data (Turow et al., 2012). And it explores the ubiquity of services that may lack Cambridge Analytica’s sensationalised scandal but share the company’s practice of collecting and using data in opaque ways for clearly political purposes.

The ‘Influence Industry’ underlying this evolution has evangelised the value of personal data, but to whatever extent personal data is an asset, it is also a liability. What risks do the collection and use of personal data expose? In the language of the European Union’s General Data Protection Regulation (GDPR), who are the data controllers, and who are the data subjects in matters of political data, which is, increasingly, all data? In short, who gains control, and who loses it?

As a member of a practitioner-oriented group based in Germany with a grounding in human rights, I worry about data-intensive practices in elections and the larger political sphere going awry, especially as much of our collective concern seems focused on questions of efficacy while companies race to capitalise on the market opportunity. Even by the standards of its time, the Holocaust was a ruthlessly data-driven, calculated, and efficient undertaking fuelled by vast amounts of personal data. As Edwin Black documents in IBM & The Holocaust: The Strategic Alliance between Nazi Germany and America's Most Powerful Corporation, personal data managed by IBM was an indispensable resource for the Nazi regime. IBM’s President at the time, Thomas J. Watson Sr., the namesake of today’s IBM Watson, went to great lengths to profit from dealings between IBM’s German subsidiary and the Nazi party. The firm was such an important ally that Hitler awarded Watson the Order of the German Eagle for his invaluable service to the Third Reich. IBM aided the Nazis’ record-keeping across several phases of the Holocaust, including the identification of Jews, ghettoisation, deportation, and extermination (Black, 2015). Black writes that “Prisoners were identified by descriptive Hollerith cards, each with columns and punched holes detailing nationality, date of birth, marital status, number of children, reason for incarceration, physical characteristics, and work skills” (Black, 2001). These Hollerith cards were sorted in machines physically housed in concentration camps.

The precursors to these Hollerith cards were originally developed to track personal details for the first American census. The next American census, to be held in 2020, has already been a highly politicised affair with respect to the addition of a citizenship question (Ballhaus & Kendall, 2019). President Trump recently abandoned an effort to formally add a citizenship question to the census, vowing to seek this information elsewhere, and the US Census Bureau has already published work investigating the quality of alternate citizenship data sources for the 2020 Census (Brown et al., 2018). For stakeholders interested in upholding democratic ideals, focusing on the accuracy of this alternate citizenship data is myopic; that an alternate source of data is being investigated to potentially advance an overtly political goal is the crux of the matter.

Figure 3: A card showing the personal data of Symcho Dymant, a prisoner at Buchenwald Concentration Camp. The card includes many pieces of personal data, including name, birth date, condition, number of children, place of residence, religion, citizenship, residence of relatives, height, eye colour, description of his nose, mouth, ears, teeth, and hair. Source: US Holocaust Memorial Museum

This prospect may seem far-fetched and alarmist to some, but I do not think so. If the political tide were to turn, the same personal data used for a benign digital campaign could be employed in insidious and downright unscrupulous ways if it were ever expedient to do so. What if a door-to-door canvassing app instructed volunteers walking down a street to skip your home and not remind your family to vote because a campaign profiled you as supporters of the opposition candidate? What if a data broker classified you as Muslim, or if an algorithmic analysis of your internet browsing history suggests that you are prone to dissent? Possibilities like these are precisely why a fixation on efficacy is parochial. Given the breadth and depth of personal data used for political purposes, the line between consulting data to inform a political decision and appealing to data – given the rhetorical persuasiveness it enjoys today – in order to weaponise a political idea is extremely thin.

A holistic appreciation of data-driven elections’ democratic effects demands more than simply measurement, and answering “Does it work?” is merely one component of grasping how campaigning transformed by technology and personal data is influencing our political processes and the societies they engender. As digital technologies continue to rank, prioritise, and exclude individuals even when – indeed, especially when – inaccurate, we ought to consider the larger context in which technological practices shape political outcomes in the name of efficiency. The infrastructure of politics is changing, charged with an algorithmic contagion, and a well-rounded perspective requires that we ask not only how these changes are affecting our ideas of who can participate in our democracies and how they do so, but also who derives value from this infrastructure and how they are incentivised, especially when benefits are enjoyed privately but costs sustained democratically. The quantitative tools underlying the ‘datafication’ of politics are neither infallible nor safe from exploitation, and issues of accuracy grow moot when data-intensive tactics are enlisted as pawns in political agendas. A new political paradigm is emerging whether or not it works.

References

Ballhaus, R., & Kendall, B. (2019, July 11). Trump Drops Effort to Put Citizenship Question on Census. The Wall Street Journal. Retrieved from https://www.wsj.com/articles/trump-to-hold-news-conference-on-census-citizenship-question-11562845502

Bhatti, Y., Olav Dahlgaard, J., Hedegaard Hansen, J., & Hansen, K. M. (2019). Is Door-to-Door Canvassing Effective in Europe? Evidence from a Meta-Study across Six European Countries. British Journal of Political Science, 49(1), 279–290. https://doi.org/10.1017/S0007123416000521

Black, E. (2015, March 17). IBM's Role in the Holocaust – What the New Documents Reveal. HuffPost. Retrieved from https://www.huffpost.com/entry/ibm-holocaust_b_1301691

Black, E. (2001). IBM & The Holocaust: The Strategic Alliance between Nazi Germany and America's Most Powerful Corporation. New York: Crown Books.

Blais, A. (2000). To Vote or Not to Vote: The Merits and Limits of Rational Choice Theory. Pittsburgh: University of Pittsburgh Press. https://doi.org/10.2307/j.ctt5hjrrf

Brown, J. D., Heggeness, M. L., Dorinski, S., Warren, L., & Yi, M. (2018). Understanding the Quality of Alternative Citizenship Data Sources for the 2020 Census [Discussion Paper No. 18-38]. Washington, DC: Center for Economic Studies. Retrieved from https://www2.census.gov/ces/wp/2018/CES-WP-18-38.pdf

Cantoni, E., & Pons, V. (2017). Do Interactions with Candidates Increase Voter Support and Participation? Experimental Evidence from Italy [Working Paper No. 16-080]. Boston: Harvard Business School. Retrieved from https://www.hbs.edu/faculty/Publication%20Files/16-080_43ffcfcb-74c2-4713-a587-ebde30e27b64.pdf

Davies, P. (2019, June 10). A New Crystal Ball to Predict Consumer and Investor Behavior. The Wall Street Journal. Retrieved from https://www.wsj.com/articles/a-new-crystal-ball-to-predict-consumer-and-investor-behavior-11560218820?mod=rsswn

Foos, F., & John, P. (2018). Parties Are No Civic Charities: Voter Contact and the Changing Partisan Composition of the Electorate. Political Science Research and Methods, 6(2), 283–298. https://doi.org/10.1017/psrm.2016.48

Gerber, A. S., & Rogers, T. (2009). Descriptive Social Norms and Motivation to Vote: Everybody’s Voting and so Should You. The Journal of Politics, 71(1), 178–191. https://doi.org/10.1017/S0022381608090117

Giné, X., & Mansuri, G. (2018). Together We Will: Experimental Evidence on Female Voting Behavior in Pakistan. American Economic Journal: Applied Economics, 10(1), 207–235. https://doi.org/10.1257/app.20130480

Green, D. P., McGrath, M. C., & Aronow, P. M. (2013). Field Experiments and the Study of Voter Turnout. Journal of Elections, Public Opinion and Parties, 23(1), 27–48. https://doi.org/10.1080/17457289.2012.728223

Guan, M., & Green, D. P. (2006). Noncoercive Mobilization in State-Controlled Elections: An Experimental Study in Beijing. Comparative Political Studies, 39(10), 1175–1193. https://doi.org/10.1177/0010414005284377

Hacking, I. (1999). Making Up People. In M. Biagioli (Ed.), The Science Studies Reader (pp. 161–171). New York: Routledge. Retrieved from http://www.icesi.edu.co/blogs/antro_conocimiento/files/2012/02/Hacking_making-up-people.pdf

John, P., & Brannan, T. (2008). How Different Are Telephoning and Canvassing? Results from a ‘Get Out the Vote’ Field Experiment in the British 2005 General Election. British Journal of Political Science, 38(3), 565–574. https://doi.org/10.1017/S0007123408000288

Kreiss, D., & McGregor, S. C. (2017). Technology Firms Shape Political Communication: The Work of Microsoft, Facebook, Twitter, and Google With Campaigns During the 2016 U.S. Presidential Cycle. Political Communication, 35(2), 155–177. https://doi.org/10.1080/10584609.2017.1364814

Matland, R. (2016). These Eyes: A Rejoinder to Panagopoulos on Eyespots and Voter Mobilization. Political Psychology, 37(4), 559–563. https://doi.org/10.1111/pops.12282

Matland, R. E., & Murray, G. R. (2013). An Experimental Test for ‘Backlash’ Against Social Pressure Techniques Used to Mobilize Voters. American Politics Research, 41(3), 359–386. https://doi.org/10.1177/1532673X12463423

Matland, R. E., & Murray, G. R. (2016). I Only Have Eyes for You: Does Implicit Social Pressure Increase Voter Turnout? Political Psychology, 37(4), 533–550. https://doi.org/10.1111/pops.12275

Panagopoulos, C. (2015). A Closer Look at Eyespot Effects on Voter Turnout: Reply to Matland and Murray. Political Psychology, 37(4). https://doi.org/10.1111/pops.12281

Panagopoulos, C., & van der Linden, S. (2016). Conformity to Implicit Social Pressure: The Role of Political Identity. Social Influence, 11(3), 177–184. https://doi.org/10.1080/15534510.2016.1216009

Pons, V. (2018). Will a Five-Minute Discussion Change Your Mind? A Countrywide Experiment on Voter Choice in France. American Economic Review, 108(6), 1322–1363. https://doi.org/10.1257/aer.20160524

Pons, V., & Liegey, G. (2019). Increasing the Electoral Participation of Immigrants: Experimental Evidence from France. The Economic Journal, 129(617), 481–508. https://doi.org/10.1111/ecoj.12584

Rosenberg, M., Confessore, N., & Cadwalladr, C. (2018, March 17). How Trump Consultants Exploited the Facebook Data of Millions. The New York Times. Retrieved from https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html

Schechner, S., & Peker, E. (2018, October 24). Apple CEO Condemns ‘Data-Industrial Complex’. The Wall Street Journal.

Turow, J., Delli Carpini, M. X., Draper, N. A., & Howard-Williams, R. (2012). Americans Roundly Reject Tailored Political Advertising [Departmental Paper No. 7-2012]. Annenberg School for Communication, University of Pennsylvania. Retrieved from http://repository.upenn.edu/asc_papers/398

Unpacking the “European approach” to tackling challenges of disinformation and political manipulation


This paper is part of Data-driven elections, a special issue of Internet Policy Review guest-edited by Colin J. Bennett and David Lyon.

Introduction

In recent years, the spread of disinformation on online platforms and of micro-targeted, data-driven political advertising has become a serious concern in many countries around the world, in particular as regards the impact these practices may have on informed citizenship and democratic systems. In April 2019, for the first time in the country’s modern history, Switzerland’s supreme court overturned a nationwide referendum on the grounds that the voters were not given complete information and that the vote "violated the freedom of the vote”. While in this case it was the government that had failed to provide correct information, the decision still comes as another warning about the conditions under which elections are held nowadays, and as a confirmation of the role that accurate information plays in this process. There is limited and sometimes even conflicting scholarly evidence as to whether people today are exposed to more diverse political information or trapped in echo chambers, and whether they are more vulnerable to political disinformation and propaganda than before (see, for example: Bruns, 2017, and Dubois & Blank, 2018). Yet many claim so, and cases of misuse of technological affordances and personal data for political goals have been reported globally.

The decision of Switzerland’s supreme court has particularly resonated in Brexit Britain, where the campaign ahead of the European Union (EU) membership referendum left too many people feeling “ill-informed” (Brett, 2016, p. 8). Even before the Brexit referendum took place, the House of Commons Treasury Select Committee complained about “the absence of ‘facts’ about the case for and against the UK’s membership on which the electorate can base their vote” (2016, p. 3). By this account, voters in the United Kingdom were not receiving complete or even truthful information, and there are also concerns that they might have been manipulated by the use of bots (Howard & Kollanyi, 2016) and by the unlawful processing of personal data (ICO, 2018a, 2018b).

The same concerns were raised in the United States during and after the 2016 presidential elections. Several studies have shown evidence of the exposure of US citizens to social media disinformation in the period around the elections (see: Guess et al., 2018, and Allcott & Gentzkow, 2017). In other parts of the world, such as Brazil and several Asian countries, the means and platforms for the transmission of disinformation were somewhat different, but the associated risks have been deemed even higher. Prominent international media, fact-checkers and researchers systematically reported on the scope and spread of disinformation on the Facebook-owned and widely used messaging application WhatsApp during the 2018 presidential elections in Brazil. Freedom House warned that elections in some Asian countries, such as India, Indonesia, and Thailand, were also afflicted by falsified content.

Clearly, online disinformation and unlawful political micro-targeting represent a threat to elections around the globe. The extent to which certain societies are more resilient or more vulnerable to the impact of these phenomena depends on different factors, including the status of journalism and legacy media, levels of media literacy, the political context and legal safeguards (CMPF, forthcoming). Different political and regulatory traditions play a role in shaping the responses to online disinformation and data-driven political manipulation. Accordingly, these range from doing nothing to criminalising the spread of disinformation, as is the case with Singapore’s law 1, which came into effect in October 2019. While there seems to be growing agreement that regulatory intervention is needed to protect democracy, concerns over the negative impact of inadequate or overly restrictive regulation on freedom of expression remain. In his recent reports (2018, 2019), the UN Special Rapporteur on Freedom of Expression, David Kaye, warned against regulation that entrusts platforms with even more power to decide on content removals within very short time frames and without public oversight. Whether certain content is illegal or problematic on other grounds is not always a straightforward decision and often depends on the context in which it is presented. Therefore, as highlighted by the Transatlantic High Level Working Group on Content Moderation Online and Freedom of Expression (2019), requiring platforms to make these content moderation decisions in an automated way, without built-in transparency, and without notice or timely recourse for appeal, carries risks for freedom of expression.

The European Commission (EC) has recognised the exposure of citizens to large-scale online disinformation (2018a) and the micro-targeting of voters based on the unlawful processing of personal data (2018b) as major challenges for European democracies. In response to these challenges, and to ensure citizens’ access to a variety of credible information and sources, the EC has put in place several measures which aim to create an overarching “European approach”. This paper provides an analysis of this approach to identify the key principles upon which it builds, and to assess to what extent, if at all, they differ from the principles of “traditional” regulation of political advertising and media campaigns during the electoral period. The analysis further looks at how these principles are elaborated and whether they reflect the complexity of the challenges identified. The focus is on the EU as it is “articulating a more interventionist approach” to its relations with the online platform companies (Flew et al., 2019, p. 45). Furthermore, due to the size of the European market, any relevant regulation can set the global standard, as has been the case with the General Data Protection Regulation (GDPR) in the area of data protection and privacy (Flew et al., 2019).

The role of (social) media in elections

The paper starts from the notion that a healthy democracy depends on pluralism, and that the role of (social) media in elections and the transparency of data-driven political advertising are among the crucial components of any assessment of the state of pluralism in a given country. In this view, pluralism “implies all measures that ensure citizens' access to a variety of information sources, opinion, voices etc. in order to form their opinion without the undue influence of one dominant opinion forming power” (EC, 2007, p. 5; Valcke et al., 2009, p. 2). Furthermore, it implies that citizens’ access to truthful and accurate information is of central importance.

The media have long played a crucial role in election periods: serving, on the one hand, as wide-reaching platforms for parties and candidates to deliver their messages, and, on the other, helping voters to make informed choices. They set the agenda by prioritising certain issues over others and by deciding on the time and space to be given to candidates; they frame their reporting within a certain field of meaning, in line with the characteristics of different types of media; and, if the law allows, they sell time and space for political advertising (Kelley, 1963). Democracy requires the protection of media freedom and editorial autonomy, but asks that the media be socially responsible. This responsibility implies respect for fundamental standards of journalism, such as impartiality and providing citizens with complete and accurate information. As highlighted on several occasions by the European Commission for Democracy through Law (the so-called Venice Commission) of the Council of Europe (2013, paras. 48, 49): “The failure of the media to provide impartial information about the election campaign and the candidates is one of the most frequent shortcomings that arise during elections”.

Access to the media has been seen as “one of the main resources sought by parties in the campaign period”, and to ensure a level playing field “legislation regarding access of parties and candidates to the public media should be non-discriminatory and provide for equal treatment” (Venice Commission, 2010, para. 148). The key principles of media regulation during the electoral period are therefore media impartiality and equality of opportunity for contenders. Public service media are required to abide by higher standards of impartiality than private outlets, and audiovisual media are more extensively bound by rules than the printed press and online media. This differential treatment is justified by the perceived stronger effects of audiovisual media on voters (Schoenbach & Lauf, 2004) and by the fact that television channels benefit from the public and limited resource of the radio frequency spectrum (Venice Commission, 2009, paras. 24-28, 58).

In the Media Pluralism Monitor (MPM) 2, a research tool supported by the European Commission and designed to assess risks to media pluralism in EU member states, the role of the media in the democratic electoral process is one of 20 key indicators. It is seen as an aspect of political pluralism, and the variables against which the risks are assessed have been elaborated in accordance with the above-mentioned principles. The indicator assesses the existence and implementation of a regulatory and self-regulatory framework for the fair representation of different political actors and viewpoints on public service media and private channels, especially during election campaigns. The indicator also takes into consideration the regulation of political advertising – whether restrictions are imposed to ensure equal opportunities for all political parties and candidates.

The MPM results (Brogi et al., 2018) showed that rules to ensure the fair representation of political viewpoints in news and informative programmes on public service media channels and services are imposed by law in all EU countries. It is, however, less common for such regulation and/or self-regulatory measures to exist for private channels. A similar approach is observed in relation to political advertising rules, which are more often and more strictly defined for public service than for commercial media. Most countries in the EU have a law or another statutory measure that imposes restrictions on political advertising during election campaigns to allow equal opportunities for all candidates. Even though political advertising is “considered as a legitimate instrument for candidates and parties to promote themselves” (Holtz-Bacha & Just, 2017, p. 5), some countries do not allow it at all. Where there is a complete ban on political advertising, public service media provide free airtime on principles of equal or proportionate access. Where paid political advertising is allowed, it is often restricted to the campaign period, and regulation seeks to set limits on, for example, campaign resources and spending, the amount of airtime that can be purchased and the timeframe in which political advertising can be broadcast. In most countries there is a transparency requirement: how much was spent on advertising during the campaign must be disclosed, broken down by spending on different types of media. For traditional media, the regulatory framework requires that political advertising (like any other advertising) be properly identified and labelled as such.

Television remains the main source of news for citizens in the EU (Eurobarometer, 2018a, 2017). However, the continuous rise of online sources and platforms as resources for (political) news and views (Eurobarometer, 2018a), and as channels for more direct and personalised political communication, calls for a deeper examination of the related practice and of the potential risks to be addressed. The ways people find and interact with (political) news, and the ways political messages are shaped and delivered to people, have been changing significantly with the global rise and popularity of online platforms and the features they offer. An increasing number of people, especially the young, use them as doors to news (Newman et al., 2018, p. 15; Shearer, 2018). Politicians are increasingly using the same doors to reach potential voters, and online platforms have become relevant, if not central, to different stages of the whole process. This means that platforms now increasingly perform functions long attributed to the media, and more, by, for example, filtering and prioritising the content offered to users, and selling time and space for political advertising based on data-driven micro-targeting. At the same time, a majority of EU countries still do not have specific requirements that would ensure transparency and fair play in campaigning, including political advertising, in the online environment. According to the available MPM data (Brogi et al., 2018; and preliminary data collected in 2019), only 11 countries (Belgium, Bulgaria, Denmark, Finland, France, Germany, Italy, Latvia, Lithuania, Portugal and Sweden) have legislation or guidelines requiring transparency of online political advertisements. In all cases, it is the general law on political advertising during the electoral period that also applies to the online dimension.

Political advertising, and political communication more broadly, take on different forms in the environment of online platforms, which may hold both promises and risks for democracy (see, for example, Valeriani & Vaccari, 2016; and Zuiderveen Borgesius et al., 2018). There is still limited evidence on the reach of online disinformation in Europe, but a study conducted by Fletcher et al. (2018) suggests that even if the overall reach of publishers of false news is not high, they achieve significant levels of interaction on social media platforms. Disinformation online comes in many different forms, including false context, imposter, manipulated, fabricated or extreme partisan content (Wardle & Derakhshan, 2017), but always with an intention to deceive (Kumar & Shah, 2018). There are also different motivations for the spread of disinformation, including financial and political ones (Morgan, 2018), and different platform affordances affect whether disinformation spreads better as organic content or as paid-for advertising. Vosoughi et al. (2018) have shown that disinformation on Twitter organically travels faster and further than true information, owing to technological possibilities but also to human nature, which is more inclined to spread content that is surprising and emotional, as disinformation often is. On Facebook, on the other hand, the successful spread of disinformation may be significantly attributed to advertising, claim Chiou and Tucker (2018). Accordingly, platforms have put in place different policies towards disinformation. Twitter has recently announced a ban on political advertising, while Facebook continues to run it and exempts politicians’ speech and political advertising from its third-party fact-checking programme.

Beyond the different types of disinformation and the different affordances and policies of platforms, there are “many different actors involved and we’re learning much more about the different tactics that are being used to manipulate the online public sphere, particularly around elections”, warns Susan Morgan (2018, p. 40). Young Mie Kim and others (2018) investigated the groups behind divisive issue campaigns on Facebook in the weeks before the 2016 US elections, and found that most of these campaigns were run by groups which did not file reports with the Federal Election Commission. These groups, clustered by the authors as non-profits, astroturf/movement groups, and unidentifiable “suspicious” groups, sponsored four times more ads than those that did file reports with the Commission. In addition to the variety of groups playing a role in political advertising and political communication on social media today, a new set of tactics is emerging, including the use of automated accounts, so-called bots, and data-driven micro-targeting of voters (Morgan, 2018).

Bradshaw and Howard (2018) have found that governments and political parties in an increasing number of countries, across different political regimes, are investing significant resources in using social media to manipulate public opinion. Political bots, as they note, are used to promote or attack particular politicians, to promote certain topics, to fake a follower base, or to get opponents’ accounts and content removed by reporting them on a large scale. Micro-targeting, as another tactic, is commonly defined as a political advertising strategy that makes use of data analytics to build individual or small-group voter models and to address them with tailored political messages (Bodó et al., 2017). These messages can be drafted with the intention to deceive certain groups and to influence their behaviour, which is particularly problematic in the election period, when decisions of high importance for democracy are made, tensions are high and the time for correction or reaction is scarce.

The main fuel of contemporary political micro-targeting is data gathered from citizens’ online presence and behaviour, including from their social media use. Social media have also been used as a channel for the distribution of micro-targeted campaign messages. This political advertising tactic came into the spotlight with the Cambridge Analytica case, reported by journalist Carole Cadwalladr in 2018. Her investigation, based on information from whistleblower Christopher Wylie, revealed that the data analytics firm Cambridge Analytica, which worked with Donald Trump’s election team and the winning Brexit campaign, harvested the personal data of millions of people’s Facebook profiles without their knowledge and consent, and used it for political advertising purposes (Cadwalladr, 2018). In the EU, the role of social media in elections came high on the agenda of political institutions after the Brexit referendum in 2016, with a particular focus on the issue of ‘fake news’ or disinformation. The reform of the EU’s data protection rules, which resulted in the GDPR, started in 2012. The Regulation was adopted on 14 April 2016, and its scheduled date of application, 25 May 2018, coincided with the outbreak of the Cambridge Analytica case.

Perspective and methodology

Although European elections are primarily the responsibility of national governments, the EU has taken several steps to tackle the issue of online disinformation. In its Communication of 26 April 2018, the EC called these steps a “European approach” (EC, 2018a), with one of the key deliverables being the Code of Practice on Disinformation (2018), presented as a self-regulatory instrument that should encourage online platforms to proactively ensure the transparency of political advertising and restrict the automated spread of disinformation. The Commission’s follow-up Communication of September 2018, focused on securing free and fair European elections (EC, 2018f), suggests that, in the context of elections, the principles set out in the European approach to tackling online disinformation (EC, 2018a) should be seen as complementary to the GDPR (Regulation, 2016/679). The Commission also prepared specific guidance on the application of the GDPR in the electoral context (EC, 2018d). It further suggested considering the Recommendation on election cooperation networks (EC, 2018e), and transparency of political parties, foundations and campaign organisations with regard to financing and practices (Regulation, 2018/673). This paper provides an analysis of the listed legal and policy instruments that form and complement the EU’s approach to tackling disinformation and suspicious tactics of political advertising on online platforms. The Commission’s initiatives in the area of combating disinformation also contain a cybersecurity aspect; however, this subject is technically and politically too complex to be included in this analysis.

The EC considers online platforms to cover a wide range of activities, but the European approach to tackling disinformation is concerned primarily with “online platforms that distribute content, particularly social media, video-sharing services and search engines” (EC, 2018a). This paper employs the same focus and hence the same narrow definition of online platforms. The main research questions are: what are the key principles upon which the European approach to tackling disinformation and political manipulation builds; and to what extent, if at all, do they differ from the principles of “traditional” regulation of political advertising and media campaigns in the electoral period? The analysis further seeks to understand how these principles are elaborated and whether they reflect the complexity of the challenges identified. For this purpose, the ‘European approach’ is understood in a broad sense (EC, 2018f). Looking through the lens of pluralism, the analysis uses a generic inductive approach, a qualitative research approach that allows findings to emerge from the data without pre-defined coding categories (Liu, 2016). This methodological choice was made because this exploratory research sought not only to analyse the content of the above-listed documents, but also the context in which they came into existence and how they relate to one another.

Two birds with one stone: the European approach in creating fair and plural campaigning online

The actions currently contained in the EU’s approach to tackling online disinformation and political manipulation derive from regulation (the GDPR), EC-initiated self-regulation of platforms (the Code of Practice on Disinformation), and the Commission’s non-binding communications and recommendations to the member states. While some of these measures, such as data protection, have a long tradition and have merely been evolving, others represent a new attempt to develop solutions to the problem of platforms (self-regulation). In general, the current European approach can be seen as primarily designed towards (i) preventing unlawful micro-targeting of voters by protecting personal data; and (ii) combating disinformation by increasing the transparency of political and issue-based advertising on online platforms.

Protecting personal data

The elections of May 2019 were the first European Parliament (EP) elections after major concerns arose about the legality and legitimacy of the vote in the US presidential election and the UK's Brexit referendum. They were also the first EP elections held under the GDPR, which became directly applicable across the EU on 25 May 2018. As a regulation, the GDPR is directly binding, but it provides flexibility for certain aspects to be adjusted by individual member states. For example, to balance the right to data protection with the right to freedom of expression, Article 85 of the GDPR provides for exemptions or derogations for the processing of data for “journalistic purposes or the purpose of academic artistic or literary expression”, which should be clearly defined by each member state. While the GDPR provides the tools necessary to address instances of unlawful use of personal data, including in the electoral context, its scope is still not fully and properly understood. Since this was the very first time the GDPR was applied in a European electoral context, the European Commission published, in September 2018, the Guidance on the application of Union data protection law in the electoral context (EC, 2018d).

The data protection regime in the EU is not new, 3 even though it has not been well harmonised and the data protection authorities (DPAs) have had limited enforcement powers. The GDPR aims to address these shortcomings, as it gives DPAs powers to investigate, to correct behaviour and to impose fines of up to 20 million euros or, in the case of a company, up to 4% of its worldwide turnover. In its Communication, the EC (2018d) particularly emphasises the strengthened powers of the authorities and calls on them to use these sanctioning powers, especially in cases of infringement in the electoral context. This is an important shift, as European DPAs have historically been very reluctant to regulate political parties. The GDPR further aims to achieve cooperation and a harmonised interpretation of the Regulation among the national DPAs by establishing the European Data Protection Board (EDPB). The EDPB is made up of the heads of the national data protection authorities and of the European Data Protection Supervisor (EDPS) or their representatives. The role of the EDPS is to ensure that EU institutions and bodies respect people's right to privacy when processing their personal data. In March 2018, the EDPS published an Opinion on online manipulation and personal data, confirming the growing impact of micro-targeting in the electoral context and a significant shortfall in transparency and in the provision of fair processing information (EDPS, 2019).

The Commission guidance on the application of GDPR in the electoral context (EC, 2018d) underlines that it “applies to all actors active in the electoral context”, including European and national political parties, European and national political foundations, platforms, data analytics companies and public authorities responsible for the electoral process. Any data processing should comply with the GDPR principles, such as fairness and transparency, and serve specified purposes only. The guidance provides relevant actors with additional explanation of the notions of “personal data” and of “sensitive data”, whether collected or inferred. Sensitive data may include political opinions, ethnic origin, sexual orientation and the like, and the processing of such data is generally prohibited unless one of the specific justifications provided for by the GDPR applies: where the data subject has given explicit, specific, fully informed consent to the processing; where the information is manifestly made public by the data subject; where the data relate to “the members or to former members of the body or to persons who have regular contact with” it; or where processing “is necessary for reasons of substantial public interest” (GDPR, Art. 9, para. 2). In a statement adopted in March 2019, the EDPB points out that derogations concerning special categories of data should be interpreted narrowly. In particular, the derogation for cases where a person makes his or her political opinion public cannot be used to legitimise inferred data. Bennett (2016) also warns that the vagueness of several terms used to describe exceptions from the application of Article 9(1) might lead to confusion or inconsistencies in interpretation as the processing of political opinions becomes increasingly relevant to contemporary political campaigning.

The principles of fairness and transparency require that individuals (data subjects) be informed of the existence of a processing operation and its purposes (GDPR, Art. 5). The Commission’s guidance clearly states that data controllers (those who decide on the means and purposes of processing, such as political parties or foundations) have to inform individuals about key aspects of the processing of their personal data, including why they receive personalised messages from different organisations; what the source of the data is when not collected directly from the person; how data from different sources are combined and used; and whether automated decision-making has been applied in processing.

Despite the strengthened powers and an explicit call to act more in the political realm (EC, 2018d), to date we have not seen many investigations by DPAs into political parties under the GDPR. An exception is the UK Information Commissioner, Elizabeth Denham. In May 2017, she announced the launch of a formal investigation into the use of data analytics for political purposes, following the wrongdoings exposed by journalists, in particular Carole Cadwalladr, during the EU Referendum, involving parties, platforms and data analytics companies such as Cambridge Analytica. The report of November 2018 concludes:

that there are risks in relation to the processing of personal data by many political parties. Particular concerns include the purchasing of marketing lists and lifestyle information from data brokers without sufficient due diligence, a lack of fair processing and the use of third-party data analytics companies, with insufficient checks around consent (ICO, 2018a, p. 8).

As a result of the investigation, the ICO sent 11 letters to parties with formal warnings about their practices. It became the largest investigation conducted by a DPA on this matter, encompassing not only political parties but also social media platforms, data brokers and analytics companies.

Several cases have been reported where the national adaptation of the GDPR does not fully meet the requirements of recital 56 GDPR, which establishes that personal data on people’s political opinions may be processed “for reasons of public interest” if “the operation of the democratic system in a member state requires that political parties compile” such personal data, and “provided that appropriate safeguards are established”. In November 2018 a question was raised in the European Parliament on the data protection law adapting Spanish legislation to the GDPR, which allows “political parties to use citizens’ personal data that has been obtained from web pages and other publicly accessible sources when conducting political activities during election campaigns”. As Sophia in 't Veld, the member of the European Parliament who posed the question, highlighted: “Citizens can opt out if they do not wish their data to be processed. However, even if citizens do object to receiving political messages, they could still be profiled on the basis of their political opinions, philosophical beliefs or other special categories of personal data that fall under the GDPR”. The European Commission was also urged to investigate the Romanian GDPR implementation for similar concerns. Further to the reported challenges with national adaptation of the GDPR, in November 2019 the EDPS issued the first ever reprimand to an EU institution. The ongoing investigation into the European Parliament was prompted by the Parliament’s use of NationBuilder, a US-based political campaigning company, to process personal data as part of its activities relating to the 2019 EU elections.

Combating disinformation

In contrast to the GDPR, which is sometimes praised as “the most consequential regulatory development in information policy in a generation” (Hoofnagle et al., 2019, p. 66), the EC has decided to tackle fake news and disinformation through self-regulation, at least in the first round. The European Council, a body composed of the leaders of the EU member states, first recognised the threat of online disinformation campaigns in 2015, when it asked the High Representative of the Union for Foreign Affairs and Security Policy to address the disinformation campaigns by Russia (EC, 2018c). The Council is not one of the EU's legislating institutions, but it defines the Union’s overall political direction and priorities. So it comes as no surprise that the issue of disinformation came high on the agenda of the EU, in particular after the UK referendum and the US presidential election in 2016. In April 2018 the EC (2018a) adopted a Communication on Tackling online disinformation: a European Approach. This is the central document that set the tone for future actions in this field. In the process of its drafting, the EC carried out consultations with experts and stakeholders, and used citizens’ opinions gathered through polling. The consultations included the establishment of a High-Level Expert Group on Fake News and Online Disinformation (HLEG) in early 2018, which two months later produced a Report (HLEG, 2018) advising the EC against simplistic solutions. Broader public consultations and dialogues with relevant stakeholders were also held, and a dedicated Eurobarometer (2018b) poll was conducted via telephone interviews in all EU member states. The findings indicated a high level of concern among respondents about the spread of online disinformation in their country (85%), which they also saw as a risk to democracy in general (83%).
These findings urged the EC to act, and the Communication on tackling online disinformation became the starting point and the key document for understanding the European approach to these pressing challenges. The Communication is built around four overarching principles and objectives: transparency, diversity of information, credibility of information, and cooperation (EC, 2018a).

Transparency, in this view, means that it should be clear to users where information comes from, who the author is and why they see certain content when an automated recommendation system is employed. Furthermore, a clearer distinction between sponsored and informative content should be made, and it should be clearly indicated who paid for an advertisement. The diversity principle is strongly related to strengthening so-called quality journalism, to rebalancing the disproportionate power relations between media and social media platforms, and to increasing media literacy levels. Credibility, according to the EC, is to be achieved by entrusting platforms to design and implement a system that would indicate the trustworthiness of sources and information. The fourth principle emphasises cooperation between authorities at national and transnational level, and among a broad set of stakeholders, in proposing solutions to the emerging challenges. With the exception of emphasising media literacy and promoting cooperation networks of authorities, the Communication largely recommends that platforms design solutions which would reduce the reach of manipulative content and disinformation, and increase the visibility of trustworthy, diverse and credible content.

The key output of this Communication is the self-regulatory Code of Practice on Online Disinformation (CoP). The document was drafted by a working group composed of online platforms, advertisers and the advertising industry, and was reviewed by the Sounding Board, composed of academics, media and civil society organisations. The CoP was agreed by the online platforms Facebook, Google, Twitter and Mozilla, and by advertisers and the advertising industry, and was presented to the EC in October 2018. The Sounding Board (2018), however, presented a critical view of its content and the commitments laid out by the platforms, stating that it “contains no clear and meaningful commitments, no measurable objectives” and “no compliance or enforcement tool”. The CoP, as explained by the Commission, represents a transitional measure whereby private actors are entrusted to increase the transparency and credibility of the online information environment. Depending on the evaluation of their performance in the first 12 months, the EC is to determine further steps, including the possibility of self-regulation being replaced with regulation (EC, 2018c). The overall assessment of the Code’s effectiveness is expected to be presented in early 2020.

The CoP builds on the principles expressed in the Commission’s Communication (2018a) through the actions listed in Table 1. For the purpose of this paper the actions are not presented as in the CoP but are slightly reorganised under three categories: disinformation, political advertising, and issue-based advertising.

Table 1: Commitments of the signatories of the Code of Practice on Online Disinformation, selected and grouped under three categories: disinformation, political advertising, issue-based advertising. Source: composed by the author based on the Code of Practice on Online Disinformation

Disinformation:
- To disrupt advertising and monetisation incentives for accounts and websites which consistently misrepresent information about themselves
- To limit the abuse of platforms by unauthentic users (misuse of automated bots)
- To implement rating systems (on trustworthiness) and reporting systems (on false content)
- To invest in technology to prioritise “relevant, authentic and authoritative information” in search, feeds and other ranked channels
- To provide resources for users on how to recognise and limit the spread of false news

Political advertising:
- To clearly label paid-for communication as such
- To publicly disclose political advertising, including actual sponsor and amounts spent
- To enable users to understand why they have been targeted by a given advertisement

Issue-based advertising:
- To publicly disclose such advertising, conditioned on developing a working definition of “issue-based advertising” which does not limit freedom of expression and excludes commercial advertising

In its statement on the first annual self-assessment reports by the signatories of the CoP, the Commission acknowledged that some progress has been achieved, but warned that it “varies a lot between signatories and the reports provide little insight on the actual impact of the self-regulatory measures taken over the past year as well as mechanisms for independent scrutiny”. The European Regulators Group for Audiovisual Media Services (ERGA) has been supporting the EC in monitoring the implementation of the commitments made by Google, Facebook and Twitter under the CoP, particularly in the area of political and issue-based advertising. In June 2019 ERGA released an interim Report resulting from the monitoring activities carried out in 13 EU countries, based on the information reported by the platforms and on the data available in their online archives of political advertising. While it stated “that Google, Twitter and Facebook made evident progress in the implementation of the Code’s commitments by creating an ad hoc procedure for the identification of political ads and of their sponsors and by making their online repository of relevant ads publicly available”, it also emphasised that the platforms had not met a request to provide access to the overall database of advertising for the monitored period, which “was a significant constraint on the monitoring process and emerging conclusions” (ERGA, 2019, p. 3). Furthermore, the analysis of the platforms’ repositories of political advertising (e.g., the Ad Library) showed that the information was “not complete and that not all the political advertising carried on the platforms was correctly labelled as such” (ERGA, 2019, p. 3).

The EC still needs to provide a comprehensive assessment of the implementation of the commitments under the CoP after the initial 12-month period. However, it is already clear that the lack of transparency of the platforms’ internal operations and decision-making processes remains a risk. If platforms are not amenable to thorough public auditing, adequate assessment of how effectively self-regulatory measures are implemented becomes impossible. The ERGA Report (2019) further warns that at this point it is not clear what micro-targeting options were offered to political advertisers, nor whether all options are disclosed in the publicly available repositories of political advertising.

Further to the commitments laid down in the CoP and relying on social media platforms to increase the transparency of political advertising online, the Commission Recommendation of 12 September 2018 (EC, 2018e) “encourages”, and asks member states to “encourage”, further transparency commitments by European and national political parties and foundations, in particular:

information on the political party, political campaign or political support group behind paid online political advertisements and communications” [...] “information on any targeting criteria used in the dissemination of such advertisements and communications” [...] “make available on their websites information on their expenditure for online activities, including paid online political advertisements and communications (EC, 2018e, p. 8).

The Recommendation (EC, 2018e) further advises member states to set up a national election network, involving national authorities with competence for electoral matters, including data protection commissioners, electoral authorities and audio-visual media regulators. This recommendation is further elaborated in the Action plan (EC, 2018c) but, because of practical obstacles, national cooperation between authorities has not yet become a reality in many EU countries.

Key principles and shortcomings of the European approach

This analysis has shown that the principles contained in the above-mentioned instruments, which form the basis of the European approach to combating disinformation and political manipulation, are: data protection; transparency; cooperation; mobilising the private sector; promoting diversity and credibility of information; raising awareness; and empowering the research community.

Data protection and transparency principles related to personal data collection, processing and use are contained in the GDPR. The requirement to increase the transparency of political and issue-based advertising and of automated communication is currently directed primarily at platforms, which have committed themselves to label and publicly disclose the sponsors and content of political and issue-based advertising, as well as to identify and label automated accounts. In traditional media landscapes, media operating on the same territory generally broadcast the same political advertising and messages to all their audiences; in the digital information environment, by contrast, political messages are targeted and shown only to specific profiles of voters, with limited ability to track which messages were targeted to whom. Increasing transparency on this level would require platforms to provide a user-friendly repository of political ads, including searchable information on actual sponsors and amounts spent. At the moment, platforms struggle to identify political and issue-based ads, to distinguish them from other types of advertising, and to verify ad buyers’ identities (Leerssen et al., 2019).

Furthermore, the European approach fails to impose similar transparency requirements on political parties to provide searchable and easy-to-navigate repositories of the campaign materials they use. A research project monitoring campaigns during the 2019 European elections showed that the parties, groups and candidates participating in the elections were largely not transparent about their campaign materials: materials were not readily available on their websites or social media accounts, nor did they respond to direct requests from researchers (Simunjak et al., 2019). This suggests that while it is relevant to require platforms to provide more transparency on political advertising, it is perhaps even more relevant to demand this transparency directly from political parties and candidates in elections.

Within the framework of transparency, the European approach also fails to emphasise the need for political parties to officially declare to the authorities, under a specific category, the amounts spent on digital (including social media) campaigning. At present, in some EU countries (for example Croatia, see: Klaric, 2019), authorities with competences in electoral matters do not consider social media to be media and accordingly do not apply the requirement to report spending on social media and other digital platforms in a transparent manner. This represents a risk, as the monitoring of the latest EP elections clearly showed that the parties spent both extensive time and resources on their social media accounts (Novelli & Johansson, 2019).

The diversity and credibility principles stipulated in the Communication on tackling online disinformation and in the Action plan ask platforms to indicate the trustworthiness of information, to label automated accounts, to close down fake accounts, and to prioritise quality journalism. At the same time, no clear definition of, or instructions on, the criteria for determining whether information or a source is trustworthy, or whether it represents quality journalism, are provided. Entrusting platforms with these choices, without the possibility of auditing their algorithms and decision-making processes, represents a potential risk for freedom of expression.

The signatories of the CoP have committed themselves to disrupt advertising and monetisation incentives for accounts and websites which consistently misrepresent information about themselves. But what about accounts that provide accurate information about themselves yet occasionally engage in campaigns which contain disinformation? For example, a political party may use data to profile and target individual voters or small groups of voters with messages that are not completely false but are exaggerated, taken out of context or framed with an intention to deceive and influence voters’ behaviour. As already noted, disinformation comes in many different forms, including false context and imposter, manipulated or fabricated content (Wardle & Derakhshan, 2017). While the work of fact-checkers and the flagging of false content are not useless here, in the current state of play they are far from sufficient to tackle the problems of disinformation, including in political advertising and especially in dark ads. The efficiency of online micro-targeting depends largely on data and profiling; therefore, if effectively implemented, the GDPR should be of use here by preventing the unlawful processing of personal data.

Another important aspect of the European approach is stronger sanctions in cases where the rules are not respected. This entails increased powers for authorities, first and foremost DPAs, and increased fines under the GDPR. Data protection in the electoral context is difficult to ensure if cooperation between the different authorities with competence for electoral matters (such as data protection commissioners, electoral authorities and audio-visual media regulators) is not established and operational. While the European approach strongly recommends cooperation, it is not easily achievable at member state level, as it requires significant investment in capacity building and in providing channels for cooperation. In some cases, it may even require amendments to the legislative framework. Cooperation between regulators of the same type at the EU level is sometimes hampered by the fact that their competences differ across member states.

The CoP also contains a commitment on “empowering the research community”: the CoP signatories commit themselves to support research on disinformation and political advertising by providing researchers with access to data sets, or by collaborating with academics and civil society organisations in other ways. However, the CoP does not specify how this cooperation should work, the procedures for granting access and for what kind of data, or which measures researchers should put in place to ensure appropriate data storage, security and protection. In their reflection on the platforms’ progress under the Code, three Commissioners warned that the “access to data provided so far still does not correspond to the needs of independent researchers”.

Conclusions

This paper has given an overview of the developing European approach to combating disinformation and political manipulation during an electoral period. It provided an analysis of the key instruments contained in the approach and drew out the key principles upon which it builds: data protection; transparency; cooperation; mobilising the private sector; promoting diversity and credibility of information; raising awareness; empowering the research community.

The principles of legacy media regulation in the electoral period are impartiality and equality of opportunity for contenders. This entails balanced and non-partisan reporting as well as equal or proportionate access to media for political parties (be it free or paid-for). If political advertising is allowed, it is usually subject to transparency and equal conditions requirements: how much was spent on advertising in the campaign needs to be presented through spending on different types of media and reported to the competent authorities. The regulatory framework requires that political advertising be properly labelled as such.

In the online environment, the principles applied to legacy media require further elaboration as the problem of electoral disinformation cuts across a number of different policy areas, involving a range of public and private actors. Political disinformation is not a problem that can easily be compartmentalised into existing legal and policy categories. It is a complex and multi-layered issue that requires a more comprehensive and collaborative approach when designing potential solutions. The emerging EU approach reflects the necessity for that overall policy coordination.

The main fuel of online political campaigning is data. Therefore, the protection of personal data and especially of “sensitive” data from abuse becomes a priority of any action that aims to ensure free, fair and plural elections. The European approach further highlights the importance of transparency. It calls on platforms to clearly identify political advertisements and who paid for them, but it fails to emphasise the importance of having a repository of all the material used in the campaign provided by candidates and political parties. Furthermore, a stronger requirement for political parties to report on the amounts spent on different types of communication channels (including legacy, digital and social media) is lacking in this approach, as well as the requirement for platforms to provide more comprehensive and workable data on sponsors and spending in political advertising.

The European Commission’s communication of the European approach claims that it aims to address all actors active in the electoral context, including European and national political parties and foundations, online platforms, data analytics companies and public authorities responsible for the electoral process. However, it seems that the current focus is primarily on the platforms and in a way that enables them to shape the future direction of actions in the fight against disinformation and political manipulation.

As regards the principle of cooperation, many obstacles, such as differences in competences and capacities of the relevant national authorities, have not been fully taken into account. The elections are primarily a national matter so the protection of the electoral process, as well as the protection of media pluralism, falls primarily within the competence of member states. Yet, if the approach to tackling disinformation and political manipulation is to be truly European, there should be more harmonisation between authorities and approaches taken at national levels.

While being a significant step in the creation of a common EU answer to the challenges of disinformation and political manipulation, especially during elections, the European approach requires further elaboration, primarily to include additional layers of transparency. This entails transparency of political parties and of other actors on their actions in the election campaigns, as well as more transparency about internal processes and decision-making by platforms especially on actions of relevance to pluralism, elections and democracy. Furthermore, the attempt to propose solutions and relevant actions at the European level faces two constraints. On the one hand, it faces the power of global platforms shaped in the US tradition, which to a significant extent differs from the European approach in balancing freedom of expression and data protection. On the other hand, the EU approach confronts the resilience of national political traditions in member states, in particular if the measures are based on recommendations and other soft instruments.

References

Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211–236. https://doi.org/10.1257/jep.31.2.211

Bennett, C. J. (2016). Voter databases, micro-targeting, and data protection law: can political parties campaign in Europe as they do in North America? International Data Privacy Law, 6(4), 261–275. https://doi.org/10.1093/idpl/ipw021

Bodó, B., Helberger, N., & de Vreese, C. H. (2017). Political micro-targeting: a Manchurian candidate or just a dark horse? Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.776

Bradshaw, S. & Howard, P. N. (2018). Challenging Truth and Trust: A Global Inventory of Organised Social Media Manipulation [Report]. Computational Propaganda Research Project, Oxford Internet Institute. Retrieved from https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2018/07/ct2018.pdf

Brett, W. (2016). It’s Good to Talk: Doing Referendums Differently. The Electoral Reform Society’s report. Retrieved from https://www.electoral-reform.org.uk/wp-content/uploads/2017/06/2016-EU-Referendum-its-good-to-talk.pdf

Brogi, E., Nenadic, I., Parcu, P. L., & Viola de Azevedo Cunha, M. (2018). Monitoring Media Pluralism in Europe: Application of the Media Pluralism Monitor 2017 in the European Union, FYROM, Serbia and Turkey [Report]. Centre for Media Pluralism and Media Freedom, European University Institute. Retrieved from https://cmpf.eui.eu/wp-content/uploads/2018/12/Media-Pluralism-Monitor_CMPF-report_MPM2017_A.pdf

Bruns, A. (2017, September 15). Echo chamber? What echo chamber? Reviewing the evidence. 6th Biennial Future of Journalism Conference (FOJ17), Cardiff, UK. Retrieved from https://eprints.qut.edu.au/113937/1/Echo%20Chamber.pdf

Cadwalladr, C. & Graham-Harrison, E. (2018, March 17) Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. The Guardian. Retrieved from https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election

Chiou, L., & Tucker, C. E. (2018). Fake News and Advertising on Social Media: A Study of the Anti-Vaccination Movement [Working Paper No. 25223]. Cambridge, MA: The National Bureau of Economic Research. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3209929. https://doi.org/10.3386/w25223

Centre for Media Pluralism and Media Freedom (CMPF). (forthcoming, 2020). Independent Study on Indicators to Assess Risks to Information Pluralism in the Digital Age. Florence: Media Pluralism Monitor Project.

Code of Practice on Disinformation (September 2018). Retrieved from https://ec.europa.eu/digital-single-market/en/news/code-practice-disinformation

Council Decision (EU, Euratom) 2018/994 of 13 July 2018 amending the Act concerning the election of the members of the European Parliament by direct universal suffrage, annexed to Council Decision 76/787/ECSC, EEC, Euratom of 20 September 1976. Retrieved from https://eur-lex.europa.eu/legal-content/en/TXT/?uri=CELEX:32018D0994&qid=1531826494620

Commission Recommendation (EU) 2018/234 of 14 February 2018 on enhancing the European nature and efficient conduct of the 2019 elections to the European Parliament (OJ L 45, 17.2.2018, p. 40)

Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications) (OJ L 201, 31.7.2002, p. 37)

Dubois, E., & Blank, G. (2018). The echo chamber is overstated: the moderating effect of political interest and diverse media. Information, Communication & Society, 21(5), 729–745. https://doi.org/10.1080/1369118X.2018.1428656

Eurobarometer (2018a). Standard 90: Media use in the EU. Retrieved from https://ec.europa.eu/commfrontoffice/publicopinion/index.cfm/Survey/getSurveyDetail/instruments/STANDARD/surveyKy/2215

Eurobarometer (2018b). Flash 464: Fake news and disinformation online. Retrieved from https://ec.europa.eu/commfrontoffice/publicopinion/index.cfm/survey/getsurveydetail/instruments/flash/surveyky/2183

Eurobarometer (2017). Standard 88: Media use in the EU. Retrieved from https://ec.europa.eu/commfrontoffice/publicopinion/index.cfm/Survey/getSurveyDetail/instruments/STANDARD/surveyKy/2143

European Commission (EC). (2018a). Tackling online disinformation: a European Approach, Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions. COM/2018/236. Retrieved from https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52018DC0236&from=EN

European Commission (EC). (2018b). Free and fair European elections – Factsheet, State of the Union. Retrieved from https://ec.europa.eu/commission/presscorner/detail/en/IP_18_5681

European Commission (EC). (2018c, December 5). Action Plan against Disinformation. European Commission contribution to the European Council (5 December). Retrieved from https://ec.europa.eu/commission/sites/beta-political/files/eu-communication-disinformation-euco-05122018_en.pdf

European Commission (EC). (2018d, September 12). Commission guidance on the application of Union data protection law in the electoral context: A contribution from the European Commission to the Leaders' meeting in Salzburg on 19-20 September 2018. Retrieved from https://ec.europa.eu/commission/sites/beta-political/files/soteu2018-data-protection-law-electoral-guidance-638_en.pdf

European Commission (EC). (2018e, September 12). Recommendation on election cooperation networks, online transparency, protection against cybersecurity incidents and fighting disinformation campaigns in the context of elections to the European Parliament. Retrieved from https://ec.europa.eu/commission/sites/beta-political/files/soteu2018-cybersecurity-elections-recommendation-5949_en.pdf

European Commission (EC). (2018f). Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: Securing free and fair European elections. COM(2018)637. Retrieved from https://ec.europa.eu/commission/sites/beta-political/files/soteu2018-free-fair-elections-communication-637_en.pdf

European Commission (EC). (2007). Media pluralism in the Member States of the European Union [Commission Staff Working Document No. SEC(2007)32]. Retrieved from https://ec.europa.eu/information_society/media_taskforce/doc/pluralism/media_pluralism_swp_en.pdf

European Data Protection Board (EDPB). (2019). Statement 2/2019 on the use of personal data in the course of political campaigns. Retrieved from https://edpb.europa.eu/our-work-tools/our-documents/ostalo/statement-22019-use-personal-data-course-political-campaigns_en

European Data Protection Supervisor (EDPS). (2018). Opinion 372018 on online manipulation and personal data. Retrieved from https://edps.europa.eu/sites/edp/files/publication/18-03-19_online_manipulation_en.pdf

European Regulators Group for Audiovisual Media Services (ERGA). (2019, June). Report of the activities carried out to assist the European Commission in the intermediate monitoring of the Code of practice on disinformation [Report]. Slovakia: European Regulators Group for Audiovisual Media Services. Retrieved from http://erga-online.eu/wp-content/uploads/2019/06/ERGA-2019-06_Report-intermediate-monitoring-Code-of-Practice-on-disinformation.pdf?fbclid=IwAR1BZV2xYlJv9nOzYAghxA8AA5q70vYx0VUNnh080WvDD2BfFfWFM3js4wg

Fletcher, R., Cornia, A., Graves, L., & Nielsen, R. K. (2018). Measuring the reach of “fake news” and online disinformation in Europe. Retrieved from https://www.press.is/static/files/frettamyndir/reuterfake.pdf

Flew, T., Martin, F., Suzor, N. P. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media and Policy, 10(1), 33–50. https://doi.org/10.1386/jdtv.10.1.33_1

Guess, A., Nyhan, B., & Reifler, J. (2018). Selective exposure to misinformation: evidence from the consumption of fake news during the 2016 US presidential campaign [Working Paper]. Retrieved from https://www.dartmouth.edu/~nyhan/fake-news-2016.pdf

High Level Expert Group on Fake News and Online Disinformation (HLEG). (2018). Final report [Report]. Retrieved from https://ec.europa.eu/digital-single-market/en/news/final-report-high-level-expert-group-fake-news-and-online-disinformation

Hoofnagle, C.J. & van der Sloot, B., & Zuiderveen Borgesius, F. J. (2019). The European Union general data protection regulation: what it is and what it means. Information & Communications Technology Law, 28(1), 65–98. https://doi.org/10.1080/13600834.2019.1573501

Holtz-Bacha, C. & Just, M. R. (Eds.). (2018). Routledge Handbook of Political Advertising. New York: Routledge.

House of Commons Treasury Committee. (2016, May 27). The economic and financial costs and benefits of the UK’s EU membership. First Report of Session 2016–17. Retrieved from https://publications.parliament.uk/pa/cm201617/cmselect/cmtreasy/122/122.pdf

Howard, P. N. & Kollanyi, B. (2016). Bots, #StrongerIn, and #Brexit: Computational Propaganda during the UK-EU Referendum. ArXiv160606356 Phys. Retrieved from https://arxiv.org/abs/1606.06356

Information Commissioner’s Office (ICO). (2018a, July 11). Investigation into the use of data analytics in political campaigns [Report to Parliament]. Retrieved from https://ico.org.uk/media/action-weve-taken/2260271/investigation-into-the-use-of-data-analytics-in-political-campaigns-final-20181105.pdf

Information Commissioner’s Office (ICO). (2018b, July 11). Democracy disrupted? Personal information and political influence. Retrieved from https://ico.org.uk/media/action-weve-taken/2259369/democracy-disrupted-110718.pdf

Kim, Y. M., Hsu, J., Neiman, D., Kou, C., Bankston, L., Kim, S. Y., Heinrich, R., Baragwanath, R., & Raskutti, G. (2018). The Stealth Media? Groups and Targets behind Divisive Issue Campaigns on Facebook. Political Communication, 35(4), 515–541. https://doi.org/10.1080/10584609.2018.1476425

Kelley, S. Jr. (1962). Elections and the Mass Media. Law and Contemporary Problems, 27(2), 307–326. Retrieved from https://scholarship.law.duke.edu/cgi/viewcontent.cgi?article=2926&context=lcp

Klaric, J. (2019, March 28) Ovo je Hrvatska 2019.: za Državno izborno povjerenstvo teletekst je medij, Facebook nije. Telegram. Retrieved from https://www.telegram.hr/politika-kriminal/ovo-je-hrvatska-2019-za-drzavno-izborno-povjerenstvo-teletekst-je-medij-facebook-nije/

Kreiss, D. l., & McGregor, S. C. (2018). Technology Firms Shape Political Communication: The Work of Microsoft, Facebook, Twitter, and Google with Campaigns During the 2016 U.S. Presidential Cycle. Political Communication, 35(2), 155–177. https://doi.org/10.1080/10584609.2017.1364814

Valcke, P., Lefever, K., Kerremans, R., Kuczerawy, A., Sükosd, M., Gálik, M., … Füg, O. (2009). Independent Study on Indicators for Media Pluralism in the Member States – Towards a Risk-Based Approach [Report]. ICRI, K.U. Leuven; CMCS, Central European University, MMTC, Jönköping Business School; Ernst & Young Consultancy Belgium. Retrieved from https://ec.europa.eu/information_society/media_taskforce/doc/pluralism/pfr_report.pdf

Kumar, S., & Shah, N. (2018, April). False information on web and social media: A survey. arXiv:1804.08559 [cs]. Retrieved from https://arxiv.org/pdf/1804.08559.pdf

Leerssen, P., Ausloos, J., Zarouali, B., Helberger, N., & de Vreese, C. H. (2019). Platform ad archives: promises and pitfalls. Internet Policy Review, 8(4). https://doi.org/10.14763/2019.4.1421

Liu, L. (2016). Using Generic Inductive Approach in Qualitative Educational Research: A Case Study Analysis. Journal of Education and Learning, 5(2), 129–135. https://doi.org/10.5539/jel.v5n2p129

Morgan, S. (2018). Fake news, disinformation, manipulation and online tactics to undermine democracy. Journal of Cyber Policy, 3(1), 39–43. https://doi.org/10.1080/23738871.2018.1462395

Newman, N., Fletcher, R., Kalogeropoulos, A., Levy, D. A. L., & Nielsen, R. K. (2018). Digital News Report 2018. Oxford: Reuters Institute for the Study of Journalism. Retrieved from https://reutersinstitute.politics.ox.ac.uk/sites/default/files/digital-news-report-2018.pdf

Novelli, E. & Johansson, B. (Eds.) (2019). 2019 European Elections Campaign: Images,Topics, Media in the 28 Member States [Research Report]. Directorate-General of Communication of the European Parliament. Retrieved from https://op.europa.eu/hr/publication-detail/-/publication/e6767a95-a386-11e9-9d01-01aa75ed71a1/language-en?fbclid=IwAR0C9R6Mw0Gd5aggB7wZx6KGWt3is84M210q3rv0g9LbXJqJpXuha1H6yeQ

Regulation (EU, Euratom). 2018/673 amending Regulation (EU, Euratom) No 1141/2014 on the statute and funding of European political parties and European political foundations. Retrieved from https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32018R0673

Regulation (EU). 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (OJ L 119, 4.5.2016, p. 1)

Regulation (EU, Euratom). No 1141/2014 of the European Parliament and of the Council of 22 October 2014 on the statute and funding of European political parties and European political foundations, (OJ L 317, 4.11.2014, p.1).

Report of the Special Rapporteur to the General Assembly on online hate speech. (2019). (A/74/486). Retrieved from https://www.ohchr.org/Documents/Issues/Opinion/A_74_486.pdf

Report of the Special Rapporteur to the Human Rights Council on online content regulation. (2018). (A/HRC/38/35). Retrieved from https://documents-dds-ny.un.org/doc/UNDOC/GEN/G18/096/72/PDF/G1809672.pdf?OpenElement

Schoenbach, K., & Lauf, E. (2004). Another Look at the ‘Trap’ Effect of Television—and Beyond. International Journal of Public Opinion Research, 16(2), 169–182. https://doi.org/10.1093/ijpor/16.2.169

Shearer, E. (2018, December 10). Social media outpaces print newspapers in the U.S. as a news source. Pew Research Center. Retrieved from https://www.pewresearch.org/fact-tank/2018/12/10/social-media-outpaces-print-newspapers-in-the-u-s-as-a-news-source/

Šimunjak, M., Nenadić, I., & Žuvela, L. (2019). National report: Croatia. In E. Novelli & B. Johansson (Eds.), 2019 European Elections Campaign: Images, topics, media in the 28 Member States (pp. 59–66). Brussels: European Parliament.

Sounding Board. (2018). The Sounding Board’s Unanimous Final Opinion on the so-called Code of Practice on 24 September 2018. Retrieved from: https://ec.europa.eu/digital-single-market/en/news/code-practice-disinformation

The Transatlantic High Level Working Group on Content Moderation Online and Freedom of Expression. (2019). How governments and platforms have fallen short in trying to moderate content online (Co-Chairs Report No. 1 and Working Papers). Retrieved from https://www.ivir.nl/publicaties/download/TWG_Ditchley_intro_and_papers_June_2019.pdf

Valeriani, A., & Vaccari, C. (2016). Accidental exposure to politics on social media as online participation equalizer in Germany, Italy, and the United Kingdom. New Media & Society, 18(9). https://doi.org/10.1177/1461444815616223

Venice Commission. (2013). CDL-AD(2013)021 Opinion on the electoral legislation of Mexico, adopted by the Council for Democratic Elections at its 45th meeting (Venice, 13 June 2013) and by the Venice Commission at its 95th Plenary Session (Venice, 14-15 June 2013).

Venice Commission. (2010). CDL-AD(2010)024 Guidelines on political party regulation, by the OSCE/ODIHR and the Venice Commission, adopted by the Venice Commission at its 84th Plenary Session (Venice, 15-16 October 2010).

Venice Commission. (2009). CDL-AD(2009)031 Guidelines on media analysis during election observation missions, by the OSCE Office for Democratic Institutions and Human Rights (OSCE/ODIHR) and the Venice Commission, adopted by the Council for Democratic Elections at its 29th meeting (Venice, 11 June 2009) and the Venice Commission at its 79th Plenary Session (Venice, 12- 13 June 2009).

Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151. https://doi.org/10.1126/science.aap9559

Wakefield, J. (2019, February 18). Facebook needs regulation as Zuckerberg 'fails' - UK MPs. BBC. Retrieved from https://www.bbc.com/news/technology-47255380

Wardle, C., & Derakhshan, H. (2017). Information disorder: Toward an interdisciplinary framework for research and policymaking [Report No. DGI(2017)09]. Strasbourg: Council of Europe. Retrieved from https://firstdraftnews.org/wp-content/uploads/2017/11/PREMS-162317-GBR-2018-Report-de%CC%81sinformation-1.pdf?x56713

Zuiderveen Borgesius, F. J., Möller, J., Kruikemeier, S. Ó Fathaigh, R., Irion, K., Dobber, T., Bodo, B., de Vreese, C. H. (2018). Online Political Microtargeting: Promises and Threats for Democracy. Utrecht Law Review, 14(1), 82–96. https://doi.org/10.18352/ulr.420


WhatsApp and political instability in Brazil: targeted messages and political radicalisation


This paper is part of Data-driven elections, a special issue of Internet Policy Review guest-edited by Colin J. Bennett and David Lyon.

Introduction

After 21 years of military dictatorship, followed by a short period of political instability, the political scene in Brazil was dominated by two major parties that, between them, held the presidency until 2018. Both were moderate with a large membership base, had many representatives in Congress and received constant coverage in the legacy media as representatives of a more westernized democratic process. However, in the 2018 elections, the country elected as president a niche congressman, Jair Bolsonaro, a member of a small party (PSL) with almost no registered supporters, who had been relatively unknown until some four years earlier, when he started to make appearances on popular and comic TV shows, on which he combined extremist rhetoric with praise for the military dictatorship. Bolsonaro’s election surprised local and international politicians and intellectuals, in part because his campaign lacked a traditional political structure, but mainly because of his radical rhetoric, which frequently included misogynistic and racist statements that would be sufficient to shake the public image of any candidate anywhere in the world (Lafuente, 2018), but which were even more shocking in a country marked by social inequalities and racial diversity.

One of the hypotheses for Bolsonaro's electoral success is that his campaign and some supporters developed a specific communication strategy based on the intensive use of social media, in which WhatsApp chat groups, micro-targeting and disinformation aimed at different groups of voters played a significant role. Albeit not always in a coordinated way, several platforms were used: YouTube, with videos of alt-right political analysis, lectures about "politically incorrect" history and amateur journalism; Facebook, with pages and groups for distributing content and memes; and Twitter/Instagram, especially as sites for posting political and media content (the last three platforms were also widely used by the candidate himself to post messages and live videos on his official profiles). Davis and Straubhaar (2019) point out that “legacy media, popular right-wing Facebook groups, and networks formed around the communication network WhatsApp fueled ‘antipetismo’” 1, stressing that WhatsApp was particularly instrumental in cementing Bolsonaro’s victory. Addressing the emergence of what she calls “digital populism”, Cesarino (2019) discusses the formation of a polarised digital bubble anchored largely in WhatsApp chat groups.

We focus our analysis on WhatsApp, examining the use of encrypted groups of up to 256 members built around specific interests (religious, professional, regional, etc.). Smartphones and WhatsApp “were not as extensively available in Brazil during the previous [2014] presidential election” (Cesarino, 2019), and we aim to show that WhatsApp has technical specificities, and was susceptible to an electoral strategy, that justify particular attention. Several media reports stress the role played by WhatsApp in the 2018 elections. In analysing the Brazilian elections, our goal is also to contribute to the decoupling of “research from the US context” and to aid the understanding of “macro, meso and micro level factors that affect the adoption and success of political micro-targeting (PMT) across different countries” (Bodó, Helberger, & Vreese, 2017). The Global South in general has been associated with a rise in “computational propaganda” in recent years (Woolley & Howard, 2017; Bradshaw & Howard, 2018).

WhatsApp is the platform that poses the greatest challenges for those investigating information dynamics during elections, for three reasons: its closed, encrypted architecture, which restricts visibility for researchers and public authorities; the relative anonymity afforded by the use of telephone numbers as the only identifiers of group administrators; and the limited number of members in a group, which favours audience segmentation. It has also been shown that, because of these characteristics, WhatsApp played a crucial role in the spread of misinformation during the 2018 Brazilian elections (Tardáguila, Benevenuto, & Ortellado, 2018; Davis & Straubhaar, 2019; Junge, 2019) and in the development of a disinformation strategy that not only operated on the edge of the law (Melo, 2019; Avelar, 2019; Magenta, Gragnani, & Souza, 2018) but also exploited features of the platform’s architecture that help render coordinated strategies invisible and favour group segmentation.

Although widely used in Brazil as a means of communication, WhatsApp has only recently been tackled as a research subject in elections. Moura and Michelson (2017) evaluated its use as a tool for mobilising voters, and Resende et al. (2019) conducted groundbreaking research on the dynamics of political groups on WhatsApp. However, little has been said about the interrelation between a historical context and new platforms, media technologies and novel campaign strategies that rely on surveillance. In this sense, surveillance emerges as a new mode of power (Haggerty & Ericson, 2006) with a direct impact on the electoral process and implications for democracy (Bennett, 2015). This article challenges the idea that political micro-targeting (PMT) is “elections as usual” (Kreiss, 2017), showcasing its connection with disinformation practices and a process of political radicalisation in a specific empirical context, and stresses that PMT functions as part of a (mis)information ecosystem.

In this article, we discuss the Brazilian institutional, political and media context that paved the way for Jair Bolsonaro to become president in an atypical election result that surprised the vast majority of political analysts. We describe and discuss the use of WhatsApp, which, rather than functioning merely as an instant messaging application, was weaponised as a social medium during the elections. Based on an analysis of a sample of the most widely distributed images on the platform during the month immediately prior to the first round of the elections, in which Bolsonaro won 46.03% of the valid votes, we argue that messages were partially distributed through a centralised structure, built to manage and stimulate members of discussion groups, which were treated as segmented audiences. Our aim is to address a specific and concrete use of data in an electoral campaign and to avoid any hype around PMT or data-driven strategies (Bodó et al., 2017; Baldwin-Philippi, 2017). In this case, platforms and data were used not so much to scientifically inform broad campaign strategies (Issenberg, 2012) as in connection with disinformation/misinformation processes.

The Brazilian context and the rise of Bolsonaro as a political and media “myth”

Brazil is a federal republic with relatively independent states but considerable power centralised in the federal executive and legislative branches. The regime is presidential, and elections are held every four years for president (who may run once for re-election), state governors, state legislators, and federal deputies and senators; federal deputies and senators represent their states in the two legislative houses, the Chamber of Deputies and the Federal Senate. The president of the Chamber of Deputies is second in line to the presidency (after the vice president, who is elected on the same slate as the president), and it is the responsibility of the legislature to investigate and, if necessary, try the president of the republic for “crimes of responsibility”, which can lead to impeachment and removal from office.

Voting is compulsory and electors do not need to be members of a party to vote. Failure to vote is punished with a small fine (approximately $2.00 USD). Abstentions are typically in the region of 20%; in the last elections the figure reached 20.3%, the highest in the last 20 years.

Federal and state elections are held every four years, and municipal elections occur in between. Candidates must be members of legally constituted political parties to stand for election. Elections for executive office are held in two rounds, and the two candidates with the most votes in the first round compete in a run-off unless one of them has 50% + 1 of the valid votes in the first round.

The political system is extremely fragmented (Nascimento, 2018), and switching between parties is frequent, particularly among the smaller ones. In general, congressional representatives who belong to these smaller parties are known as the “lower clergy” and form a bloc characterised by clientelism (Hunter & Power, 2005) and cronyism. Armijo and Rhodes (2017) argue that “cronyism may tip into outright corruption, or direct payments by private actors for preferential treatment by state officials”, pointing out that Brazilian elections are very expensive and were, at least until recently, heavily funded by the private sector. Parties form coalitions to contest the elections; seats in the Chamber of Deputies are distributed according to the number of votes the coalitions or parties receive and are then allocated within the coalitions according to the number of votes each candidate receives. In his almost 30 consecutive years as a federal deputy, Jair Bolsonaro was known as a member of the “lower clergy” and was affiliated with no fewer than nine different political parties. A former member of the armed forces, he drew his votes as a congressman mainly from efforts that benefited that sector (Foley, 2019), as well as from his criticism of human rights and his support for harsher criminal laws and more vigorous police action.

Brazilian elections historically were financed with a combination of public funds and limited private donations from individuals and companies 2. Public funds are shared between the parties mainly according to the number of seats they have in Congress. Parties are also entitled to television and radio airtime, for which broadcasting companies receive tax exemptions. Political advertisements paid for by parties are prohibited.

Radio and TV airtime was traditionally considered one of the most important factors in a presidential election. However, in the last election, the candidate with the most time in these media ended up in fourth position (Geraldo Alckmin, PSDB, with 44.3% of the airtime), while Bolsonaro, who had little more than 1% of the total airtime, won the first round. Fernando Haddad (PT), who came second, had 19.1% of the airtime (Ramalho, 2018; Machado, 2018).

The Brazilian broadcasting system is concentrated in the hands of a few family groups and, more recently, an evangelical minister from a non-denominational church (Davis & Straubhaar, 2019). These groups own TV and radio networks, newspapers and websites around the country. The editorial line is economically conservative, although some of the companies (e.g., Rede Globo) have a more liberal attitude in terms of customs (Joyce, 2013).

An appreciation of the Brazilian political scene is also important to understand Bolsonaro’s rise. From the middle of the second term in office of Luis Inácio Lula da Silva, when the first accusations of corruption involving members of the government appeared, much of the domestic press became more critical in its tone and adopted a more aggressive posture clearly aligned with the opposition. Even when the government was at the height of its popularity, during Lula’s second mandate, political commentators who were either clearly anti-Workers’ Party (PT) or more scathing in their criticism tended to be in the majority in much of the media (Carvalho, 2016).

This change generally helped create a markedly negative feeling toward the PT, primarily among the middle classes, the main consumers of this political journalism. This feeling intensified after Dilma Rousseff was re-elected by a small margin, with voters clearly divided by class and region: Rousseff’s votes came mainly from the lower-income classes and from the north-east of the country (Vale, 2015). The figure below, from Vale (2015), shows the distribution of votes per state in the second round of the 2014 presidential election (% of vote).

Figure 1: Distribution of votes in the 2014 Brazilian presidential election (per state) (Vale, 2015).

At the same time, the consequences of the 2008 global crisis began to be felt across the country. Until Rousseff’s first mandate, the country had managed to balance the public accounts. However, as the government boosted internal consumption by granting tax exemptions to certain industrial and infrastructure sectors, by the end of her second term public debt had surged and become the target of strong criticism from financial commentators. At this point the PT lost most of the support it might have had among the middle classes and in the industrialised south and south-east. Later, when the second major political scandal under PT rule broke — the discovery of a corruption scheme involving mainly allies of the PT but also marked by the participation and/or connivance of the PT itself — a political process was set in motion that included huge street demonstrations with extensive coverage and encouragement by the media. The government began to lose its support in Congress, support which had never been very ideologically solid given the strength of the “lower clergy”. The end result was that Rousseff was impeached for a “crime of responsibility” on controversial grounds.

The political climate that ensued, however, did not help restore peace. Accusations of corruption were levelled against a wide range of parties, including those that had actively supported impeachment, such as the PT’s long-standing adversary, the Brazilian Social Democracy Party (PSDB). Nowadays a centre-right party, the PSDB had held the presidency from 1995 to 2002 and had faced the PT in run-offs in every presidential election since then. Historically a centre-left party, it had moved toward neoliberalism, and its membership later came to include more conservative candidates, some in favour of more vigorous police action and tougher laws, as well as a morally conservative agenda. This slow ideological transformation was initially welcomed by part of the electorate, particularly middle- and upper-class voters, but ultimately proved insufficient to ensure victory.

Following Rousseff’s impeachment, extensive reporting of the widespread involvement of political parties in the Petrobras scandal appears to have helped produce a profound aversion to politicians as a whole. The scandal was revealed in 2014 by Operation Car Wash, but the investigations lasted another five years and attracted intense media attention throughout. Investigators revealed that a “cartel of construction companies had bribed a small number of politically appointed directors of Petrobras in order to secure a virtual monopoly of oil-related contracts” (Boito & Saad-Filho, 2016).

In parallel with this, Jair Bolsonaro became increasingly well known among a large swathe of the public. As previously mentioned, although he had been in politics for almost 30 years, Bolsonaro remained a minor figure until 2011, known only in his state, Rio de Janeiro, and popular only among the military (the armed forces as well as police and firefighters, who are members of the military in Brazil) and a small niche of electors in favour of repressive police action. In 2011, however, Bolsonaro took part in CQC (the acronym for “Whatever It Takes” in Portuguese), a TV programme that combines humour and political news, and answered questions sent in by members of the public. At the time, one of the presenters classified him as “polemical” and another said he had not the slightest idea who that congressman was. The objective appeared to be to ridicule him, and the programme featured humorous sound effects. The then congressman’s answers ranged from homophobic to racist, and he praised the period of military dictatorship in Brazil. The interview sparked controversy, and Bolsonaro was invited to appear on the programme again the following week. He became a common attraction on CQC and other programmes that also explore the impact of the politically incorrect.

The congressman was gradually given more airtime on CQC and taken more seriously, while his popularity with the audience increased his legitimacy. A few months before Bolsonaro was elected, a former CQC reporter recalled with regret the visibility the programme had given him. She said they were “clueless that a good chunk of the population would identify with him, with such a vile human being”, admitting they “played a part” in the process (2018, April 4).

In addition to CQC, other free-to-air TV programmes gave the then federal deputy airtime. In a recent study, Santos (2019) shows how, from 2010 onwards, Bolsonaro made an increasing number of appearances on the free-to-air TV programmes with the largest audiences. CQC in particular helped bring Bolsonaro closer to a younger audience, together with the programme Pânico na Band, which also trades on politically incorrect humour and in 2017 created a special segment on the congressman comprising 33 nine-minute episodes.

On social media, Bolsonaro and his opinions became the subject of increasing debate: vigorously rejected by the left, but adopted as a symbol of the politically incorrect by sectors of the non-traditional right. The figure of a talkative, indiscreet congressman became a symbol for a subculture of youth groups, similar to those who frequent websites such as 4chan (Nagle, 2017), for whom he became known as “the Myth”, a mixture of iconoclasm, playful humour and conservatism. This connection with the politically incorrect was exploited by the administrators of the congressman’s Facebook page from the time it was set up in June 2013 (Carlo & Kamradt, 2018).

WhatsApp and the spread of misinformation / disinformation

The political year of the last presidential election, 2018, was unusually troubled. Involved in various court cases, the then leader in the polls, former president Lula, was arrested in early April. His party, the PT, was adamant that he should run for election even though there was little chance the legal authorities would allow him to do so. Without Lula, Bolsonaro led the polls from the start, with a lead that varied but was always at least 20%. He was nonetheless considered a passing phenomenon by most analysts and was expected to fade as soon as the TV and radio campaigns started, because he had very little airtime compared with the candidates of the traditional parties.

Another important event that helps describe the scenario in which WhatsApp played a significant political role is the truck drivers' strike/lockout of May 2018. Dissatisfied with the almost daily variations in fuel prices, which had begun to be adjusted in line with the US dollar and the price of oil, self-employed truck drivers and logistics companies started a protest that brought the country to a prolonged and economically damaging standstill (Demori & Locatelli, 2018). Using mainly WhatsApp, drivers organised roadblocks on the main highways, causing severe disruption to the supply of goods, including fuel and food (Rossi, 2018). Radical right-wing associations infiltrated these online groups, praising militarism as a solution for the country and sometimes clamouring for military intervention to depose the executive, legislature and judiciary.

Interested in understanding how fake news spread through political groups on WhatsApp, Resende et al. (2019) collected data from two periods of intense activity in Brazil: the truck drivers' strike and the month prior to the first round of the elections. They sampled chat groups that were open to the public (although administrators could choose who was accepted or removed), whose invitation links, containing the URL chat.whatsapp.com, had been shared on websites or social networks, and that did not necessarily have any association with a particular candidate. Groups were selected by matching those invitation links against a dictionary containing the names of politicians and political parties, as well as words associated with political extremism. In all, they analysed 141 groups during the truck drivers' strike and 364 during the elections. The results show that, of the messages shared in these groups during the elections, 2% were audio messages, 9% were videos, 9% contained URLs to other sites and 15% were images. The remaining 65% were text messages with no additional multimedia content.
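The sampling procedure described above (matching public invitation links against a dictionary of political terms) and the tallying of message types can be sketched as follows. This is our own illustrative reconstruction in Python; the term list, function names and sample data are hypothetical, not taken from Resende et al. (2019):

```python
import re
from collections import Counter

# Hypothetical dictionary of political terms (candidate names, parties,
# words associated with political extremism) -- illustrative only.
POLITICAL_TERMS = {"bolsonaro", "haddad", "pt", "psl", "intervencao", "militar"}

# Public group invitation links all share this URL prefix.
INVITE_RE = re.compile(r"https://chat\.whatsapp\.com/\S+")

def is_political_invite(link_text: str) -> bool:
    """True if text contains an invite link and matches the political dictionary."""
    words = set(re.findall(r"\w+", link_text.lower()))
    return bool(INVITE_RE.search(link_text)) and bool(words & POLITICAL_TERMS)

def message_type_shares(messages):
    """Share of each message type, as in the reported 2/9/9/15/65% breakdown."""
    counts = Counter(m["type"] for m in messages)
    total = sum(counts.values())
    return {t: n / total for t, n in counts.items()}

print(is_political_invite("Entre no grupo Bolsonaro https://chat.whatsapp.com/AbC123"))
print(message_type_shares([{"type": "text"}, {"type": "text"},
                           {"type": "image"}, {"type": "video"}]))
```

In practice the dictionary match would be applied to the page or post where the link was found, but the filtering principle is the same.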

Resende et al. (2019) also developed an automated method for determining whether images shared in the analysed groups had already been reviewed and rejected by fact-checking services. To that selection they added 15 further images previously identified as misinformation by the Brazilian fact-checking agency Lupa. The resulting 85 images containing misinformation were shared eight times more often than the other 69,590 images, which were either truthful or had not been submitted to any independent agency for checking.

Although the total number of images labelled as misinformation is relatively low (only 1% of the images shared), these images were seen in 44% of the groups monitored during the election campaign period, which indicates a wide reach. Investigating these images, the researchers identified the groups in which each image appeared first and noted that a small number of groups seemed to account for the dissemination of a large share of the images containing misinformation. In our view, this indicates a more centralised and less distributed dissemination structure.

Another fact revealing a relatively centralised dynamic of dissemination is that the propagation behaviour of images containing disinformation (images deliberately produced and/or tampered with) differs significantly from that of unchecked images. Comparing the propagation structure of these two sets, particularly the times at which the images first appeared on the Web and on WhatsApp, the authors noticed that 95% of the images with unchecked content were posted first on the Web and only later in the monitored WhatsApp groups. Only 3% of these images made the opposite journey, and 2% appeared on both the Web and WhatsApp on the same day. By contrast, only 45% of the images containing misinformation appeared first on the Web, while 35% were posted first on WhatsApp and 20% were shared on both platforms on the same day. According to the authors, this suggests "that WhatsApp acted as a source of images with misinformation during the election campaign period" (Resende et al., 2019, p. 9). Given that an image containing disinformation is deliberately produced and tampered with, the fact that WhatsApp is its first sharing platform far more often than for unchecked content (35% versus 2%) is one more element indicating a relatively centralised, and not fully spontaneous, organisation of the propagation of this type of content.
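The first-appearance comparison behind these percentages amounts to a simple timestamp classification. A minimal sketch, assuming each image carries the date it was first seen on the open Web and the date it was first seen in the monitored WhatsApp groups (the function and its labels are ours, not those of Resende et al.):

```python
from datetime import date

def first_appearance(web_date, whatsapp_date):
    """Classify where an image surfaced first: on the open Web, on WhatsApp,
    or on both on the same day (a date may be None if the image was never seen there)."""
    if web_date is None:
        return "whatsapp-first"
    if whatsapp_date is None:
        return "web-first"
    if web_date < whatsapp_date:
        return "web-first"
    if whatsapp_date < web_date:
        return "whatsapp-first"
    return "same-day"

# Hypothetical example: an image seen on WhatsApp two days before any Web appearance.
print(first_appearance(date(2018, 9, 12), date(2018, 9, 10)))  # whatsapp-first
```

Aggregating this label over the 85 misinformation images versus the unchecked set would reproduce the 45/35/20 against 95/3/2 contrast reported above.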

Disinformation in WhatsApp groups

The images containing disinformation reproduce many of the elements that were key to Bolsonaro's rise and, later, to his election campaign. In this section we analyse the eight most shared disinformation images in the month before the first round, using the same groups monitored by Resende et al. (2019) as our source. Our analysis builds on investigative work by Agência Lupa and Revista Época, in partnership with the research project "Eleições sem Fake" (Resende et al., 2019; Marés, Becker, & Resende, 2018). Their report points out that none of the eight images analysed mentions the presidential candidates directly. All of them refer to topics that reinforce beliefs, perspectives and feelings that shaped the ideological base of Jair Bolsonaro's campaign. Anti-PT sentiment (antipetismo), strongly boosted by the legacy media over the preceding years, was one of the pillars of Bolsonaro's campaign, and it is the content of the most shared disinformation image in the monitored groups in the month before the first round. As we can see, Figure 2 is a photo-montage that inserts a photo of former Brazilian president Dilma Rousseff, as a young woman, beside former Cuban president Fidel Castro.

Figure 2: First most shared disinformation image on WhatsApp.

When the original photo of Fidel Castro was taken by John Duprey for the NY Daily News in 1959, Dilma Rousseff was only 11 years old. The image is thus clearly doctored, intended to associate the PT with communism and Castroism. Such an association was recurrent among Bolsonaro supporters during his campaign, and antipetismo appears directly in three of the eight most shared disinformation images in the period under analysis.

Another image with clearly anti-PT content (the fourth most shared in the monitored groups) is an alleged criminal record of ex-president Dilma Rousseff from the time of the military dictatorship, in which she is accused of being a terrorist and bank robber (Figure 3). This record was never issued by any official agency of the military government, and it has the same format as the third most shared image, which targets José Serra, currently a senator for the PSDB party.

Figure 3: Fourth most shared disinformation image on WhatsApp.

Lastly, the third image with directly anti-PT content (the eighth most shared in the monitored WhatsApp groups) reproduces a graph with false information comparing household consumption over what were then the last five years of PT government with the expenditure of the government itself (Figure 4). Contrary to what the graph shows, household consumption did not decrease; it grew 1.8% between 2011 and 2016, whereas public administration expenditure rose 3.1% over the period, not the 4% the graph indicates.

Figure 4: Eighth most shared disinformation image on WhatsApp.

The second most frequent topic in the most shared disinformation images is attacks on the rights of LGBT people and women, appearing in three of the eight most shared images. This kind of content, although not directly antipetista, denies rights that Bolsonaro's campaign symbolically associated with leftist parties.

The fifth and sixth most shared images link these rights to the sexualisation of childhood and a lack of respect for religious beliefs, as shown in figures 5 and 6, respectively. Moreover, in the context of their sharing via WhatsApp, these images were associated with Rede Globo, the largest commercial free-to-air television network in the country. In figure 5, the caption of the image (in fact a photo of the Heritage Pride March in New York in 2015) reads: "People from Globo-Trash who do not support Bolsonaro!!!". In figure 6, the image is shown with the sentence "Globo today". The image is actually a record of the Burning Man festival that took place in the Nevada desert in the US in 2011; it shows a man dressed as Jesus kissing Benjamin Rexroad, director of the "Corpus Christi" production, and it was published in the newspaper O Globo at the time of the festival, not before the first round of the 2018 elections. Linking these images to O Globo is part of a campaign to discredit the TV network of the same name, Rede Globo. The Bolsonaro supporters' strategy seems to be to legitimise the WhatsApp groups as more reliable sources of information than the legacy media: the Globo network, historically linked to conservative economic and political interests in the country, is here associated with the propagation of hyper-sexualised and anti-Christian content.

The other image in this thematic group against LGBT and women's rights (figure 7) is a montage of photos of different protests in churches. In one of the protests, a couple has sex inside a church in Oslo, Norway, in 2011. In the second, a woman defecates on the stairway of a church in Buenos Aires, Argentina, after Mauricio Macri won the 2015 presidential election. The caption of the false image reads "Feminists invade church, defecate and have sex" and stands in clear opposition to the #EleNão movement. #EleNão (#NotHim) was a movement led by women that denounced Bolsonaro as a misogynist, gathering thousands of people on Brazilian streets in the run-up to the first round of the elections (Uchoa, 2018).

Figure 5: Fifth most shared disinformation image on WhatsApp.
Figure 6: Sixth most shared disinformation image on WhatsApp.
Figure 7: Seventh most shared disinformation image on WhatsApp.

Use of illegal tactics

In the second round of the elections, the newspaper Folha de S.Paulo revealed that businessmen had signed contracts of up to US$ 3 million each with marketing agencies specialised in automated bulk messaging on WhatsApp (Melo, 2018). The best known of the accused businessmen owns a retail company, which suggests that the marketing methods used in his business could also be applied to politics. The practice is illegal, as donations from companies were forbidden in the last election. Furthermore, the businessmen were alleged to have acquired user lists sold by agencies as well as lists supplied by candidates; this practice is also illegal, since only the candidate's own user lists may be used. According to a report by Coding Rights (2018), based on interviews and document analyses of marketing agencies operating in Brazil, election campaigns generally combine a series of databases, associating public information, such as the Census, with data purchased from private companies such as Serasa Experian and Vivo (a telecom company owned by Telefónica). These databases include demographic data and telephone numbers (including the respective WhatsApp contacts).

According to Folha de S.Paulo's article, the marketing agencies are able to generate international numbers, which employees use to get around restrictions on spam and to administer or participate in groups. Off-the-record statements obtained by the newspaper from ex-employees and clients of the agencies reveal that the administrators used algorithms to segment group members into supporters, detractors and neutral members, and defined the content sent accordingly. The most active Bolsonaro supporters were also allegedly contacted so that they could create even more groups in favour of the candidate (Melo, 2018). In a different article, the newspaper noted that some of these groups behaved like a military organisation and referred to themselves as a "virtual army" in favour of the candidate. According to the newspaper, the groups are divided into "brigades", "commands" and "battalions" and are formed mainly of youths, some under 18 years of age (Valente, 2018).

Campaign spending was directed toward several forms of digital advertising. Besides being less expensive, digital advertising can be an alternative to limited TV time, particularly for small political associations (Coding Rights, 2018). Digital campaigns included WhatsApp mainly because the platform has deep penetration among the population, owing among other things to zero-rating practices. Zero-rated data is data that does not count toward the user's data cap (Galpaya, 2017). Telecom operators commonly offer free use of WhatsApp on pre-paid plans, the plans most commonly contracted by the lower classes. Even when users have no credit left for accessing the internet, they can keep sending and receiving text and media content in their WhatsApp chat groups and from individual users. Luca Belli describes the situation accurately: "fact-checking is too expensive for the average Brazilian" (2018).

Rules approved for the Brazilian 2018 elections permitted candidates to buy advertisements on social media and to spread content using message platforms. However, WhatsApp does not offer micro-segmentation as a service, which would allow advertisements to be directed to a certain audience, like Facebook does. Marketing agencies ended up playing that role, not always using legally collected information on voters.

Audi & Dias (2018) reported that agencies in the business of political advertising use software that monitors different interest groups, not only political discussion groups, to prospect for voters and supporters. Users are measured in terms of their mood and receptivity towards campaign messages. In this way, the agencies can identify the ideal target population and the right time to send each type of content. According to the article, "messages that reach a diabetic 60-year old man from São Paulo State are different from those sent to a north-eastern woman who lives on minimum wage". Audi & Dias (2018) had access to one of the software programmes used during the last elections in Brazil, WNL, in a version used for the campaign of an unidentified politician. The programme monitored and distributed content in over 100 WhatsApp groups, ranging from diabetes discussion groups, soccer team supporters and Uber drivers to job vacancy listings and even groups of workmates and neighbours.

Such segmentation was refined by monitoring reactions to the content posted, rated as positive, negative or neutral. Users rated as positive kept receiving similar information in favour of the candidate. Those rated as neutral got mostly material against the opponent. Users rated as negative received a more incisive treatment, getting content that would tend to "target values dear to the person, such as family and religion, in an attempt to inflate rejection towards the candidate's competitor". By monitoring these reactions, users are profiled in individual files and then classified into groups according to specific topics, such as church, "gay kit", family, communism, weapons or privatisation. Moreover, the programme enables those who monitor the groups to collect and select keywords in order to discover specific interests: "For example, a patient with cancer speaks about his/her condition and treatment. The system collects these data and finds other people in similar conditions. They start to get content with the policies of the candidate for health, for example" (Audi & Dias, 2018).
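The routing logic reported by Audi & Dias (2018) can be sketched as a simple rule-based dispatcher. This is purely our illustrative reconstruction, not the actual WNL software; the scoring scheme, function names and content labels are assumptions:

```python
def classify_user(reactions):
    """Rate a user from a history of reaction scores (+1 positive, 0 neutral, -1 negative)."""
    score = sum(reactions)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

# Content routing as described in the article: supporters are reinforced,
# neutrals get anti-opponent material, detractors get value-laden content.
CONTENT_BY_SEGMENT = {
    "positive": "similar information in favour of the candidate",
    "neutral": "material contrary to the opponent",
    "negative": "content targeting values such as family and religion",
}

def next_content(reactions):
    """Pick the next piece of content for a user based on their reaction history."""
    return CONTENT_BY_SEGMENT[classify_user(reactions)]

print(next_content([1, 1, 0]))    # routed as a supporter
print(next_content([-1, 0, -1]))  # routed as a detractor
```

The reported system would of course be far richer (mood inference, timing, keyword extraction), but the core loop of rating reactions and routing content by segment is what this sketch captures.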

It should be noted that such micro-segmentation and micro-targeting are integral to the way advertising on platforms works. Facebook, for instance, announced special transparency policies for political ads during the election period (Valente, 2018). However, due to the nature and architecture of WhatsApp, the visibility of content-spreading strategies on such a platform is minimal, which prevents users from realising that they are the targets of persuasion strategies. We will return to this issue in the conclusion, but it must be pointed out that using this platform for election campaigns is structurally questionable. Consider only the methods of the AM4 company, which openly worked for Jair Bolsonaro's campaign spreading content to 1,500 WhatsApp groups: there is already reason for concern, since such content is not explicitly distributed as part of an election advertising campaign but rather as content shared by ordinary users in groups with a supposedly symmetrical interaction structure. In the words of the company's founding partner: "what we do is curatorship of contents created by supporters" (Soares, 2018). The company's owner also stated that the content feeding 1,500 WhatsApp groups on a daily basis was part of the strategy of the company hired by the PSL, which had operated since the pre-campaign to turn negative episodes in Bolsonaro's favour.

WhatsApp group dynamics

Before developing a formal research interest in the use of WhatsApp during the elections, we began a participant observation process in political chat groups on the platform. Initially, we were interested in understanding how the app was being used by truck drivers to organise their protests. Later, we noticed that many of the groups we found were also occupied by radicals in favour of a return to military dictatorship and by supporters of Jair Bolsonaro. This helped us understand the dynamics of those groups and how some of the more prominent members acted to manage the discussions or the posting of content.

Various groups were short-lived and rapidly replaced by other groups advertised in the "dying" ones. From some weeks before the elections until election day, we followed the discussions of at least three groups on WhatsApp and two on Telegram. Some groups were occasionally invaded by users who posted pornographic content or advertised illegal services, such as pirate IPTV, false diplomas, cloned credit cards and counterfeit money. As observed in our fieldwork, one particular Telegram group became a pirate IPTV sales channel as soon as the elections were over.

At the end of the elections, new groups were set up with a new mission: to act as direct communication channels among supporters of the new president. One of these went by the name of Bolsonews TV. There is little discussion in these groups, and only a few members are responsible for almost all the content sent to them or forwarded from other groups. A frequently repeated claim is that one should not believe the legacy media because it is controlled by communists and left-wingers; according to the people who send these messages, only certain YouTubers, journalists, bloggers and politicians can be trusted. Before the elections, material from the legacy media that was highly critical of the PT was frequently shared, particularly if produced by commentators considered right wing. After the elections, when criticism was aimed more at the new government, this type of content became less common, even when it came from commentators considered right wing. Any critical comment in groups clearly identified with Bolsonaro led to the author being removed from the group and accused of being a PT supporter who had infiltrated it. The telephone numbers of people accused of supporting the PT circulate regularly, and group moderators are warned to exclude these people from their groups.

Analysing the flow of messages between political groups during the elections, Resende et al. (2019) identified characteristics indicating the presence of clusters of groups with members in common. They constructed a graph model of the relationships between members, which revealed a network of users who were associated because they shared images in at least one common group. "We note a large number of users blending together connecting to each other inside those groups. Most users indeed form a single cluster, connecting mostly to other members of the same community. On the other hand, there are also a few users who serve as bridges between two or more groups linked by multiple users at the same time. Furthermore, a few users work as big central hubs, connecting multiple groups simultaneously. Lastly, some groups have a lot of users in common, causing these groups to be strongly inter-connected, making it even difficult to distinguish them" (2019, p. 6). This suggests that WhatsApp is working not so much as an instant messaging app but as a social network, like Twitter or Facebook. Other evidence, as shown above, allows us to conclude that these groups may be centrally managed, although this is invisible to the ordinary user.
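The graph construction just described can be sketched as a user-user projection: link two users whenever they shared images in at least one common group, then read clusters off the connected components. A minimal standard-library sketch with hypothetical membership data (user and group names are ours, not from Resende et al.):

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical data: group -> users who shared images in it.
groups = {
    "g1": {"ana", "bia", "caio"},
    "g2": {"caio", "davi"},   # "caio" bridges g1 and g2
    "g3": {"eva"},
}

# Project the bipartite user-group structure onto a user-user graph:
# an edge links two users who co-occur in at least one group.
edges = defaultdict(set)
for members in groups.values():
    for u, v in combinations(sorted(members), 2):
        edges[u].add(v)
        edges[v].add(u)

def component(start):
    """All users reachable from `start`: one cluster of inter-connected groups."""
    seen, stack = set(), [start]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(edges[u] - seen)
    return seen

print(sorted(component("ana")))  # ['ana', 'bia', 'caio', 'davi']
```

Bridge users like "caio" merge otherwise separate groups into one component, which is exactly the structure the authors describe: a few hubs and bridges tying many groups into a single cluster.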

Conclusion

Commenting on the use of micro-targeting in campaigns, Kreiss points out that it is "likely most effective in the short run when campaigns use them to mobilize identified supporters or partisans" (2017). This seems to be what happened in Brazil in the 2018 elections, in which a candidate was able to tap into a conservative sentiment and harness it against the progressive field.

Even though it is not possible to fully confirm the hypothesis that WhatsApp was used as an effective tool to direct messages to micro-segmented voters, we have shown that Jair Bolsonaro's campaign used the app to deliver messages (and disinformation) that exacerbated political feelings already present in the political debate of the legacy media, namely antipetismo (Davis & Straubhaar, 2019), and added to them much more conservative elements in the moral field (anti-feminism and anti-LGBT attitudes), which brought back topics from the times of the military dictatorship (anti-communism). Beyond its effects on the left, the radicalisation promoted by Bolsonaro's campaign was able to neutralise any other candidate in the centre, even on the centre-right, associating them with the establishment and with the notion of a corrupt political system. In the symbolic assemblage (Norton, 2017) that was formed, the elected candidate ended up representing the most effective answer against the political system, although many voted for him for different reasons. In a similar fashion to Trump, Bolsonaro "ran an insurgent campaign that called for shaking up the system" (Kivisto, 2019, p. 212).

There is ample evidence that the WhatsApp chat group feature was weaponised by Bolsonaro supporters. Although WhatsApp does not provide a micro-targeting service, there is evidence that third-party companies dedicated to non-political marketing campaigns provided that kind of service in the context of the elections, sometimes using illegal databases. There are reports that Haddad's campaign also used WhatsApp to deliver messages to voters (Rebello, Costa, & Prazeres, 2018). However, as the sample collected by Resende et al. (2019) suggests, there is no evidence that the left coalition employed the same tactics as Bolsonaro's campaign in secretly managing multiple WhatsApp chat groups.

Among the many problems involved in the use of a platform like WhatsApp in an election campaign, we would like to point out one in particular: the invisibility of the actors who produce, monitor, distribute and/or direct the content viewed and/or shared by most users. Once the platform is appropriated for election campaigning and micro-targeting, its current architecture does not allow users to notice or become aware that they are being monitored and managed. Writing on voter surveillance in Western societies, Bennett reminds us that "surveillance has particular, and somewhat different, effects depending on whether we are consumers, employees, immigrants, suspects, students, patients or any number of other actors" (2015, p. 370). The use of WhatsApp in the Brazilian elections shows how a surveillant structure was built on top of a group messaging service that ostensibly uses cryptography to protect its users' privacy.

Resende et al. (2019) characterised a network structure of the monitored WhatsApp groups that evidences coordinated activity by some members. There are no clear means for regular WhatsApp chat group members to notice whether they are being monitored or laterally surveilled by other group members, or even by second-hand observers outside the groups. Studies on the perception and experience of Facebook users show that when users notice that a post is sponsored, they tend to be less persuaded than when exposed to a regular post by a friend or acquaintance (Kruikemeier et al., 2016). But unlike Facebook, where users can have a huge number of connections, a great part of which may not be close at all, most contacts of WhatsApp users belong to a personal circle, establishing a relationship of trust with the content received. Writing on family WhatsApp chat groups in Brazil, Benjamin Junge classifies them as both a "public" of sorts, an "open space for the sharing of information and opinions" (2019, pp. 13-14), and closed in the strict sense, because they are accessible only to family members. Although this trust-based relation may be transformed when the user is a member of larger groups, the experience of proximity and connection with the members of a group is stronger than, for instance, among Facebook friends or Twitter followers. WhatsApp favours a stronger relationship of trust between group members and the content shared, which makes it a field more susceptible to the spread of misinformation. Cesarino (2019a) posits "trust" as one of the affordances of the WhatsApp platform, affirming that most of the political content forwarded to its users during the 2018 election was pushed by friends or family members.

Possible asymmetries of information, persuasion tactics and/or influence strategies within chat groups are rather hard to detect. In countries like Brazil, this condition is reinforced by many users' inability to reach beyond the platform to check the information shared, something that might provide context or additional information about the content circulating there. Under the zero-rating plans offered by telecom companies, users face tariffs they cannot afford if they seek other sources of information. This perceptive confinement is particularly worrying in a context of wide dissemination of disinformation, as in the 2018 election period, since most users are not only unaware of the authorship of the content that reaches them but also cannot reasonably check and verify it. The "near-sighted" environment of WhatsApp (in fact, the most appropriate eye disorder here would be loss of peripheral sight) is also favoured by its one-to-one communication structure, which prevents lateral, transversal or common visibility on the platform. The lack of a common field of visibility would not be a problem if WhatsApp restricted itself to its stated or projected technical functionality, that of being an instant messenger. However, when the tool begins to function as a typical social network, as stated by Resende et al. (2019), and starts to be massively appropriated for political campaigns, it is critical to have more symmetric relationships of visibility, as well as the possibility of building a common visible field that can be debated, examined and audited.

At least since the 2014 elections, and especially after the contested impeachment of President Dilma Rousseff (PT), Brazil has lived through a period of political and institutional instability. Recently leaked messages exchanged by prosecutors and judges involved in the investigation of corruption scandals help to draw a picture of a justice system contaminated by political goals (Fishman et al., 2019). That struggle certainly played a role in the ineffectiveness of electoral legislation in curbing the illegal use of WhatsApp in the 2018 elections. We have described in this article many of the illegalities that surrounded the electoral process. In 2019, the Brazilian Congress approved a data protection law in many respects compliant with the EU's General Data Protection Regulation (GDPR), which can help to strengthen the fairness of future elections (if and when the country restores its political and institutional normalcy).

However, as we hope to have shown, there is a complex dynamic between the legacy media and what is created and shared by political actors and supporters. Much of the disinformation content we analysed was produced against the background of a radicalisation trend in the legacy media. The fact that the means of communication in Brazil are highly concentrated in the hands of a few groups and lack political diversity certainly played an important role in paving the way for political radicalisation. Zero-rating policies that fuel the popularity of one specific platform (WhatsApp) and curb users' access to a fully functioning internet are an obvious practical impediment for voters who might otherwise learn to adequately research and verify the news stories they receive.

References

“A gente, infelizmente, contribuiu”, diz Monica Iozzi sobre popularidade de Bolsonaro - Emais ["Unfortunately, we contributed," says Monica Iozzi about Bolsonaro's popularity]. (2018, April 4). O Estado de S.Paulo. Retrieved April 1, 2019 from https://emais.estadao.com.br/noticias/gente,a-gente-infelizmente-contribuiu-diz-monica-iozzi-sobre-popularidade-de-bolsonaro,70002254686

Armijo, L. E., & Rhodes, S. D. (2017). Explaining infrastructure underperformance in Brazil: Cash, political institutions, corruption, and policy Gestalts. Policy Studies, 38(3), 231–247. https://doi.org/10.1080/01442872.2017.1290227

Audi, A., & Dias, T. (2018, October 22). VÍDEO: Seu número de telefone vale 9 centavos no zap dos políticos [VIDEO: Your phone number is worth 9 cents at the politicians' zap]. The Intercept. Retrieved July 15, 2019, from https://theintercept.com/2018/10/22/whatsapp-politicos/

Avelar, D. (2019, October 30). WhatsApp fake news during Brazil election ‘favoured Bolsonaro’. The Guardian. Retrieved from https://www.theguardian.com/world/2019/oct/30/whatsapp-fake-news-brazil-election-favoured-jair-bolsonaro-analysis-suggests

Baldwin-Philippi, J. (2017). The Myths of Data-Driven Campaigning. Political Communication, 34(4), 627–633. https://doi.org/10.1080/10584609.2017.1372999

Belli, L. (2018, December 5). WhatsApp skewed Brazilian election, proving social media’s danger to democracy. The Conversation. Retrieved December 4, 2019, from http://theconversation.com/whatsapp-skewed-brazilian-election-proving-social-medias-danger-to-democracy-106476

Bennett, C. J. (2015). Trends in Voter Surveillance in Western Societies: Privacy Intrusions and Democratic Implications. Surveillance & Society, 13(3/4), 370–384. https://doi.org/10.24908/ss.v13i3/4.5373

Bodó, B., Helberger, N., & de Vreese, C. H. (2017). Political micro-targeting: A Manchurian candidate or just a dark horse? Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.776

Boito, A., & Saad-Filho, A. (2016). State, State Institutions, and Political Power in Brazil. Latin American Perspectives, 43(2), 190–206. https://doi.org/10.1177/0094582X15616120

Bradshaw, S., & Howard, P. N. (2018). Challenging Truth and Trust: A Global Inventory of Organized Social Media Manipulation [Report]. Oxford: Project on Computational Propaganda, Oxford Internet Institute. Retrieved from https://comprop.oii.ox.ac.uk/research/cybertroops2018/

Carlo, J. D., & Kamradt, J. (2018). Bolsonaro e a cultura do politicamente incorreto na política brasileira [Bolsonaro and the culture of the politically incorrect in Brazilian politics]. Teoria e Cultura, 13(2). https://doi.org/10.34019/2318-101X.2018.v13.12431

Carvalho, R. (2016). O governo Lula e a mídia impressa: Estudo sobre a construção de um pensamento hegemônico [The Lula government and the printed media: Study on the construction of hegemonic thinking]. São Paulo: Pontifícia Universidade Católica de São Paulo. Retrieved from https://tede2.pucsp.br/handle/handle/3708

Cesarino, L. (2019). On Digital Populism in Brazil. PoLAR: Political and Legal Anthropology Review. Retrieved from https://polarjournal.org/2019/04/15/on-jair-bolsonaros-digital-populism/

Cesarino, L. (2019a). Digitalização da política: reaproximando a cibernética das máquinas e a cibernética da vida [Digitisation of politics: reconnecting the cybernetics of machines with the cybernetics of life]. Manuscript submitted for publication.

Coding Rights. (2018). Data as a tool for political influence in the Brazilian elections. Retrieved from Coding Rights website: https://www.codingrights.org/data-as-a-tool-for-political-influence-in-the-brazilian-elections/

Davis, S., & Straubhaar, J. (2019). Producing Antipetismo: Media activism and the rise of the radical, nationalist right in contemporary Brazil. International Communication Gazette, 82(1), 82–100. https://doi.org/10.1177/1748048519880731

Demori, L., & Locatelli, P. (2018, June 5). Massive Truckers’ Strike Exposes Political Chaos as Brazil Gears Up for Elections in October. The Intercept. Retrieved 28 November 2019, from https://theintercept.com/2018/06/05/brazil-truckers-strike/

Fishman, A., Martins, R. M., Demori, L., Greenwald, G., & Audi, A. (2019, June 17). “Their Little Show”: Exclusive: Brazilian Judge in Car Wash Corruption Case Mocked Lula’s Defense and Secretly Directed Prosecutors’ Media Strategy During Trial. The Intercept. Retrieved July 12, 2019, from https://theintercept.com/2019/06/17/brazil-sergio-moro-lula-operation-car-wash/

Foley, C. (2019). Balls in the air: The macho politics of Brazil’s new president plus ex-president Dilma Rousseff’s thoughts on constitutional problems. Index on Censorship, 48(2), 26–28. https://doi.org/10.1177/0306422019858496

Galpaya, H. (2017, February). Zero-rating in Emerging Economies [Paper No. 47]. Waterloo, Ontario; London: Global Commission on Internet Governance; Centre for International Governance Innovation; Chatham House. Retrieved December 14, 2019, from https://www.cigionline.org/sites/default/files/documents/GCIG%20no.47_1.pdf

Haggerty, K., & Ericson, R. (Eds.). (2006). The New Politics of Surveillance and Visibility. Toronto; Buffalo; London: University of Toronto Press. Retrieved from http://www.jstor.org/stable/10.3138/9781442681880

Hunter, W., & Power, T. J. (2005). Lula’s Brazil at Midterm. Journal of Democracy, 16(3), 127–139. https://doi.org/10.1353/jod.2005.0046

Issenberg, S. (2012). The Victory Lab: The Secret Science of Winning Campaigns. New York: Crown Publishers.

Joyce, S. N. (2013). A Kiss Is (Not) Just a Kiss: Heterodeterminism, Homosexuality and TV Globo Telenovelas. International Journal of Communication, 7. Retrieved from https://ijoc.org/index.php/ijoc/article/view/1832

Junge, B. (2019). “Our Brazil Has Become a Mess”: Nostalgic Narratives of Disorder and Disinterest as a “Once-Rising Poor” Family from Recife, Brazil, Anticipates the 2018 Elections. The Journal of Latin American and Caribbean Anthropology. https://doi.org/10.1111/jlca.12443

Kivisto, P. (2019). Populism’s Efforts to De-legitimize the Vital Center and the Implications for Liberal Democracy. In J. L. Mast & J. C. Alexander (Eds.), Politics of Meaning/Meaning of Politics: Cultural Sociology of the 2016 U.S. Presidential Election (pp. 209–222). https://doi.org/10.1007/978-3-319-95945-0_12

Kreiss, D. (2017). Micro-targeting, the quantified persuasion. Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.774

Kruikemeier, S., Sezgin, M., & Boerman, S. C. (2016). Political Microtargeting: Relationship Between Personalized Advertising on Facebook and Voters’ Responses. Cyberpsychology, Behavior, and Social Networking, 19(6), 367–372. https://doi.org/10.1089/cyber.2015.0652

Lafuente, J. (2018, October 9). Bolsonaro’s surprise success in Brazil gives new impetus to the rise of the far right. El País. Retrieved from https://elpais.com/elpais/2018/10/09/inenglish/1539079014_311747.html

Machado, C. (2018, November 13). WhatsApp’s Influence in the Brazilian Election and How It Helped Jair Bolsonaro Win [Blog Post]. Council on Foreign Relations. Retrieved from https://www.cfr.org/blog/whatsapps-influence-brazilian-election-and-how-it-helped-jair-bolsonaro-win

Magenta, M., Gragnani, J., & Souza, F. (2018, October 24). WhatsApp ‘weaponised’ in Brazil election. BBC News. Retrieved from https://www.bbc.com/news/technology-45956557

Marés, C., Becker, C., & Resende, L. (2018, October 18). Imagens falsas mais compartilhadas no WhatsApp não citam presidenciáveis, mas buscam ratificar ideologias [The fake images most shared on WhatsApp do not mention presidential candidates, but seek to ratify ideologies]. Retrieved July 15, 2019, from Agência Lupa website: https://piaui.folha.uol.com.br/lupa/2018/10/18/imagens-falsas-whatsapp-presidenciaveis-lupa-ufmg-usp/

Melo, P. C. (2018, October 18). Empresários bancam campanha contra o PT pelo WhatsApp [Business owners bankroll a campaign against the PT via WhatsApp]. Folha de S.Paulo. Retrieved from https://www1.folha.uol.com.br/poder/2018/10/empresarios-bancam-campanha-contra-o-pt-pelo-whatsapp.shtml

Melo, P. C. (2019, October 9). WhatsApp Admits to Illegal Mass Messaging in Brazil’s 2018. Folha de S.Paulo. Retrieved from https://www1.folha.uol.com.br/internacional/en/brazil/2019/10/whatsapp-admits-to-illegal-mass-messaging-in-brazils-2018.shtml

Moura, M., & Michelson, M. R. (2017). WhatsApp in Brazil: Mobilising voters through door-to-door and personal messages. Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.775

Nagle, A. (2017). Kill All Normies: Online Culture Wars From 4Chan and Tumblr to Trump and the Alt-Right. John Hunt Publishing.

Nascimento, W. (2019). Fragmentação partidária e partidos pequenos no Brasil (1998-2014) [Party fragmentation and small parties in Brazil (1998-2014)]. Conversas & Controvérsias, 5(2), 285–305. https://doi.org/10.15448/2178-5694.2018.2.31837

Norton, M. (2017). When voters are voting, what are they doing?: Symbolic selection and the 2016 U.S. presidential election. American Journal of Cultural Sociology, 5(3), 426–442. https://doi.org/10.1057/s41290-017-0040-z

Ramalho, R. (2018). TSE apresenta previsão do tempo de propaganda no rádio e na TV para cada candidato à Presidência [TSE presents the expected radio and TV advertising time for each presidential candidate]. Retrieved 27 November 2019, from G1 website: https://g1.globo.com/politica/eleicoes/2018/noticia/2018/08/23/tse-apresenta-previsao-do-tempo-de-propaganda-no-radio-e-na-tv-para-cada-candidato-a-presidencia.ghtml

Rebello, A., Costa, F., & Prazeres, L. (2018, October 26). PT usou sistema de WhatsApp; campanha de Bolsonaro apagou registro de envio [PT used WhatsApp system; Bolsonaro campaign deleted submission record]. Retrieved 5 December 2019, from UOL Eleições 2018 website: https://noticias.uol.com.br/politica/eleicoes/2018/noticias/2018/10/26/bolsonaro-apagou-registro-whatsapp-pt-haddad-usou-sistema-mensagens.htm

Resende, G., Melo, P., Sousa, H., Messias, J., Vasconcelos, M., Almeida, J., & Benevenuto, F. (2019). (Mis)Information Dissemination in WhatsApp: Gathering, Analyzing and Countermeasures. WWW ’19: The World Wide Web Conference, 818–828. https://doi.org/10.1145/3308558.3313688

Rossi, A. (2018, June 2). Como o WhatsApp mobilizou caminhoneiros, driblou governo e pode impactar eleições [How WhatsApp mobilised truckers, dribbled past the government and could impact elections]. BBC News Brazil. Retrieved March 18, 2019, from https://www.bbc.com/portuguese/brasil-44325458

Soares, J. (2018, October 7). Time digital de Bolsonaro distribui conteúdo para 1.500 grupos de WhatsApp [Bolsonaro's digital team distributes content to 1,500 WhatsApp groups]. O Globo. Retrieved from https://oglobo.globo.com/brasil/time-digital-de-bolsonaro-distribui-conteudo-para-1500-grupos-de-whatsapp-23134588

Tardáguila, C., Benevenuto, F., & Ortellado, P. (2018, October 19). Opinion | Fake News Is Poisoning Brazilian Politics. WhatsApp Can Stop It. The New York Times. Retrieved from https://www.nytimes.com/2018/10/17/opinion/brazil-election-fake-news-whatsapp.html

Uchoa, P. (2018, September 21). Why Brazilian women are saying #NotHim. BBC News. Retrieved from https://www.bbc.com/news/world-latin-america-45579635

Vale, H. F. D. (2015). Territorial Polarization in Brazil’s 2014 Presidential Elections. Regional & Federal Studies, 25(3), 297–311. https://doi.org/10.1080/13597566.2015.1060964

Valente, J. (2018, July 24). Facebook vai dar transparência para anúncios eleitorais no Brasil [Facebook to provide transparency for election announcements in Brazil]. Retrieved December 4, 2019, from Agência Brasil website: http://agenciabrasil.ebc.com.br/politica/noticia/2018-07/facebook-vai-dar-transparencia-para-anuncios-eleitorais-no-brasil

Valente, R. (2018, October 26). Grupos de WhatsApp simulam organização militar e compartilham apoio a Bolsonaro [WhatsApp Groups simulate military organization and share support for Bolsonaro]. Folha de S.Paulo. Retrieved from https://www1.folha.uol.com.br/poder/2018/10/grupos-de-whatsapp-simulam-organizacao-militar-e-compartilham-apoio-a-bolsonaro.shtml

Woolley, S. C., & Howard, P. N. (2017). Computational Propaganda Worldwide: Executive Summary [Working Paper No. 2017.11]. Oxford: Project on Computational Propaganda, Oxford Internet Institute. Retrieved from https://comprop.oii.ox.ac.uk/research/working-papers/computational-propaganda-worldwide-executive-summary/

Footnotes

1. We describe antipetismo as “an intensely personal resentment of the Workers’ Party (PT)”.

2. Donations from companies were, however, not allowed in the last election; only donations from individuals were permitted.

3. According to the Lupa Agency, both fake criminal records circulated for the first time during the 2010 presidential election, when Dilma Rousseff (PT) became the first woman elected president of Brazil, beating José Serra (PSDB). It must be pointed out that, before that, the printed newspaper with the greatest circulation in Brazil - Folha de S.Paulo - had published in 2009 a version of the false criminal record of Dilma Rousseff, who at the time was Chief of Staff of the government of then-president Luiz Inácio Lula da Silva (PT). The newspaper corrected the mistake 20 days after publishing the false information. Cf. https://www1.folha.uol.com.br/folha/brasil/ult96u556855.shtml

Cranks, clickbait and cons: on the acceptable use of political engagement platforms


This paper is part of Data-driven elections, a special issue of Internet Policy Review guest-edited by Colin J. Bennett and David Lyon.

Introduction

Shortly after Donald Trump won the US presidency, Jim Gilliam (2016), the late president of start-up 3DNA, posted a message on the company’s blog titled “Choosing to Lead”. Gilliam congratulated the “three thousand NationBuilder customers who were on the ballot last week”. These customers subscribed to 3DNA’s NationBuilder service, which provides a political engagement platform connecting voters, politicians, volunteers and staffers in an integrated online service. The post continues:

Many of you – including President-elect Donald Trump and all three of the other non-establishment presidential candidates – were outsiders. And that’s why this election was so important. Not just for people in the United States, but for people all over the world. This election unequivocally proves that we are in a new era. One where anyone can run and win. (Gilliam, 2016)

Like many posts from NationBuilder, Gilliam celebrated the company’s mission to democratise access to new political technology, bringing in these outsiders.

Gilliam’s post demonstrates a faith that being open is a corporate value as well as a business model. As its mission states today, NationBuilder sells “to everyone regardless of race, age, class, religion, educational background, ideology, gender, sexual orientation or party”. This mission encapsulates a corporate belief in the democratic potential of the product, one available to anyone, much to the frustration of partisans and other political insiders on both sides who tend to guard access to their innovative technologies (Karpf, 2016b).

Gilliam’s optimism matters globally. Political parties worldwide use NationBuilder as a third-party solution to manage their voter data, outreach, websites, communications and volunteer management. As of 3 December 2019, NationBuilder reported that in 2018 it was used to send 1,600,000,000 emails, host 341,000 events and raise $401,000,000 USD across 80 countries. The firm has also raised over $14 million USD in venture capital, partially on the promise that it will democratise access to political engagement platforms. Unlike most of its competitors, NationBuilder is one of the few services actively developed and promoted as nonpartisan and cross-sectoral. Conservative, liberal and social democratic parties across the globe use NationBuilder, as the company emphasises in its corporate materials (McKelvey and Piebiak, 2018).

By letting outsiders access political technology, might NationBuilder harm politics in its attempt to democratise it? Now is the time to doubt the promise of political technologies. Platform service providers like NationBuilder are the object of significant democratic anxieties globally, rightly or wrongly (see Adams et al., 2019, for a good review of current research). The political technology industry has been pulled into a broad set of issues including, according to Colin Bennett and Smith Oduro-Marfo: “the role of voter analytics in modern elections; the democratic responsibilities of powerful social media platforms; the accountability and transparency for targeted political ads; cyberthreats to the [electoral process] through malicious actors and automated bots” (2019, pp. 1-2). Following the disclosures of malpractice by Cambridge Analytica and AggregateIQ, these public scandals have pushed historic concerns about voter surveillance, non-consensual data collection and poor oversight of the industry to the fore (Bennett, 2015; Howard, 2006; White, 1961).

My paper questions NationBuilder’s corporate belief that better access to political technology improves politics. In doing so, I add acceptable use of political technology to the list of concerns about elections and campaigns in the digital age. Even though campaigns are, as Daniel Kreiss (2016) argues, technologically intensive, there have been no systematic studies of how a political technology is used, particularly internationally. My paper reviews the uses of NationBuilder worldwide, offering empirical research into the real world of a contentious political technology and grounded examples of its problematic or questionable uses. NationBuilder is a significant example, as I discuss, of a nonpartisan political technology firm, as opposed to its partisan rivals.

The paper uses a mixed-methods approach to analyse NationBuilder’s use, combining document analysis, content analysis and a novel use of web analytics. To first understand real-world use, the study collected a list of 6,435 domains using NationBuilder as of October 2017. The study coded the 125 most popular domains by industry and compared the results to corporate promotional materials, looking for how actual use differed from promoted uses. The goal was to find questionable uses of NationBuilder. Questionable, through induction, came to mean uses that might violate liberal democratic norms. By looking at NationBuilder’s various uses, the review found cases at odds with the normative and institutional constraints that allow for ‘friendly rivalry’ or ‘agonism’ in liberal democratic politics (Rosenblum, 2008; Mouffe, 2005). These constraints include a free press and individual rights such as privacy, as well as a commitment to shared human dignity.

My limited study finds that NationBuilder can be used to undermine privacy rights and journalistic standards while also promoting hatred. The scan identified three problematic uses: (1) a mobilisation tool for hate groups targeting cultural or ethnic identities; (2) a profiling tool for deceptive advertising or stealth media; and (3) a fundraising tool for entrepreneurial journalism. These findings raise issues about acceptable use and liberal democracy. For example, I looked for cases of NationBuilder being used by known hate groups, inspired by recent concerns about the rise of the extreme right (Eatwell and Mudde, 2004), as well as its use by news websites, reflecting the changing media system (Ananny, 2018).

My findings suggest that NationBuilder may be a democratic technology without being a liberal one. The traditions of liberalism and democracy are separate and a source of tension, according to democratic theorist Chantal Mouffe. “By constantly challenging the relations of inclusion implied by the political constitution of 'the people' - required by the exercise of democracy”, Mouffe writes, “the liberal discourse of universal human rights plays an important role in maintaining the democratic contestation alive” (2009, p. 10). NationBuilder’s democratic mission of being open to outsiders is then at odds with a liberal tradition that pushes fraud, violence and hatred outside respectable politics.

While the paper identifies problems, it does not offer much in the way of solutions. Remedies are difficult, and none obviously operate at NationBuilder’s global scale. As I discuss later, NationBuilder is not responsible for how it is used. The most immediate remedies might be based on corporate social responsibility. To this end, this paper provides three recommendations for revisions to 3DNA’s acceptable use policy to address these questionable uses: (1) reconcile its mission statement with its prohibited uses; (2) require disclosure on customers’ websites; and (3) clarify its relation to domestic privacy law as part of a corporate mission to improve global privacy and data standards. These reforms suggest that NationBuilder’s commitment to non-partisanship needs clarification and that the acceptable use of political technology is fraught – a dilemma that should become a central debate. Political technology firms – NationBuilder and its competitors – must understand that liberal democratic technologies are part of what Bennett and Oduro-Marfo describe as “the political campaigning network”. They continue, “contemporary political campaigning is complex, opaque and involves a shifting ecosystem of actors and organisations, which can vary considerably from society to society” (2019, p. 54). Companies ultimately must consider their obligations to liberal democracy, a political system made possible, albeit imperfectly, by technologies like the press and the internet.

The acceptable use of politicised, partisan and nonpartisan technology

The political technology industry is central to the era of technology-intensive campaigning found in the United States and across many Western democracies (Baldwin-Philippi, 2015; Karpf, 2016a; Kreiss, 2016). The industry itself has been a staple of political consultancy throughout modern campaigning. From laser letters for direct mail to apps for canvassing, political technology firms promise to bring efficiency to an otherwise messy campaign (D. W. Johnson, 2016; Kreiss and Jasinski, 2016). NationBuilder itself provides a good summary of this industry in a marketing slide reproduced in Figure 1.

Figure 1: Political technology firms according to NationBuilder

The figure illustrates the numerous practices and sectors drawn into politics, as well as the migration of practices between them. These services help campaigns analyse data and make strategic decisions, principally around advertising buys. Many of these firms position themselves as the primary medium of a campaign, creating a platform that connects voters, politicians, volunteers and staff (Baldwin-Philippi, 2017; McKelvey and Piebiak, 2018). Political technology providers blur the boundaries between nonprofit management, political campaigning and advocacy, and illustrate the taken-for-grantedness of marketing as a political logic (Marland, 2016).

Political technology firms may be divided between politicised firms, partisan firms and nonpartisan firms. Politicised firms sell software or services that are not explicitly designed for politics but are put to political ends. These include payment processors like PayPal or Stripe, web hosting companies like Cloudflare, and social media platforms that allow political advertising and political mobilisation. NationBuilder’s slide reproduced in Figure 1 includes further examples of politicised firms providing social media management software, email marketing software and website content management systems. Technologies like NationBuilder are purpose-built for politics, listed as Political Software in Figure 1. These firms can be split further between partisan firms, which work only for conservative, liberal or progressive campaigns, and nonpartisan firms. In a market dominated by partisan affiliation, NationBuilder and other nonpartisan companies like Aristotle International and ActionKit are significant. They attempt to be apolitical political technologies.

Political technologies raise added concerns with respect to liberal democratic norms. Who should have access to these services, and how should these services be used? New technologies afford campaigns new repertoires of action that may undermine campaign spending limits, norms around targeting or the privacy rights of voters. Cambridge Analytica, for example, has rekindled longstanding debates about the democratic consequences of political technologies, especially micro-targeting (Bodó, Helberger, and de Vreese, 2017; Kreiss, 2017), as well as stoking conjecture about the feasibility of psycho-demographics and its mythic promise of a new hypodermic needle (Stark, 2018).

Acceptable use is largely determined by partisan identity due to the limited scope of regulations on digital campaigning. Regulation of political technology is lacking (Bennett, 2015; Howard and Kreiss, 2010) and likely does not apply to a service provider like NationBuilder in the first place. Instead, partisanship has so far been regarded as the key mechanism regulating the use of political technology. Most firms are partisan, working with only one party, and acceptable use of political technology is largely judged by its conformity to partisan values. As David Karpf explains, “political technology yields partisan benefits, and the market for political technologies is made up of partisans” (2016b, p. 209). Such partisanship functions as a professional norm about acceptable use, restricting access on partisan lines. Fellow partisans are acceptable users and, in what Karpf calls the zero-sum game of politics, rivals are unacceptable ones. Indeed, partisanship is an important corporate asset. The major firm Aristotle International sued its competitor NGP VAN for falsely claiming that it only sold to Democratic and progressive campaigns when it licensed its technologies to Republican firms as well. NGP VAN, the case alleged, was not as strictly partisan as it claimed. The courts eventually dismissed the case (D’Aprile, 2011).

The tensions between partisan, nonpartisan and politicised companies implicitly reveal a split in the values guiding acceptable use. On one side are firms committed to creating technology to advance their political values; on the other are firms trying to be neutral and to sell to anyone. In what might be seen as an act of community governance, progressive partisans have argued that such software should not be sold to non-progressive campaigns (Karpf, 2016a).

The lack of an expressed political agenda has caused politicised firms, in particular, to be mired in public scandals raising questions about liberal democratic norms. A ProPublica investigation found that numerous technology firms supported known extremist groups, prompting PayPal and Plasso to stop serving the identified groups within days (Angwin, Larson, Varner, and Kirchner, 2017a). That investigation only scratches the surface. A partial list of recent media controversies includes politicised firms being accused of spreading misinformation, aiding hate groups and easing foreign propaganda:

  • Facebook’s handling of the Kremlin-affiliated Internet Research Agency’s misinformation campaigns during the 2016 US presidential election
  • Hosting service Cloudflare dropping the neo-Nazi site the Daily Stormer (Price, 2017)
  • GoFundMe allowing a fraudulent campaign to build a US-Mexico border wall (Holcombe, 2019)
  • GoFundMe removing anti-vaccine fundraising campaigns (Liao, 2019)
  • YouTube’s handling of far-right videos and the circulation of the livestream of the Christchurch terrorist attack

In the academic literature, McGregor and Kreiss (2018) question the willingness of politicised firms to assist American presidential campaigns’ advertising strategies, examining how these companies understood their influence. Braun and Eklund (2019), meanwhile, explore the digital advertiser’s dilemma of trying to demonetise misinformation and imposter journalism. 1 The Citizen Lab has addressed the responsibility of international cybersecurity firms in democratic politics, particularly the use of exploits to target dissidents. 2 Tusikov (2019) most directly explores the question of acceptable use by analysing how financial third parties, like PayPal, have developed their own internal policies not to serve hate groups.

For these reasons, NationBuilder is an important test case for the acceptable use of political technology. NationBuilder, as discussed above, exemplifies the neutral position of many firms: trying to be in politics without being political. It also exemplifies the problem facing both politicised and nonpartisan firms that let their commitments to openness and neutrality supersede their responsibility to liberal democratic norms.

Why NationBuilder?

NationBuilder is an intriguing case because it encapsulates a particular American belief in the revolutionary promise of computing for politics that has driven the development and regulation of many major technology firms (Gillespie, 2018; Mosco, 2004; Roberts, 2019). NationBuilder is a venture capital–funded company promising to disrupt politics by democratising access to innovation. According to investor Ben Horowitz (2012), “NationBuilder is that rarest of products that not only has the potential to change its market, but to change the world”. He made these remarks in a 2012 post announcing his firm’s $6.25 million USD in Series A funding for NationBuilder’s parent company 3DNA. NationBuilder’s late founder Jim Gilliam exemplifies the “romantic individualism” that Tom Streeter associates with a faith in the thrilling, revolutionary effect of computing. Gilliam was a fundamentalist Christian who found community through BBSs and eventually told his coming-of-age story in a viral video entitled “The Internet Is My Religion”. He later self-published a book co-authored with the company’s current president, Lea Endres. When generalised and situated as part of NationBuilder’s mission, Gilliam’s story exemplifies Streeter’s observation that “the libertarian’s notion of individuality is proudly abstracted from history, from social differences, and from bodies; all that is supposed not to matter. Both the utilitarian and romantic individualist forms of selfhood rely on creation-from-nowhere assumptions, from structures of understanding that are systematically blind to the collective and historical conditions underlying new ideas, new technologies, and new wealth” (Streeter, 2011, p. 24). NationBuilder still links to this video on its corporate philosophy page as of 3 December 2019.

Figure 2: NationBuilder’s philosophy page captured on 8 January 2020

NationBuilder’s mission synthesises its belief in for-profit social change and romantic individualism. According to NationBuilder’s mission page as of 7 January 2020, it wants to “build the infrastructure for a world of creators by helping leaders develop and organise thriving communities”. This includes the belief that “[t]he tools of leadership should be available to everyone. NationBuilder does not discriminate. It is arrogant, even absurd, for us to decide which leaders are ‘better’ or ‘right’” (NationBuilder, n.d.).

The mission resembles Streeter’s discussion of the libertarian abstract sense of freedom that, in NationBuilder’s case, equates egalitarian access to a commercial service with a viable means of democratic reform. Whether nonpartisan or libertarian, NationBuilder has remained committed to this belief, defending its openness from critics, as in Gilliam’s post quoted in the introduction. In doing so, NationBuilder is at odds with former progressive clients and other political technology firms (Karpf, 2016b).

Methodology

My research combines document analysis, web analytics and content analysis to understand NationBuilder’s usage. The research team reviewed the company’s 2016, 2017 and 2018 annual reports and archived content from the NationBuilder website using the Wayback Machine. The team also turned to the web services tool BuiltWith, which scans the million most popular sites on the internet to detect what technologies they use. 3 BuiltWith generated a list of 6,435 web domains using NationBuilder on 10 October 2017. The team analysed BuiltWith’s data through two scans:

  1. Coding the top 125 websites (as ranked by Alexa, an Amazon company that estimates traffic on major websites) by industry and comparing the results with the publicised use cases in NationBuilder’s annual reports.
  2. Searching the full list of BuiltWith results for websites classified as extremist by ProPublica, itself informed by the Anti-Defamation League and the Southern Poverty Law Center (Angwin, Larson, Varner, and Kirchner, 2017b).
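The second scan amounts to matching one domain list against another. A minimal sketch of that step (the domains, function names and normalisation rules below are illustrative assumptions, not the project's actual data or code):

```python
# Sketch of scan 2: matching a list of domains using NationBuilder
# (e.g., exported from BuiltWith) against a list of domains classified
# as extremist. Domains are normalised so trivial variants still match.

def normalize(domain: str) -> str:
    """Lowercase a domain and strip a leading 'www.' so that
    'WWW.Example.org' and 'example.org' compare as equal."""
    domain = domain.strip().lower()
    return domain[4:] if domain.startswith("www.") else domain

def find_matches(nationbuilder_domains, flagged_domains):
    """Return NationBuilder domains that also appear on the flagged
    list, after normalisation, sorted for stable output."""
    flagged = {normalize(d) for d in flagged_domains}
    return sorted({normalize(d) for d in nationbuilder_domains} & flagged)

# Toy example with made-up domains:
matches = find_matches(
    ["www.Example-Campaign.org", "news.example.com", "HateSite.net"],
    ["hatesite.net", "otherhate.org"],
)
# matches == ["hatesite.net"]
```

In practice the comparison would need to handle subdomains and redirects as well, but set intersection over normalised hostnames captures the core of the scan.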

These methods admittedly offer a limited window into the use of NationBuilder. Rather than provide a complete scan of the NationBuilder ecosystem or track trends over time, this project sought to establish whether NationBuilder has uses other than those advertised and, if so, whether these applications raise acceptability questions.

The coding schema classified uses of NationBuilder by industry. The schema emerged from a review of prior literature classifying websites (Elmer, Langlois, and McKelvey, 2012) as well as from inductive coding developed by visiting the top fifty websites, paying special attention to self-descriptions, such as mission statements and “about us” sections, as well as other clues to a site’s legal status (as a non-profit or a political action committee), its overt political party affiliation or its stated political positions. In the end, each website in the sample was assigned one of ten codes:

  1. College or university: a higher education institution
  2. Cultural production: a site promoting a book, movie, etc.
  3. Educational organisation: a high school or below
  4. Government initiative: sites operated by incumbent political actors or elected officials that are explicitly tied to their work in government (i.e., not used for a re-election campaign)
  5. Media organisation: sites whose primary purpose is to publish or aggregate media content
  6. NGO: (non-governmental organisation) sites for organisations whose activities can reasonably be considered non-political; these are usually but not exclusively non-profits
  7. Other: sites that are unclassifiable (an individual’s blog, for example)
  8. Political advocacy group: organisations that are not directly associated with an official political party or campaign but nonetheless seek to actively affect the political process
  9. Political party or campaign: sites operated by a political party or dedicated to an individual politician’s electoral campaign
  10. Union: sites run by a labour union

Two independent coders classified the 125-website sample. Intercoder reliability was 88 percent with a Krippendorff’s alpha of 0.8425 (Freelon, 2010). Inconsistencies were resolved through consensus coding before the analysis below.
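For readers wishing to reproduce reliability figures like these, percent agreement and Krippendorff's alpha for two coders with nominal codes and no missing data can be computed from the coincidence matrix. The sketch below follows Krippendorff's standard nominal-data formulation; the function names are my own, not code used in the study:

```python
from collections import Counter
from itertools import product

def krippendorff_alpha_nominal(coder1, coder2):
    """Krippendorff's alpha for two coders, nominal codes, no missing data.

    alpha = 1 - D_o / D_e, from the coincidence matrix of paired codes.
    """
    assert len(coder1) == len(coder2)
    # Coincidence matrix: each unit contributes its pair of codes in both orders.
    o = Counter()
    for a, b in zip(coder1, coder2):
        o[(a, b)] += 1
        o[(b, a)] += 1
    n = sum(o.values())  # total pairable values (= 2 * number of units)
    marginals = Counter()
    for (a, _), count in o.items():
        marginals[a] += count
    observed = sum(c for (a, b), c in o.items() if a != b)
    expected = sum(marginals[a] * marginals[b]
                   for a, b in product(marginals, repeat=2) if a != b)
    if expected == 0:  # only one category ever used: agreement is trivial
        return 1.0
    return 1.0 - (n - 1) * observed / expected

def percent_agreement(coder1, coder2):
    """Share of units on which the two coders assigned the same code."""
    return sum(a == b for a, b in zip(coder1, coder2)) / len(coder1)
```

For example, perfect agreement yields an alpha of 1.0, while codes of `["a","a","b","b","a"]` against `["a","a","b","b","b"]` yield 80 percent agreement and an alpha of 0.64.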

Findings

NationBuilder has applications, not well represented in its corporate materials, that raise acceptability issues. NationBuilder has been used as:

  1. a mobilisation tool for hate or groups targeting cultural or ethnic identities;
  2. a profiling tool for deceptive advertising or stealth media; and,
  3. a fundraising tool for entrepreneurial journalism.

None of these uses violate the official terms of use or acceptable use policy, a problem discussed later in the analysis, but they do provoke questions that may help improve its acceptable usage policies.

Results of scan 1: top industries found in most popular sites in the sample

The first scan, coding top domains by industry, found uses that differed from the corporate reporting. NationBuilder emphasises certain use cases in its annual report and marketing, signalling the authorised channels of circulation for the product as well as its popular applications. Reporting, however, has been inconsistent, with the most complete data coming from 2016. The 2016 Annual Report lists the following uses: political (40.80%), advocacy (24.60%), nonprofit (11.80%), higher education (11%), business (8.30%), association (2%), as well as government (1.50%). 4 NationBuilder also profiles “stand-out leaders” in all its annual reports. Politicians, advocacy groups and nonprofits mostly appear in the list. The 2017 list features six politicians out of ten slots, including the party of French President Emmanuel Macron, New Zealand's Prime Minister Jacinda Ardern, and the leader of Canada's New Democratic Party, Jagmeet Singh. Their successful campaigns resonate with NationBuilder's brand of political inclusion. In a new twist on the politics of marketing, NationBuilder also profiles businesses as stand-outs, such as AllSaints, a British fashion retailer that uses NationBuilder to connect with fans of the brand, especially to announce the opening of new stores.

Figure 3: Sites using NationBuilder by industry

Media outlets are more prominent in the findings than in 3DNA’s corporate materials. Two media outlets are in the top ten domains in our sample sorted by popularity as seen in Table 1. The third and fourth ranked sites are media organisations. Faith Family America is a right-of-centre news outlet, describing itself as “a real-time, social media community of Americans who are passionate about faith, family, and freedom”. The Rebel is a Canadian-based far-right news outlet, comparable to Breitbart in the US. Seven other media organisations appear in the sample, nine in total as seen in Table 2.

Table 1: The top ten websites in BuiltWith data set, according to Alexa ranking (the lower the number, the more popular the website).

| Name | Domain | Industry Code | Country | Alexa Rank |
| --- | --- | --- | --- | --- |
| American Heart Foundation | heart.org | NGO | US | 10,525 |
| NationBuilder | nationbuilder.com | Cultural production | US | 20,791 |
| City of Los Angeles | lacity.org | Government initiative | US | 33,419 |
| Faith Family America | faithfamilyamerica.com | Media organisation | US | 65,980 |
| The Rebel | therebel.media | Media organisation | CA | 71,126 |
| Party of Wales | partyof.wales | Political party or campaign | GB | 89,996 |
| Lambeth Council | lambeth.gov.uk | Government initiative | GB | 107,745 |
| NALEO Education Fund | naleo.org | Political advocacy group | US | 112,071 |
| Labour Party of New Zealand | labour.org.nz | Political party or campaign | NZ | 115,253 |
| In Utero (film) | inuterofilm.com | Cultural production | US | 120,394 |

Two of the questionable uses of NationBuilder relate to its move into journalism or at least the simulacra of journalism. Through these media outlets, NationBuilder becomes entangled in the ethics of entrepreneurial journalism. The term refers to the “embrace of entrepreneurialism by the world of journalism” (Rafter, 2016, p. 141).

Table 2: Top media outlets using NationBuilder, according to Alexa ranking (the lower the number, the more popular the website).

| Name | Domain | Alexa Rank |
| --- | --- | --- |
| Faith Family America | faithfamilyamerica.com | 65,980 |
| The Rebel | therebel.media | 71,126 |
| Thug Kitchen | thugkitchen.com | 192,082 |
| New Civil Rights Movement | thenewcivilrightsmovement.com | 224,004 |
| All Cute All the Time | allcuteallthetime.com | 266,126 |
| Inspiring Day | inspiringday.com | 330,692 |
| Newshounds | newshounds.us | 432,266 |
| Brave New Films | bravenewfilms.org | 703,101 |
| Mark Latham Outsiders | marklathamsoutsiders.com | 763,959 |

Otherwise, findings resembled data from the 2016 annual report. Political, advocacy and nonprofit customers accounted for 77.2% of NationBuilder’s customers in the annual report, whereas non-governmental organisations, political advocacy groups, political parties or campaigns, and unions comprised 83.2% of the sample. Unlike the annual reports, the sample included nine media-based organisations out of the 125 sites, representing 7.2% of the findings. Other users were marginal. There was a curious absence of brand ambassadors, even though NationBuilder highlights these applications prominently in its annual reports and describes 1% of its customers as such in its 2017 report.
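As a quick sanity check of these shares, the arithmetic is straightforward: 9 media sites out of 125 is 7.2%, and an 83.2% combined political share implies 104 of the 125 coded sites. A tiny illustrative snippet (the 104-site figure is back-calculated from the stated percentage, not taken from the study's raw data):

```python
# Counts stated in the text: 125 coded sites, 9 media organisations.
TOTAL = 125
media_sites = 9
# NGO + advocacy + party/campaign + union combined share of 83.2% implies:
political_sites = round(0.832 * TOTAL)

print(f"media share: {media_sites / TOTAL:.1%}")          # 7.2%
print(f"political sites implied: {political_sites}")      # 104
print(f"political share: {political_sites / TOTAL:.1%}")  # 83.2%
```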

Results of scan 2: extremists or hate groups using NationBuilder

The second scan found one use case by a known hate group as defined by the Southern Poverty Law Center, Act for America (ranked 72nd in sample). The Southern Poverty Law Center describes the group as the “largest anti-Muslim group in America”. Act for America used NationBuilder until August 2018 when it switched to an open-source equivalent, Drupal and CiviCRM (cf. McKelvey, 2011). Act for America did not state the reason for the switch or reply to questions.

Covert political organising?

Three media outlets stood out in the sample: Faith Family America, Inspiring Day and All Cute All the Time. Each site used attention-grabbing headlines (also known as clickbait) to present curated news, updates about the British monarchy, and celebrity news that were, respectively, conservative, religious and innocuous (rather than cute). None of these sites listed staff in a masthead or provided many details about their reporting; instead, they encouraged users to join the community and promoted their Facebook groups.

Figure 4: Faith Family America’s front page, capture 23 April 2019

All three outlets were owned by the company Strategic Media 21 (SM21), a fact only apparent from the sites’ identical privacy policies. Now offline, SM21 was based in San Jose, California. It seems to have been a digital marketing firm with two different web presences: one for content marketing and one for digital strategy. Neither disclosed much information about the company, but its business strategy seems to have been manufacturing audiences for political advertisers. SM21 identified demographics, then created specific outlets, like Faith Family America for conservative voters, in the hope of building a dedicated audience for advertising. Data broker L2 blogged about its 2016 partnership with SM21 on a targeted Facebook political advertising campaign. In this case, SM21 was acting in its digital strategy role, working with clients “on messaging, creative, plans out the buy and launches the campaign using your targeted list” (Westcott, 2016). These services have proved valuable: SM21 received $2,418,592 USD in political expenditures since 2014, according to OpenSecrets. Its biggest clients were the conservative Super PACs (political action committees) Vote to Reduce Debt and Future in America.

Strategic Media 21 raises suspicions that NationBuilder’s data analytics might be used covertly, a kind of native advertising without the journalism. This might be an application of what Daniels calls cloaked websites “published by individuals or groups that conceal authorship or feign legitimacy in order to deliberately disguise a hidden political agenda” (2009, p. 661). Kim et al. describe similar tactics as stealth media, “a system that enables the deliberate operations of political campaigns with undisclosed sponsors/sources, furtive messaging of divisive issues, and imperceptible targeting” (2018, p. 2). By building these niche websites and corresponding Facebook groups that crosspost their content, SM21 has created a political advertising business. NationBuilder features might assist in this business; its Match feature connects email addresses with other social media accounts, and its Political Capital feature monitors these feeds for certain activities.

Suspicions that Strategic Media 21 used NationBuilder for its data-mining features appear well founded. According to emails released as part of a suit filed against Facebook by the Office of the Attorney General for the District of Columbia, Facebook employees discussed Cambridge Analytica, NationBuilder and SM21 as all being in violation of its data sharing arrangements (Wong, 2019). As one internal document dated 22 September 2015 explains,

One vendor offering beyond [Cambridge Analytica] we're concerned with (given their prominence in the industry ) is NationBuilder’s “Social Matching,” on which they've pitched our clients and their website simply says “Automatically link the emails in your database to Facebook, Twitter, Linkedin and Klout profiles, and pull in social engagement activity.” I'm not sure what that means, and don't want to incorrectly tell folks to avoid it, but it is definitely being conflated in the market with other less above board services. Can you help clarify what they're actually doing?

Employees worried that “these apps’ data-scraping activity [were] likely non-compliant”, according to a reply dated 30 September 2015, and the thread actively debated the matter for months. Facebook employees singled out SM21 in a comment on 20 October 2015. It begins,

thanks for confirming this seems in violation. [REDACTED] mentioned there is a lot of confusion in the political space about how people use Facebook to connect with other offline sets of data. In particular, Strategic Media 21 has been exerting a good deal of pressure on one of our clients to take advantage of this type of appending.

These concerns ensued even as Facebook employees reacted to a Guardian article on 11 December 2015 entitled “Ted Cruz using firm that harvested data on millions of unwitting Facebook users” – one of the first stories to develop in the ongoing scandal involving Cambridge Analytica and Facebook data sharing (Davies, 2015). What ultimately happened to NationBuilder and Strategic Media 21 has not been disclosed to date. NationBuilder still advertises its social matching features. SM21, on the other hand, has gone offline, with its website available for purchase as of September 2019.

This evidence raises our first problem of acceptable use: should NationBuilder be used by covert or stealth media to enable the deceptive or non-consensual collection of data? Strategic Media 21 thus parallels Cambridge Analytica, whose profiles users unwittingly trained by filling out quizzes on Facebook (Cadwalladr and Graham-Harrison, 2018). Visiting websites run by Strategic Media 21 and joining related groups might unwittingly feed advertising profiles harvested through NationBuilder. This is a serious privacy harm noted by both the UK Information Commissioner’s Office (2018) and the Information and Privacy Commissioner for British Columbia (2019), which raised the issue of social matching in their reports on NationBuilder.

Advocacy, journalism or outrage?

NationBuilder has become entangled in the ethics of entrepreneurial journalism and the boundaries between editorial and fundraising through The Rebel, its Australian affiliate Mark Latham’s Outsiders, and, to a lesser extent, Newshounds (Hunter, 2016; Porlezza and Splendore, 2016). All three sites rely on crowdfunding, reminding their readers that they need financial support. Newshounds.us is a media watchdog blog covering Fox News that asks its visitors to donate to support its coverage. The Rebel is a Canadian news start-up, established at the closure of Sun News TV, or what was called Fox News North. Though start-ups, these outlets position themselves as journalism organisations. Newshounds mentions its editor’s journalism degree. The Rebel asks its visitors to subscribe and to help support its journalism.

The line between fundraising and editorial is a clear ethical concern for journalism. As Porlezza and Splendore note in a thoughtful review of accountability and transparency issues in entrepreneurial journalism, the industry has to deal with a challenge “that touches the ethical core of journalism: are journalists in start-ups able to distinguish between their different and overlapping goals of publisher, fundraiser and journalist?” (2016, p. 197). Crowdfunding challenges ethical practice by requiring journalists to pitch and report their stories to the public. At its most extreme, fundraising may tip journalism into what Berry and Sobieraj call outrage public opinion media, “recognisable by the rhetoric that defines it, with its hallmark venom, vilification of opponents, and hyperbolic reinterpretations of current events” (2016, p. 5). Reporting, in this case, becomes a means to outrage audiences and channel that emotion into donations.

The Rebel, for example, blurred the line between financing a movement and a news outlet. In a now-deleted post on the NationBuilder blog, Torch Agency, the creative agency for The Rebel, explains NationBuilder’s role in launching what it called “Canada’s premier source of conservative news, opinion and activism”. The post continues,

In 36 hours, we built a fully-functional NationBuilder site complete with a database and communication headquarters... The result: through compelling content and top-notch digital tools, The Rebel raised over $100,000 CAD in less than twelve hours providing crucial early funding for its continuation.

The Rebel promised to use NationBuilder to better engage news audiences. The Rebel has repeatedly asserted its status as a journalism outlet against claims to the contrary. It enlisted the support of national press organisations PEN Canada and Canadian Journalists for Free Expression after being denied press credentials for a UN climate conference on the grounds that it was “advocacy journalism” (Drinkwater, 2016). In the Canadian province of Alberta, The Rebel successfully protested being removed from the media gallery because it wasn’t a “journalist source” (Edmiston, 2016).

The Rebel's response to a Canadian terrorist attack best frames the problem of distinguishing between advocacy, fundraising and journalism, as well as NationBuilder's challenges in defining acceptable use. On 29 January 2017, a man entered a mosque in Québec City with an AK-47, killing six, seriously wounding five and injuring twelve people (Saminather, 2018). The Rebel launched the website QuebecTerror.com the next day. The initial page urged visitors to donate to send a Rebel reporter to cover the aftermath. Days after its claims had been discredited by other outlets, the site still described the killing as inter-mosque violence, based on a mistranslation of a public YouTube video. Rather than presenting itself as a journalistic report, the QuebecTerror website read like a conventional email fundraising pitch, depicting a dire reality – in this case a “truth” the mainstream media would not report – solvable through donations.

The language and substance of The Rebel’s reporting on the Québec terror attack resemble the tactics of outrage media: inflammatory rhetoric complemented, in this case, by a service to mobilise those emotions (Berry and Sobieraj, 2014). The Rebel’s response to the Québec terror attack thus raises a different problem than journalists being uncomfortable asking for money, as Hunter (2016) notes in a review of crowdfunding in journalism. Here fundraising overtakes reporting; stories are optimised for outrage. The problem is not new, but rather a consequence of the movement of practices between separate fields. Using the news to solicit funds is a known email marketing tactic: emails that reacted to the news had the highest open rates, according to an analysis of Hillary Clinton’s email campaigning (Detrow, 2015). NationBuilder may streamline outrage tactics by channelling user engagement. Its path feature, called a funnel or a ladder in marketing, tries to nudge user behaviour toward certain goals. Taken together, NationBuilder might ease this questionable form of crowdfunding in entrepreneurial journalism and encourage outrage tactics.

These concerns raise a second question: should NationBuilder be used in journalism, especially on hyper-partisan sites or outrage media already blurring the line between reporting, advocacy and fundraising? For its part, The Rebel did experience turmoil over fundraising ethics. It suffered a scandal when a former correspondent accused the site of misusing funds, pointing to a disclaimer on the website that stated, “surplus funds raised for specific initiatives will be used for other costs associated with that particular project, such as website development, website hosting, mail, and other such expenses” (Gordon and Goldsbie, 2017). Seemingly, any campaign was part of a general pool of revenue, adding to concerns that certain stories might be juiced to bring in more money to general revenues.

These first two cases situate NationBuilder as part of the networked press. Ananny (2018) introduced the concept of the networked press to argue that journalism exists within larger sociotechnical systems, of which NationBuilder is a part. Changes or disruption in these systems, evidenced through the rapid uptake of large social networking sites, do not necessarily imply increased press freedom; instead, they require journalists to acknowledge and adapt to broader infrastructural changes. Just as outlets and journalists need to consider these changes, so too does NationBuilder in understanding how its technology participates in the infrastructure of the networked press. As seen above, NationBuilder already participates in these ethical quandaries, and its emphasis on mobilisation and fundraising may be ill-suited for journalistic outlets. NationBuilder might enable data collection and profiling without sufficient audience consent. It might also tip the balance from journalism to outrage media by being a better tool for fundraising than for publishing stories. How does a firm like NationBuilder recognise its role in facilitating these transfers, particularly the expansion of marketing as the ubiquitous logic of cultural production? Should it ultimately be part of press infrastructure? Does using a political engagement platform ultimately improve journalistic practice? These matters require a more hands-on approach than NationBuilder presently offers.

Illiberal uses of political technology

Act for America engages in identity-based political advocacy, targeting American Muslims. Its mission includes immigration reform and combating terrorism. According to the Southern Poverty Law Center, its leadership has questioned the right to citizenship of American Muslims, alluding to mass deportation. Politically, such statements seem at odds with the rules of what political theorist Nancy Rosenblum calls the “regulated rivalry” of liberal democracy. To protect itself, a militant democracy needs to ban parties that, if elected or capable of influencing government, “would implement discriminatory policies or worse: strip opposition religious or ethnic groups of civil or political rights, discriminate against minorities (or majorities), deport despised elements of the population” (Rosenblum, 2008, p. 434). Act for America seems to have engaged in such acts in targeting Muslim Americans.

Figure 5: Act for American website, captured 23 April 2019

NationBuilder then faces a third existential question: should groups that mobilise hate have access to its innovations? Other firms, like PayPal, stopped offering Act for America services after ProPublica reported on their relationship (Angwin et al., 2017a). While defining hate might be more difficult for an American firm, since the US has no clear hate speech laws, NationBuilder operates in many countries with clear laws that could guide corporate policy. That these terms are left missing or undefined in 3DNA’s Acceptable Use Policy is troubling.

The more challenging question that faces the larger industry is what responsibility do service providers have for the speech acts made on their services? As Whitney Phillips and Ryan Milner (2017) reflect, “it is difficult…to know how best – most effectively, most humanely, most democratically – to respond to online speech that antagonises, marginalises, or otherwise silences others. On one level, this is a logistic question about what can be done… The deeper and more vexing question is what should be done” (2017, p. 201) This vexing question is a lingering one, echoing the origins of modern broadcasting policy, which begins with governments and media industries attempting to reconcile preserving free speech without propagating hate speech. The American National Association of Broadcasters established a code of conduct in 1939 in part to ban shows like Father Coughlin’s that aired speeches “plainly calculated or likely to rouse religious or racial hatred and stir up strife” (Miller, 1938, as cited in Brown, 1980, p. 203). The decision did not solve the problem, but rather established institutions to consider these normative matters.

NationBuilder is not merely a broadcaster or a communication channel, but a mobilisation tool. The use of NationBuilder by hate groups should trouble the wider political technology industry and the field of political communication. Mobilisation is part of a tradition in democratic politics in which media technology does not just inform publics, but cultivates them. As Sheila Jasanoff notes, American “laws conceived of citizens as being not necessarily knowing but knowledge-able–that is, capable at need of acquiring the knowledge needed for effective self-governance. This idea of an epistemically competent citizen runs through the American political thought from Thomas Jefferson to John Dewey and beyond” (Jasanoff, 2016, p. 239). Communication is about formation as much as information, about cultivating publics. NationBuilder punctuates an existential question for political technology: is it exceptional or mundane? Is it a glorified spreadsheet or a special class of technology? In short, if NationBuilder is an effective tool of political mobilisation, should it effectively mobilise hate?

From corporate social responsibility to liberal democratic responsibility

Finding solutions to the problematic cases above is part of an international debate about platform governance (DeNardis, 2012; Duguay, Burgess, and Suzor, 2018; Gillespie, 2018; Gorwa, 2019). Platform governance refers to the conduct of large information intermediaries and, by extension, the social impacts of publicly accessible and networked computer technology. Where human rights is one emerging value set for platform governance (Kaye, 2019), the international challenge now is to find the appropriate ‘web of influence’ that might address human rights concerns and the numerous regulatory challenges posed by large technology firms (Braithwaite and Drahos, 2000).

Options include external rules – such as fines and penalties through privacy, data protection or election law – and co-regulatory approaches, like codes of conduct and best practices, in addition to self-regulation, specifically corporate social responsibility and responsibilities bestowed for liability protection. Self-regulation dominates the status quo, at least in the US, where the rules are largely self-written by platforms, in large part due to their public service obligations under the US Telecommunications Act (Gillespie, 2018). Companies like Facebook have acknowledged a need for change, publicly calling for government regulation (Zuckerberg, 2018). Today, platforms moderate users and conversations in good faith under acceptable use rules; users might be banned, suspended, surveilled, deprioritised or demonetised under these policies (Myers West, 2018). The stakes now involve a debate about the public obligations of platforms and whether they should self-police or be deputised to enforce government rules (DeNardis, 2012; Tusikov, 2017).

Firms like NationBuilder face even greater regulatory challenges, as the field has historically been free from much oversight or responsibility. Many western democracies did not consider political parties or political data to fall under the jurisdiction of privacy law. Enforcement was also lacking: even though political parties were regulated in Europe, regulators only took their responsibilities seriously after the Facebook/Cambridge Analytica scandal (Bennett, 2015; Howard and Kreiss, 2010). Even with new data protection laws, intermediaries still face limited liability, as enforcement tends to target the user rather than the service provider. Service providers are exempt from liability or penalties for misuse, except in certain cases such as copyright. For its own part, NationBuilder claims zero liability for interactions and hosted content, according to its Terms of Service.

Political engagement platforms do face an uncertain global regulatory context. On one hand, they function as service providers largely exempt from liability under existing laws. On the other hand, international law is uneven and changing (for a recent review, see Bennett and Oduro-Marfo, 2019). Public inquiries in the United Kingdom and Canada have focused more on these companies, and their status may be changing. A joint investigation of AggregateIQ by the Privacy Commissioner of Canada and the Information and Privacy Commissioner for British Columbia found that the third-party service provider “had a legal responsibility to check that the third-party consent on which they were relying applied to the activities they subsequently performed with that data” (2019, p. 22). The implication is that AggregateIQ had a corporate responsibility to abide by privacy laws in the provision of its services. The same likely holds for NationBuilder.

Amidst regulatory uncertainty, corporate social responsibility might be the most immediate remedy to questionable uses of NationBuilder. Its mission today might be read as a ‘functionalist business ethics’ that believes the product in and of itself is a social good and that more access, or more sales, improves the quality of elections. Other approaches to corporate social responsibility, by contrast, favour an integrative business ethics where “a company’s responsibilities are not merely restricted in one way or another to the profit principle alone but to sound and critical ethical reasoning” (Busch and Shepherd, 2014, p. 297). Where future debates might require consideration of NationBuilder’s obligations to liberal democracy, the next section considers how NationBuilder’s mission and philosophy might be clarified through the company’s acceptable use policy. NationBuilder might not have to become partisan, but it cannot be neutral toward these institutions of liberal democracy, at least if it wants to continue to believe in its mission to revolutionise politics.

Revising the Acceptable Use Policy is possible and has happened before. Clearly stating the relationship between its mission and prohibited uses would reverse past amendments that narrowed corporate responsibilities. The Acceptable Use Policy as of August 2019, last updated 1 May 2018, is more open than prior iterations. Most bans concern computer security, prohibiting uses that overload infrastructure or access data without authorisation. The policy does prohibit “possessing or disseminating child pornography, facilitating sex trafficking, stalking, troll storming, threatening imminent violence, death or physical harm to any individual or group whose individual members can reasonably be identified, or inciting violence”. Until 2014, 3DNA covered acceptable use as part of its terms of service; afterwards it became a separate document. Its Terms of Service agreement from 29 March 2011 banned specific user content, including “any information or content that we deem to be unlawful, harmful, abusive, racially or ethnically offensive, defamatory, infringing, invasive of personal privacy or publicity rights, harassing, humiliating to other people (publicly or otherwise), libellous, threatening, profane, or otherwise objectionable” as well as a subsequently removed ban on posting incorrect information. These clauses were removed in the 2014 update, which reduced prohibited uses to 15, but have slowly been added back: the most recent acceptable use policy, as of 1 May 2018, had 31 prohibited uses, restoring clauses regulating user activities.

Recommendation #1: Reconcile its mission statement with its prohibited uses

NationBuilder’s mission is to connect anyone regardless of “race, age, class, religion, educational background, ideology, gender, sexual orientation or party”. By contrast, its Acceptable Use Policy does not consider the positive freedoms implied in this mission, which could conceivably prohibit campaigns aimed at excluding people from participating in politics. A revised Acceptable Use Policy should apply the implications of the corporate mission to its prohibited uses. Act for America, for example, targets its opponents by race and advocates for greater policing, terrorism laws and immigration enforcement that could disproportionately affect Muslim Americans, acting against NationBuilder’s vision of “a world where everyone has the freedom and opportunity to create what they are meant to create”. A revision might prohibit campaigns or parties targeting assigned identities like race, age, gender or sexual orientation, particularly when messages incite hate, while preserving customers’ right to campaign against ideology, party or other chosen or elective politicised issues. To achieve such a mission, NationBuilder may have to restrict access on political grounds (also called de-platforming) or restrict certain features. 5

Harmonising its position on political freedom may prompt industry-wide reflection on the function of political technology. How do these services protect the liberal democratic institutions they ostensibly promise to disrupt? In finding shared values, NationBuilder has to consider its place in a partisan field. Can it navigate between parties to describe ethical campaigning, or, alternatively, must it find other companies with shared nonpartisan or libertarian values? The likely outcome either way is a code of conduct for digital campaigning, similar to the Alliance of Democracies Pledge for Election Integrity or the codes of conduct of the American Association of Political Consultants and European Association of Political Consultants, which discourage campaigns based on intolerance and discrimination. In doing so, NationBuilder might force partisan firms to be more explicit about their professional ethics.

Recommendation #2: Require disclosure on customers’ websites

NationBuilder should disclose when it is used even if it cannot decide whether it should be used. Two of the three questionable uses might have benefitted from the organisations disclosing their use of the political engagement platform, especially in journalism. At a minimum, NationBuilder should require sites to disclose that they use NationBuilder, ideally through an icon or other notice in the page’s footer that might create the possibility of public awareness (Ezrahi, 1999). NationBuilder might also consider requiring customers to disclose which tracking features, such as Match and Political Capital, are enabled on a website, not unlike the disclosures of a site’s data tracking required under Europe’s Cookie Law.

NationBuilder might further standardise the reporting of uses found in its annual report and potentially release data in a separate report. Transparency reports have become an important, albeit imperfect, reporting tool in telecommunications and social media industries (Parsons, 2019). These reports, ideally, would continue the preliminary method used in this paper, breaking down NationBuilder’s use by industry over time and potentially expanding data collection to include other trends such as use by country, use by party and the popularity of features. Such proactive disclosure might also normalise greater transparency in a political technology industry known for its secrecy.

Recommendation #3: Clarify relationship to domestic privacy law

A revised Acceptable Use Policy might define NationBuilder’s expectations for privacy rights, both to explain its normative vision for privacy and to improve its customers’ implementation of local privacy law. The current policy merely prohibits applications that “infringe or violate the intellectual property rights (including copyrights), privacy rights or any other rights of anyone else (including 3DNA)”. The clause clarifies neither the meaning of privacy rights nor the governing jurisdiction. Elsewhere 3DNA states that all its policies “are governed by the internal substantive laws of the State of California, without respect to its conflict of laws principles”. Such ambiguity confounds a clear interpretation of the privacy rights, law and regulation mentioned in the policy. A revised clause should state NationBuilder’s position on privacy as a human right, in a way that offers some guidance as to whether local law meets its standards, and deny access in countries that fall short of its privacy expectations. Further, the Acceptable Use Policy should clarify that customers are expected to abide by local privacy law and, in major markets, whether NationBuilder has any reporting obligations to privacy offices.

Clarifying its position on privacy rights recognises the important function NationBuilder plays in educating its customers on the law. NationBuilder may help implement the “proactive guidance on best campaigning practices” recommended by Bennett and Oduro-Marfo (2019, p. 54). For its GDPR compliance, NationBuilder has built a blog and offers many educational resources to help customers understand how to campaign online and respect the law. These posts clearly state that they are not legal advice, but they do help practitioners interpret the law. Similar posts could help clients understand whether they should disable certain features in NationBuilder, such as Match or Political Capital, to comply with their domestic privacy law. Revisions to its Acceptable Use Policy might be another avenue for NationBuilder to educate its customers.

Adding privacy to its corporate mission may further signal NationBuilder’s corporate responsibility. NationBuilder has an altogether different relationship to customer privacy than advertising-based technology firms: its revenues come from being a service provider and securing data. With growing pressure on political parties to improve their cyber-security, NationBuilder can help its clients better protect their voter data as well as call for better privacy protection in politics overall. Indeed, NationBuilder could advocate for privacy law to apply to its political clients, both to simplify its regulatory obligations and to reduce risk. Improving privacy may lessen the institutional risk of being associated with major privacy violations as well as simplify the complex work of setting privacy rules on its own. NationBuilder might thus become a global advocate for better privacy and data protection, a role that remains unfulfilled long after the public controversies.

Conclusion

This paper has reported the results of empirical research on the acceptable use of a political technology. The results demonstrate that political technologies have questionable uses within politics. Specifically, when does a political movement exceed the limits of liberal democratic discourse? When are a technology’s uses in journalism and advertising unacceptable? The experiment demonstrates that harms to liberal democracy can be a reasonable way to judge technological risks. Liberal democratic norms are another factor to consider in the wider study of software and technological accountability (Johnson and Mulvey, 1995; Nissenbaum, 1994). These concerns have a long history. Norbert Wiener, who helped develop digital computing, warned against its misuse in Cold War America for the management of people (Wiener, 1966, p. 93). More recently, science and technology scholar Sheila Jasanoff (2016) questions whether the benefits of technological innovation outweigh the risks of global catastrophe, inequality, and harms to human dignity. While catastrophic global devastation is commonly seen as a questionable use of technology (unless it concerns the climate), there is less consensus about how technology might undermine democracy, of which liberal democracy is just one set of norms. Which democracy should be defended is itself debated, with fault lines drawn between representative, direct and deliberative democracy, as well as between liberal and republican traditions (Karppinen, 2013). My method helps to clarify this debate by inductively finding uses that might challenge many theories of democracy. Further research could extend the analysis to concerns particular to different forms of democracy and democratic theories.

My specific recommendations for NationBuilder may improve the accountability of the political technology industry at large. Oversight is a major problem in the accountability of political platforms, and my methods could easily be scaled to observe more companies and countries. Privacy, information and election regulators could no doubt implement this approach as part of their situational awareness. The questionable uses identified here offer questions to watch for:

  1. Does the technology facilitate or ease deceptive or non-consensual data collection?
  2. Does the technology undermine journalistic standards and consider its role in the networked press?
  3. Does the technology facilitate the mobilisation of hate groups?

Even where remedies to these challenges are unclear, ongoing monitoring could at the very least identify potential harms sooner than academic research can.

Questionable uses of NationBuilder should trouble the company as well as the larger political technology industry and the field of political communication. Faith in political technologies has changed campaign practice in many democracies and attracted ongoing international regulatory attention concerned with trust and fairness during elections. Technologies like NationBuilder are premised on the value of communications to political engagement; they are designed to increase engagement and improve efficiency. NationBuilder and its peers are a special class of political technology, and their obligations to liberal democratic values should therefore be scrutinised. If 3DNA, a company seeking to better politics, suffers these abuses, then what will come from political firms with less idealism?

Acknowledgements

The author wishes to acknowledge Colin Bennett, the Surveillance Studies Centre, the Office of the Information and Privacy Commissioner for British Columbia, and Commissioner Michael McEvoy for organising the research workshop on data-driven elections. In addition, the author extends thanks to Mike Miller, the Social Science Research Council, Erika Franklin Fowler, Sarah Anne Ganter, Natali Helberger, Shannon McGregor, Rasmus Kleis Nielsen and especially Dave Karpf and Daniel Kreiss for organising the 2019 International Communication Association post-conference, “The Rise of Platforms”, where versions of this paper were presented and received helpful feedback. Sincere thanks to the anonymous reviewers, Frédéric Dubois, Robert Hunt, Tom Hackbarth and especially Colin Bennett for their feedback and suggestions.

References

Adams, K., Barrett, B., Miller, M., & Edick, C. (2019). The Rise of Platforms: Challenges, Tensions, and Critical Questions for Platform Governance [Report]. New York: Social Science Research Council. https://doi.org/10.35650/MD.2.1971.a.08.27.2019

Ananny, M. (2018). Networked press freedom: creating infrastructures for a public right to hear. Cambridge, MA: The MIT Press.

Angwin, J., Larson, J., Varner, M., & Kirchner, L. (2017a, August 19). Despite Disavowals, Leading Tech Companies Help Extremist Sites Monetize Hate. ProPublica. Retrieved from https://www.propublica.org/article/leading-tech-companies-help-extremist-sites-monetize-hate

Angwin, J., Larson, J., Varner, M., & Kirchner, L. (2017b, August 19). How We Investigated Technology Companies Supporting Hate Sites. ProPublica. Retrieved from https://www.propublica.org/article/how-we-investigated-technology-companies-supporting-hate-sites

Baldwin-Philippi, J. (2015). Using technology, building democracy: digital campaigning and the construction of citizenship. New York: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780190231910.001.0001

Baldwin-Philippi, J. (2017). The Myths of Data-Driven Campaigning. Political Communication, 34(4), 627–633. https://doi.org/10.1080/10584609.2017.1372999

Bennett, C. (2015). Trends in voter surveillance in western societies: privacy intrusions and democratic implications. Surveillance & Society, 13(3/4), 370–384. https://doi.org/10.24908/ss.v13i3/4.5373

Bennett, C. J., & Oduro-Marfo, S. (2019, October). Privacy, Voter Surveillance, and Democratic Engagement: Challenges for Data Protection Authorities. 2019 International Conference of Data Protection and Privacy Commissioners (ICDPPC), Greater Victoria. Retrieved from https://web.archive.org/web/20191112101932/https:/icdppc.org/wp-content/uploads/2019/10/Privacy-and-International-Democratic-Engagement_finalv2.pdf

Berry, J. M., & Sobieraj, S. (2016). The outrage industry: political opinion media and the new incivility. New York: Oxford University Press.

Bodó, B., Helberger, N., & de Vreese, C. H. (2017). Political micro-targeting: a Manchurian candidate or just a dark horse? Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.776

Braun, J. A., & Eklund, J. L. (2019). Fake News, Real Money: Ad Tech Platforms, Profit-Driven Hoaxes, and the Business of Journalism. Digital Journalism, 7(1), 1–21. https://doi.org/10.1080/21670811.2018.1556314

Brown, J. A. (1980). Selling airtime for controversy: NAB self‐regulation and Father Coughlin. Journal of Broadcasting, 24(2), 199–224. https://doi.org/10.1080/08838158009363979

Busch, T., & Shepherd, T. (2014). Doing well by doing good? Normative tensions underlying Twitter’s corporate social responsibility ethos. Convergence, 20(3), 293–315. https://doi.org/10.1177/1354856514531533

Cadwalladr, C., & Graham-Harrison, E. (2018, March 17) How Cambridge Analytica Turned Facebook ‘Likes’ into a Lucrative Political Tool. The Guardian. Retrieved from https://www.theguardian.com/technology/2018/mar/17/facebook-cambridge-analytica-kogan-data-algorithm.

Daniels, J. (2009). Cloaked websites: propaganda, cyber-racism and epistemology in the digital era. New Media & Society, 11(5), 659–683. https://doi.org/10.1177/1461444809105345

Davies, H. (2015, December 11). Ted Cruz campaign using firm that harvested data on millions of unwitting Facebook users. The Guardian. Retrieved from https://www.theguardian.com/us-news/2015/dec/11/senator-ted-cruz-president-campaign-facebook-user-data

D’Aprile, S. (2011, September 25). Judge Ends Aristotle Advertising Case. Campaigns & Elections. Retrieved from http://www.campaignsandelections.com/campaign-insider/259782/judge-ends-aristotle-advertising-case.thtml

DeNardis, L. (2012). Hidden Levers of Internet Control. Information, Communication & Society, 15(5), 720–738. https://doi.org/10.1080/1369118X.2012.659199

Detrow, S. (2015, December 15). “Bill Wants To Meet You”: Why Political Fundraising Emails Work. All Things Considered, NPR. Retrieved from https://www.npr.org/2015/12/15/459704216/bill-wants-to-meet-you-why-political-fundraising-emails-work

Drinkwater, R. (2016, October 17). Ezra Levant’s Rebel Media denied UN media accreditation. Macleans. Retrieved from https://www.macleans.ca/news/canada/ezra-levant-rebel-media-denied-un-media/

Duguay, S., Burgess, J., & Suzor, N. (2018). Queer women’s experiences of patchwork platform governance on Tinder, Instagram, and Vine: Convergence. https://doi.org/10.1177/1354856518781530

Eatwell, R., & Mudde, C. (Eds.). (2004). Western democracies and the new extreme right challenge. New York: Routledge.

Edmiston, J. (2016, February 17). Alberta NDP says ‘it’s clear we made a mistake’ in banning Ezra Levant’s The Rebel. National Post. Retrieved from https://nationalpost.com/news/politics/alberta-ndps-ban-on-rebel-reporters-to-stay-for-at-least-two-weeks-while-it-reviews-policy-government-says

Elmer, G., Langlois, G., & McKelvey, F. (2012). The Permanent Campaign: New Media, New Politics. New York: Peter Lang.

Ezrahi, Y. (1999). Dewey’s Critique of Democratic Visual Culture and Its Political Implications. In D. Kleinberg-Levin (Ed.), Sites of Vision: The Discursive Construction of Sight in the History of Philosophy (pp. 315–336). Cambridge, MA: The MIT Press.

Freelon, D. G. (2010). ReCal: intercoder reliability calculation as a Web service. International Journal of Internet Science, 5(1), 20–33. Retrieved from https://www.ijis.net/ijis5_1/ijis5_1_freelon.pdf

Gillespie, T. (2007). Wired Shut: Copyright and the Shape of Digital Culture. Cambridge, MA: The MIT Press.

Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. New Haven: Yale University Press.

Gilliam, J. (2016, November 17). Choosing to lead. Retrieved from https://nationbuilder.com/choosing_to_lead

Gordon, G., & Goldsbie, J. (2017, August 17). Ex-Rebel Contributor Makes Explosive Claims In YouTube Video. CANADALAND. Retrieved from https://www.canadalandshow.com/caolan-robertson-why-left-rebel/

Gorwa, R. (2019). What is platform governance? Information, Communication & Society, 22(6). https://doi.org/10.1080/1369118X.2019.1573914

Holcombe, M. (2019, January 13). GoFundMe to refund the $20 million USD raised for the border wall. CNN. Retrieved from https://www.cnn.com/2019/01/12/us/border-wall-gofundme-refund/index.html

Horowitz, B. (2012, March 8). How to Start a Movement [Blog post]. Retrieved from http://www.bhorowitz.com/how_to_start_a_movement

Howard, P. N. (2006). New Media Campaigns and the Managed Citizen. Cambridge: Cambridge University Press.

Howard, P. N., & Kreiss, D. (2010). Political parties and voter privacy: Australia, Canada, the United Kingdom, and United States in comparative perspective. First Monday, 15(12). https://doi.org/10.5210/fm.v15i12.2975

Hunter, A. (2016). “It’s Like Having a Second Full-Time Job”: Crowdfunding, journalism, and labour. Journalism Practice, 10(2), 217–232. https://doi.org/10.1080/17512786.2015.1123107

Information Commissioner’s Office. (2018). Democracy disrupted? Personal information and political influence. Information Commissioner’s Office. https://ico.org.uk/media/2259369/democracy-disrupted-110718.pdf

Jasanoff, S. (2016). The Ethics of Invention: Technology and the Human Future. New York: W.W. Norton & Company.

Johnson, D. G., & Mulvey, J. M. (1995). Accountability and computer decision systems. Communications of the ACM, 38(12), 58–64. https://doi.org/10.1145/219663.219682

Johnson, D. W. (2016). Democracy for Hire: A History of American Political Consulting. New York: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780190272692.001.0001

Karpf, D. (2016a). Analytic activism: digital listening and the new political strategy. New York: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780190266127.001.0001

Karpf, D. (2016b). The partisan technology gap. In E. Gordon & P. Mihailidis (Eds.), Civic media: technology, design, practice (pp. 199–216). Cambridge, MA; London: The MIT Press.

Karpf, D. (2018). The many faces of resistance media. In D. S. Meyer & S. Tarrow (Eds.), The Resistance: The Dawn of the Anti-Trump Opposition Movement (pp. 143–161). New York: Oxford University Press. https://doi.org/10.1093/oso/9780190886172.003.0008

Karppinen, K. (2013). Uses of democratic theory in media and communication studies. Observatorio, 7(3), 1–17. Retrieved from http://www.scielo.mec.pt/scielo.php?script=sci_arttext&pid=S1646-59542013000300001&lng=en&nrm=iso

Kaye, D. (2019). Speech police: The global struggle to govern the Internet. New York: Columbia Global Reports.

Kim, Y. M., Hsu, J., Neiman, D., Kou, C., Bankston, L., Kim, S. Y., … Raskutti, G. (2018). The Stealth Media? Groups and Targets behind Divisive Issue Campaigns on Facebook. Political Communication, 35(4), 515–541. https://doi.org/10.1080/10584609.2018.1476425

Kreiss, D. (2016). Prototype politics: technology-intense campaigning and the data of democracy. New York: Oxford University Press.

Kreiss, D. (2017). Micro-targeting, the quantified persuasion. Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.774

Kreiss, D., & Jasinski, C. (2016). The Tech Industry Meets Presidential Politics: Explaining the Democratic Party’s Technological Advantage in Electoral Campaigning, 2004–2012. Political Communication, 1–19. https://doi.org/10.1080/10584609.2015.1121941

Kreiss, D., & McGregor, S. C. (2018). Technology Firms Shape Political Communication: The Work of Microsoft, Facebook, Twitter, and Google With Campaigns During the 2016 U.S. Presidential Cycle. Political Communication, 35(2), 155–177. https://doi.org/10.1080/10584609.2017.1364814

Levy, S. (2002). Crypto: Secrecy and Privacy in the New Code War. London: Penguin.

Liao, S. (2019, March 22). GoFundMe pledges to remove anti-vax campaigns. The Verge. Retrieved from https://www.theverge.com/2019/3/22/18277367/gofundme-anti-vax-campaigns-remove-pledge

Marland, A. (2016). Brand Command: Canadian Politics and Democracy in the Age of Message Control. Vancouver: University of British Columbia Press.

McEvoy, M. (2019, February 6). Full Disclosure: Political parties, campaign data, and voter consent [Investigation Report No. P19-01]. Victoria: Office of the Information and Privacy Commissioner for British Columbia. Retrieved from https://www.oipc.bc.ca/investigation-reports/2278

McEvoy, M., & Therrien, D. (2019). AggregateIQ Data Services Ltd. [Investigation Report No. P19-03 PIPEDA-035913; p. 29]. Victoria; Gatineua: Office of the Information and Privacy Commissioner for British Columbia; Office of the Privacy Commissioner of Canada. https://www.oipc.bc.ca/investigation-reports/2363

McKelvey, F. (2011). A Programmable Platform? Drupal, Modularity, and the Future of the Web. The Fibreculture Journal, (18), 232–254. Retrieved from http://eighteen.fibreculturejournal.org/2011/10/09/fcj-128-programmable-platform-drupal-modularity-and-the-future-of-the-web/

McKelvey, F., & Piebiak, J. (2018). Porting the political campaign: The NationBuilder platform and the global flows of political technology. New Media & Society, 20(3), 901–918. https://doi.org/10.1177/1461444816675439

Mosco, V. (2004). The Digital Sublime: Myth, Power, and Cyberspace. Cambridge: The MIT Press.

Mouffe, C. (2005). The Return of the Political. New York: Verso.

Myers West, S. (2018). Censored, suspended, shadowbanned: User interpretations of content moderation on social media platforms. New Media & Society, 20(11), 4366–4383. https://doi.org/10.1177/1461444818773059

NationBuilder. (n.d.). NationBuilder mission and beliefs. NationBuilder. Retrieved January 7, 2020, from https://nationbuilder.com/mission

Nissenbaum, H. (1994). Computing and Accountability. Communications of the ACM, 37(1), 72–80. https://doi.org/10.1145/175222.175228

Parsons, C. (2019). The (In)effectiveness of Voluntarily Produced Transparency Reports. Business & Society, 58(1), 103–131. https://doi.org/10.1177/0007650317717957

Phillips, W., & Milner, R. M. (2017). The ambivalent Internet: mischief, oddity, and antagonism online. Malden: Polity.

Porlezza, C., & Splendore, S. (2016). Accountability and Transparency of Entrepreneurial Journalism. Journalism Practice, 10(2), 196–216. https://doi.org/10.1080/17512786.2015.1124731

Price, M. (2017, August 16). Why We Terminated Daily Stormer [Blog post]. Retrieved from https://blog.cloudflare.com/why-we-terminated-daily-stormer/

Rafter, K. (2016). Introduction: understanding where entrepreneurial journalism fits in. Journalism Practice, 10(2), 140–142. https://doi.org/10.1080/17512786.2015.1126014

Roberts, S. T. (2019). Behind the screen: content moderation in the shadows of social media. New Haven: Yale University Press.

Rosenblum, N. L. (2008). On the side of the angels: an appreciation of parties and partisanship. Princeton: Princeton University Press.

Saminather, N. (2018, August 10). Factbox: Canada’s biggest mass shootings in recent history. Reuters. Retrieved from https://www.reuters.com/article/us-canada-shooting-factbox-idUSKBN1KV2BO

Stark, L. (2018). Algorithmic psychometrics and the scalable subject. Social Studies of Science, 48(2), 204–231. https://doi.org/10.1177/0306312718772094

Streeter, T. (2011). The Net Effect: Romanticism, Capitalism, and the Internet. New York: New York University Press.

Tusikov, N. (2019). Defunding Hate: PayPal’s Regulation of Hate Groups. Surveillance & Society, 17(1/2), 46–53. https://doi.org/10.24908/ss.v17i1/2.12908

Westcott, P. (2016, September 23). Targeted Facebook advertising made possible from L2 and Strategic Media 21 [Blog post]. Retrieved from http://www.l2political.com/blog/2016/09/23/targeted-facebook-advertising-made-possible-from-l2-and-strategic-media-21/

White, H. B. (1961). The Processed Voter and the New Political Science. Social Research, 28(2), 127–150. Retrieved from https://www.jstor.org/stable/40969367

Wong, J. C. (2019, August 23). Document reveals how Facebook downplayed early Cambridge Analytica concerns. The Guardian. Retrieved from https://www.theguardian.com/technology/2019/aug/23/cambridge-analytica-facebook-response-internal-document

Footnotes

1. Promoting new media activism that shames companies for advertising on certain sites, a kind of corporate social responsibility for ad spending (Karpf, 2018).

2. The studies in ongoing reports can be found at: https://citizenlab.ca/2017/02/bittersweet-nso-mexico-spyware/

3. The company provides customers with this data for a fee. Most customers are web technology firms looking for information on who uses their competitors.

4. The 2017 annual report re-categorised its usage statistics using active verbs, such as win or engage, rather than industry. As a result, there is no way to determine usage trends over time. The 2017 annual report also includes a curious ‘Other’ category without much detail. The 2018 report abandoned reporting by industry altogether.

5. See Chapter 7 in Phillips and Milner, 2017 for a good summary of the challenge of public debate and moderation.

Big data and democracy: a regulator’s perspective

This commentary is part of Data-driven elections, a special issue of Internet Policy Review guest-edited by Colin J. Bennett and David Lyon.

Introduction: all roads lead to Victoria, British Columbia

As the Information and Privacy Commissioner for British Columbia, I am entrusted with enforcing the province’s two pieces of privacy legislation – BC’s Freedom of Information and Protection of Privacy Act (FIPPA) and the Personal Information Protection Act (PIPA). When these laws came into force, “Big Data” was not a term in public discourse. All that of course has changed irrevocably.

In late summer 2017, I left the Office of the Information and Privacy Commissioner for BC (OIPC) to take on an assignment with the UK Information Commissioner’s Office (ICO), under the former BC Commissioner, Elizabeth Denham. I had temporarily stepped aside from my role as Deputy Commissioner at the OIPC to help lead the ICO’s investigation of how the UK’s political parties collected and used the personal information of voters (Information Commissioner's Office, United Kingdom, 2018). The enquiry came on the heels of media reports concerning the potential misuse of data during the country’s European Union referendum (Doward, 2017). At the time, I had no idea that two years later I would find myself having come full circle, at the centre of the world’s most notorious data breach - the Facebook/Cambridge Analytica scandal, which affected more than 80 million users worldwide (Badshah, 2018).

Soon after my arrival, I interviewed the key data strategists of the UK’s two largest parties. With their significant resources, these parties were able to gather volumes of voter data and make predictions about voting intentions. They also had the means to target specific classes of voters in pursuit of their support. Those party representatives were very nervous about sharing the mechanics of their work. This reluctance intersects with one of modern democracy’s great challenges, and it was why the ICO launched its investigation: citizens know very little about what information political parties collect about them – and how that information is being used.

The public was concerned about the opacity of political campaign systems even before the ICO began its work. But that concern was soon to grow exponentially. In early 2018, the UK Information Commissioner Elizabeth Denham and I met a young man in a lawyer’s office in London. He was from, of all places, Victoria, BC, and his name was Christopher Wylie.

We were the first regulator or law enforcement agency to talk with Wylie, and his story was sweeping and shocking in its breadth. Many weeks later, the rest of the world would learn the details of how Cambridge Analytica extracted psychological profiles of millions of Facebook users for the purposes of weaponising targeted political messages. Many of those revelations were reported exclusively by The Guardian journalist Carole Cadwalladr, who wrote extensively about the whistleblower beginning in March 2018 (Cadwalladr, 2018).

Suddenly the whole world was paying attention to the explosive mix of new technologies and personal information and how it was affecting political campaigns. The paired names of Cambridge Analytica and Facebook became seared into the public’s consciousness, providing a cautionary tale about what can go wrong when people’s personal information is abused in such a nefarious manner (Meredith, 2018). The Facebook/Cambridge Analytica breach has, without question, shaken the public’s confidence in our democratic political campaigning system.

It is no doubt purely coincidental that so many storylines of this scandal trace their way to Victoria, BC. Adding to the regulatory connection and the whistleblower Christopher Wylie is the Victoria-based company AggregateIQ Data Services (AIQ), which analysed data on behalf of Cambridge Analytica’s parent company, SCL Elections. Victoria is also home to Dr. Colin Bennett, who has long been a leading global authority on these matters, work that has now taken on an even greater urgency. For this reason, the OIPC teamed up with the Big Data Surveillance project coordinated by the Surveillance Studies Centre at Queen’s University and headed by Dr David Lyon. Our office was pleased to host the workshop in April 2019 on “Data-Driven Elections: Implications and Challenges for Democratic Societies,” from which the papers in this collection originated.

Privacy regulators, along with electoral commissioners, are on the frontline of these questions about the integrity of our democratic institutions. However, in some jurisdictions, regulators have very few means to address them, especially as they concern political parties, whose appetites for the personal information of voters are seemingly insatiable. How then does a regulator convince the politicians to regulate themselves?

Home, and another Facebook/Cambridge Analytica investigation

Following the execution of the warrant on Cambridge Analytica’s office in London, I returned home to accept my appointment as BC’s fourth Information and Privacy Commissioner. However, there was no escaping the fallout of the issues I investigated in the UK and their connections to Canada.

As it turned out, the personal information of more than 600,000 Canadian Facebook users had been vacuumed up by Cambridge Analytica (Braga, 2018). But this wasn’t the only Canadian connection to the breach. After acquiring that personal information, Cambridge Analytica (CA) and its parent company SCL Elections needed a way to make the data ready for practical use by CA’s potential clients. That requirement would eventually be filled by AIQ.

With a BC and a Canadian connection to this story it became clear that coordinated regulatory action would be required. The Privacy Commissioner of Canada, Daniel Therrien, and I decided to join forces to look at both the Facebook/CA breach and the activities of AIQ (OIPC news release, 2018).

This joint investigation found that Facebook did little to ensure its users’ data was properly protected. Its privacy protection programme was, as my colleague Daniel Therrien called it, an “empty shell.” We recommended, among other things, that Facebook properly audit all of the apps that were allowed to collect their users’ data (OIPC news release, 2019b). Facebook brazenly rejected our findings and recommendations, which of course underscores another huge obstacle.

How can society hold global giants like Facebook to account? Many data protection authorities, like my office, lack enforcement tools commensurate with the challenges that these companies pose to the public interest. Moreover, my office and that of the federal commissioner have far fewer powers than those available to our European counterparts. I have order-making power, but I cannot levy fines. My federal counterpart does not even possess order-making power; he investigates in response to complaints, or on his own initiative, and makes recommendations. The only real vehicle he has at his disposal to seek a remedy is an unwieldy court application process, which is ongoing as I write. So one can understand why we look with some envy to the European DPAs, which now have the power to impose administrative fines of up to 20 million euros, or 4% of a company’s worldwide annual revenue.

British Columbia’s political parties and privacy regulation

Responsibility for privacy legislation in Canada is divided between the federal government and the provinces (OPC, 2018). The federal regulator, the Office of the Privacy Commissioner of Canada, has no authority to hold political parties to account. Among the provinces that have their own privacy legislation, only one has regulatory oversight over political parties: British Columbia. Given all that was going on at home and around the world concerning political parties, we decided to exercise that authority and investigate how BC’s political parties were collecting and using voter information (OIPC news release, 2019a).

To varying degrees, the province’s three main political parties expressed concerns about how BC’s private sector privacy legislation, the Personal Information Protection Act (PIPA) (BC PIPA, 2019), might impact their ability to communicate with voters. Some argued that voter participation rates were in decline, and that it was already difficult enough to reach out to voters. Anything that further impaired methods of connecting with voters, like privacy regulation, would only make the problem worse, they said. My answer was this: can anyone seriously maintain that the Facebook/CA scandal has generated an increased desire on the part of citizens to participate in the electoral process? It is only when voters trust political parties to handle their data with integrity, and in a manner consistent with privacy law, that they will feel truly confident in engaging robustly in the political campaign system.

After some initial trepidation, these political parties, each with representatives in the legislative assembly, cooperated fully with my office’s investigation. It is important to stress that we did not find abuses of personal data of the kind exhibited in the Facebook/CA scandal. Nor did we find the sophisticated level of data collection and analytics associated with heavily funded US political campaigns. We did find, however, that the parties were collecting and using a lot of information about voters and had a clear appetite to do much more. So our work was timely, and hopefully it will result in short-circuiting the worst excesses seen in other jurisdictions.

BC’s private sector privacy legislation is principle-based, and the predominant principle is consent. Consent was therefore the lens through which we assessed the parties’ actions. By that measure, many of their practices contravened our law and many others were at least legally questionable.

As in many jurisdictions, BC’s political parties are entitled by law to receive a voters’ list of names and addresses from the Chief Electoral Officer (Elections BC, 2019). This information forms the basic building block upon which parties compile comprehensive voter profiles. We found that what parties add to the voters’ list is sometimes done with consent, but in many cases, without. Door-to-door canvassing, the oldest and most basic method of gathering voter intelligence, illustrates this two-sided coin. The transparent element of this contact occurs when a voter voluntarily expresses support and provides a phone number or email for contact purposes. During the same visit, however, the canvasser might record, without permission, the voter’s ethnicity (or at least the canvasser’s best guess about the voter’s ethnicity). We found many instances of this type of information being entered into a party’s database.

We also found that parties used voter contact information in ways that went well beyond the voter’s expectation. The voter could expect to be called or emailed with a reminder to vote on election day. They would not expect, and did not consent to, the party disclosing their personal information to Facebook. There is little question that Facebook has become the newest and best friend of almost all political parties. The company offers parties a rich gateway to reach their supporters and potential supporters.

The problem is that neither the parties nor Facebook do very much to explain this to voters.

It starts with the fact that many, if not most, voters are Facebook users. The parties disclose their voters’ contact information to Facebook in the hope of matching them with their Facebook profiles. If the match succeeds, Facebook offers the party two valuable things. The first is the ability to advertise to these individuals in their Facebook newsfeed; Facebook gains revenue from this and implicitly gains the opportunity to understand the political leanings of its users. The second use of matched voters’ contact information is Facebook’s analysis of the uploaded profiles to find common characteristics among them. When complete, it offers the party, for a price, the opportunity to advertise to other Facebook users who “look like” the party’s supporters. This tool, which is also used by commercial businesses, provides an extremely effective means for political campaigns to reach an audience of potentially persuadable voters.
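In general terms, this kind of contact-list matching works by normalising and hashing contact details before upload, so the platform can compare them against hashed records of its own users. The sketch below illustrates the mechanism with hypothetical data; the email addresses, profile identifiers, and matching rule are illustrative assumptions, not Facebook’s actual implementation.

```python
import hashlib

def normalise_and_hash(email: str) -> str:
    """Contact details are normalised (trimmed, lower-cased), then hashed,
    so matching can happen against the platform's own hashed records."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# A party's voter contact list (hypothetical examples).
voter_emails = ["Alice@Example.com", "bob@example.com "]
uploaded = [normalise_and_hash(e) for e in voter_emails]

# The platform compares uploaded hashes against hashes of its own
# users' emails; any match links a voter record to a user profile.
platform_hashes = {normalise_and_hash("alice@example.com"): "profile-123"}
matches = [platform_hashes[h] for h in uploaded if h in platform_hashes]
print(matches)  # → ['profile-123']
```

The matched profiles can then be targeted directly, or used as the seed set from which the platform derives a “lookalike” audience of similar users.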

Reduced to its basics, what many parties do is gather voters’ contact information supposedly for direct communication purposes but instead disclose it to a social media giant for advertising and analytic purposes. It would understate things to say that these interactions with voters lack transparency.

All kinds of other data are also added and combined with basic voter information. Postal zone demographics and polling research, for example, are commonly deployed as parties attempt to attribute characteristics to voters with a view to targeting those they judge to be likely supporters. Most parties “score” voters on the likelihood of support.

Whether using these data sources to score voters is permitted by privacy law is a matter likely to be tested in the near future. What is clear, however, is that parties should be far more transparent about their actions, for no other reason than voters have a right to know what information parties have about them.

Political parties in BC and the UK have been slow to recognise this obligation. Parties in both jurisdictions told me that prediction data about a voter, such as their “persuadability score”, was not, in fact, their personal information. In another instance, I was told that this score was a commercial secret that could be withheld from a voter. Such a stance does not breed public confidence and is contrary to privacy law in BC and most other jurisdictions.

What then does the future hold? Even the most cursory reflection on this question suggests the answers will come from multiple places. For my office, the first and most obvious ally in protecting the public interest is the province’s Chief Electoral Officer. He is not only the keeper of the voter list but also tackles other immeasurably complex matters like election interference and disinformation campaigns. The need for us to work together is critical.

We have already embarked on a joint venture to develop a code of conduct for political parties which we hope BC political parties will adopt. Unlike the UK, which has a mechanism for the imposition of such codes, political parties in BC will have to voluntarily sign on. The benefit to parties is that everyone ends up playing by the same set of well-understood standards. It also means the public will have far greater confidence in their interactions with the parties, which hopefully will result in a far more robust campaign system. Thus far, the parties have accepted my investigation report’s recommendations and are working cooperatively with me and with the BC Chief Electoral Officer on developing the code.

The investigation into AIQ

Facebook is but one company political campaigns turn to. Of course, it is far from the only one. This brings us back to Victoria, BC, home base for AIQ (AggregateIQ, 2019). Among other things, AIQ developed “Project Ripon,” the architecture designed to make usable all of the data ingested by Cambridge Analytica. AIQ justified the non-consensual targeting of US voters on the basis that its American clients who collected the personal information at first instance had no legal obligation to seek consent.

My joint report on AIQ with the Office of the Privacy Commissioner of Canada (McEvoy & Therrien, 2019) determined that this was no legal answer. The fact is, they were a Canadian company operating in BC and were obligated to comply with BC law. This meant that AIQ had to exercise due diligence in seeking assurance from their clients that consent was employed to collect the personal information they intended to use. They obviously didn’t.

Subsequent events also undermined AIQ’s claim that the US data it worked with was lawfully obtained. The Federal Trade Commission found in late 2019 that Cambridge Analytica, working with app developer Aleksandr Kogan, deceived users by telling them that their personal information would not be collected (Agreement Containing Consent Order as to Respondent Aleksandr Kogan, 2019). The message to Canadian companies operating globally is that they must observe the rules of the places in which they work as well as those of their home territory.

In the end, AIQ agreed to follow the recommendations of our joint report, cleaning up its practices to ensure, going forward, that they secure consent for the personal information used in client projects as well as improving security measures for safeguarding that information.

Conclusion

In the two years that have taken me from Victoria to the UK and back, the privacy landscape has changed dramatically. The public’s understanding of the privacy challenges we face as a society has been seismically altered. In the past, it was not uncommon for people to ask me at events, “Maybe I share a bit too much of my information on Facebook, but what could possibly go wrong with that?” Facebook/Cambridge Analytica graphically demonstrated exactly what could go wrong. The idea that enormous numbers of people could be psychologically profiled for the purposes of political message targeting, without their knowledge, shocked people. The CanTrust Index (CanTrust Index, 2019), which tracks Canadians’ trust in major brands, found in its most recent survey that Facebook’s reputation took a sharp nosedive between 2017 and 2019. In 2017, 51 per cent of Canadians trusted Facebook. Today, just 28 per cent say the same.

The underpinnings of the entire economic model now driving the internet and its social media platforms have been put on full public display. While few people can describe the detailed workings of real-time bidding or a cookie’s inner mechanics, most comprehend that their daily activities across the web are tracked in meticulous detail.

While public awareness and concern have shifted markedly, action by legislators to address those concerns has in many jurisdictions struggled to keep pace. It is true that the General Data Protection Regulation has set a new standard in Europe, but even there, the more exacting ePrivacy Regulation has stalled (Bannerman, 2019). Canadian legislators have tried to be proactive in responding to privacy’s changing landscape. However, the Privacy Commissioner of Canada, as noted, is without direct order-making power. Neither of our offices has the authority to issue administrative penalties. It is little wonder citizens are left to ask “Who has my back?” when organisations violate data protection laws.

The road to reform will not be an easy one. There is considerable bureaucratic and corporate resistance to a stronger regulatory regime. Working together, regulators, academics, and civil society must continue to press for legislative reform. Our efforts are strongly supported by public sentiment. The OPC’s 2019 survey on privacy (OPC, 2019) revealed that a substantial number of Canadians would be far more willing to transact with a business operating under an enhanced regulatory regime that included financial penalties for wrongdoers. That should signal to organisations, including political parties, that data protection is good for business and that they too should support strengthened regulatory frameworks.

References

AggregateIQ. (2019, December 18). Discover what we can do for you. Retrieved from https://aggregateiq.com/

Badshah, N. (2018, April 8). Facebook to contact 87 million users affected by data breach. The Guardian. Retrieved from https://www.theguardian.com/technology/2018/apr/08/facebook-to-contact-the-87-million-users-affected-by-data-breach

Bannerman, N. (2019, November 26). EU countries fail to agree on OTT ePrivacy regulation. Capacity Media. Retrieved from https://www.capacitymedia.com/articles/3824568/eu-countries-fail-to-agree-on-ott-eprivacy-regulation

British Columbia. (2019, November 27). Personal Information Protection Act (PIPA). Retrieved from http://www.bclaws.ca/civix/document/id/complete/statreg/03063_01

Braga, M. (2018, April 4). Facebook says more than 600,000 Canadians may have had data shared with Cambridge Analytica. CBC News. Retrieved from https://www.cbc.ca/news/technology/facebook-cambridge-analytica-600-thousand-canadians-1.4605097

Cadwalladr, C. (2018, March 17). ‘I made Steve Bannon’s psychological warfare tool’: Meet the data war whistleblower. The Guardian. Retrieved from https://www.theguardian.com/news/2018/mar/17/data-war-whistleblower-christopher-wylie-faceook-nix-bannon-trump

CanTrust Index. (2019, April 25). Retrieved from https://www.getproof.com/thinking/the-proof-cantrust-index/

Doward, J. (2017, March 4). Watchdog to launch inquiry into misuse of data in politics. The Guardian. Retrieved from https://www.theguardian.com/technology/2017/mar/04/cambridge-analytics-data-brexit-trump

Elections BC. (2019). What we do. Retrieved from https://elections.bc.ca/about/what-we-do/

Information Commissioner's Office (ICO). (2018, November 6). Investigation into the use of data analytics in political campaigns [Report]. London: Information Commissioner’s Office. Retrieved from https://ico.org.uk/media/action-weve-taken/2260271/investigation-into-the-use-of-data-analytics-in-political-campaigns-final-20181105.pdf

McEvoy, M., & Therrien, D. (2019). AggregateIQ Data Services Ltd [Investigation Report No. P19-03 PIPEDA-035913]. Victoria; Gatineau: Office of the Information & Privacy Commissioner for British Columbia; Office of the Privacy Commissioner of Canada. Retrieved from https://www.oipc.bc.ca/investigation-reports/2363

Meredith, S. (2018, April 10). Facebook-Cambridge Analytica: A timeline of the data hijacking scandal. CNBC. Retrieved from https://www.cnbc.com/2018/04/10/facebook-cambridge-analytica-a-timeline-of-the-data-hijacking-scandal.html

Office of the Information and Privacy Commissioner for BC (OIPC). (2018, April 5). BC, federal commissioners initiate joint investigations into Aggregate IQ, Facebook [News release]. Retrieved from https://www.oipc.bc.ca/news-releases/2144

Office of the Information and Privacy Commissioner for BC (OIPC). (2019a, February 6). BC political parties aren’t doing enough to explain how much personal information they collect and why [News release]. Retrieved from https://www.oipc.bc.ca/news-releases/2279

Office of the Information and Privacy Commissioner for BC (OIPC). (2019b, April 25). Facebook refuses to address serious privacy deficiencies despite public apologies for breach of trust [News release]. Retrieved from https://www.oipc.bc.ca/news-releases/2308

Office of the Privacy Commissioner of Canada (OPC). (2018, January 1). Summary of privacy laws in Canada. Retrieved from https://www.priv.gc.ca/en/privacy-topics/privacy-laws-in-canada/02_05_d_15/

Office of the Privacy Commissioner of Canada (OPC). (2019, May 9). 2018-19 Survey of Canadians on Privacy [Report No. POR 055-18]. Retrieved from https://www.priv.gc.ca/en/opc-actions-and-decisions/research/explore-privacy-research/2019/por_2019_ca/

United States, Federal Trade Commission (FTC). (2019). Agreement Containing Consent Order as to Respondent Aleksandr Kogan. Retrieved from https://www.ftc.gov/system/files/documents/cases/182_3106_kogan_do.pdf


Disinformation optimised: gaming search engine algorithms to amplify junk news


This paper is part of Data-driven elections, a special issue of Internet Policy Review guest-edited by Colin J. Bennett and David Lyon.

Introduction

Did the Holocaust really happen? In December 2016, Google’s search engine algorithm determined the most authoritative source to answer this question was a neo-Nazi website peddling holocaust denialism (Cadwalladr, 2016b). For any inquisitive user typing this question into Google, the first website recommended by Search linked to an article entitled: “Top 10 reasons why the Holocaust didn’t happen”. The third article “The Holocaust Hoax; IT NEVER HAPPENED” was published by another neo-Nazi website, while the fifth, seventh, and ninth recommendations linked to similar racist propaganda pages (Cadwalladr, 2016b). Up until Google started demoting websites committed to spreading anti-Semitic messages, anyone asking whether the Holocaust actually happened would have been directed to consult neo-Nazi websites, rather than one of the many credible sources about the Holocaust and tragedy of World War II.

Google’s role in shaping the information environment and enabling political advertising has made it a “de facto infrastructure” for democratic processes (Barrett & Kreiss, 2019). How its search engine algorithm determines authoritative sources directly shapes the online information environment for more than 89 percent of the world’s internet users who trust Google Search to quickly and accurately find answers to their questions. Unlike social media platforms that tailor content based on “algorithmically curated newsfeeds” (Golebiewski & boyd, 2019), the logic of search engines is “mutually shaped” by algorithms — that shape access — and users — who shape the information being sought (Schroeder, 2014). By facilitating information access and discovery, search engines hold a unique position in the information ecosystem. But, like other digital platforms, the digital affordances of Google Search have proved to be fertile ground for media manipulation.

Previous research has demonstrated how large volumes of mis- and disinformation were spread on social media platforms in the lead up to elections around the world (Hedman et al., 2018; Howard, Kollanyi, Bradshaw, & Neudert, 2017; Machado et al., 2018). Some of this disinformation was micro-targeted towards specific communities or individuals based on their personal data. While data-driven campaigning has become a powerful tool for political parties to mobilise and fundraise (Fowler et al., 2019; Baldwin-Philippi, 2017), the connection between online advertisements and disinformation, foreign election interference, polarisation, and non-transparent campaign practices has caused growing anxieties about its impact on democracy.

Since the 2016 presidential election in the United States, public attention and scrutiny have largely focused on the role of Facebook in profiting from and amplifying the spread of disinformation via digital advertisements. Less attention has been paid to Google, which, together with Facebook, commands more than 60% of the digital advertising market. At the same time, a multi-billion-dollar search engine optimisation (SEO) industry has been built around understanding how technical systems rank, sort, and prioritise information (Hoffmann, Taylor, & Bradshaw, 2019). The purveyors of disinformation have learned to exploit these platforms to engineer content discovery and drive “pseudo-organic engagement”. 1 These websites — which do not employ professional journalistic standards, report on conspiracy theory, counterfeit professional news brands, and mask partisan commentary as news — have been referred to as “junk news” domains (Bradshaw, Howard, Kollanyi, & Neudert, 2019).

Together, the role of political advertising and the matured SEO industry make Google Search an interesting and largely underexplored case to analyse. Considering the importance of Google Search in connecting individuals to news and information about politics, this paper examines how junk news websites generate discoverability via Google Search. It asks: (1) How do junk news domains optimise content, through both paid and SEO strategies, to grow their discoverability and website value? (2) What strategies are effective at growing discoverability and/or website value? And (3) what are the implications of these findings for ongoing discussions about the regulation of social media platforms?

To answer these questions, I analysed 29 junk news domains and their advertising and search engine optimisation strategies between January 2016 and March 2019. First, junk news domains make use of a variety of SEO keyword strategies in order to game Search, generate pseudo-organic clicks, and increase their website value. The keywords that generated the highest placements on Google Search focused on (1) navigational searches for known brand names (such as searches for “breitbart.com”) and (2) carefully curated keyword combinations that fill so-called “data voids” (Golebiewski & Boyd, 2018), or gaps in search engine queries (such as searches for “Obama illegal alien”). Second, there was a clear correlation between the number of clicks a website received and the estimated value of the junk news domain. The most profitable timeframes correlated with important political events in the United States (such as the 2016 presidential election and the 2018 midterm elections), and the value of the domain increased based on SEO-optimised — rather than paid — clicks. Third, junk news domains were relatively successful at generating top placements on Google Search before and after the 2016 US presidential election. However, their discoverability declined abruptly beginning in August 2017, following major announcements from Google about changes to its search engine algorithms and other initiatives to combat the spread of junk news in search results. This suggests that Google can, and has, measurably impacted the discoverability of junk news on Search.

This paper proceeds as follows: The first section provides background on the vocabulary of disinformation and ongoing debates about so-called fake news, situating the terminology of “junk news” used in this paper in the scholarly literature. The second section discusses the logic and politics of search, describing how search engines work and reviewing the existing literature on Google Search and the spread of disinformation. The third section outlines the methodology of the paper. The fourth section analyses 29 prominent junk news domains to learn about their SEO and advertising strategies, as well as their impact on content discoverability and revenue generation. This paper concludes with a discussion of the findings and implications for future policymaking and private self-regulation.

The vocabulary of political communication in the 21st century

“Fake news” gained significant attention from scholarship and mainstream media during the 2016 presidential election in the United States as viral stories pushing outrageous headlines — such as Hillary Clinton’s alleged involvement in a paedophile ring in the basement of a DC pizzeria — were prominently displayed across search and social media news feeds (Silverman, 2016). Although “fake news” is not a new phenomenon, the spread of these stories — which are both enhanced and constrained by the unique affordances of internet and social networking technologies — has reinvigorated an entire research agenda around digital news consumption and democratic outcomes. Scholars from diverse disciplinary backgrounds — including psychology, sociology and ethnography, economics, political science, law, computer science, journalism, and communication studies — have launched investigations into the circulation of so-called “fake news” stories (Allcott & Gentzkow, 2017; Lazer et al., 2018), their role in agenda-setting (Guo & Vargo, 2018; Vargo, Guo, & Amazeen, 2018), and their impact on democratic outcomes and political polarisation (Persily, 2017; Tucker et al., 2018).

However, scholars at the forefront of this research agenda have continually identified several epistemological and methodological challenges around the study of so-called “fake news”. A commonly identified concern is the ambiguity of the term itself, as “fake news” has come to be an umbrella term for all kinds of problematic content online, including political satire, fabrication, manipulation, propaganda, and advertising (Tandoc, Lim, & Ling, 2018; Wardle, 2017). The European High-Level Expert Group on Fake News and Disinformation recently acknowledged the definitional difficulties around the term, recognising it “encompasses a spectrum of information types…includ[ing] low risk forms such as honest mistakes made by reporters…to high risk forms such as foreign states or domestic groups that would try to undermine the political process” (European Commission, 2018). And even when the term “fake news” is simply used to describe news and information that is factually inaccurate, the binary distinction between what is true and what is false has been criticised for not adequately capturing the complexity of the kinds of information being shared and consumed in today’s digital media environment (Wardle & Derakhshan, 2017).

Beyond the ambiguities surrounding the vocabulary of “fake news”, there is growing concern that the term has been appropriated by politicians to restrict freedom of the press. A wide range of political actors have used the term “fake news” to discredit, attack, and delegitimise political opponents and mainstream media (Farkas & Schou, 2018). Certainly, Donald Trump (in)famously uses the term to “deflect” criticism and to erode the credibility of established media and journalist organisations (Lakoff, 2018). And many authoritarian regimes have followed suit, adopting the term into a common lexicon to legitimise further censorship and restrictions on media within their own borders (Bradshaw, Neudert, & Howard, 2018). Given that most citizens perceive “fake news” to describe “partisan debate and poor journalism”, rather than a discursive tool to undermine trust and legitimacy in media institutions, there is general scholarly consensus that the term is highly problematic (Nielsen & Graves, 2017).

Rather than chasing a definition of what has come to be known as “fake news”, researchers at the Oxford Internet Institute have produced a grounded typology of what users actually share on social media (Bradshaw et al., 2019). Drawing on Twitter and Facebook data from elections in Europe and North America, researchers developed a grounded typology of online political communication (Bradshaw et al., 2019; Neudert, Howard, & Kollanyi, 2019). They identified a growing prevalence of “junk news” domains, which publish a variety of hyper-partisan, conspiracy theory or click-bait content that was designed to look like real news about politics. During the 2016 presidential election in the United States, social media users on Twitter shared as much “junk news” as professionally produced news about politics (Howard, Bolsover, Kollanyi, Bradshaw, & Neudert, 2017; Howard, Kollanyi, et al., 2017). And voters in swing-states tended to share more junk news than their counterparts in uncontested ones (Howard, Kollanyi, et al., 2017). In countries throughout Europe — in France, Germany, the United Kingdom and Sweden — junk news inflamed political debates around immigration and amplified populist voices across the continent (Desiguad, Howard, Kollanyi, & Bradshaw, 2017; Kaminska, Galacher, Kollanyi, Yasseri, & Howard, 2017; Neudert, Howard, & Kollanyi, 2017).

According to researchers on the Computational Propaganda Project, junk news is defined as exhibiting at least three of the following five elements: (1) professionalism, where sources do not employ the standards and best practices of professional journalism, including information about real authors, editors, and owners; (2) style, where emotionally driven language, ad hominem attacks, mobilising memes, and misleading headlines are used; (3) credibility, where sources rely on false information or conspiracy theories, and do not post corrections; (4) bias, where sources are highly biased, ideologically skewed, and publish opinion pieces as news; and (5) counterfeit, where sources mimic established news reporting, including fonts, branding, and content strategies (Bradshaw et al., 2019).
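The “at least three of five elements” rule can be expressed as a simple scoring function. The sketch below uses hypothetical boolean assessments of a single domain; the criteria names mirror the typology, but the data structure and threshold check are an illustrative assumption, not the researchers’ actual coding instrument.

```python
# The five elements of the junk news typology (Bradshaw et al., 2019).
CRITERIA = ("professionalism", "style", "credibility", "bias", "counterfeit")

def is_junk_news(failed: dict) -> bool:
    """Label a domain as junk news when it exhibits at least three
    of the five elements; missing criteria count as not exhibited."""
    score = sum(bool(failed.get(c, False)) for c in CRITERIA)
    return score >= 3

# Hypothetical assessment: the domain fails on style, bias, and counterfeit.
assessment = {"style": True, "bias": True, "counterfeit": True}
print(is_junk_news(assessment))  # → True
```

A domain exhibiting only one or two elements (e.g., bias alone) would fall below the threshold and not be labelled junk news under this rule.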

In a complex ecosystem of political news and information, junk news provides a useful point of analysis because rather than focusing on individual stories that may contain honest mistakes, it examines the domain as a whole and looks for various elements of deception, which underscores the definition of disinformation. The concept of junk news is also not tied to a particular producer of disinformation, such as foreign operatives, hyper-partisan media, or hate groups, who, despite their diverse goals, deploy the same strategies to generate discoverability. Given that the literature on disinformation is often siloed around one particular actor, does not cross platforms, nor integrate a variety of media sources (Tucker et al., 2018), the junk news framework can be useful for taking a broader look at the ecosystem as a whole and the digital techniques producers use to game search engine algorithms. Throughout this paper, I use the term “junk news” to describe the wide range of politically and economically motivated disinformation being shared about politics.

The logic and politics of search

Search engines play a fundamental role in the modern information environment by sorting, organising, and making visible content on the internet. Before the search engine, anyone who wished to find content online would have to navigate “cluttered portals, garish ads and spam galore” (Pasquale, 2015). This didn’t matter in the early days of the web when it remained small and easy to navigate. During this time, web directories were built and maintained by humans who often categorised pages according to their characteristics (Metaxas, 2010). By the mid-1990s it became clear that the human classification system would not be able to scale. The search engine “brought order to chaos by offering a clean and seamless interface to deliver content to users” (Hoffmann, Taylor, & Bradshaw, 2019).

Simplistically speaking, search engines work by crawling the web to gather information about online webpages. Data about the words on a webpage, links, images, videos, or the pages they link to are organised into an index by an algorithm, analogous to an index found at the end of a book. When a user types a query into Google Search, machine learning algorithms apply complex statistical models in order to deliver the most “relevant” and “important” information to a user (Gillespie, 2012). These models are based on a combination of “signals” including the words used in a specific query, the relevance and usability of webpages, the expertise of sources, and other information about context, such as a user’s geographic location and settings (Google, 2019).
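The crawl–index–rank pipeline described above can be sketched in miniature. The toy corpus, term-overlap scoring, and alphabetical tie-breaking below are illustrative assumptions; a production engine combines hundreds of signals, not just term matches.

```python
from collections import defaultdict

# A toy corpus standing in for crawled web pages.
pages = {
    "page1": "election results and polling data",
    "page2": "polling stations open for the election",
    "page3": "recipe for sourdough bread",
}

# Build the inverted index: term -> set of pages containing it,
# analogous to an index at the end of a book.
index = defaultdict(set)
for url, text in pages.items():
    for term in text.lower().split():
        index[term].add(url)

def search(query: str):
    """Score pages by how many query terms they contain and return
    them best-first, breaking ties alphabetically."""
    scores = defaultdict(int)
    for term in query.lower().split():
        for url in index.get(term, set()):
            scores[url] += 1
    return sorted(scores, key=lambda u: (-scores[u], u))

print(search("election polling"))  # → ['page1', 'page2']
```

Real ranking models fold in relevance, usability, source expertise, and contextual signals such as location, as the paragraph above notes, but the underlying lookup structure is the same.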

Google’s search results are also shaped by AdWords, which allows individuals or companies to promote their websites by purchasing “paid placement” for specific keyword searches. Paid placement is conducted through a bidding system, where rankings and the number of times the advertisement is displayed are prioritised by the amount of money spent by the advertiser. For example, a company that sells jeans might purchase AdWords for keywords such as “jeans”, “pants”, or “trousers”, so when an individual queries Google using these terms, a “sponsored post” will be placed at the top of the search results. 2 AdWords also makes use of personalisation, which allows advertisers to target more granular audiences based on factors such as age, gender, and location. Thus, a local company selling jeans for women can specify local female audiences — individuals who are more likely to purchase their products.
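Keyword bidding systems of this kind are commonly modelled as generalised second-price auctions, in which each winning advertiser pays just above the next-highest bid. The sketch below is a simplified illustration with hypothetical bids; Google’s actual auction also weights bids by ad quality and other factors, which are omitted here.

```python
def run_keyword_auction(bids: dict, slots: int = 2):
    """Simplified generalised second-price auction: rank advertisers
    by bid and charge each winner one cent above the next bid down."""
    ranked = sorted(bids.items(), key=lambda kv: -kv[1])
    results = []
    for i, (advertiser, bid) in enumerate(ranked[:slots]):
        next_bid = ranked[i + 1][1] if i + 1 < len(ranked) else 0.0
        results.append((advertiser, round(next_bid + 0.01, 2)))
    return results

# Hypothetical bids for the keyword "jeans", in dollars per click.
bids = {"denim_co": 1.50, "jean_shop": 1.20, "outlet": 0.80}
print(run_keyword_auction(bids))  # → [('denim_co', 1.21), ('jean_shop', 0.81)]
```

The second-price structure means an advertiser’s payment depends on competitors’ bids, not its own, which is why heavily contested political keywords can become expensive.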

The way in which Google structures, organises, and presents information and advertisements to users is important because these technical and policy decisions embed a wide range of political issues (Granka, 2010; Introna & Nissenbaum, 2000; Vaidhynathan, 2011). Several public and academic investigations auditing Google’s algorithms have documented various examples of bias in Search or problems with the autocomplete function (Cadwalladr, 2016a; Pasquale, 2015). Biases inherently designed into algorithms have been shown to disproportionately marginalise minority communities, women, and the poor (Noble, 2018).

At the same time, political advertisements have become a contentious political issue. While digital advertising can generate significant benefits for democracy, by democratising political finance and assisting in political mobilisation (Fowler et al., 2019; Baldwin-Philippi, 2017), it can also be used to selectively spread disinformation and messages of demobilisation (Burkell & Regan, 2019; Evangelista & Bruno, 2019; Howard, Ganesh, Liotsiou, Kelly, & Francois, 2018). Indeed, Russian AdWord purchases in the lead-up to the 2016 US election demonstrate how foreign state actors can exploit Google Search to spread propaganda (Mueller, 2019). But the general lack of regulation around political advertising has also raised concerns about domestic actors and the ways in which legitimate politicians campaign in increasingly opaque and unaccountable ways (Chester & Montgomery, 2017; Tufekci, 2014). These concerns are underscored by the rise of the “influence industry”: commercial firms selling various ‘psychographic profiling’ technologies used to craft, target, and tailor messages of persuasion and demobilisation (Chester & Montgomery, 2019; McKelvey, 2019; Bashyakarla, 2019). For example, during the 2016 US election, Cambridge Analytica worked with the Trump campaign to implement “persuasion search advertising”, where AdWords were bought to strategically push pro-Trump and anti-Clinton information to voters (Lewis & Hilder, 2018).

Given growing concerns over the spread of disinformation online, scholars are beginning to study the ways in which Google Search might amplify junk news and disinformation. One study by Metaxa-Kakavouli and Torres-Echeverry (2017) examined the top ten results from Google searches about congressional candidates over a 26-week period in the lead-up to the 2016 presidential election. Of the URLs recommended by Google, only 1.5% came from domains that were flagged by PolitiFact as “fake news” domains. Metaxa-Kakavouli and Torres-Echeverry suggest that the low levels of “fake news” are the result of Google’s “long history” combatting spammers on its platform. Another research paper, by Golebiewski and boyd (2018), examines how gaps in search engine results create “data voids” that optimisers can strategically exploit to amplify their content. Golebiewski and boyd argue that there are many search terms for which data is “limited, non-existent or deeply problematic”. Although these searches are rare, if a user types these search terms into a search engine, “it might not give a user what they are looking for because of limited data and/or limited lessons learned through previous searches” (Golebiewski & boyd, 2018).

The existence of biases, disinformation, or gaps in authoritative information on Google Search matters because Google directly impacts what people consume as news and information. Most of the time, people do not look past the top ten results returned by the search engine (Metaxas, 2010). Indeed, eye-tracking experiments have demonstrated that the order in which Google results are presented to users matters more than the actual relevance of the page abstracts (Pan et al., 2007). However, it is important to note that the logic of higher placements does not necessarily translate to search engine advertising listings, where users are less likely to click on advertisements if they are familiar with the brand or product they are searching for (Narayanan & Kalyanam, 2015).

Nevertheless, the significance of the top ten placement has given rise to the SEO industry, whereby optimisers use digital keyword strategies to move webpages higher in Google’s rankings and thereby generate higher traffic flows. There is a long history of SEO dating back to the 1990s when the first search engine algorithms emerged (Metaxas, 2010). Since then, hundreds of SEO pages have published guesses about the different ranking factors these algorithms consider (Dean, 2019). However, the specific signals that inform Google’s search engine algorithms are dynamic and constantly adapting to the information environment. Google makes hundreds of changes to its algorithm every year to adjust the weight and importance of various signals. While most of these changes are minor updates designed to improve the speed and performance of Search, sometimes Google makes more significant changes to its algorithm to elude optimisers trying to game the system.

Google has taken several steps to combat people seeking to manipulate Search for political or economic gain (Taylor, Walsh, & Bradshaw, 2019). This involves several algorithmic changes to demote sources of disinformation as well as changes to their advertising policies to limit the extent to which users can be micro-targeted with political advertisements. In one study, researchers interviewed SEO strategists to audit how Facebook and Google’s algorithmic changes impacted their optimisation strategies (Hoffmann, Taylor, & Bradshaw, 2019). Since the purveyors of disinformation often rely on the same digital marketing strategies used by legitimate political candidates, news organisations, and businesses, the SEO industry can offer unique, but heuristic, insight into the impact of algorithmic changes. Hoffmann, Taylor and Bradshaw (2019) found that despite more than 125 announcements over a three-year period, the algorithmic changes made by the platforms did not significantly alter digital marketing strategies.

This paper hopes to contribute to the growing body of work examining the effect of Search on the spread of disinformation and junk news by empirically analysing the strategies — paid and optimised — employed by junk news domains. By performing an audit of the keywords junk news websites use to generate discoverability, this paper evaluates the effectiveness of Google in combatting the spread of disinformation on Search.

Methodology

Conceptual Framework: The Techno-Commercial Infrastructure of Junk News

The starting place for this inquiry into the SEO infrastructure of junk news domains is grounded conceptually in the field of science and technology studies (STS), which provides a rich literature on how infrastructure design, implementation, and use embeds politics (Winner, 1980). Digital infrastructure — such as physical hardware, cables, virtual protocols, and code — operates invisibly in the background, which can make it difficult to trace the politics embedded in technical coding and design (Star & Ruhleder, 1994). As a result, calls to study internet infrastructure have engendered digital research methods that shed light on the less visible areas of technology. One growing and relevant body of research has focused on the infrastructure of social media platforms and the algorithms and advertising systems that invisibly operate to amplify or spread junk news to users, or to micro-target political advertisements (Kim et al., 2018; Tambini, Anstead, & Magalhães, 2017). Certainly, the affordances of technology — both real and imagined — mutually shape social media algorithms and their potential for manipulation (Nagy & Neff, 2015; Neff & Nagy, 2016). However, the proprietary nature of platform architecture has made it difficult to operationalise studies in this field. Because junk news domains operate in a digital ecosystem built on search engine optimisation, page ranks, and advertising, there is an opportunity to analyse the infrastructure that supports the discoverability of junk news content, which could provide insights into how producers reach audiences, grow visibility, and generate domain value.

Junk news data set

The first step of my methodology involved identifying a list of junk news domains to analyse. I used the Computational Propaganda Project’s (COMPROP) data set on junk news domains in order to analyse websites that spread disinformation about politics. To develop this list, researchers on the COMPROP project built a typology of junk news based on URLs shared on Twitter and Facebook relating to the 2016 US presidential election, the 2017 US State of the Union Address, and the 2018 US midterm elections. 3 A team of five rigorously trained coders labelled the domains contained in tweets and on Facebook pages based on a grounded typology of junk news that has been tested and refined over several elections around the world between 2016 and 2018. 4 A domain was labelled as junk news when it failed on three of the five criteria of the typology (style, bias, credibility, professionalism, and counterfeit, as described in section one). For this analysis, I used the most recent 2018 midterm election junk news list, which comprises the top-29 most shared domains that were labelled as junk news by researchers. This list was selected because all 29 domains were active during the 2016 US presidential election in November 2016 and the 2017 US State of the Union Address, which provides an opportunity to comparatively assess how both the advertising and optimisation strategies, as well as their performance, changed over time.
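The "three of five criteria" labelling rule described above can be expressed as a simple check. The sketch below illustrates only the decision rule, not the coders' actual workflow or coding instrument:

```python
# The five typology criteria named in the text.
CRITERIA = ("style", "bias", "credibility", "professionalism", "counterfeit")

def is_junk(failed):
    """A domain is labelled junk news when it fails at least three
    of the five typology criteria."""
    return len(set(failed) & set(CRITERIA)) >= 3

print(is_junk({"style", "bias", "counterfeit"}))  # True
print(is_junk({"style", "counterfeit"}))          # False
```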

SpyFu data and API queries

The second step of my methodology involved collecting data about the advertising and optimisation strategies used by junk news websites. I worked with SpyFu, a competitive keyword research tool used by digital marketers to increase website traffic and improve keyword rankings on Google (SpyFu, 2019). SpyFu collects, analyses and tracks various data about the search optimisation strategies used by websites, such as organic ranks, paid keywords bought on Google AdWords, and advertisement trends.

To shed light on the optimisation strategies used by junk news domains on Google, SpyFu provided me with: (1) a list of historical keywords and keyword combinations used by the top-29 junk news domains that led to each domain appearing in Google Search results; and (2) the position at which the domain appeared in Google as a result of those keywords. The historical keywords were provided from January 2016 until March 2019. Only keywords that led to the junk news domains appearing in the top-50 positions on Google were included in the data set.

In order to determine the effectiveness of the optimisation and advertising strategies used by junk news domains to either grow their website value and/or successfully appear in the top positions on Google Search, I wrote a simple python script to connect to the SpyFu API service. This python script collected and parsed the following data from SpyFu for each of the top-29 junk news domains in the sample: (1) the number of keywords that show up organically on Google searches; (2) the estimated sum of clicks a domain receives based on factors including organic keywords, the rank of keyword, and the search volume of the keyword; (3) the estimated organic value of a domain based on factors including organic keywords, the rank of keywords, and the search volume of the keyword; (4) the number of paid advertisements a domain purchased through Google AdWords; and (5) the number of paid clicks a domain received from the advertisements it purchased from Google AdWords.
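A minimal sketch of such a collection script is shown below. SpyFu's actual API schema is proprietary, so the endpoint URL, query parameters, and field names here are hypothetical placeholders, and authentication and pagination are simplified away; only the shape of the five data points listed above is taken from the text:

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"                                   # placeholder
BASE = "https://api.example-seo-tool.com/v1/domain_stats"  # hypothetical endpoint

# The five data points collected for each of the top-29 domains:
FIELDS = [
    "organic_keywords",  # (1) keywords appearing organically in searches
    "estimated_clicks",  # (2) estimated sum of organic clicks
    "estimated_value",   # (3) estimated organic value of the domain
    "paid_ads",          # (4) advertisements purchased through AdWords
    "paid_clicks",       # (5) clicks received from purchased advertisements
]

def parse_record(record):
    """Keep only the five fields of interest from an API response."""
    return {field: record.get(field) for field in FIELDS}

def fetch_domain_stats(domain):
    """Query the (hypothetical) API for one domain and parse the response."""
    url = f"{BASE}?domain={domain}&api_key={API_KEY}"
    with urllib.request.urlopen(url) as resp:
        return parse_record(json.load(resp))

# Parsing a sample response (no network call):
sample = {"organic_keywords": 1006, "estimated_clicks": 964_000, "extra": "ignored"}
print(parse_record(sample))
```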

Data and methodology limitations

There are several data and methodology limitations that must be noted. First, the junk news domains identified by the Computational Propaganda Project highlight only a small sample of the wide variety of websites that peddle disinformation about politics. The researchers also do not differentiate between the different actors behind the junk news websites — such as foreign states or hyper-partisan media — nor do they differentiate between the political leanings of the junk news outlets — such as left- or right-leaning domains. Thus, the findings cannot be described in terms of the strategies of different actors. Further, given that the majority of junk news domains in the top-29 sample lean politically to the right and far right, these findings might not be applicable to the hyper-partisan left and its optimisation strategies. Finally, the junk news domains identified in the sample were shared on social media in the lead-up to important political events in the United States. Further research could examine the SEO strategies of domains operating in other country contexts.

When it comes to working with the data provided by SpyFu (and other SEO optimisation tools), there are two limitations that should be noted. First, the historical keywords collected by SpyFu are only collected when they appear in the top-50 Google Search results. This is an important limitation to note because news and information producers are constantly adapting keywords based on the content they are creating. Keywords may be modified by the source website dynamically to match news trends. Low performing keywords might be changed or altered in order to make content more visible via Search. Thus, the SpyFu data might not capture all of the keywords used by junk news domains. However, the collection strategy will have captured many of the most popular keywords used by junk news domains to get their content appearing in Google Search. Second, because SpyFu is a company there are proprietary factors that go into measuring a domain’s SEO performance (in particular, the data points collected via the API on the estimated sum of clicks and the estimated organic value). Nevertheless, considering that Google Search is a prominent avenue for news and information discovery, and that few studies have systematically analysed the effect of search engine optimisation strategies on the spread of disinformation, this study provides an interesting starting point for future research questions about the impact SEO can have on the spread and monetisation of disinformation via Search.

Analysis: optimising disinformation through keywords and advertising

Junk news advertising strategies on Google

Junk news domains rarely advertise on Google. Only two out of the 29 junk news domains (infowars.com and cnsnews.com) purchased Google advertisements (See Figure 1: Advertisements purchased vs. paid clicks). The advertisements purchased by infowars.com were all made prior to the 2016 election in the United States (from the period of May 2015 to March 2016). cnsnews.com made several advertisement purchases over the three-year time period.

Figure 1: Advertisements purchased vs. paid clicks received: infowars.com and cnsnews.com (May 2015-March 2019)

Looking at the total number of paid clicks received, junk news domains generated only a small amount of traffic using paid advertisements. Infowars, on average, received about 2,000 clicks as a result of its paid advertisements. cnsnews.com peaked at approximately 1,800 clicks, but on average generated only about 600 clicks per month over the course of three years. Comparing paid clicks with those generated through SEO keyword optimisation reveals a significant difference: during the same time period, cnsnews.com and infowars.com were generating on average 146,000 and 964,000 organic clicks respectively (see Figure 2: Organic vs. paid clicks (cnsnews.com and infowars.com)). Although it is hard to make generalisations about how junk news websites advertise on Google based on a sample of two, two observations stand out. First, the lack of data suggests that advertising on Google Search might not be as popular as advertising on other social media platforms. Second, the return on investment (i.e., paid clicks generated as a result of Google advertisements) was very low compared to the organic clicks these junk news domains received for free. Factors other than advertising seem to drive the discoverability of junk news on Google Search.

Figure 2: organic vs. paid clicks (cnsnews.com and infowars.com)

Junk news keyword optimisation strategies

In order to assess the keyword optimisation strategies used by junk news websites, I worked with SpyFu, which provided historical keyword data for the 29 junk news domains whenever those keywords made it into the top-50 results in Google between January 2016 and March 2019. In total, there were 88,662 unique keywords in the data set. Given the importance of placement on Google, I looked specifically at keywords that indexed junk news websites in the first — and most authoritative — position. Junk news domains had different aptitudes for generating placement in the first position (see Table 1: Junk news domains and number of keywords found in the first position on Google). Breitbart, DailyCaller and ZeroHedge had the most successful SEO strategies, with 1006, 957, and 807 keywords respectively leading to top placements on Google Search over the 39-month period. In contrast, six domains (committedconservative.com, davidharrisjr.com, reverbpress.news, thedailydigest.org, thefederalist.com, thepoliticalinsider.com) had no keywords reach the first position on Google. The remaining 20 domains had anywhere between 1 and 253 keywords reach the first position on Google Search over the same timeframe.

Table 1: Junk news domains and number of keywords found in the first position on Google

| Domain | Keywords reaching position 1 |
| --- | --- |
| breitbart.com | 1006 |
| dailycaller.com | 957 |
| zerohedge.com | 807 |
| infowars.com | 253 |
| cnsnews.com | 228 |
| dailywire.com | 214 |
| thefederalist.com | 200 |
| rawstory.com | 199 |
| lifenews.com | 156 |
| pjmedia.com | 140 |
| americanthinker.com | 133 |
| thepoliticalinsider.com | 111 |
| thegatewaypundit.com | 105 |
| barenakedislam.com | 48 |
| michaelsavage.com | 15 |
| theblacksphere.net | 9 |
| truepundit.com | 8 |
| 100percentfedup.com | 5 |
| bigleaguepolitics.com | 3 |
| libertyheadlines.com | 2 |
| ussanews.com | 2 |
| gellerreport.com | 1 |
| truthfeednews.com | 1 |

Different keywords also generate different kinds of placement over the 39-month period. Table 2 (see Appendix) provides a sample list of up to ten keywords from each junk news domain in the sample when the keyword reached the first position.

First, many junk news domains appear in the first position on Google Search as a result of “navigational searches”, whereby a user enters a query with the intent of finding a particular website. A search for a specific brand of junk news could happen naturally for many users, since the Google Search function is built into the address bar in Chrome and is sometimes set as the default search engine for other browsers. In particular, terms like “infowars”, “breitbart”, “cnsnews”, and “rawstory” were navigational keywords users typed into Google Search. The performance of brand searches over time consistently places junk news webpages in the number one position (see Figure 3: Brand-related keywords over time). This suggests that brand recognition plays an important role in driving traffic to junk news domains.

Figure 3: The performance of brand-related keywords over time: top-5 junk news websites (January 2016-March 2019)

There is one outlier in this analysis: keyword searches for “breitbart” dropped to position two in January 2017 and September 2017. This drop could have been a result of mainstream media coverage of Steve Bannon assuming (and eventually leaving) his position as the White House Chief Strategist during those respective months. The fact that navigational searches are one of the main drivers behind generating a top ten placement on Search suggests that junk news websites rely heavily on developing a recognisable brand and a dedicated readership that actively seeks out content from these websites. However, this also demonstrates that a complicated set of factors goes into determining which keywords from which websites make the top placement in Google Search, and that coverage of news events by mainstream professional news outlets can alter the discoverability of junk news via Search.
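One simple way to operationalise the distinction between navigational and other queries is to treat a keyword as navigational when it contains the domain's brand token. This heuristic is an illustration, not the method used to produce the figures above:

```python
def brand_token(domain):
    """'breitbart.com' -> 'breitbart'."""
    return domain.split(".")[0].lower()

def split_navigational(domain, keywords):
    """Separate keywords containing the brand token from all others."""
    brand = brand_token(domain)
    nav = [k for k in keywords if brand in k.lower().replace(" ", "")]
    other = [k for k in keywords if k not in nav]
    return nav, other

nav, other = split_navigational(
    "breitbart.com", ["breitbart", "breitbart news", "gun control myths"]
)
print(nav)    # ['breitbart', 'breitbart news']
print(other)  # ['gun control myths']
```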

Second, many keywords that made it to the top position in Google Search results are what Golebiewski and boyd (2018) would call terms that filled “data voids”: gaps in search engine queries where there is limited authoritative information about a particular issue. These keywords tended to focus on conspiratorial information, especially around President Barack Obama (“Obama homosexual” or “stop Barack Obama”), gun rights (“gun control myths”), pro-life narratives (“anti-abortion quotes” or “fetus after abortion”), and xenophobic or racist content (“against Islam” or “Mexicans suck”). Unlike brand-related keywords, problematic search terms did not achieve a consistently high placement on Google Search over the 39-month period. Keywords that ranked number one for more than 30 months include: “vz58 vs. ak47”, “feminizing uranium”, “successful people with down syndrome”, “google ddrive”, and “westboro[sic] Baptist church tires slashed”. This suggests that, for the most part, data voids are either being filled by more authoritative sources, or Google Search has been able to demote websites attempting to generate pseudo-organic engagement via SEO.

The performance of junk news domains on Google Search

After analysing which keywords place junk news websites in the number one position, the second half of my analysis looks at larger trends in SEO strategies over time. What is the relationship between organic clicks and the value of a junk news website? How has the effectiveness of SEO keywords changed over the past 48 months? And have changes made by Google to combat the spread of junk news on Search had an impact on its discoverability?

Junk news, organic clicks, and the value of the domain

There is a close relationship between the number of clicks a domain receives and the estimated value of that domain. Comparing figures 4 and 5 shows that the more clicks a website receives, the higher its estimated value. A domain is generally considered more valuable when it generates large amounts of traffic, because advertisers see it as an opportunity to reach more people. Thus, the higher the value of a domain, the more likely it is to generate revenue for the operator. The median estimated value of the top-29 most popular junk news domains was $5,160 USD during the month of the 2016 presidential election, $1,666.65 USD during the 2018 State of the Union, and $3,906.90 USD during the 2018 midterm elections. Infowars.com and breitbart.com were the two highest performing junk news domains in terms of clicks and domain value. While breitbart.com maintained a more stable readership, especially around the 2016 US presidential election and the 2018 US State of the Union Address, its estimated organic click rate has steadily decreased since early 2018. In contrast, infowars.com has a more volatile readership. The spikes in clicks to infowars.com could be explained by media coverage of the website, including the defamation case filed in April 2018 against Alex Jones, who claimed the shooting at Sandy Hook Elementary School was “completely fake” and a “giant hoax”. Since then, several internet companies — including Apple, Twitter, Facebook, Spotify, and YouTube — have banned Infowars from their platforms, and the domain has not been able to regain its clicks or value since. This demonstrates the powerful role platforms play in not only making content visible to users, but also controlling who can grow their website value — and ultimately generate revenue — from the content they produce and share online.

Figure 4: Estimated organic value for the top 29 junk news domains (May 2015 – March 2019)
Figure 5: Estimated organic clicks for the top 29 junk news domains (May 2015-April 2019)
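The clicks-value relationship described above can be quantified with a Pearson correlation over the monthly series. The figures below are illustrative placeholders, not the actual SpyFu data:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative monthly series (not the actual SpyFu figures):
clicks = [120_000, 150_000, 90_000, 200_000, 170_000]  # estimated organic clicks
value = [4_000, 5_100, 3_200, 6_800, 5_900]            # estimated value, USD

print(round(pearson(clicks, value), 3))
```

A coefficient near 1 would reflect the pattern in figures 4 and 5, where months with more organic clicks coincide with higher estimated domain value.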

Junk news domains, search discoverability and Google’s response to disinformation

Figure 6 shows the estimated organic results of the top 29 junk news domains over time. The estimated organic results are the number of keywords that would organically appear in Google searches. Since August 2017, there has been a sharp decline in the number of such keywords. The four top-performing junk news websites (infowars.com, zerohedge.com, dailycaller.com, and breitbart.com) all appeared less frequently in top positions on Google Search based on the keywords they were optimising for. This finding suggests that the changes Google made to its search algorithm did indeed have an impact on the discoverability of junk news domains after August 2017. In comparison, other professional news sources (washingtonpost.com, nytimes.com, foxnews.com, nbcnews.com, bloomberg.com, bbc.co.uk, wsj.com, and cnn.com) did not see substantial drops in their search visibility during this timeframe (see Figure 7). In fact, after August 2017 there was a gradual increase in the organic results of mainstream news media.

Figure 6: Estimated organic results for the top 29 junk news domains (May 2015- April 2019)
Figure 7: Estimated organic results for mainstream media websites in the United States (May 2015-April 2019)
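The kind of decline described above can be summarised by comparing mean monthly organic results before and after the algorithm change. The series below is illustrative, not the actual data behind figure 6:

```python
from datetime import date
from statistics import mean

# Illustrative monthly series of estimated organic results:
series = {
    date(2017, 5, 1): 52_000,
    date(2017, 6, 1): 50_500,
    date(2017, 7, 1): 51_200,
    date(2017, 9, 1): 31_000,
    date(2017, 10, 1): 28_400,
    date(2017, 11, 1): 27_900,
}
cutoff = date(2017, 8, 1)  # approximate date of the algorithm change

before = mean(v for d, v in series.items() if d < cutoff)
after = mean(v for d, v in series.items() if d >= cutoff)
drop = (before - after) / before
print(f"{drop:.0%} decline in mean organic results after the cutoff")
```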

After almost a year, the top-performing junk news websites have regained some of their organic results, but the levels are not nearly as high as they were in the lead-up to the 2016 presidential election. This demonstrates the power of Google’s algorithmic changes in limiting the discoverability of junk news on Search. But it also shows how junk news producers learn to adapt their strategies in order to extend the visibility of their content. In order to be effective at limiting the visibility of bad information via Search, Google must continue to monitor the keywords and optimisation strategies these domains deploy — especially in the lead-up to elections — when more people will naturally be searching for news and information about politics.

Conclusion

The spread of junk news on the internet, and its impact on democracy, has become a growing field of academic inquiry. This paper has examined a small subset of this phenomenon: the role of Google Search in assisting the discoverability and monetisation of junk news domains. By looking at the techno-commercial infrastructure that junk news producers use to optimise their websites for paid and pseudo-organic clicks, I found:

  1. Junk news domains do not rely on Google advertisements to grow their audiences, focusing their efforts instead on optimisation and keyword strategies;
  2. Navigational searches drive the most traffic to junk news websites, and data voids are used to grow the discoverability of junk news content to mostly small, but varying, degrees;
  3. Many junk news producers place advertisements on their websites and grow their value particularly around important political events; and
  4. Over time, the SEO strategies used by junk news domains have decreased in their ability to generate top placements in Google Search.

For millions of people around the world, the information Google Search recommends directly shapes how ideas and opinions about politics are formulated. The powerful role of Google as an information gatekeeper has meant that bad actors have tried to subvert these technical systems for political or economic gain. For quite some time, Google’s algorithms have come under attack by spammers and other malign actors who wish to spread disinformation, conspiracy theories, spam, and hate speech to unsuspecting users. The rise of “computational propaganda” and the variety of bad actors exploiting technology to influence political outcomes has also led to the manipulation of Search. Google’s response to the optimisation strategies used by junk news domains has had a positive effect on limiting the discoverability of these domains over time. However, the findings of this paper also show an upward trend, as junk news producers find new ways to optimise their content for higher search rankings. This game of cat and mouse is one that will continue for the foreseeable future.

While it is hard to reduce the visibility of junk news domains when individuals actively search for them, more can be done to limit the ways in which bad actors might try to optimise content to generate pseudo-organic engagement, especially around disinformation. Google can certainly do more to tweak its algorithms in order to demote known disinformation sources, as well as identify and limit the discoverability of content seeking to exploit data voids. However, there is no straightforward technical patch that Google can implement to stop various actors from trying to game their systems. By co-opting the technical infrastructure and policies that enable search, the producers of junk news are able to spread disinformation — albeit to small audiences who might use obscure search terms to learn about a particular topic.

There have also been growing pressures on regulators to force social media platforms to take greater action to limit the spread of disinformation online. But the findings of this paper hold two important lessons for policymakers. First, the disinformation problem — through both optimisation and advertising — on Google Search is not as dramatic as it is sometimes portrayed. Most of the traffic to junk news websites comes from users performing navigational searches to find specific, well-known brands. Only a limited number of placements — and clicks — come from pseudo-organic engagement generated by data voids and other problematic keyword searches. Thus, requiring Google to take a heavy-handed approach to content moderation could do more harm than good, and might not reflect the severity of the problem. Second, the reasons why disinformation spreads on Google reflect deeper systemic problems within democracies: growing levels of polarisation and distrust in the mainstream media are pushing citizens to fringe and highly partisan sources of news and information. Any solution to the spread of disinformation on Google Search will require thinking about media and digital literacy, and about programmes to strengthen, support, and sustain professional journalism.

References

Allcott, H., & Gentzkow, M. (2017). Social Media and Fake News in the 2016 Election. Journal of Economic Perspectives, 31(2), 211–236. https://doi.org/10.1257/jep.31.2.211

Barrett, B., & Kreiss, D. (2019). Platform transience: Changes in Facebook’s policies, procedures, and affordances in global electoral politics. Internet Policy Review, 8(4). https://doi.org/10.14763/2019.4.1446

Bradshaw, S., Howard, P., Kollanyi, B., & Neudert, L.-M. (2019). Sourcing and Automation of Political News and Information over Social Media in the United States, 2016-2018. Political Communication. https://doi.org/10.1080/10584609.2019.1663322

Bradshaw, S., & Howard, P. N. (2018). Why does Junk News Spread So Quickly Across Social Media? Algorithms, Advertising and Exposure in Public Life [Working Paper]. Miami: Knight Foundation. Retrieved from https://kf-site-production.s3.amazonaws.com/media_elements/files/000/000/142/original/Topos_KF_White-Paper_Howard_V1_ado.pdf

Bradshaw, S., Neudert, L.-M., & Howard, P. (2018). Government Responses to the Malicious Use of Social Media. NATO.

Burkell, J., & Regan, P. (2019). Voting Public: Leveraging Personal Information to Construct Voter Preference. In N. Witzleb, M. Paterson, & J. Richardson (Eds.), Big Data, Privacy and the Political Process. London: Routledge.

Cadwalladr, C. (2016a, December 4). Google, democracy and the truth about internet search. The Observer. Retrieved from https://www.theguardian.com/technology/2016/dec/04/google-democracy-truth-internet-search-facebook

Cadwalladr, C. (2016b, December 11). Google is not ‘just’ a platform. It frames, shapes and distorts how we see the world. The Guardian. Retrieved from https://www.theguardian.com/commentisfree/2016/dec/11/google-frames-shapes-and-distorts-how-we-see-world

Chester, J., & Montgomery, K. (2019). The digital commercialisation of US politics—2020 and beyond. Internet Policy Review, 8(4). https://doi.org/10.14763/2019.4.1443

Dean, B. (2019). Google’s 200 Ranking Factors: The Complete List (2019). Retrieved April 18, 2019, from Backlinko website: https://backlinko.com/google-ranking-factors

Desigaud, C., Howard, P. N., Kollanyi, B., & Bradshaw, S. (2017). Junk News and Bots during the French Presidential Election: What are French Voters Sharing Over Twitter In Round Two? [Data Memo No. 2017.4]. Oxford: Project on Computational Propaganda, Oxford University. Retrieved May 19, 2017, from http://comprop.oii.ox.ac.uk/wp-content/uploads/sites/89/2017/05/What-Are-French-Voters-Sharing-Over-Twitter-Between-the-Two-Rounds-v7.pdf

European Commission. (2018). A multi-dimensional approach to disinformation: report of the independent high-level group on fake news and online disinformation. Luxembourg: European Commission.

Evangelista, R., & Bruno, F. (2019). WhatsApp and political instability in Brazil: Targeted messages and political radicalization. Internet Policy Review, 8(4). https://doi.org/10.14763/2019.4.1435

Farkas, J., & Schou, J. (2018). Fake News as a Floating Signifier: Hegemony, Antagonism and the Politics of Falsehood. Journal of the European Institute for Communication and Culture, 25(3), 298–314. https://doi.org/10.1080/13183222.2018.1463047

Gillespie, T. (2012). The Relevance of Algorithms. In T. Gillespie, P. J. Boczkowski, & K. Foot (Eds.), Media Technologies: Essays on Communication, Materiality and Society (pp. 167–193). Cambridge, MA: The MIT Press. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.692.3942&rep=rep1&type=pdf

Golebiewski, M., & Boyd, D. (2018). Data voids: where missing data can be easily exploited. Retrieved from Data & Society website: https://datasociety.net/wp-content/uploads/2018/05/Data_Society_Data_Voids_Final_3.pdf

Google. (2019). How Google Search works: Search algorithms. Retrieved April 17, 2019, from https://www.google.com/intl/en/search/howsearchworks/algorithms/

Granka, L. A. (2010). The Politics of Search: A Decade Retrospective. The Information Society, 26(5), 364–374. https://doi.org/10.1080/01972243.2010.511560

Guo, L., & Vargo, C. (2018). “Fake News” and Emerging Online Media Ecosystem: An Integrated Intermedia Agenda-Setting Analysis of the 2016 U.S. Presidential Election. Communication Research. https://doi.org/10.1177/0093650218777177

Hedman, F., Sivnert, F., Kollanyi, B., Narayanan, V., Neudert, L. M., & Howard, P. N. (2018, September 6). News and Political Information Consumption in Sweden: Mapping the 2018 Swedish General Election on Twitter [Data Memo No. 2018.3]. Oxford: Project on Computational Propaganda, Oxford University. Retrieved from https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2018/09/Hedman-et-al-2018.pdf

Hoffmann, S., Taylor, E., & Bradshaw, S. (2019, October). The Market of Disinformation. [Report]. Oxford: Oxford Information Labs; Oxford Technology & Elections Commission, University of Oxford. Retrieved from https://oxtec.oii.ox.ac.uk/wp-content/uploads/sites/115/2019/10/OxTEC-The-Market-of-Disinformation.pdf

Howard, P., Ganesh, B., Liotsiou, D., Kelly, J., & Francois, C. (2018). The IRA and Political Polarization in the United States, 2012-2018 [Working Paper No. 2018.2]. Oxford: Project on Computational Propaganda, Oxford University. Retrieved from https://comprop.oii.ox.ac.uk/research/ira-political-polarization/

Howard, P. N., Bolsover, G., Kollanyi, B., Bradshaw, S., & Neudert, L.-M. (2017). Junk News and Bots during the U.S. Election: What Were Michigan Voters Sharing Over Twitter? [Data Memo No. 2017.1]. Oxford: Project on Computational Propaganda, Oxford University. Retrieved from http://comprop.oii.ox.ac.uk/2017/03/26/junk-news-and-bots-during-the-u-s-election-what-were-michigan-voters-sharing-over-twitter/

Howard, P. N., Kollanyi, B., Bradshaw, S., & Neudert, L.-M. (2017). Social Media, News and Political Information during the US Election: Was Polarizing Content Concentrated in Swing States? [Data Memo No. 2017.8]. Oxford: Project on Computational Propaganda, Oxford University. Retrieved from http://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2017/09/Polarizing-Content-and-Swing-States.pdf

Introna, L., & Nissenbaum, H. (2000). Shaping the Web: Why the Politics of Search Engines Matters. The Information Society, 16(3), 169–185. https://doi.org/10.1080/01972240050133634

Kaminska, M., Galacher, J. D., Kollanyi, B., Yasseri, T., & Howard, P. N. (2017). Social Media and News Sources during the 2017 UK General Election. [Data Memo No. 2017.6]. Oxford: Project on Computational Propaganda, Oxford University. Retrieved from https://www.oii.ox.ac.uk/blog/social-media-and-news-sources-during-the-2017-uk-general-election/

Kim, Y. M., Hsu, J., Neiman, D., Kou, C., Bankston, L., Kim, S. Y., … Raskutti, G. (2018). The Stealth Media? Groups and Targets behind Divisive Issue Campaigns on Facebook. Political Communication, 35(4), 515–541. https://doi.org/10.1080/10584609.2018.1476425

Lakoff, G. (2018, January 2). Trump uses social media as a weapon to control the news cycle. Retrieved from https://twitter.com/GeorgeLakoff/status/948424436058791937

Lazer, D. M. J., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., … Zittrain, J. L. (2018). The science of fake news. Science, 359(6380), 1094–1096. https://doi.org/10.1126/science.aao2998

Lewis, P. & Hilder, P. (2018, March 23). Leaked: Cambridge Analytica’s Blueprint for Trump Victory. The Guardian. Retrieved from: https://www.theguardian.com/uk-news/2018/mar/23/leaked-cambridge-analyticas-blueprint-for-trump-victory

Machado, C., Kira, B., Hirsch, G., Marchal, N., Kollanyi, B., Howard, P. N., … Barash, V. (2018). News and Political Information Consumption in Brazil: Mapping the First Round of the 2018 Brazilian Presidential Election on Twitter [Data Memo No. 2018.4]. Oxford: Project on Computational Propaganda, Oxford University. Retrieved from http://blogs.oii.ox.ac.uk/comprop/wp-content/uploads/sites/93/2018/10/machado_et_al.pdf

McKelvey, F. (2019). Cranks, Clickbaits and Cons: On the acceptable use of political engagement platforms. Internet Policy Review, 8(4). https://doi.org/10.14763/2019.4.1439

Metaxa-Kakavouli, D., & Torres-Echeverry, N. (2017). Google’s Role in Spreading Fake News and Misinformation. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3062984

Metaxas, P. T. (2010). Web Spam, Social Propaganda and the Evolution of Search Engine Rankings. In J. Cordeiro & J. Filipe (Eds.), Web Information Systems and Technologies (Vol. 45, pp. 170–182). https://doi.org/10.1007/978-3-642-12436-5_13

Nagy, P., & Neff, G. (2015). Imagined Affordance: Reconstructing a Keyword for Communication Theory. Social Media + Society, 1(2). https://doi.org/10.1177/2056305115603385

Narayanan, S., & Kalyanam, K. (2015). Position Effects in Search Advertising and their Moderators: A Regression Discontinuity Approach. Marketing Science, 34(3), 388–407. https://doi.org/10.1287/mksc.2014.0893

Neff, G., & Nagy, P. (2016). Talking to Bots: Symbiotic Agency and the Case of Tay. International Journal of Communication, 10, 4915–4931. Retrieved from https://ijoc.org/index.php/ijoc/article/view/6277

Neudert, L.-M., Howard, P., & Kollanyi, B. (2017). Junk News and Bots during the German Federal Presidency Election: What Were German Voters Sharing Over Twitter? [Data Memo 2 No. 2017.2]. Oxford: Project on Computational Propaganda, Oxford University. Retrieved from http://comprop.oii.ox.ac.uk/wp-content/uploads/sites/89/2017/03/What-Were-German-Voters-Sharing-Over-Twitter-v6-1.pdf

Nielsen, R. K., & Graves, L. (2017). “News you don’t believe”: Audience perspectives on fake news. Oxford: Reuters Institute for the Study of Journalism, University of Oxford. Retrieved from https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2017-10/Nielsen&Graves_factsheet_1710v3_FINAL_download.pdf

Noble, S. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press.

Pan, B., Hembrooke, H., Joachims, T., Lorigo, L., Gay, G., & Granka, L. (2007). In Google We Trust: Users’ Decisions on Rank, Position, and Relevance. Journal of Computer-Mediated Communication, 12(3), 801–823. https://doi.org/10.1111/j.1083-6101.2007.00351.x

Pasquale, F. (2015). The Black Box Society. Cambridge: Harvard University Press.

Persily, N. (2017). The 2016 U.S. Election: Can Democracy Survive the Internet? Journal of Democracy, 28(2), 63–76. https://doi.org/10.1353/jod.2017.0025

Schroeder, R. (2014). Does Google shape what we know? Prometheus, 32(2), 145–160. https://doi.org/10.1080/08109028.2014.984469

Silverman, C. (2016, November 16). This Analysis Shows How Viral Fake Election News Stories Outperformed Real News On Facebook. Buzzfeed. Retrieved July 25, 2017 from https://www.buzzfeed.com/craigsilverman/viral-fake-election-news-outperformed-real-news-on-facebook

SpyFu. (2019). SpyFu - Competitor Keyword Research Tools for AdWords PPC & SEO. Retrieved April 19, 2019, from https://www.spyfu.com/

Star, S. L., & Ruhleder, K. (1994). Steps Towards an Ecology of Infrastructure: Complex Problems in Design and Access for Large-scale Collaborative Systems. Proceedings of the 1994 ACM Conference on Computer Supported Cooperative Work, 253–264. New York: ACM.

Tambini, D., Anstead, N., & Magalhães, J. C. (2017, June 6). Labour’s advertising campaign on Facebook (or “Don’t Mention the War”) [Blog Post]. Retrieved April 11, 2019, from Media Policy Blog website: http://blogs.lse.ac.uk/mediapolicyproject/

Tandoc, E. C., Lim, Z. W., & Ling, R. (2018). Defining “Fake News”: A typology of scholarly definitions. Digital Journalism, 6(2). https://doi.org/10.1080/21670811.2017.1360143

Tucker, J. A., Guess, A., Barberá, P., Vaccari, C., Siegel, A., Sanovich, S., Stukal, D., & Nyhan, B. (2018, March). Social Media, Political Polarization, and Political Disinformation: A Review of the Scientific Literature [Report]. Menlo Park: William and Flora Hewlett Foundation. Retrieved from https://eprints.lse.ac.uk/87402/1/Social-Media-Political-Polarization-and-Political-Disinformation-Literature-Review.pdf

Vaidhyanathan, S. (2011). The Googlization of Everything (First edition). Berkeley: University of California Press.

Vargo, C. J., Guo, L., & Amazeen, M. A. (2018). The agenda-setting power of fake news: A big data analysis of the online media landscape from 2014 to 2016. New Media & Society, 20(5), 2028–2049. https://doi.org/10.1177/1461444817712086

Bashyakarla, V. (2019). Towards a holistic perspective on personal data and the data-driven election paradigm. Internet Policy Review, 8(4). Retrieved from https://policyreview.info/articles/news/towards-holistic-perspective-personal-data-and-data-driven-election-paradigm/1445

Wardle, C. (2017, February 16). Fake news. It’s complicated. First Draft News. Retrieved July 20, 2017, from https://firstdraftnews.com:443/fake-news-complicated/

Wardle, C., & Derakhshan, H. (2017). Information Disorder: Toward an interdisciplinary framework for research and policy making [Report No. DGI(2017)09]. Strasbourg: Council of Europe. Retrieved from https://rm.coe.int/information-disorder-report-november-2017/1680764666

Winner, L. (1980). Do Artifacts Have Politics? Daedalus, 109(1), 121–136. Retrieved from http://www.jstor.org/stable/20024652

Appendix 1

Junk news seed list (Computational Propaganda Project’s top-29 junk news domains from the 2018 US midterm elections).

www.americanthinker.com, www.barenakedislam.com, www.breitbart.com, www.cnsnews.com, www.dailywire.com, www.infowars.com, www.libertyheadlines.com, www.lifenews.com, www.rawstory.com, www.thegatewaypundit.com, www.truepundit.com, www.zerohedge.com, 100percentfedup.com, bigleaguepolitics.com, committedconservative.com, dailycaller.com, davidharrisjr.com, gellerreport.com, michaelsavage.com, newrightnetwork.com, pjmedia.com, reverbpress.news, theblacksphere.net, thedailydigest.org, thefederalist.com, ussanews.com, theoldschoolpatriot.com, thepoliticalinsider.com, truthfeednews.com.

Appendix 2

Table 2: A sample list of up to ten keywords from each junk news domain in the sample when the keyword reached the first position.

100percentfedup.com: gruesome videos (6), snopes exposed (5), gruesome video (4), teendreamers (2), bush cheney inauguration (2)

dailywire.com: states bankrupt (22), ms 13 portland oregon (15), the gadsen flag (12), f word on tv (12), against gun control facts (10), end of america 90 (9), racist blacks (8), associates clinton (8), diebold voting machine (8), diebold machines (8)

theblacksphere.net: black sphere (28), dwayne johnson gay (10), george soros private security (1), bombshell barack (1), madame secretary (1), head in vagina (1), mexicans suck (1), obama homosexual (1), comments this (1)

americanthinker.com: medienkritic (23), problem with taxes (22), janet levy (19), article on environmental protection (18), maya angelou criticism (18), supply and demand articles 2011 (17), ezekiel emanuel complete lives system (16), articles on suicide (12), American Thinker Coupons (11), truth about obama (10)

thefederalist.com: the federalist (39), federalist (30), gun control myths (26), considering homeschooling (23), why wont it work technology (22), debate iraq war (21), lesbian children (20), why homeschooling (19), home economics course (18), iraq war debate (17)

gellerreport.com: geller report (1)

infowars.com: www infowars (39), infowars com (39), info wars (39), infowars (39), www infowars com (39), al-qaeda 100 pentagon run (38), info war today (35), war info (34), infowars moneybomb (34), feminizing uranium (33)

barenakedislam.com: berg beheading video (11), against islam (11), beheadings (10), iraquis beheaded (10), muslim headgear (8), torture clips (7), los angeles islam pictures (7), beheaded clips (7), berg video (7), hostages beheaded (6)

thegatewaypundit.com: thegatewaypundit.com (39), civilian national security force (10), safe school czar (8), hillary clinton weight gain 2011 (8), RSS Pundit (7), hillary clinton weight gain (7), all perhaps hillary (4), hillary clinton gained weight (4), london serendip i tea camp (4), whoa it (4)

libertyheadlines.com: accusers dod (2), liberty security guard bucks country (1)

lifenews.com: successful people with down syndrome (39), life news (35), lifenews.com (35), fetus after abortion (26), anti abortion quotes (21), pro life court cases (17), rescuing hug (16), process of aborting a baby (15), different ways to abort a baby (14), adoption waiting list statistics (14)

bigleaguepolitics.com: habermans (1), fbi whistleblower (1), ron paul supporters (1)

thepoliticalinsider.com: obama blames (19), michael moore sucks (14), marco rubio gay (11), weapons mass destruction iraq (10), weapons of mass destruction found (10), wmd iraq (10), obama s plan (9), chuck norris gay (9), how old is bill clinton (8), stop barack obama (7)

breitbart.com: big journalism (39), big government breitbart (39), breitbart blog (39), www.breitbart.com (39), big hollywood (39), breitbart hollywood (39), breitbart.com (39), big hollywood blog (39), big government blog (39), breitbart big hollywood (39)

michaelsavage.com: www michaelsavage com (19), michaelsavage com (19), michaelsavage (18), michael savage com (18), michaelsavage radio (17), michael savage (17), savage nation (15), michael savage nation (14), michael savage savage nation (13), the savage nation (12)

truepundit.com: john kerrys daughter (8), john kerrys daughters (5), sex email (2), poverty warrior (2), john kerry daughter (1), RSS Pundit (1), whistle new (1), pay to who (1)

cnsnews.com: cns news (39), cnsnews (39), conservative news service (39), christian news service (21), cns (20), major corporations (20), billy graham daughter (18), taxing the internet (17), pashtun sexuality (15), record tax (15)

pjmedia.com: belmont club (39), belmont club blog (39), pajamas media (39), dr helen (38), instapundit blog (38), instapundit (33), pj media (33), instapundit. (32), google ddrive (28), instapundits (27)

truthfeednews.com: nfl.comm (5)

dailycaller.com: the daily caller (37), vz 58 vs ak 47 (33), condition black (28), patriot act changes (26), 12 hour school (25), common core stories (25), courtroom transcript (23), why marijuana shouldnt be legal (22), why we shouldnt legalize weed (22), why shouldnt marijuana be legalized (22)

ussanews.com: imigration expert (2), meabolic syndrome (1)

zerohedge.com: zero hedge (33), unempolyment california (24), hayman capital letter (24), dennis gartman performance (24), the real barack obama (23), meredith whitney blog (22), weaight watchers (22), 0hedge (22), doug kass predictions (19), usa hyperinflation (17)

rawstory.com: the raw story (39), raw story (39), rawstory (39), rawstory.com (39), westboro baptist church tires slashed (35), the raw (25), mormons in porn (22), norm colemans teeth (19), xe services sold (18), duggers (17)

Footnotes

1. Organic engagement is used to describe authentic user engagement, where an individual might click a website or link without being prompted. This is different from "transactional engagement", where a user engages with content through prompting via paid advertising. In contrast, I use the term “pseudo-organic engagement” to capture the idea that SEO practitioners are generating clicks through the manipulation of keywords that move websites closer to the top of search engine rankings. An important aspect of pseudo-organic engagement is that these results are indistinguishable from those that have “earnt” their search ranking, meaning users may be more likely to treat the source as authoritative despite the fact that their ranking has been manipulated.

2. It is important to note that AdWords purchases can also be displayed on affiliate websites. These “display ads” appear on websites and generate revenue for the website operator.

3. For the US presidential election, 19.53 million tweets were collected between 1 November 2016 and 9 November 2016; for the State of the Union Address, 2.26 million tweets were collected between 24 January 2018 and 30 January 2018; and for the 2018 US midterm elections, 2.5 million tweets were collected between 21 and 30 September 2018, and 6,986 Facebook groups were collected between 29 September 2018 and 29 October 2018. For more information see Bradshaw et al., 2019.

4. Elections include: 2016 United States presidential election, 2017 French presidential election, 2017 German federal election, 2017 Mexican presidential election, 2018 Brazilian presidential election, and the 2018 Swedish general election.

Data-driven political campaigns in practice: understanding and regulating diverse data-driven campaigns


This paper is part of Data-driven elections, a special issue of Internet Policy Review guest-edited by Colin J. Bennett and David Lyon.

Introduction

Data has become an important part of how we understand political campaigns. In reviewing coverage of elections – particularly in the US – the idea that political parties and campaigners now utilise data to deliver highly targeted, strategic and successful campaigns is readily found. In academic and non-academic literature, it has been argued that “[i]n countries around the world political parties have built better databases, integrated online and field data, and created more sophisticated analytic tools to make sense of these traces of the electorate” (Kreiss and Howard, 2010, p. 1; see also in t’Veld, 2017, pp. 2-3). These tools are reported to allow voters to “be monitored and targeting continuously and in depth, utilising methods intricately linked with and drawn from the commercial sector and the vast collection of personal and individual data” (Kerr Morrison, Naik, and Hankey, 2018, p. 11). The Trump campaign in 2016 is accordingly claimed to have “target[ed] 13.5 million persuadable voters in sixteen battleground states, discovering the hidden Trump voters, especially in the Midwest” (Persily, 2017, p. 65). On the basis of such accounts, it appears that data-driven campaigning is coming to define electoral practice – especially in the US – and is now key to understanding modern campaigns.

Yet, at the same time, important questions have been raised about the sophistication and uptake of data-driven campaign tools. As Baldwin-Philippi (2017) has argued, there are certain “myths” about data-driven campaigning. Studying campaigning practices, Baldwin-Philippi has shown that “all but the most sophisticated digital and data-driven strategies are imprecise and not nearly as novel as the journalistic feature stories claim” (2017, p. 627). Hersh (2015) has also shown that the data that parties possess about voters is not fine-grained, and tends to be drawn from public records that contain certain standardised information. Moreover, Bennett has highlighted the significant incentive that campaign consultants and managers have to emphasise the sophistication and success of their strategies, suggesting that campaigners may not be offering an accurate account of current practices (2016, p. 261; Kreiss and McGregor, 2018).

These competing accounts raise questions about the nature of data-driven campaigning and the extent to which common practices in data use are found around the globe. These ideas are conceptually important for our understanding of developments in campaigning, but they also have significance for societal responses to the practice of data-driven campaigning. With organisations potentially adopting different data-driven campaigning practices it is important to ask which forms of data use are seen to be democratically acceptable or problematic. 1 These questions are particularly important given the recent interest from international actors and politicians in understanding and responding to the use of data analytics (Information Commissioner's Office, 2018a), and specifically practices at Facebook (Kang et al., 2018). Despite growing pressure from these actors to curtail problematic data-driven campaigning practices, it is as yet unclear precisely what is unacceptable and how prevalent these practices are in different organisations and jurisdictions. For these reasons there is a need to understand more about data-driven campaigning.

To generate this insight, in this article I pose the question: “what practices characterise data-driven campaigning?” and develop a comparative analytical framework that can be used to understand, map and consider responses to data-driven campaigning. Identifying three facets of this question, I argue that there can be variations in who is using data in campaigns, what the sources of data are, and how data informs communication in campaigns. Whilst not exhaustive, these questions and the categories they inspire are used to outline the diverse practices that constitute data-driven campaigning within single and different organisations in different countries. It is argued that our understanding of who, what and how data is being used is critical to debates around the democratic acceptability of data-driven campaigning and provides essential insights required when contemplating a regulatory response.

This analysis and the frameworks it inspires have been developed following extensive analysis of the UK case. Drawing on a three-year project exploring the use of data-driven campaigning within political parties, the analysis discusses often overlooked variations in how data is used. In highlighting these origins I contend that these questions are not unique to the UK case, but can inspire analysis around the globe and in different organisations. Indeed, as I will discuss below, this form of inquiry is to be encouraged as comparative analysis makes it possible to explore how different legal, institutional and cultural contexts affect data-driven campaigning practices. Furthermore, analysis of different kinds of organisation makes it possible to understand the extent to which party practices are unique. Although this article is therefore inspired by a particular context and organisational type, the questions and frameworks it provides can be used to unpack and map the diversity of data-driven campaigning practices, providing conceptual clarity able to inform a possible regulatory response.

Data and election campaigns

The relationship between data and election campaigns is well established, particularly in the context of political parties. Describing the focus of party campaigning, Dalton, Farrell and McAllister (2013) outline the longstanding interest parties have in collecting data that can be analysed to (attempt to) achieve electoral success. In their account, “candidates and party workers meet with individual voters, and develop a list of people’s voting preferences. Then on election day a party worker knocks on the doors of prospective supporters at their homes to make sure they cast their ballot and often offers a ride to the polls if needed” (p. 56). Whilst parties in different contexts are subject to different regulations and norms that affect the data they can collect and use (Kreiss and Howard, 2010), it is common for them to be provided with information by the state about voters’ age, registered status and turnout history (Hersh, 2015). In addition, parties then tend to gather their own data about voter interests, voting preferences and degree of support, allowing them to build large data sets and email lists at national and local levels. Although regulated – most notably through the General Data Protection Regulation (GDPR), which outlines rules in Europe for how data can be collected, used and stored – parties’ use of data is often seen to be democratically permissible as it enables participation and promotes an informed citizenry.

In recent history, the use of data by parties is seen to have shifted significantly, making it unclear how campaigns are organised and whether they are engaging in practices that may not be democratically appropriate. In characterising these practices, two very different accounts of data use have emerged. On the one hand, scholars such as Gibson, Römmele and Williamson (2014) have argued that parties now adopt data-driven campaigns that “focus on mining social media platforms to improve their voter profiling efforts” (p. 127). From this perspective, parties are now often seen to be routinely using data to gain information, communicate and evaluate campaign actions.

In terms of information, it has been argued that data-driven campaigning draws on new sources of data (often from social media and online sources) to allow parties to search for patterns in citizens’ attitudes and behaviours. Aggregating data from many different sources at a level hitherto impossible, data-driven campaigning techniques are seen to allow parties to use techniques common in the commercial sector to “construct predictive models to make targeting campaign communications more efficient” (Nickerson and Rogers, 2014, p. 54; Castleman, 2016; Hersh, 2015, p. 28). Similarly, attention has been directed to the capacity to use algorithms to identify “look-alike audiences” (Tactical Tech, 2019, pp. 37-69), 2 allowing campaigners to find new supporters who possess the same attributes as those already pledged to a campaign (Kreiss, 2017, p. 5). Data-driven campaigning techniques are therefore seen to offer campaigns additional information with minimal investment of resources (as one data analyst becomes able to find as many target voters as an army of grassroots activists) (Dobber et al., 2017, p. 4).
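The predictive-modelling and "look-alike audience" logic described above can be made concrete with a minimal sketch: a classifier is fitted to voters whose support is already known, then used to score unknown voters by how closely they resemble existing supporters. Everything below (the attributes, the data, the training setup) is invented for illustration and does not describe any campaign's actual models.

```python
# Illustrative "look-alike" scoring: fit a classifier on voters with known
# labels, then rank unknown voters by resemblance to supporters.
# All data here is synthetic; real campaign models are proprietary and
# draw on far richer (and more contested) data sources.
import math
import random

random.seed(0)

def make_voter(supporter):
    # Two toy attributes: in this fake data, supporters skew older and more rural
    age = random.gauss(55 if supporter else 40, 10)
    rural = 1 if random.random() < (0.7 if supporter else 0.3) else 0
    return [age / 50, rural], supporter  # age rescaled for stable training

# Synthetic list of "pledged" voters with known labels
data = [make_voter(True) for _ in range(200)] + [make_voter(False) for _ in range(200)]

# Minimal logistic regression fitted by stochastic gradient descent
w, b, lr = [0.0, 0.0], 0.0, 0.05
for _ in range(200):
    for x, y in data:
        p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
        err = p - (1 if y else 0)
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

def lookalike_score(age, rural):
    """Probability-like score: resemblance to known supporters."""
    return 1 / (1 + math.exp(-(w[0] * age / 50 + w[1] * rural + b)))

# Rank two unknown voters; a campaign would target the higher-scoring one
print(lookalike_score(60, 1) > lookalike_score(30, 0))
```

The sketch illustrates the asymmetry of effort the paragraph describes: once fitted, the model scores any number of new records at negligible cost, which is how one data analyst can substitute for an army of grassroots activists.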

In addition, data-driven campaigning has facilitated targeted communication (Hersh, 2015, pp. 1-2), allowing particular messages to be conveyed to certain kinds of people. These capacities are seen to enable stratified campaign messaging, allowing personalised messages that can be delivered fast through cheap and easy-to-use online (and offline) interfaces. Data-driven campaigning has therefore been reported to allow campaigners to “allocate their finite resources more efficiently” (Bennett, 2016, p. 265), “revolutioniz[ing] the process” of campaigning (International IDEA, 2018, p. 7; Chester and Montgomery, 2017).

It has also been claimed that data-driven campaigning enables parties to evaluate campaign actions and gather feedback in a way previously not possible. Utilising message-testing techniques such as A/B testing, and monitoring response rates and social media metrics, campaigners are seen to be able to use data to analyse – in real time – the impact of campaign actions. Whether monitoring the effect of an email title on the likelihood that it is opened by recipients (Nickerson and Rogers, 2014, p. 57), or testing the wording that makes a supporter most likely to donate funds, data can be gathered and analysed by campaigns seeking to test whether their interventions work (Kreiss and McGregor, 2018, pp. 173-4; Kerr Morrison et al., 2018, p. 12; Tactical Tech, 2019). 3
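The message-testing logic described here, for instance A/B testing of an email subject line, can be sketched in a few lines. The counts below are invented, and the two-proportion z-test is simply one standard way to judge whether an observed difference in open rates exceeds what chance would explain; the literature cited above does not specify which statistical procedures particular campaigns actually use.

```python
# Sketch of an A/B test on email subject lines: recipients are split at random
# between two variants, open rates are compared, and a two-proportion z-test
# asks whether the difference is larger than chance alone would produce.
# All counts are hypothetical.
import math

def two_proportion_z(opens_a, sent_a, opens_b, sent_b):
    p_a, p_b = opens_a / sent_a, opens_b / sent_b
    p_pool = (opens_a + opens_b) / (sent_a + sent_b)  # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    return (p_a - p_b) / se

# Hypothetical trial: variant A opened 430/2000 times, variant B 370/2000
z = two_proportion_z(430, 2000, 370, 2000)
print(round(z, 2))  # 2.37 here: above 1.96, so unlikely to be chance at the 5% level
```

In practice, the campaign would then send the better-performing variant to the remainder of its list, repeating the cycle for each new message.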

These new capacities are often highlighted in modern accounts of campaigning and suggest that there has been significant and rapid change in the activities of campaigning organisations. Whilst prevalent, this idea has, however, been challenged by a small group of scholars who have offered a more sceptical account, arguing that “the rhetoric of data-driven campaigning and the realities of on-the-ground practices” are often misaligned (Baldwin-Philippi, 2017, p. 627).

The sceptical account

A number of scholars of campaign practice have questioned the idea that elections are characterised by data-driven campaigning and have highlighted a gulf between the rhetoric and reality of practices here. Nielsen, for example, has shown that whilst data-driven tools are available, campaigns continue to rely primarily on “mundane tools” (2010, p. 756) such as email to organise their activities. Hersh also found that, in practice, campaigns do not possess “accurate, detailed information about the preference and behaviours of voters” (2015, p. 11), but rely instead on relatively basic, publicly available data points. Similar observations led Baldwin-Philippi to conclude that the day-to-day reality of campaigning is “not nearly as novel as the journalistic feature stories claim” as “campaigns often do not execute analytic-based campaigning tactics as fully or rigorously as possible” (2017, p. 631). In part the gulf between possible and actual practice has emerged because parties – especially at a grassroots level – lack the capacity and expertise to utilise data-driven campaigning techniques (Ibid., p. 631). There is accordingly little evidence that parties are routinely using data to gain more information about voters, to develop new forms of targeted communication or to evaluate campaign interventions. Indeed, in a study of the UK, Anstead et al. found no evidence “that campaigns were seeking to send highly targeted but contradictory messages to would-be supporters”, with their study of Facebook advertisements showing that parties placed adverts that reflected “the national campaigns parties were running” (unpublished, p. 3).

Other scholars have also questioned the scale of data-use by highlighting the US-centric focus of much scholarship on political campaigns (Kruschinski and Haller, 2017; Dobber et al., 2017). Kreiss and Howard (2010) have highlighted important variations in campaign regulation that restrict the practices of data-driven campaigns (see also: Bennett, 2016). In this way, a study of German campaigning practices by Kruschinski and Haller (2017) highlights how regulation of data collection, consent and storage means that “German campaigners cannot build larger data-bases for micro-targeting” (p. 8). Elsewhere Dobber et al. (2017, p. 6) have highlighted how different electoral systems, regulatory systems and democratic cultures can inform the uptake of data-driven campaigning tools. This reveals that, whilst often discussed in universal terms, there are important country and party level variations that reflect different political, social and institutional contexts. 4 These differences are not, however, often highlighted in existing accounts of data-driven campaigning.

Reflecting on reasons for this gulf in rhetoric and practice, some attention has been directed to the incentives certain actors have to “sell” the sophistication and success of data-driven campaigning practices. For Bennett, political and technical consultants “are eager to tout the benefits of micro-targeting and data-driven campaigning, and to sell a range of software applications, for both database and mobile environments” (2016, p. 261). Indeed, with over 250 companies operating worldwide that specialise in the use of individual data in political campaigns (Kerr Morrison, Naik, and Hankey, 2018, p. 20), there is a clear incentive for many actors to “oversell” the gains to be achieved through the use of data-targeting tools (a behaviour Cambridge Analytica has, for example, been accused of). Whatever the causes of these diverging narratives, it is clear that our conceptual understanding of the nature of data-driven campaigning, and our empirical understanding of how extensively different practices are found, are underdeveloped. We therefore currently lack clear benchmarks against which to monitor the form and extent of data-driven campaigning.

These deficiencies in our current conceptualisation of data-driven campaigning are particularly important because there has been recent (and growing) attention paid to the need to regulate data-use in campaigns. Indeed, around the globe calls for regulation have been made citing concerns about the implications of data-driven campaigning for privacy, political debate, transparency and social fragmentation (Dobber et al, 2017, p. 2). In the UK context, for example, the Information Commissioner, Elizabeth Denham, launched an inquiry into the use of data analytics for political purposes by proclaiming:

[w]hat we're looking at here, and what the allegations have been about, is mashing up, scraping, using large amounts of personal data, online data, to micro target or personalise or segment the delivery of the messages without individuals' knowledge. I think the allegation is that fair practices and fair democracy is under threat if large data companies are processing data in ways that are invisible to the public (quoted in Haves, 2018, pp. 2-3).

Similar concerns have been raised by the Canadian Standing Committee on Access to Information, Privacy and Ethics, the US Senate Select Committee on Intelligence, and by international bodies such as the European Commission. These developments are particularly pertinent because the conceptual and empirical ambiguities highlighted above make it unclear which data-driven campaign practices are problematic, and how extensively they are in evidence.

It is against this backdrop that I argue there is a need to unpack the idea of data-driven campaigning by asking “what practices characterise data-driven campaigning?”. Posing three supplementary questions, in the remainder of the article I provide a series of conceptual frameworks that can be used to understand and map a diversity of data use practices that are currently obscured by the idea of data-driven campaigning. This intervention aims not only to clarify our conceptual understanding of data-driven campaigning practices, and to provide a template for future empirical research, but also to inform debate about the democratic acceptability of different practices and the form any regulatory response should take.

Navigating the practice of data-driven campaigns

Whilst often spoken about in uniform terms, data-driven campaigning practices come in a variety of different forms. To begin to understand the diversity of different practices, it is useful to pose three questions:

  1. Who is using data in campaigns?
  2. What are the sources of campaign data?
  3. How does data inform communication?

For each question, I argue that it is possible to identify a range of answers rather than single responses. Indeed, different actors, sources and communication strategies can be associated with data use within single as well as between different campaigns. Recognising this, I develop three analytical frameworks (one for each question) that can be used to identify, map and contemplate different practices.

These frameworks have been designed to enable comparative analysis between different countries and organisations, highlighting the many different ways in which data is used. Whilst not applied empirically within this article, the ideal type markers outlined below can be operationalised to map different practices. In doing so it should be expected that a spectrum of different positions will be found within any single organisation. Whilst it is not within the scope of this paper to fully operationalise these frameworks, methods of inquiry are discussed to highlight how data may be gathered and used in future analysis. In the discussion below, I therefore offer these frameworks as a conceptual device that can be built upon and extended in the future to generate comparative empirical insights. This form of empirical analysis is vital because it is expected that answers to the three questions will vary depending on the specific geographic or organisational context being examined, highlighting differences in data driven campaigning that need to be recognised by those considering regulation and reform.

Who is using data in campaigns?

When imagining the orchestrators of data-driven campaigning the actors that come to mind are often data specialists who provide insights for party strategists about how best to campaign. Often working for an external company or hired exclusively for their data expertise, these actors have received much coverage in election campaigns. Ranging from the now notorious Cambridge Analytica, to established companies such as BlueStateDigital and eXplain (formerly Liegey Muller Pons), there is often evidence that professional actors facilitate data-driven campaigns. Whilst the idea that parties utilise professional expertise is not new (Dalton et al., 2001, p. 55; Himmelweit et al., 1985, pp. 222-3), data professionals are seen to have gained particular importance because “[n]ew technologies require new technicians” (Farrell et al., 2001). This means that campaigners require external, professional support to utilise new techniques and tools (Kreiss and McGregor, 2018; Nickerson and Rogers, 2014, p. 70). Much commentary therefore gives the impression that data-driven campaigning is being facilitated by an elite group of professional individuals with data expertise. For those concerned about the misuse of data and the need to curtail practices seen to have negative democratic implications, this conception suggests that it is the actions of a very small group that are of concern. And yet, as the literature on campaigns demonstrates, parties are reliant on the activism of local volunteers (Jacobson, 2015), and often lack the funds to pay for costly data expertise (indeed, in many countries spending limits prevent campaigners from paying for such expertise). As a result, much data-driven campaigning is not conducted by expert data professionals.

In thinking through this point, it is useful to note that those conducting data-driven campaigning can have varying professional status and levels of expertise. These differences need to be recognised because they affect both whom researchers study when they seek to examine data-driven campaigning and whose actions need to be regulated or overseen to uphold democratic norms. 5 Noting this, it is useful to draw two conceptual distinctions between professional and activist data users, and between data novices and experts. These categories interact, allowing four “ideal type” positions to be identified in Figure 1.

Figure 1: Who is using data in campaigns?6

Looking beyond the “expert data professionals” who often spring to mind when discussing data-driven campaigning, Figure 1 demonstrates that there can be different actors using data in campaigns. It is therefore common to find “professionals without data expertise” who are employed by a party. Whilst often utilising or collecting data, these individuals do not possess the knowledge to analyse data or develop complex data-driven interventions. Interestingly, this group has been understudied in the context of campaigns, meaning the precise differences between external and internal professionals are not well understood.

In addition to professionals, Figure 1 also shows that data-driven campaigning is performed by activists who can vary in their degree of expertise. Some, described here as “expert data activists”, can possess specialist knowledge - often having many of the same skills as expert data professionals. Others, termed “activists without data expertise”, lack even a basic understanding of digital technology (let alone data analysis) (Nielsen, 2012). Some attention has been paid to activists’ digital skills in recent elections with, for example, coverage of digital expertise amongst Momentum activists in the UK (Zagoria and Schulkind, 2017) and Bernie Sanders activists in the US (Penney, 2017). And yet, other studies have suggested that such expertise is not common amongst activists (Nielsen, 2012).

These classifications therefore suggest that data-driven campaigning can be, and is being, conducted by very different actors who vary in their relationship with the party and in their expertise. Currently we have little insight into the extent to which these different actors dominate campaigns, making it difficult to determine who is using data, and hence whose activities (if any) are problematic. This indicates the need for future empirical analysis that sets out to determine the prevalence and relative power of these different actors within different organisations. Whilst space prevents a full elucidation of the markers that could be used for this analysis, it would be possible to map organisational structures and use surveys to gauge the extent of data expertise present amongst professionals and activists. In turn, these insights could be mapped against practices to determine who is using data in problematic ways. It may, for example, be that whilst “expert data professionals” are engaging in practices that raise questions about the nature of democratic debate (such as micro-targeting), “activists without data expertise” may be using data in ways that raise concerns about data security and privacy.

Knowing who is using data, and how, is critical for thinking about where any response may be required, but also when considering how a response can be made. Far from being subject to the same forms of oversight, these different categories of actors are subject to different forms of control. Whilst professionals tend to be subject to codes of conduct that shape data use practices, or can be held accountable by the threat of losing their employment, the activities of volunteers can be harder to regulate. As shown by Nielsen (2012), even when provided with central guidance and protocols, local activists often diverge from central party instructions, reflecting a classic structure/agency dilemma. This suggests not only that the activities of different actors may require monitoring and regulation, but also that different responses may be required. The question “who is using data in campaigns?” therefore spotlights a range of practices and democratic challenges that are often overlooked, but which need to be appreciated in developing our understanding and any regulatory response.

What are the sources of campaign data?

Having looked at who is using data in campaigns, it is, second, important to ask what are the sources of campaign data? The presumption inherent in much coverage of data-driven campaigning is that campaigners possess complex databases that hold numerous pieces of data about each and every individual. The International Institute for Democracy and Electoral Assistance (IDEA), for example, has argued that parties “increasingly use big data on voters and aggregate them into datasets” which allow them to “achieve a highly detailed understanding of the behaviour, opinions and feelings of voters, allowing parties to cluster voters in complex groups” (2018, p. 7; p. 5). It therefore often appears that campaigns use large databases of information composed of data from different (and sometimes questionable) sources. However, as suggested above, the data that campaigns possess is often freely disclosed (Hersh, 2015), and many campaigners are currently subject to privacy laws around the kind of data they can collect and utilise (Bennett, 2016; Kruschinski and Haller, 2017).

To understand variations and guide responses, four more categories are identified. These are determined by thinking about variations in the form of data (differentiating between disclosed and inferred data) and the conditions under which data is made available (differentiating between data that is made available without charge and data that is purchased).

Figure 2: The sources of campaigning data

As described in Figure 2, much of the data that political parties use is provided to them without charge, but it can come in two forms. The first category, “free data disclosed by individuals”, refers to data divulged to a campaign without charge, either via official state records or directly by an individual to a campaign. The official data provided to campaigns varies from country to country (Dobber et al., 2017, p. 7; Kreiss and Howard, 2010, p. 5) but can include information on who is registered to vote, a voter’s date of birth, address and turnout record. In the US it can even include data on the registered partisan preference of a particular voter (Bennett, 2016, p. 265; Hersh, 2015). This information is freely available to official campaigners and citizens are often legally required to divulge it (indeed, in the UK it is compulsory to sign up to the Electoral Register). In addition, free data can also be more directly disclosed by individuals to campaigns through activities such as voter canvassing and surveys that gather data about individuals’ preferences and concerns (Aron, 2015, pp. 20-1; Nickerson and Rogers, 2014, p. 57). The second category, “free inferred data”, identifies data available without charge, but which is inferred rather than divulged. These deductions can occur through contact with a campaign. Indeed, research by the Office of the Information and Privacy Commissioner for British Columbia, Canada, describes how party canvassers often collect data about ethnicity, age, gender and the extent of party support by making inferences that the individual themselves is unaware of (2019, p. 22). It is similarly possible for data that campaigns already possess to be used to make inferences. Information gathered from a petition, for example, can be used to make suppositions about an individual’s broader interests and support levels. Much of the data campaigners use is therefore available without charge, but differs in form.

In addition, Figure 2 captures the possibility that campaigns purchase data. This data can be classified in two ways. The category “purchased data disclosed by individuals” describes instances in which parties buy data that was not disclosed directly to them, but was provided to other actors. This data can come in the form of social media data (which parties can buy access to rather than possess), or include data such as magazine subscription lists (Chester and Montgomery, 2017, pp. 3-4; Nickerson and Rogers, 2014, p. 57). Figure 2 also identifies “purchased inferred data”. This refers to modelled data whereby inferences are made about individual preferences on the basis of available data. This kind of modelling is frequently accomplished by external companies using polling data or commercially available insights, but it can also be done on social media platforms, with features such as look-alike audiences on Facebook selling access to inferred data about individuals’ views.
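The modelling step behind “purchased inferred data” can be illustrated with a minimal sketch. Every attribute, weight and score below is hypothetical, invented purely for this example; in practice such weights would be fitted to polling or commercial data. The point is only the logic: an unobserved preference is inferred from attributes that were disclosed for other purposes.

```python
import math

# Hypothetical weights for a toy support-propensity model.
# In a real campaign these would be estimated from polling data;
# here they are invented purely for illustration.
WEIGHTS = {"signed_petition": 1.2, "past_turnout": 0.8, "urban": 0.4}
BIAS = -1.0

def inferred_support(voter):
    """Return a probability-like support score inferred from disclosed attributes."""
    z = BIAS + sum(WEIGHTS[k] * voter.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))  # logistic function maps z to (0, 1)

# A voter who signed a petition and voted last time, but is not urban:
voter = {"signed_petition": 1, "past_turnout": 1, "urban": 0}
score = inferred_support(voter)  # ≈ 0.73 for this toy voter
```

The voter never stated a party preference; the score exists only because disclosed behaviour was combined with modelled weights, which is precisely why inferred data raises distinct consent questions.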

Campaigns can therefore use different types of data. Whilst the existing literature has drawn attention to the importance of regulatory context in shaping the data that parties in different countries are legally able to use (Kruschinski and Haller, 2017), there are remarkably few comparative studies of data use in different countries. This makes it difficult to determine not only how places vary in their regulatory tolerance of these different forms of data, but also how extensively parties actually use them. Such analysis is important because parties’ activities are not only shaped by laws, but can also be informed by variables such as resources or available expertise (Hersh, 2015, p. 170). This makes it important to map current practices and explore whether and why data is used in different ways by parties around the world. In envisioning such empirical analysis, it is important to note that parties are likely to be sensitive to the disclosure of data sources. However, a mix of methods - including interviews with those using data within parties and data subject access requests - can be used to gain insights here.

In the context of debates around data-driven campaigning and democracy, these categories also prompt debate about the acceptability of different practices. Whilst the idea that certain forms of disclosed data should be available without charge is relatively established as an acceptable component of campaigns, it appears there are concerns over the purchase of data and the collection of inferred data. Indeed, in Canada the Office of the Information and Privacy Commissioner for British Columbia recommended that “[a]ll political parties should ensure door-to-door canvassers do not collect the personal information of voters, including but not limited to gender, religion, and ethnicity information unless that voter has consented to its collection” (2019, p. 41). By acknowledging the different sources of data used for data-driven campaigning it is therefore possible to not only clarify what is happening, but also to think about which forms of data can be acceptably used by campaigns.

How does data inform communication?

Finally, in thinking about data-driven campaigning much attention has been paid to micro-targeting and the possibility that data-driven campaigning allows parties to conduct personalised campaigns. IDEA has therefore argued that micro-targeting allows parties to “reach voters with customized information that is relevant to them…appealing to different segments of the electorate in different ways” with new degrees of precision (2018, p. 7). In the context of digital politics, micro-targeting is seen to have led parties to:

…try to find and send messages to their partisan audiences or intra-party supporters, linking the names in their databases to identities online or on social media platforms such as Facebook. Campaigns can also try to find additional partisans and supporters by starting with the online behaviours, lifestyles, or likes or dislikes of known audiences and then seeking out “look-alike audiences”, to use industry parlance (Kreiss, 2017, p. 5).

In particular, platforms such as Facebook are seen to provide parties with a “powerful ‘identity-based’ targeting paradigm” allowing them to access “more than 162 million US users and to target them individually by age, gender, congressional district, and interests” (Chester and Montgomery, 2017, p. 4). These developments have raised important questions about the inclusivity of campaign messaging and the degree to which it is acceptable to focus on specific segments of the population. Indeed, some have highlighted risks relating to mis-targeting (Hersh and Schaffner, 2013) and privacy concerns (Kim et al., 2018, p. 4). However, as detailed above, there are questions about the extent to which campaigns are sending highly targeted messages (Anstead et al., unpublished).

In order to understand different practices, Figure 3 differentiates between audience size (distinguishing wide from narrow audiences) and message content (distinguishing generic from specialised messages).

Figure 3: How data informs communication

Much campaigning activity comprises generic messages, with content covering a broad range of topics and ideas. By using data (often generated through polling or in focus groups) parties can determine the form of messaging likely to win them appeal. The category “generic message to all voters” describes instances in which a general message is broadcast to a wide audience, something that often occurs via party political TV broadcasts or political speeches (Williamson, Miller and Fallon, 2010, p. iii). In contrast, “generic message to specific voters” captures instances in which parties limit the audience, but maintain a general message. Such practices often emerge in majoritarian electoral systems where campaigners want to appeal to certain voters who are electorally significant, rather than communicating with (and potentially mobilising) supporters of other campaigns (Dobber et al., 2017, p. 6). Parties therefore often gather data to identify known supporters or sympathisers who are then sent communications that offer a general overview of the party’s positions and goals.

Figure 3 also spotlights the potential for parties to offer more specialised messages, describing a campaign’s capacity to cover only certain issues or aspects of an issue (focusing, for example, on healthcare rather than all policy realms, or healthcare waiting lists rather than plans to privatise health services). These messages can, once again, be deployed to different audiences. The category “specialised message to all voters” describes instances in which parties use data to identify a favourable issue (Budge and Farlie, 1983) that is then emphasised in communications with all citizens. In the UK, for example, the Labour Party often communicates its position on the National Health Service, whereas the Conservative Party focuses on the economy (as these are issues which, respectively, the two parties are positively associated with). Finally, “specialised message to specific voters” captures the much discussed potential for data to be used to identify a particular audience that can then be contacted with a specific message. This means that parties can speak to different voters about different issues – an activity that Williamson, Miller and Fallon describe as “segmentation” (2010, p. 6).
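The four combinations of audience and message in Figure 3 can be expressed as a small sketch. The voter records, issue labels and selection rules below are invented for illustration; real campaigns would derive them from the databases and inferences discussed in the previous section.

```python
# Toy voter file; all fields are invented for illustration only.
voters = [
    {"name": "A", "supporter": True,  "top_issue": "health"},
    {"name": "B", "supporter": False, "top_issue": "economy"},
]

def choose_message(voter, strategy):
    """Map one voter to a message under each of Figure 3's four categories.

    Returning None means the voter is deliberately excluded from the audience.
    """
    if strategy == "generic_to_all":           # generic message, wide audience
        return "our full platform"
    if strategy == "generic_to_specific":      # generic message, known supporters only
        return "our full platform" if voter["supporter"] else None
    if strategy == "specialised_to_all":       # one favourable issue, wide audience
        return "our health policy"
    if strategy == "specialised_to_specific":  # segmentation / micro-targeting
        return f"our {voter['top_issue']} policy" if voter["supporter"] else None
    raise ValueError(f"unknown strategy: {strategy}")
```

Note that data plays a different role in each branch: in the first two it selects the audience or shapes a single message, whereas only the final branch varies both audience and content per voter, which is the combination that attracts most regulatory concern.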

These variations suggest that campaigners can use data to inform different communication practices. Whilst much attention has been paid to segmented micro-targeting (categorised here as “specialised messages to specific voters”), there is currently little data on the degree to which each approach characterises different campaigns (either in single countries or different nations). This makes it difficult to determine how extensive different practices are, and whether the messaging conducted under each heading is taking a problematic form. It may, for example, be that specialised messaging to specific voters is entirely innocuous, or it could be that campaigners are offering contradictory messages to different voters and hence potentially misleading people about the positions they will take (Kreiss, 2017, p. 5). Empirically, this form of analysis can be pursued in different ways. As above, interviews with campaign practitioners can be used to explore campaign strategies and targeting, but it is also important to look at the actual practices of campaigns. Resources such as online advertising libraries and leaflet repositories are therefore useful in monitoring the content and focus of campaign communications. Using these methods, a picture of how data informs communication can be developed.

Thinking about the democratic implications of these different practices, it should be noted that message variation by audience size and message scope is not new - campaigns have often varied in their communication practices. And yet digital micro-targeting and voter segmentation have been widely greeted with alarm. This suggests the importance of thinking further about the precise cause of concern here, determining which democratic norms are being violated, and whether this is only occurring in the digital realm. It may, for example, be that concerns do not only reflect digital practices, suggesting that regulation is needed for practices both online and offline. These categories therefore help to facilitate debate about the democratic implications of different practices, raising questions about precisely what the cause for concern is and where a response needs to be made.

Discussion

The above discussion has shown that data-driven campaigning is not a homogenous construct but something conducted by different actors, using different data, adopting different strategies. To date much existing discussion of data-driven campaigning has focused on the extent to which this practice is found. In contrast, in this analysis I have explored the extent to which different data-driven campaigning practices can be identified. Highlighting variations in who is using data in campaigns, what the sources of campaign data are, and how data informs campaign communication, I argue that there are a diverse range of possible practices.

What is notable in posing these questions and offering these frameworks is that whilst there is evidence to support these different conceptual categories, at present there is little empirical data on the extent to which each practice exists in different organisations. As such, it is not clear what proportion of campaign activity is devoted to targeting specific voters with specific messages as opposed to all voters with a general message. Moreover, it is not clear to what extent parties rely on different actors for data-driven campaigning, nor how much power and scope these actors have within a single campaign. At present, therefore, there is considerable ambiguity about the type of data-driven campaigns that exist. This suggests the urgent need for new empirical analysis that explores the practice of data-driven campaigning in different organisations and different countries. By operationalising the categories proposed here and using methods including interviews, content analysis and data subject access requests, I argue that it is possible to build up a picture of who is using what data how.

Of particular interest is the potential to use these frameworks to generate comparative insights into data-driven campaigning practice. At present studies of data use have tended to be focused on one country, but in order to understand the scope of data-driven campaigning it is necessary to map the presence of different practices. This is vital because, as previous comparative electoral research has revealed, the legal, cultural and institutional norms of different countries can have significant implications for campaigning practices. In this way it would be expected that a country such as Germany, with a history of strong data protection law, would exhibit very different data-driven campaigning practices to a country such as Australia. In a similar way, it would be expected that different institutional norms would lead a governmental organisation, charity or religious group to use data differently to parties. At present, however, the lack of comparative empirical data makes it difficult to determine what influences the form of data-driven campaigning and how different regulatory interventions affect campaigning practices. This framework therefore enables such comparative analysis, and opens the door to future empirical and theoretical work.

One particularly valuable aspect of this approach is the potential to use these questions and categories to contribute to existing debates around data-driven campaigning and democracy. Throughout the discussion, I have noted that many commentators have voiced concerns. These relate variously to privacy, the inclusivity of political debate, misinformation and disinformation, political finance, external influence and manipulation, transparency and social fragmentation (for more see Zuiderveen Borgesius et al., 2018, p. 92; Chester and Montgomery, 2017, p. 8; Dobber et al., 2017, p. 2; Hersh, 2015, p. 207; Kreiss and Howard, 2010, p. 11; International IDEA, 2018, p. 19). Such concerns have led to calls for regulation, and, as detailed above, many national governments, regulators and international organisations have moved to make a response. And yet, before creating new regulations and laws, it is vital for these actors to possess accurate information about how precisely data-driven campaigning is being conducted, and to reflect on which democratic ideals these practices violate or uphold. Data-driven campaigning is not an inherently problematic activity; indeed, it is an established feature of democratic practice. However, our understanding of the acceptability of this practice will vary depending on our understanding of who, what and how data is being used (as whilst some practices will be viewed as permissible, others will not). This makes it important to reflect on what is happening and how prevalent these practices are in order to determine the nature and urgency of any regulatory response. Importantly, these insights need to be gathered in the specific regulatory context of interest to policy makers, as it should not be presumed that different countries or institutions will use data in the same way, or indeed have the same standards for acceptable democratic conduct.

The frameworks presented in this article therefore provide an important means by which to consider the nature, prevalence and implications of data-driven campaigning for democracy, and can be operationalised to produce vital empirical insights. Such data and conceptual clarification together can ensure that any reaction to data-driven campaigning takes a consistent, considered approach and reflects the practice (rather than the possibility) of this activity. Given that, as a report from Full Fact (2018, p. 31) makes clear, there is a danger of “government overreaction” based on limited information and self-evident assumptions (Ostrom, 2000) about how campaigning is occurring, it is vital that such insights are gathered and utilised in policy debates.

Conclusion

This article has explored the phenomenon of data-driven campaigning. Whilst receiving increased attention over recent years, existing debate has tended to focus on the extent to which this practice can be found. In this article, I present an alternative approach, seeking to map the diversity of data-driven campaigning practices to understand the different ways in which data can and is being used. This has shown that far from being characterised by uniform data-driven campaigning practices, data-use can vary in a number of ways.

In classifying variations in who is using data in campaigns, what the sources of campaign data are, and how data informs campaign communication, I have argued that there are diverse practices that can be acceptable to different actors to different degrees. At an immediate level, there is a need to gain greater understanding of what is happening within single campaigns and how practices vary between different political parties around the globe. More widely, there is a need to reflect on the implications of these trends for democracy and the form that any regulatory response may need to take. As democratic norms are inherently contested, there is no single roadmap for how to make a response, but the nature of any response will likely be affected by our understanding of who, what and how data is being utilised. This suggests the need for new conceptual and empirical understanding of data-driven campaigning practices amongst academics and regulators alike.

References

Anstead, N., et al. (2018). Facebook Advertising in the 2017 United Kingdom General Election: The Uses and Limits of User-Generated Data. Unpublished manuscript. Retrieved from https://targetingelectoralcampaignsworkshop.files.wordpress.com/2018/02/anstead_et_al_who_targets_me.pdf

Aron, J. (2015, May 2). Mining for Every Vote. New Scientist, 226(3019), 20–21. https://doi.org/10.1016/S0262-4079(15)30251-7

Baldwin-Philippi, J. (2017). The Myths of Data-Driven Campaigning. Political Communication, 34(4), 627–633. https://doi.org/10.1080/10584609.2017.1372999

Bennett, C. (2016). Voter Databases, micro-targeting, and data protection law: can political parties campaign in Europe as they do in North America?. International Data Privacy Law, 6(4), 261–275. https://doi.org/10.1093/idpl/ipw021

Budge, I., & Farlie, D. (1983). Explaining and Predicting Elections. London: Allen and Unwin.

Castleman, D. (2016). Essentials of Modelling and Microtargeting. In A. Therriault (Ed.), Data and Democracy: How Political Data Science is Shaping the 2016 Elections (pp. 1–6). Sebastopol, CA: O’Reilly Media. Retrieved from https://www.oreilly.com/ideas/data-and-democracy/page/2/essentials-of-modeling-and-microtargeting

Chester, J., & Montgomery, K.C. (2017). The role of digital marketing in political campaigns. Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.773

Dalton, R. J., Farrell, D. M., & McAllister, I. (2013). Political Parties and Democratic Linkage. Oxford: Oxford University Press. https://doi.org/10.1093/acprof:osobl/9780199599356.001.0001

Dobber, T., Trilling, D., Helberger, N., & de Vreese, C. H. (2017). Two Crates of Beer and 40 pizzas: The adoption of innovative political behavioral targeting techniques. Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.777

Dommett, K., & Temple, L. (2018). Digital Campaigning: The Rise of Facebook and Satellite Campaigns. Parliamentary Affairs, 71(1), 189–202. https://doi.org/10.1093/pa/gsx056

Farrell, D., Kolodny, R., & Medvic, S. (2001). Parties and Campaign Professionals in a Digital Age: Political Consultants in the United States and Their Counterparts Overseas. The International Journal of Press/Politics, 6(4), 11–30. https://doi.org/10.1177/108118001129172314

Full Fact. (2018). Tackling Misinformation in an Open Society [Report]. London: Full Fact. Retrieved from https://fullfact.org/blog/2018/oct/tackling-misinformation-open-society/

Gibson, R., Römmele, A., & Williamson, A. (2014). Chasing the Digital Wave: International Perspectives on the Growth of Online Campaigning. Journal of Information Technology & Politics, 11(2), 123–129. https://doi.org/10.1080/19331681.2014.903064

Haves, E. (2018). Personal Data, Social Media and Election Campaigns [House of Lords Library Briefing]. London: The Stationery Office.

Hersh, E. (2015). Hacking the Electorate: How Campaigns Perceive Voters. Cambridge: Cambridge University Press.

Hersh, E., & Schaffner, B. (2013). Targeted Campaign Appeals and the Value of Ambiguity. The Journal of Politics, 75(2), 520–534. https://doi.org/10.1017/S0022381613000182

Himmelweit, H., Humphreys, P., & Jaeger, M. (1985). How Voters Decide. Open University Press.

in ‘t Veld, S. (2017). On Democracy. Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.779

Information Commissioner’s Office. (2018a). Investigation into the use of data analytics in political campaigns. London: ICO.

Information Commissioner’s Office. (2018b). Notice of Intent. Retrieved from https://ico.org.uk/media/2259363/emmas-diary-noi-redacted.pdf

International IDEA. (2018). Digital Microtargeting. Stockholm: International IDEA.

Jacobson, G. (2015). How Do Campaigns Matter?. Annual Review of Political Science, 18(1), 31–47. https://doi.org/10.1146/annurev-polisci-072012-113556

Kang, C., Rosenberg, M., & Frenkel, S. (2018, July 2). Facebook Faces Broadened Federal Investigations Over Data and Privacy. New York Times. Retrieved from https://www.nytimes.com/2018/07/02/technology/facebook-federal-investigations.html?module=inline

Kerr Morrison, J., Naik, R., & Hankey, S. (2018). Data and Democracy in the Digital Age. London: The Constitution Society.

Kim, T., Barasz, K., & John, L. (2018). Why Am I Seeing this Ad? The Effect of Ad Transparency on Ad Effectiveness. Journal of Consumer Research, 45(5), 906–932. https://doi.org/10.1093/jcr/ucy039

Kreiss, D., & Howard, P. N. (2010). New challenges to political privacy: Lessons from the first US Presidential race in the Web 2.0 era. International Journal of Communication, 4(19), 1032–1050. Retrieved from https://ijoc.org/index.php/ijoc/article/view/870

Kreiss, D. (2017). Micro-targeting, the quantified persuasion. Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.774

Kreiss, D., & McGregor, S. (2018). Technology Firms Shape Political Communication: The Work of Microsoft, Facebook, Twitter and Google with Campaigns During the 2016 US Presidential Cycle. Political Communication, 35(2), 155–177. https://doi.org/10.1080/10584609.2017.1364814

Kruschinski, S., & Haller, A. (2017). Restrictions on data-driven political micro-targeting in Germany. Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.780

Nickerson, D., & Rogers, T. (2014). Political Campaigns and Big Data. Journal of Economic Perspectives, 28(2), 51–74. https://doi.org/10.1257/jep.28.2.51

Nielsen, R. (2010). Mundane internet tools, mobilizing practices, and the coproduction of citizenship in political campaigns. New Media and Society, 13(5), 755–771. https://doi.org/10.1177/1461444810380863

Nielsen, R. (2012). Ground Wars. Princeton: Princeton University Press.

Office of the Information and Privacy Commissioner for British Columbia. (2019). Investigation Report P19-01, Full Disclosure: Political Parties, Campaign Data and Voter Consent. Retrieved from https://www.oipc.bc.ca/investigation-reports/2278

Ostrom, E. (2000). The Danger of Self-Evident Truths. Political Science and Politics, 33(1), 33–44. https://doi.org/10.2307/420774

Penney, J. (2017). Social Media and Citizen Participation in “Official“ and “Unofficial“ Electoral Promotion: A Structural Analysis of the 2016 Bernie Sanders Digital Campaign. Journal of Communication, 67(3), 402–423. https://doi.org/10.1111/jcom.12300

Persily, N. (2017). Can Democracy Survive the Internet?. Journal of Democracy, 28(2), 63–76. https://doi.org/10.1353/jod.2017.0025

Tactical Tech. (2019). Personal Data: Political Persuasion – Inside the Influence Industry. How it works. Berlin: Tactical Technology Collective.

Williamson, A., Miller, L., & Fallon, F. (2010). Behind the Digital Campaign: An Exploration of the Use, Impact and Regulation of Digital Campaigning. London: Hansard Society.

Zagoria, T., & Schulkind, R. (2017). How Labour activists are already building a digital strategy to win the next election. New Statesman. Retrieved from https://www.newstatesman.com/politics/elections/2017/07/how-labour-activists-are-already-building-digital-strategy-win-next

Zuiderveen Borgesius, F., Möller, J., Kruikemeier, S., Fathaigh, R., Irion, K., Dobber, T., Bodo, B., & de Vreese, C. H. (2018). Online Political Microtargeting: Promises and Threats for Democracy. Utrecht Law Review, 14(1), 82–96. https://doi.org/10.18352/ulr.420

Footnotes

1. This question is important because it is to be expected that universal responses to this question do not exist, and that different actors in different countries will view and judge practices in different ways (against different democratic standards).

2. See the report from Tactical Tech (2019) Personal Data for a range of examples of how data can be used to gain “political intelligence“ about voters.

3. Importantly, this data use is not guaranteed to persuade voters. Campaigns can identify the type of campaign material viewers are more likely to watch or engage with, but this does not necessarily mean that those same viewers are persuaded by that content.

4. Similarly there are likely to be variations between parties and other types of organisation such as campaign groups or state institutions.

5. It should be noted that these democratic norms are not universal, but are expected to vary dependent on context and the perspective of the particular actor concerned.

6. For more on local expert activism in the UK see Dommett and Temple, 2018. In the US see Penney, 2017.

On the edge of glory (…or catastrophe): regulation, transparency and party democracy in data-driven campaigning in Québec


This paper is part of Data-driven elections, a special issue of Internet Policy Review guest-edited by Colin J. Bennett and David Lyon.

Introduction

For the last 50 years, politics in Québec, Canada’s predominantly Francophone jurisdiction of 8.4 million people, has been characterised by a lasting two-party system built on a dominant divide between the Yes and No options on the project of political independence from the rest of Canada (Pelletier, 1989). Following the failure of the 1995 referendum, the erosion of this divide led to an opening up of the party system and the arrival of four parties in the Québec National Assembly (Dufresne et al., 2019; Langlois, 2018). With a new party elected to government for the first time since 1976, the 2018 election was one of realignment. The Coalition avenir Québec (CAQ) elected 74 Members of the National Assembly (MNAs). With 31 seats, the former government, the Québec Liberal Party (QLP), received its worst result in 150 years and formed the official opposition. With 10 MNAs each, Québec solidaire (QS), a left-wing party, and the Parti québécois (PQ), the historic vehicle for independence, occupied the remaining opposition seats.

Beyond these election results, the 2018 Québec election also marks an organisational change. For the first time, the major parties all massively adopted what are often referred to as “US” data-campaigning practices. However, when it comes to the use of digital technologies for electoral purposes, the US case is the exception rather than the rule (Enli and Moe, 2013; Gibson, 2015; Vaccari, 2013, p. ix). Indeed, data campaigning, like other techniques of political communication, is conducted in specific contexts that affect what is accessible, possible and viable (Bennett, 2016; Dobber et al., 2017; Ehrhard et al., 2019; Flanagan, 2010, p. 156).

Not unlike other Canadian jurisdictions, Québec is therefore an interesting case for studying the effects of these practices on parties that operate in a parliamentary system without being subject to privacy protection rules. Moreover, to our knowledge, studies on this subject in a sub-national context are few. In Canada, the majority of the work focuses on federal parties (see for example Bennett, 2018; McKelvey and Piebiak, 2018; Munroe and Munroe, 2018; Patten, 2015, 2017; Thomas, 2015), leaving the provincial and municipal levels behind (with the notable exceptions of Carlile, 2017; Yawney, 2018; and Giasson et al., 2019). Thus, the French-speaking jurisdiction represents, as Giasson et al. (2019, p. 3) argue, one of those relevant but “less obvious” cases to study in order to better understand the similarities and differences in why and how political parties adopt or resist technological innovations. This type of case study also makes it possible to explore the gap between emerging opportunities and the campaigns actually deployed by parties, beyond the “rhetoric of data-driven campaigning” (see Baldwin-Philippi, 2017, p. 627).

Many factors influence technological innovation in campaigns (Kreiss, 2016). Furthermore, as Hersh (2015) indicates, cultural and legal contexts influence political actors’ behaviour because the types of data made available to campaigns shape their perceptions of voters, and therefore their communication practices. According to Munroe and Munroe (2018), political parties may treat data as a resource, generated in many ways, that can guide strategic and tactical decisions. Because parties set up integrated platforms in which personal data on voters are stored and analysed, ethical and political issues emerge (Bennett, 2013, 2015). In most Canadian provinces, including Québec, and at the federal level, parties are not subject to privacy laws regarding the use and protection of personal data. This absence of a regulatory framework also leads to inadequate self-regulation (Bennett, 2018; Howard and Kreiss, 2010).

As was the case in many other jurisdictions around the globe, Québec parties were faced with a transparency deficit following the March 2018 revelations of the Cambridge Analytica affair (Bashyakarla et al., 2019; Cadwalladr and Graham-Harrison, 2018). Within hours of the scandal becoming public, political reporters in Québec turned to party leaders to get a better sense of the scope and use of the digital data they were collecting, why they collected them, and what this all meant for the upcoming fall elections as well as for citizens’ privacy (Bélair-Cirino, 2018). Most claimed that their data collection and analysis practices were ethical and respectful of citizens’ privacy. However, none of them agreed to fully disclose the scope of the data they collected nor the exact purpose of these databases.

Research objectives and methodology

This article examines the increasing pressure to regulate uses of digital personal data by Québec’s political parties. First, it illustrates the central role now played by voter personal data in Québec politics. Second, it presents the current (and weak) legislative framework and how the issue of the protection of personal data came onto the agenda in Québec. At first, many saw this shift as a positive evolution in which Québec’s parties “caught up” with current digital marketing practices. However, following the Cambridge Analytica affair and revelations about the lack of proper regulation of voter data use, public discourse started casting these technological advancements as democratic catastrophes waiting to happen.

We use three types of data to investigate this context. First, in order to assess the growth in party use of digital voter data, we rely on 40 semi-directed interviews conducted for a broader research project with party organisers, elected officials, activists and advisors of all the main political parties operating in Québec 1. The interviews, each lasting from 45 minutes to one hour, were conducted in French just a few weeks before the launch of the 2018 provincial election campaign. Citations presented in this article are therefore translations. The interviewees were selected for their political representativeness, but also for their high level of electoral involvement. In this article, we only use those responses that relate to digital campaigning and the use of personal information. The citations selected here represent viewpoints shared by at least three interviewees. They illustrate shared perceptions of the evolution of the strategic use of voter personal data in Québec’s electioneering.

Second, we also analysed the legislative framework as well as the self-regulatory practices of political parties in Québec in order to measure the levels of regulation and transparency surrounding their use of personal data. To do this, we studied the websites of the four main parties in order to compare their practices.

Finally, we also conducted a media coverage analysis on the issue of how parties engaged in digital marketing. We conducted a keyword search on the Eureka.cc database to retrieve all texts published in the four main daily newspapers published in French in Québec (La Presse, Le Devoir, Le Soleil and Le Journal de Montréal), in the public affairs magazine L’Actualité, as well as on the Radio-Canada website about digital data issues related to politics in Québec. The time period runs from 1 January 2012 to 1 March 2019 and covers three general (2012, 2014 and 2018) and two municipal (2013 and 2017) elections. Our search returned 223 news articles.

What we find is a perfect storm: parties massively adopting data marketing at the very moment regulatory bodies were expressing concerns about the lack of oversight. In the background, an international scandal made headlines and changed the prevailing discourse surrounding these technological innovations.

New digital tools, a new political reality

The increased use of digital technologies and data for electioneering can be traced back to the 2012 provincial election (see Giasson et al., 2019). Québec political parties were then faced with a changing electorate, and data collection helped them adapt to this new context. Most of them also experienced greater difficulties in rallying electors ideologically. In Québec, activist, partisan politics was giving way to more political data-marketing (Del Duchetto, 2016).

In 2018, Québec’s four main political parties integrated digital technologies at the core of their electoral organisations. In doing so, they aimed to close the technological gap with Canadian parties at the federal level (Marland et al., 2012; Delacourt, 2013). Thus, the CAQ developed the Coaliste, its own tool for processing and analysing data. The application centralises information collected on voters in a database and targets them according to their profile. Developed at a cost of 1 million Canadian dollars, the tool was said by a party strategist to help carry a campaign "with 3 or 4 times less" money than before (Blais and Robillard, 2017).

For its part, QS created a mobilisation platform called Mouvement. The tool was inspired by the "popular campaigns of Bernie Sanders and La France Insoumise in France."2 Decentralised in nature, the platform aimed to facilitate event organisation and networking between sympathisers, to create local discussion activities, and to support voter identification.

The PQ has also developed its own tool: Force bleue. At its official launch, a party organiser insisted on its strategic role in tight races. It would include “an intelligent mapping system to crisscross constituencies, villages, neighbourhoods to maximise the time spent by local teams and candidates by targeting the highest paying places in votes and simplify your vote turnout” (Bergeron, 2018).

Finally, the QLP outsourced its digital marketing and built on the experience of the federal Liberal Party of Canada as well as Emmanuel Macron’s movement in France. For the 2018 election campaign, the party contracted Data Sciences, a private firm which "collects information from data of all kinds, statistics among others, on trends or expectations of targeted citizens or groups" (Salvet, 2018).

Our interviews with political strategists help better understand the scope of this digital shift that Québec’s parties completed in 2018. They also put into perspective the effects of these changes and the questions they raise within the parties themselves.

Why change?

Party organisers interviewed for this article who advocate for the development of new tools stress two phenomena: on the one hand, the Québec electorate is more volatile; on the other, it is much more difficult to communicate with electors than before. A former MNA notes that today: "The campaign counts. It's very volatile and identifying who votes for you early in the campaign doesn’t work anymore."

With social media, Québec parties’ officials see citizens as more segmented than before. An organiser attributes the evolution of this electoral behaviour to social media. "Today, the big change is that the speed and accessibility of information means that you do not need a membership card to be connected. It circulates freely. It's on Facebook. It’s on Twitter".

He notes that "it is much more difficult to attract someone in a political party by saying that if you become a member you will have privileged access to a certain amount of information or to a certain quality of information". A rival organiser also confirms that people's behaviour has changed: "It's not just generational, they buy a product". He adds that this has implications on the level of volunteering and on voters’ motivation:

When we look at the beginning of the 1970s, we had a lot of people. People were willing to go door-to-door to meet voters. We had people on the ground, they needed to touch each other. The communications were person-to-person. (…) Today, we do marketing.

In sum, "people seek a product and are less loyal" which means that parties must rely on voters’ profiling and targeting.

Increased use of digital technology in 2018

The IT turn in Québec partisan organisations is real. One organiser goes so far as to say that most of the volunteer work that was central in the past is now done digitally. According to him, "any young voter who uses Facebook is now as important, if not more so, than a party activist". This comment reinforces the notion that any communication with an elector must now be personalised:

Now we need competent people in computer science, because we use platforms, email lists. When I send a message reminding newly registered voters that it will be the first time they will vote, I am speaking directly to them.

To achieve this micro-targeting, party databases are updated constantly. An organiser states that: "Our job is to feed this database with all the tools like surveys, etc... In short, we must bomb the population with all kinds of things, to acquire as much data as possible". For example, Québec solidaire and the Coalition avenir Québec broadly used partisan e-petitions to feed their database (Bélair-Cirino, 2017). There are neither rules nor legislation that currently limit the collection and use of this personal information if it is collected through a partisan online petition or website.

Old political objectives - new digital techniques

In accordance with the current literature on the hybridisation of electoral campaigns (Chadwick, 2013; Giasson et al., 2019), many respondents indicate that the integration of digital tools associated with data marketing has changed the way things are done. This also had an effect on the internal party organisation, as well as on the tasks given to members on the ground. An organiser explains how this evolution took place in just a few years:

Before, we had a field organisation sector, with people on the phones, distributors, all that. We had communication people, we had people distributing content. (...) Right now, we have to work with people that are not there physically and with something that I will not necessarily control.

An organiser from another political party is more nuanced: "We always need people to help us find phone numbers, we always need people to make calls". He confirms, however, that communication tactics changed radically:

The way to target voters in a riding has changed. The way to start a campaign, to canvas, has changed. The technological tools at our disposal means that we need more people who are able to use them and who have the skills and knowledge to use the new technological means we have to reach the electorate.

Another organiser adds that it is now important to train activists properly for their canvassing work. According to her: "We need to give activists digital tools and highly technological support tools that make their lives easier". She adds that: "Everything is chained with intelligent algorithms that will always target the best customer, always first, no matter what...".

New digital technologies and tools are therefore used to maximise efficiency and resources. The tasks entrusted to activists also change. For another organiser, mobilisation evolves with technology: "We used to rely on lots of people to reach electors". He now sees that people are reached via the internet and that this new reality is not without challenges: "we are witnessing a revolution where new clients do not live in the real world…". It then becomes difficult to meet them in real life, offline.

Another organiser confirms having "a different canvassing technique using social media and other tools". According to him:

Big data is already outdated. We are talking about smart data. These data are used efficiently and intelligently. How do we collect this data? (...) We used to do a lot of tally by door-to-door or by phone. Now we do a lot of capture. The emails are what interest me. I am not interested in phone numbers anymore, except cell phones.

An experienced organiser observes that "this has completely changed the game. Before, we only had one IT person, now I have three programmers". He adds that "liaison officers have become press officers". This change also translates into the allocation of resources and the integration of new profiles of employees for data management. It has brought a new set of digital strategists into war rooms. These new data analysts have knowledge of data management, applied mathematics, computer science and software engineering. They work alongside traditional field organisers, sometimes even replacing them at the decision table.

Second thoughts

Organisers themselves raise democratic and ethical concerns related to the digital evolution of their work. One of them points out that they face ethical challenges. He openly wonders about the consequences of this gathering of personal information: "It's not because we can do something that we have to do it. With the list of electors, there are many things that can be done. Is it ethical to do it? At some point, you have to ask that question". He points out that new technologies are changing at a rapid pace and that with "each technology comes a communication opportunity". The question is now "how can we appropriate this technology, this communication opportunity, and make good use of it".

Reflecting upon the lack of regulation on the use of personal data by parties in Québec, an organiser added that: "We have the right to do that, but people do not like it". For him, this issue is "more than a question of law, there could be a question of what is socially acceptable".

Another organiser points out that the digital shift could also undermine intra-party democracy. Speaking about the role of activists, he is concerned that "they feel more like being given information that has been chewed on by a small number of people than being collected by more people in each constituency". He notes that the technological divide is also accompanied by a generational divide within the activist base:

The activist who is older, we will probably have less need of him. The younger activist is likely to be needed, but in smaller numbers. (...) Because of the technological gap, it's a bit of a vicious circle, that is also virtuous. The more we try to find technological means that will be effective, the less we need people.

Still, democratically, the line can be very thin between mobilisation and manipulation. Reflecting on a not-so-distant future, this organiser spoke of the many possibilities data collection could provide parties with:

These changes bring us into a dynamic that the Americans call ‘activation fields’. (...) From the moment we have contact with someone, what do we do with this person, where does she go? (...) This gives incredible arborescence, but also incredible opportunities.

He concludes that: "Today, the world does not realise how all the data is piling up on people and that this is how elections are won now". Is there a limit to the information a party could collect on an elector? This senior staffer does not believe so. He adds: “If I could know everything you were consuming, it would be so useful to me and help mobilise you".

Québec’s main political parties completed their digital shift in preparation for the 2018 election. Our interviews show that this change was significant. From an internal democracy perspective, digital technologies and data marketing practices help respond to the decline of activism and membership levels observed in most Québec parties (Montigny, 2015). This can also lead to frustration among older party activists who would feel less involved. On the other hand, from a data protection perspective we note that in the absence of a rigorous regulatory framework, parties in Québec can do almost anything. As a result, they collect a significant amount of unprotected personal data. The pace at which this change is taking place and the risks it represents for data security even lead some political organisers to question their own practices. As the next section indicates, Québec is lagging behind in adapting the data marketing practices of political parties to contemporary privacy standards.

The protection of personal information over time

The data contained in the Québec list of electors have been the cornerstone of all political parties’ electioneering efforts for many years and now form the basis of their respective databases of voter information. It is from this list that parties are able, with the addition of other information collected or purchased, to profile, segment and target voters. An overview of the legislative amendments concerning the disclosure of the information contained in the list of electors reveals two things: (1) its relatively recent private nature, and (2) the fact that political parties’ ability to collect and use personal data about voters never really seems to have been questioned until recently. Parties have mostly reacted by insisting on self-regulation (Élections Québec, 2019).

With regard to the public/private nature of the list of electors, we should note that prior to 1979 it was displayed in public places. Up to 2001, the list of electors of a polling division was even distributed to all voters in that section. Therefore, the list used to be perceived as a public document in order to prevent electoral fraud. Thus, citizens were able to identify potential errors and irregularities.

From 1972 on, the list has been sent to political parties. With the introduction of a permanent list of electors in 1995, political parties and MNAs were granted, in 1997, the right to receive annual copies of the list for verification purposes. Since 2006, parties receive an updated version of the list three times a year. This facilitates the update of their computerised voter databases. It should also be noted that during election periods, all registered electoral candidates are granted access to the list and its content.

Thus, while public access to the list of electors has been considerably reduced, political parties’ access has increased in recent years. Following legislative changes, some information has been removed from the list, the age and profession of the elector for instance. Yet, the Québec list remains the most exhaustive of any Canadian jurisdiction in terms of the quantity of voter information it contains, indicating the name, full address, gender and date of birth of each elector (Élections Québec, 2019, p. 34).

From a legal perspective, Québec parties are not subject to the "two general laws that govern the protection of personal information, namely the Act respecting access to documents held by public bodies and the protection of personal information, which applies in particular to information held by a public body, and the Act respecting the protection of personal information in the private sector, which concerns personal information held by a person carrying on a business within the meaning of section 1525 of the Civil Code of Québec" (Élections Québec, 2019, p. 27). Indirectly, however, the private sector law would apply when a political party chooses to outsource some of its marketing, data collection or digital activities to a private sector firm.

Moreover, the Election Act does not specifically define which uses of data taken from the list of electors are permitted. It merely provides some general provisions. Therefore, parties cannot use or communicate a voter’s information for purposes other than those provided under the Act. It is also illegal to communicate or allow this information to be disclosed to any person who is not lawfully entitled to it.

Instead of strengthening the law, the parties represented in the National Assembly first chose to adopt their own privacy and confidentiality policies. This form of self-regulation, however, has its limits. Even when these norms appear on party websites, they are usually not easy to find, and there is no way to confirm that parties actually enforce them. Only the Coalition avenir Québec and the Québec Liberal Party offer a clear link on their homepage. 3 We analysed each of these policies according to five indicators: the presence of 1) a definition of what constitutes personal information, 2) a reference to the type of use and sharing of data, 3) methods of data collection, 4) privacy and security measures that are taken, and 5) the possibility for an individual to withdraw his or her consent and contact the party in connection with his or her personal information.

Table 1: Summary of personal information processing policies of parties represented at the National Assembly of Québec

Definition of personal information
- CAQ: Information that identifies a person (contact information, name, address and phone number).
- PLQ: Information that identifies a natural person (name, date of birth, email address and mailing address, if the person decides to provide them).
- QS: Information about an identifiable individual, excluding business contact information (name, date of birth, personal email address, and credit card).
- PQ: No definition provided.

Strategic use and sharing of data protocols
- CAQ: To provide news and information about the party; can engage third parties to perform certain tasks (processing donations, making phone calls and providing technical services for the website); written contracts include clauses to protect personal information.
- PLQ: To contact supporters, including by newsletter, about party news and events; to provide a personalised navigation experience on the website, with information targeted by interests and region.
- QS: May disclose personal information to third parties for purposes related to the management of party activities (administration, maintenance or internal management of data, organisation of an event); will not sell, trade, lend or voluntarily disclose to third parties the personal information transmitted.
- PQ: To improve the content of the website and for statistical purposes.

Data collection method
- CAQ: Following contact by email; following subscription to a communication; after filling out an information request form or any other form on a party page, including polls, petitions and party applications; the party reserves the right to use cookies on its site.
- PLQ: Collected only from an online form provided for this purpose.
- QS: Not specified.
- PQ: Not specified.

Privacy and security of data
- CAQ: Personal information is not used for other purposes without first obtaining the consent of the data provider; personal information may be shared internally between the party's head office and its constituency associations.
- PLQ: Respects the confidentiality and protection of personal information collected and used; only people assigned to subscription management or communications with subscribers have access to the information; information is protected against unauthorised access attempts on a server kept in a safe and secure place.
- QS: Respects the privacy and confidentiality of personal information; personal details will not be published or posted on the internet except at the explicit request of the person concerned; information is sent as an encrypted email message that guarantees confidentiality; no guarantee that information disclosed over the internet will not be intercepted by a third party; the site strives to use appropriate technological measures, procedures and storage devices to prevent unauthorised use or disclosure of personal information.
- PQ: No information identifying an individual is collected unless the individual has provided it for this purpose; takes reasonable steps to protect the confidentiality of this information; information transmitted automatically between computers does not identify an individual personally; access to collected information is limited to persons authorised by the party or by law.

Withdrawal of consent and information
- CAQ: Any person registered on a mailing list can unsubscribe at any time; invitation to share questions, comments and suggestions.
- PLQ: Ability to ask to no longer receive party information at any time.
- QS: Ability to withdraw consent at any time on reasonable notice.
- PQ: Not specified.

In general, we find that three of the four parties offer similar definitions of the notion of personal information: the Coalition avenir Québec, the Québec Liberal Party and Québec solidaire. Beyond this indicator, the information available varies from one party to another, and voters have little information on how their personal data may be used. Moreover, only the Coalition avenir Québec and Québec solidaire indicate that they can engage a third party in the processing of data, without having to state the purpose of this processing to the data providers. The Coalition avenir Québec is the only party that specifies its methods of data collection in any detail. Québec solidaire, for its part, is more specific about the measures taken to protect the privacy and security of the data it collects. Finally, the Parti québécois does not specify any mechanism by which electors could withdraw their consent.

Cambridge Analytica as a turning point

Our analysis of media coverage of the partisan and electoral use of voter data in Québec reveals three main conclusions. First, even though Québec political parties, at both the provincial and municipal levels, began collecting, storing and using personal data on voters several years ago, news media attention to these practices is relatively new. Second, the dominant media frame seems to have changed over the years: at first rather anecdotal, coverage of the issue grew in importance and became more suspicious. Finally, the Cambridge Analytica scandal appears as a turning point in news coverage. It was this affair that forced parties and their strategists to explain their practices publicly for the first time (Bélair-Cirino, 2018), put pressure on the government to react, and brought to the fore the concerns and demands of organisations such as Élections Québec and the Commission d'accès à l'information du Québec, the administrative tribunal and oversight body responsible for the protection of personal information in provincial public agencies and private enterprises.

Interest in the ethical and security issues related to data campaigning built up slowly in Québec's political news coverage. As early as 2012, parties used technological means to feed their databases and target the electorate (Giasson et al., 2019), but it was in the context of the municipal elections of fall 2013 that the collection and processing of personal data on voters was first covered in a news report. Only shortly after the 2014 Québec elections did we find a news item dealing specifically with the protection of Québec voters' personal data: the Montréal-based newspaper Le Devoir reported that the list of electors had been made available online, for a fee, by a genealogy institute. The Drouin Institute, which released the list, estimated that about 20,000 people had accessed the data (Fortier, 2014).

Paradoxically, the following year, the media reported that investigators working for Élections Québec could not access the data of the electoral list for the purpose of their inquiry (Lajoie, 2015a). That same year, another anecdotal event made headlines: a Liberal MNA was asked by Élections Québec to stop using the voters list data to call his constituents to... wish them a happy birthday (Lajoie, 2015b). In the 2017 municipal elections, and even more so after the revelations regarding Cambridge Analytica in 2018, the media in Québec seemed to have paid more attention to data-driven electoral party strategies than to the protection of personal data by the parties.

For instance, in the hours following the revelation of the Cambridge Analytica scandal, political reporters covering the National Assembly in Québec quickly turned their attention to party leaders, asking them to account for their organisations' digital practices and for the rules in place to frame them. Simultaneously, Élections Québec, which had been calling for stronger control of personal data use by political parties since 2013, expressed its concerns publicly and fully joined the public debate. To mark its willingness to act on the issue, the Liberal government introduced a bill at the end of the parliamentary session, the last of that legislature. The bill was therefore never adopted by the House, which was dissolved a few days later in preparation for the next provincial election.

Political reporters in Québec have since paid sustained attention to partisan practices regarding the collection and use of personal information. In their coverage of the 2018 election campaign, they widely discussed the use of data by leaders and their political parties. Thus, while the Cambridge Analytica affair did not directly involve Québec political parties, it nevertheless appears as a turning point in the media coverage of the use of personal data for political purposes.

Media framing of the issue also evolved over the period studied, becoming more critical and suspicious of partisan data marketing over time. Before the Cambridge Analytica case, coverage rarely focused on the democratic consequences or the privacy and security issues associated with the use of personal data for political purposes. Initial coverage was largely dominated by stories depicting how parties were innovating in electioneering and how digital technologies could improve electoral communication. Journalists mostly cited the official discourse of political leaders, their strategists, or the digital entrepreneurs from tech companies who worked with them.

An illustrative example of this type of coverage is an article published in September 2013, during municipal elections held in Québec, profiling two Montréal-based data analysis companies, Democratik and Vote Rapide, that offered technological services to political parties (Champagne, 2013). Their tools were depicted as simple databases fed by volunteers, mainly intended to identify sympathisers and facilitate get-out-the-vote (GOTV) operations. The article emphasised the affordability and widespread use of these programmes by parties, and even noted that one of them had been developed with the support of the Civic Action League, a non-profit organisation that helps fight political corruption.

However, as the years passed, a change of tone began to permeate the coverage, especially in the months leading up to the 2018 general election. A critical frame became more evident in reporting, which even invoked Orwellian references to data campaigning in headlines such as "Political parties are spying on you" (Castonguay, 2015), "They all have a file on you" (Joncas, 2018), "What parties know about you" (Croteau, 2018), and "Political parties exchange your personal details" (Robichaud, 2018). In a short period of time, data campaigning had gone from cool to dangerous.

Conclusion

Québec political parties began their digital shift a few years later than their Canadian federal counterparts. However, they have adapted their digital marketing practices rapidly, much faster, in fact, than the regulatory framework has evolved. For the 2018 election, all major parties invested considerable resources to be up to date on data-driven campaigning.

To maximise the return on their investment in technology, they must now "feed the beast" with more data. Because regulation of data marketing remains weak, parties will be able to gather even more personal information in the years to come without having to explain to voters how their data are used or protected. In addition, parties now involve a growing number of volunteers in the field collection of personal digital information, which also increases the risk of data leakage or misuse.

They have, so far, implemented this change with very limited transparency. To date, research in Canada has not been able to identify precisely what kind of information is collected or how it is managed and protected. Canadian political strategists have been somewhat forthcoming in explaining how parties collect personal data and why they use it for electoral purposes (see, for instance, Giasson et al., 2019; Giasson & Small, 2017; Flanagan, 2014; Marland, 2016). They remain silent, however, on the topics of regulation and data protection.

This lack of transparency is especially problematic in Canada because party leaders who win elections hold far more internal power in British-style parliamentary systems than in the US presidential system. They control the executive and legislative branches as well as the administration of the party. This means that there is no firewall, and no real restriction, on the use of data collected by a party during an election once that party takes office. In that regard, it was revealed that the Office of the Prime Minister of Canada, Justin Trudeau, used its party's database to vet judicial nominations (Bergeron, 2019). The same risks apply to Québec.

It is in this context that Élections Québec and the Access to Information Commission of Québec have initiated a broad reflection on the electoral use of personal data by parties. In 2018, following a leak of personal data from donors of a Montréal-based municipal party, the commission contacted the campaign to "examine the measures taken to minimise risks". The commission took the opportunity to "emphasise the importance of political parties being clearly subject to privacy rules, as is the case in British Columbia" (Commission d’accès à l’information du Québec, 2018).

In a report published in February 2019, the Chief Electoral Officer of Québec presented recommendations that parties should follow in their voter data collection and analysis procedures (Élections Québec, 2019). It suggested that provincial and municipal political parties be made subject to a general legislative framework for the protection of personal information. Heeding these calls for change, Québec's new Minister of Justice and Democratic Reform announced, in November 2019, plans for an overhaul of the province's regulatory framework on personal data and privacy, which would impose stronger rules on data protection and use and grant increased investigative powers to the head of the Commission d'accès à l'information. All businesses, organisations, governments and public administrations operating in Québec and collecting personal data would be covered by these new provisions and could be subject to massive fines for any data breach in their systems. Aimed at ensuring better control, transparency and consent of citizens over their data, these measures, to be part of a bill introduced in the National Assembly in 2020, were said to also apply to political parties (Croteau, 2019). However, as this article goes to print, the specific details of the provisions aimed at political parties remain unknown.

This new will to regulate political parties is the result of a perfect storm in which three factors came into play at the same time: the rapid integration of new data collection technologies by Québec's main political parties, increased pressure from regulatory agencies, and an international scandal that changed the media framing of the political use of personal data.

Well beyond the issue of privacy, data collection and analysis for electoral purposes also change some features of our democracy. Technology replacing activists translates into major intra-party changes. In a parliamentary system, it could increase the centralisation of power around party leaders, who now rely less on party members to get elected. This would likely be the case in Québec and in Canada.

Some elements also fuel resistance to change within parties, such as dependence on digital technologies to the detriment of human contact, fears about the reliability of systems or data, and the high costs of developing and maintaining databases. For some, party culture also plays a role. A former political strategist who worked closely with former Québec premier Pauline Marois told the media: "You know, in some parties, we value the activist work done by old ladies who come to make calls and talk to each voter, one by one" (Radio-Canada, 2017).

As some of our respondents mentioned, parties may move from 'big data' to 'smart data' in the coming years as they adapt to or adopt novel technological tools. In an era of partisan flexibility, data marketing seems to have helped some parties find and reach their voters. A move towards 'smart data' may now also help them modify those voters' beliefs through even more targeted digital strategies. What might this mean for democracy in Québec? Will its voters be mobilised or manipulated when parties use their data in upcoming campaigns? Are political parties on the edge of glory or of catastrophe? These questions should be central to the study of data-driven campaigning.

References

Baldwin-Philippi, J. (2017). The Myths of Data-Driven Campaigning. Political Communication, 34(7), 627–633. https://doi.org/10.1080/10584609.2017.1372999

Bashyakarla, V., Hankey, S., Macintyre, S., Rennó, R., & Wright, G. (2019). Personal Data: Political Persuasion. Inside the Influence Industry. How it works. Berlin: Tactical Tech. Retrieved from https://cdn.ttc.io/s/tacticaltech.org/Personal-Data-Political-Persuasion-How-it-works_print-friendly.pdf

Bélair-Cirino, M. (2018). Inquiétude à Québec sur les banques de données politiques [Concern in Quebec City about Political Databanks]. Le Devoir. Retrieved from https://www.ledevoir.com/societe/523240/donnees-personnelles-inquietude-a-quebec

Bélair-Cirino, M. (2017, April 15). Vie privée – Connaître les électeurs grâce aux petitions [Privacy - Getting to know voters through petitions]. Le Devoir. Retrieved from https://www.ledevoir.com/politique/quebec/496477/vie-privee-connaitre-les-electeurs-grace-aux-petitions

Bergeron, P. (2018, May 26). Le Parti québécois se dote d'une «Force bleue» pour gagner les élections [The Parti Québécois has a "Force Bleue" to win elections]. La Presse. Retrieved from https://www.lapresse.ca/actualites/politique/politique-quebecoise/201805/26/01-5183364-le-parti-quebecois-se-dote-dune-force-bleue-pour-gagner-les-elections.php

Bergeron, É. (2019, April 24). Vérification politiques sur de potentiels juges: l’opposition crie au scandale [Political checks on potential judges: Opposition cries out for scandal]. TVA Nouvelles. Retrieved from https://www.tvanouvelles.ca/2019/04/24/verification-politiques-sur-de-potentiels-juges-lopposition-crie-au-scandale

Bennett, C. J. (2018). Data-driven elections and political parties in Canada: privacy implications, privacy policies and privacy obligations. Canadian Journal of Law and Technology, 16(2), 195–226. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3146964

Bennett, C. J. (2016). Voter databases, micro-targeting, and data protection law: can political parties campaign in Europe as they do in North America? International Data Privacy Law, 6(4), 261–275. https://doi.org/10.1093/idpl/ipw021

Bennett, C. J. (2015). Trends in voter surveillance in Western societies: privacy intrusions and democratic implications. Surveillance & Society, 13(3-4), 370–384. https://doi.org/10.24908/ss.v13i3/4.5373

Bennett, C. J. (2013). The politics of privacy and the privacy of politics: Parties, elections and voter surveillance in Western democracies. First Monday, 18(8). https://doi.org/10.5210/fm.v18i8.4789

Blais, A., & Robillard, A. (2017, October 4). 1 million $ pour un logiciel électoral [1 million for election software]. Le Journal de Montréal. Retrieved from https://www.journaldemontreal.com/2017/10/04/1-million--pour-un-logiciel-electoral

Cadwalladr, C., & Graham-Harrison, E. (2018, March 17). Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. The Guardian. Retrieved from https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election

Carlile, C. N. (2017). Data and Targeting in Canadian Politics: Are Provincial Parties Taking Advantage of the Latest Political Technology? [Master's thesis, University of Calgary]. https://doi.org/10.11575/PRISM/5226

Castonguay, A. (2015, September 14). Les partis politiques vous espionnent [The political parties are spying on you]. L’Actualité. Retrieved from https://lactualite.com/societe/les-partis-politiques-vous-espionnent/

Champagne, V. (2013, September 25). Des logiciels de la Rive-Nord pour gagner les élections [Rive-Nord software to win elections]. Ici Radio-Canada.

Commission d’accès à l’information du Québec. (2018, April 3). La Commission d’accès à l’information examinera les faits sur la fuite de données personnelles de donateurs du parti Équipe Denis Coderre [The Commission d'accès à l'information will examine the facts on the leak of personal data of Team Denis Coderre donors]. Retrieved from http://www.cai.gouv.qc.ca/la-commission-dacces-a-linformation-examinera-les-faits-sur-la-fuite-de-donnees-personnelles-de-donateurs-du-parti-equipe-denis-coderre/

Croteau, M. (2018, August 20). Ce que les partis savent sur vous [What the parties know about you]. La Presse+. Retrieved from http://mi.lapresse.ca/screens/8a829cee-9623-4a4c-93cf-3146a9c5f4cc__7C___0.html

Croteau, M. (2019, November 22). Données personnelles: un chien de garde plus. Imposant [Personal data: one guard dog more. Imposing]. La Presse+. Retrieved from https://www.lapresse.ca/actualites/politique/201911/22/01-5250741-donnees-personnelles-un-chien-de-garde-plus-imposant.php

Del Duchetto, J.-C. (2016). Le marketing politique chez les partis politiques québécois lors des élections de 2012 et de 2014 [Political marketing by Quebec political parties in the 2012 and 2014 elections] [Master’s thesis, University of Montréal]. Retrieved from http://hdl.handle.net/1866/19404

Delacourt, S. (2013). Shopping for votes. How politicians choose us and we choose them. Madeira Park: Douglas & McIntyre.

Dobber, T., Trilling, D., Helberger, N. & de Vreese, C. H. (2017). Two crates of beer and 40 pizzas: the adoption of innovative political behavioural targeting techniques. Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.777

Dufresne, Y., Tessier, C., & Montigny, E. (2019). Generational and Life-Cycle Effects on Support for Quebec Independence. French politics, 17(1), 50–63. https://doi.org/10.1057/s41253-019-00083-9

Ehrhard, T., Bambade, A., & Colin, S. (2019). Digital campaigning in France, a Wide Wild Web? Emergence and evolution of the market and Its players. In A. M. G. Solo (Ed.), Handbook of Research on Politics in the Computer Age (pp. 113-126). Hershey (PA), USA: IGI Global. https://doi.org/10.4018/978-1-7998-0377-5.ch007

Élections Québec. (2019). Partis politiques et protection des renseignements personnels: exposé de la situation québécoise, perspectives comparées et recommandations [Political Parties and the Protection of Personal Information: Presentation of the Quebec Situation, Comparative Perspectives and Recommendations]. Retrieved from https://www.pes.electionsquebec.qc.ca/services/set0005.extranet.formulaire.gestion/ouvrir_fichier.php?d=2002

Enli, G. & Moe, H. (2013). Social media and election campaigns – key tendencies and ways forward. Information, Communication & Society, 16(5), 637–645. https://doi.org/10.1080/1369118x.2013.784795

Flanagan, T. (2014). Winning power. Canadian campaigning in the 21st century. Montréal; Kingston: McGill-Queen’s University Press.

Flanagan, T. (2010). Campaign strategy: triage and the concentration of resources. In H. MacIvor (Ed.), Election (pp. 155-172). Toronto: Emond Montgomery Publications.

Fortier, M. (2014, May 29). La liste électorale du Québec vendue sur Internet [Quebec's list of electors sold on the Internet]. Le Devoir. Retrieved from https://www.ledevoir.com/societe/409526/la-liste-electorale-du-quebec-vendue-sur-internet

Giasson, T., & Small, T. A. (2017). Online, all the time: the strategic objectives of Canadian opposition parties. In A. Marland, T. Giasson, & A. L. Esselment (Eds.), Permanent campaigning in Canada (pp. 109-126). Vancouver: University of British Columbia Press.

Giasson, T., Le Bars, G. & Dubois, P. (2019). Is Social Media Transforming Canadian Electioneering? Hybridity and Online Partisan Strategies in the 2012 Québec Election. Canadian Journal of Political Science, 52(2), 323–341. https://doi.org/10.1017/s0008423918000902

Gibson, R. K. (2015). Party change, social media and the rise of ‘citizen-initiated’ campaigning. Party Politics, 21(2), 183-197. https://doi.org/10.1177/1354068812472575

Hersh, E. D. (2015). Hacking the electorate: how campaigns perceive voters. Cambridge: Cambridge University Press. https://doi.org/10.1017/cbo9781316212783

Howard, P. N., & Kreiss, D. (2010). Political parties and voter privacy: Australia, Canada, the United Kingdom, and United States in comparative perspective. First Monday, 15(12). https://doi.org/10.5210/fm.v15i12.2975

Joncas, H. (2018, July 28). Partis politiques : ils vous ont tous fichés [Political parties: they've got you all on file…]. Journal de Montréal. Retrieved from https://www.journaldemontreal.com/2018/07/28/partis-politiques-ils-vous-ont-tous-fiches

Karpf, D., Kreiss, D., Nielsen, R. K., & Powers, M. (2015). The role of qualitative methods in political communication research: past, present, and future. International Journal of Communication, 9(1), 1888–1906. Retrieved from https://ijoc.org/index.php/ijoc/article/view/4153

Kreiss, D. (2016). Prototype politics. Technology-intensive campaigning and the data of democracy. Oxford, UK: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199350247.001.0001

Lajoie, G. (2015a, December 3). Les enquêteurs du DGEQ privés des informations contenues dans la liste électorale [DGEQ investigators deprived of the information contained in the list of electors]. Le Journal de Montréal. Retrieved from https://www.journaldemontreal.com/2015/12/03/le-dge-prive-ses-propres-enqueteurs-des-informations

Lajoie, G. (2015b, November 27). André Drolet ne peut plus souhaiter bonne fête à ses électeurs [André Drolet can no longer wish his constituents a happy birthday]. Le Journal de Québec. Retrieved from https://www.journaldequebec.com/2015/11/27/interdit-de-souhaiter-bon-anniversaire-a-ses-electeurs

Langlois, S. (2018). Évolution de l'appui à l'indépendance du Québec de 1995 à 2015 [Evolution of Support for Quebec Independence from 1995 to 2015]. In A. Binette and P. Taillon (Eds.), La démocratie référendaire dans les ensembles plurinationaux (pp. 55-84). Québec: Presses de l'Université Laval.

Marland, A. (2016). Brand command: Canadian politics and democracy in the age of message control. Vancouver: University of British Columbia Press.

Marland, A., Giasson, T., & Lees-Marshment, J. (2012). Political marketing in Canada. Vancouver: University of British Columbia Press.

McKelvey, F., & Piebiak, J. (2018). Porting the political campaign: The NationBuilder platform and the global flows of political technology. New Media & Society, 20(3), 901–918. https://doi.org/10.1177/1461444816675439

Montigny, E. (2015). The decline of activism in political parties: adaptation strategies and new technologies. In G. Lachapelle & P. J. Maarek (Eds.), Political parties in the digital age. The Impact of new technologies in politics (pp. 61-72). Berlin: De Gruyter. https://doi.org/10.1515/9783110413816-004

Munroe, K. B., & Munroe, H. D. (2018). Constituency campaigning in the age of data. Canadian Journal of Political Science, 51(1), 135–154. https://doi.org/10.1017/S0008423917001135

Patten, S. (2017). Databases, microtargeting, and the permanent campaign: a threat to democracy. In A. Marland, T. Giasson, & A. Esselment. (Eds.), Permanent campaigning in Canada (pp. 47-64). Vancouver: University of British Columbia Press.

Patten, S. (2015). Data-driven microtargeting in the 2015 general election. In A. Marland and T. Giasson (Eds.), 2015 Canadian election analysis. Communication, strategy, and democracy. Vancouver: University of British Columbia Press. Retrieved from http://www.ubcpress.ca/asset/1712/election-analysis2015-final-v3-web-copy.pdf

Pelletier, R. (1989). Partis politiques et société québécoise [Political parties and Quebec society]. Montréal: Québec Amérique.

Radio-Canada. (2017, October 1). Episode of Sunday, October 1, 2017 [Television series episode]. In Les Coulisses du Pouvoir [Behind the scenes of power]. ICI RD. Retrieved from https://ici.radio-canada.ca/tele/les-coulisses-du-pouvoir/site/episodes/391120/joly-charest-sondages

Robichaud, O. (2018, August 20). Les partis politiques s'échangent vos coordonnées personnelles [Political parties exchange your personal contact information]. Huffpost Québec. Retrieved from https://quebec.huffingtonpost.ca/entry/les-partis-politiques-sechangent-vos-coordonnees-personnelles_qc_5cccc8ece4b089f526c6f070

Salvet, J.-M. (2018, January 31). Entente entre le PLQ et Data Sciences: «Tous les partis politiques font ça», dit Couillard [Agreement between the QLP and Data Sciences: "All political parties do that," says Couillard]. Le Soleil. Retrieved from https://www.lesoleil.com/actualite/politique/entente-entre-le-plq-et-data-sciences-tous-les-partis-politiques-font-ca-dit-couillard-21f9b1b2703cdba5cd95e32e7ccc574f

Thomas, P. G. (2015). Political parties, campaigns, data, and privacy. In A. Marland and T. Giasson (Eds.), 2015 Canadian election analysis. Communication, strategy, and democracy (pp. 16-17). Vancouver: University of British Columbia Press. Retrieved from http://www.ubcpress.ca/asset/1712/election-analysis2015-final-v3-web-copy.pdf

Vaccari, C. (2013). Digital politics in western democracies: a comparative study. Baltimore: Johns Hopkins University Press.

Yawney, L. (2018). Understanding the “micro” in micro-targeting: an analysis of the 2018 Ontario provincial election [Master’s thesis, University of Victoria]. Retrieved from https://dspace.library.uvic.ca//handle/1828/10437

Footnotes

1. Even though there are 22 officially registered political parties in Québec, all independent and autonomous from their counterpart at the federal level, only four are represented at the National Assembly: CAQ, QLP, QS and PQ. Since the Québec political system is based on the Westminster model, each MNA is elected in a given constituency by a first-past-the-post ballot.

2. According to the QS website (viewed July 2, 2019).

3. Websites viewed on 27 March 2019.

Towards a holistic perspective on personal data and the data-driven election paradigm


This commentary is part of Data-driven elections, a special issue of Internet Policy Review guest-edited by Colin J. Bennett and David Lyon.

Politics is an art and not a science, and what is required for its mastery is not the rationality of the engineer but the wisdom and the moral strength of the statesman. - Hans Morgenthau, Scientific Man versus Power Politics

Voters, industry representatives, and lawmakers – and not infrequently, journalists and academics as well – have asked one question more than any other when presented with evidence of how personal data is changing modern-day politicking: “Does it work?” As my colleagues and I have detailed in our report, Personal Data: Political Persuasion, the convergence of politics and commercial data brokering has transformed personal data into a political asset, a means for political intelligence, and an instrument for political influence. The practices we document are varied and global: an official campaign app requesting camera and microphone permissions in India, experimentation to select slogans designed to trigger emotional responses from Brexit voters, a robocalling-driven voter suppression campaign in Canada, attack ads used to control voters’ first impressions on search engines in Kenya, and many more.

Asking “Does it work?” is understandable for many reasons, including to address any real or perceived damage to the integrity of an election, to observe shifts in attitudes or voting behaviour, or perhaps to ascertain and harness the democratic benefits of the technology in question. However, discourse fixated on the efficacy of data-intensive tools is fraught with abstraction and reflects a shortsighted appreciation for the full political implications of data-driven elections.

“Does it work?”

The question “Does it work?” is very difficult to answer with any degree of confidence regardless of the technology in question: personality profiling of voters to influence votes, natural language processing applied to the Twitter pipeline to glean information about voters’ political leanings, political ads delivered in geofences, or a myriad of others.

First, the question glosses over important details. The technologies themselves are a heterogeneous mix, and their real-world implementations are manifold. Furthermore, questions of efficacy are often divorced from context, and a technology’s usefulness to a campaign very likely depends on the sociopolitical context in which it operates. Finally, the question of effectiveness continues to be studied extensively. Predictably, the conclusions of peer-reviewed research vary.

As one example, the effectiveness of implicit social pressure in direct mail in the United States evidently remains inconclusive. The motivation for this research is the observation that voting is a social norm responsive to others’ impressions (Blais, 2000; Gerber & Rogers, 2009). However, some evidence suggests that explicit social pressure to mobilise voters (e.g., by disclosing their vote histories) may seem invasive and can backfire (Matland & Murray, 2013). In an attempt to preserve the benefits of social pressure while mitigating its drawbacks, researchers have explored whether implicit social pressure in direct mail (i.e., mailers with an image of eyes, reminding recipients of their social responsibility) boosts turnout on election day. Evaluating implicit social pressure, which had apparently been regarded as effective, political scientists Richard Matland and Gregg Murray concluded that “[t]he effects are substantively and statistically weak at best and inconsistent with previous findings” (Matland & Murray, 2016). In response to similar, repeated findings from Matland and Murray, Costas Panagopoulos wrote that their work in fact “supports the notion that eyespots likely stimulate voting, especially when taken together with previous findings” (Panagopoulos, 2015). Panagopoulos soon thereafter authored a paper arguing that the true impact of implicit social pressure varies with political identity, claiming that the effect is pronounced for Republicans but not for Democrats or Independents, while Matland maintained that the effect is “fairly weak” (Panagopoulos & van der Linden, 2016; Matland, 2016).

Similarly, studies on the effects of door-to-door canvassing lack consensus (Bhatti et al., 2019). Donald Green, Mary McGrath, and Peter Aronow published a review of seventy-one canvassing experiments and found their average impact to be robust and credible (Green, McGrath, & Aronow, 2013). A number of other experiments have demonstrated that canvassing can boost voter turnout outside the American-heavy literature: among students in Beijing in 2003, with British voters in 2005, and for women in rural Pakistan in 2008 (Guan & Green, 2006; John & Brannan, 2008; Giné & Mansuri, 2018). Studies from Europe, however, call into question the generalisability of these findings. Two studies on campaigns in 2010 and 2012 in France both produced ambiguous results, as the true effect of canvassing was not credibly positive (Pons, 2018; Pons & Liegey, 2019). Experiments conducted during the 2013 Danish municipal elections observed no definitive effect of canvassing, while Enrico Cantoni and Vincent Pons found that visits by campaign volunteers in Italy helped increase turnout, but those by the candidates themselves did not (Bhatti et al., 2019; Cantoni & Pons, 2017). In some cases, the effect of door-to-door canvassing was neither positive nor ambiguous but distinctly counterproductive. Florian Foos and Peter John observed that voters contacted by canvassers and given leaflets for the 2014 British European Parliament elections were 3.7 percentage points less likely to vote than those in the control group (Foos & John, 2018). Putting these together, the effects of canvassing still seem positive in Europe, but they are less pronounced than in the US. These findings have led some scholars to note that “practitioners should be cautious about assuming that lessons from a US-dominated field can be transferred to their own countries’ contexts” (Bhatti et al., 2019).
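The percentage-point effects these studies report come from comparing turnout rates between a treated group and a control group. As a rough illustration of that arithmetic, a minimal sketch in Python; all counts below are invented for the example and do not come from any cited study:

```python
import math

# Hypothetical turnout counts for a canvassing field experiment,
# in the spirit of the studies discussed above. The numbers are
# invented for illustration only.
treated_voted, treated_total = 1520, 4000   # canvassed group
control_voted, control_total = 1668, 4000   # untouched control group

p_t = treated_voted / treated_total
p_c = control_voted / control_total

# Effect expressed in percentage points, the unit used in the literature.
effect_pp = (p_t - p_c) * 100

# Two-proportion z-test: is the difference distinguishable from noise?
p_pool = (treated_voted + control_voted) / (treated_total + control_total)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / treated_total + 1 / control_total))
z = (p_t - p_c) / se

print(f"effect: {effect_pp:+.1f} percentage points, z = {z:.2f}")
```

With these invented counts the estimated effect is -3.7 percentage points, numerically echoing the Foos and John figure. The cited studies of course rely on far richer designs (randomisation checks, covariates, meta-analysis) than this two-group comparison.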

A cursory glance at a selection of literature related to these two cases alone – implicit social pressure and canvassing – illustrates how tricky answering “Does it work?” is. Although many of the technologies in use today are personal data-supercharged analogues of these antecedents (e.g., canvassing apps with customised scripts and talking points based on data about each household’s occupants instead of generic, door-to-door knocking), I have no reason to suspect that analyses of data-powered technologies would be any different. The short answer to “Does it work?” is that it depends. It depends on baseline voter turnout rates, print vs. digital media, online vs. offline vs. both combined, targeting young people vs. older people, reaching members of a minority group vs. a majority group, partisan vs. nonpartisan messages, cultural differences, the importance of the election, local history, and more. Indeed, factors like the electoral setup may alter the effectiveness of a technology altogether. A tool for political persuasion might work in a first-past-the-post contest in the United States but not in a European system of proportional representation in which winner-take-all stakes may be tempered. This is not to suggest that asking “Does it work?” is a futile endeavour – indeed there are potential democratic benefits to doing so – but rather that it is both limited in scope and rather abstract given the multitude of factors and conditions at play in practice.

Political calculus and algorithmic contagion

With this in mind, I submit that a more useful approach to appreciating the full impacts of data-driven elections may be to consider the preconditions that allow data-intensive practices to thrive and to examine their consequences, rather than to remain preoccupied with the efficacy of the practices themselves.

In a piece published in 1986, philosopher Ian Hacking coined the term ‘semantic contagion’ to describe the process of ascribing linguistic and cultural currency to a phenomenon by naming it and thereby also contributing to its spread (Hacking, 1999). I propose that the prevailing political calculus, spurred on by the commercial success of “big data” and “AI”, appears overtaken by an ‘algorithmic contagion’ of sorts. On one level, algorithmic contagion speaks to the widespread logic of quantification. For example, understanding an individual is difficult, so data brokers instead measure people along a number of dimensions like level of education, occupation, credit score, and others. On another level, algorithmic contagion in this context describes an interest in modelling anything that could be valuable to political decision-making, as Market Predict’s political page suggests. It presumes that complex phenomena, like an individual’s political whims, can be predicted and known within the structures of formalised algorithmic process, and that they ought to be. According to the Wall Street Journal, a company executive claimed that Market Predict’s “agent-based modelling allows the company to test the impact on voters of events like news stories, political rallies, security scares or even the weather” (Davies, 2019).
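To make concrete what a claim like “agent-based modelling” refers to, the toy sketch below simulates an electorate of agents whose latent turnout propensity is nudged by an event. Everything here (the agent design, the shock size, the turnout threshold) is invented for illustration; it is not a description of Market Predict’s proprietary models, which are almost certainly far more complex:

```python
import random

random.seed(0)  # make the toy simulation reproducible

class Voter:
    """A maximally simplified voter agent."""
    def __init__(self):
        # Latent propensity to turn out, between 0 and 1.
        self.propensity = random.random()

    def react(self, shock):
        # An "event" (news story, rally, bad weather) nudges propensity,
        # clamped to the [0, 1] range.
        self.propensity = min(1.0, max(0.0, self.propensity + shock))

def turnout(electorate):
    # A voter turns out if propensity exceeds a fixed threshold.
    return sum(v.propensity > 0.5 for v in electorate) / len(electorate)

def simulate_event(electorate, shock):
    for v in electorate:
        v.react(shock)
    return turnout(electorate)

electorate = [Voter() for _ in range(10_000)]
baseline = turnout(electorate)
after_rally = simulate_event(electorate, shock=+0.05)  # mobilising event
print(f"baseline turnout {baseline:.1%}, after event {after_rally:.1%}")
```

The appeal to campaigns is obvious: once voters are reduced to parameters, any hypothetical event can be "tested" cheaply. The sketch also shows why the practice invites scepticism, since every modelling choice above is an assumption.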

Algorithmic contagion also encompasses a predetermined set of boundaries. Thinking within the capabilities of algorithmic methods prescribes a framework to interpret phenomena within bounds that enable the application of algorithms to those phenomena. In this respect, algorithmic contagion can influence not only what is thought but also how. This conceptualisation of algorithmic contagion encompasses the ontological (through efforts to identify and delineate components that structure a system, like an individual’s set of beliefs), the epistemological (through the iterative learning process and distinction drawn between approximation and truth), and the rhetorical (through authority justified by appeals to quantification).

Figure 1: The political landing page of Market Predict, a marketing optimisation firm for brand and political advertisers, that explains its voter simulation technology. It claims to, among other things, “Account for the irrationality of human decision-making”. Hundreds of companies offer related services. Source: Market Predict Political Advertising

This algorithmic contagion-informed formulation of politics bears some connection to the initial “Does it work?” query but expands the domain in question to not only the applications themselves but also the components of the system in which they operate – a shift that an honest analysis of data-driven elections, and not merely ad-based micro-targeting, demands. It explains why and how a candidate for mayor in Taipei in 2014 launched a viral social media sensation by going to a tattoo parlour. He did not visit the parlour to get a tattoo, to chat with an artist about possible designs, or out of a genuine interest in meeting the people there. He went because a digital listening company that mines troves of data and services campaigns across Southeast Asia had produced a list of actions that would generate the most buzz online for his campaign, and visiting a tattoo parlour was at the top of the list.

Figure 2: A still from a video documenting Dr Ko Wen-je’s visit to a tattoo parlour, prompting a social media sensation. His campaign uploaded the video a few days before municipal elections in which he was elected mayor of Taipei in 2014. The post on Facebook has 15,000 likes, and the video on YouTube has 153,000 views. Against a backdrop of creeping voter surveillance, Ko Wen-je’s visit to this tattoo parlour raises questions about the authenticity of political leaders. (Image brightened for clarity) Sources: Facebook and YouTube

As politics continues to evolve in response to algorithmic contagion and to the data industrial complex governing the commercial (and now also political) zeitgeist, it is increasingly concerned with efficiency and speed (Schechner & Peker, 2018). Which influencer voters must we win over, and whom can we afford to ignore? Who is both the most likely to turn out to vote and also the most persuadable? How can our limited resources be allocated as efficiently as possible to maximise the probability of winning? In this nascent approach to politics as a practice to be optimised, who is deciding what is optimal? Relatedly, as the infrastructure of politics changes, who owns the infrastructure upon which more and more democratic contests are waged, and to what incentives do they respond?

Voters are increasingly treated as consumers – measured, ranked, and sorted by a logic imported from commerce. Instead of being sold shoes, plane tickets, and lifestyles, voters are being sold political leaders, and structural similarities to other kinds of business are emerging. One challenge posed by data-driven election operations is the manner in which responsibilities have effectively been transferred. Voters expect their interests to be protected by lawmakers while indiscriminately clicking “I Agree” to free services online. Efforts to curtail problems through laws are proving to be slow, mired in legalese, and vulnerable to technological circumvention. Based on my conversations with them, venture capitalists are reluctant to champion a transformation of the whole industry by imposing unprecedented privacy standards on their budding portfolio companies, which claim to be merely responding to the demands of users. The result is an externalised cost shouldered by the public. In this case, however, the externality is not an environmental or a financial cost but a democratic one. The manifestations of these failures include the disintegration of the public sphere and of a shared understanding of facts, polarised electorates embroiled in 365-day-a-year campaign cycles online, and open campaign finance and conflict of interest loopholes introduced by data-intensive campaigning, all of which are exacerbated by a growing revolving door between the tech industry and politics (Kreiss & McGregor, 2017).

Personal data and political expediency

One response to Cambridge Analytica is “Does psychometric profiling of voters work?” (Rosenberg et al., 2018). A better response examines what the use of psychometric profiling reveals about the intentions of those attempting to acquire political power. It asks what it means that a political campaign was apparently willing to invest the time and money into building personality profiles of every single adult in the United States in order to win an election, regardless of the accuracy of those profiles, even when surveys of Americans indicate that they do not want political advertising tailored to their personal data (Turow et al., 2012). And it explores the ubiquity of services that may lack Cambridge Analytica’s sensationalised scandal but share the company’s practice of collecting and using data in opaque ways for clearly political purposes.

The ‘Influence Industry’ underlying this evolution has evangelised the value of personal data, but to whatever extent personal data is an asset, it is also a liability. What risks do the collection and use of personal data expose? In the language of the European Union’s General Data Protection Regulation (GDPR), who are the data controllers, and who are the data subjects in matters of political data which is, increasingly, all data? In short, who gains control, and who loses it?

As a member of a practitioner-oriented group based in Germany with a grounding in human rights, I worry about data-intensive practices in elections and the larger political sphere going awry, especially as much of our collective concern seems focused on questions of efficacy while companies race to capitalise on the market opportunity. By the standards of its time, the Holocaust was a ruthlessly data-driven, calculated, and efficient undertaking fuelled by vast amounts of personal data. As Edwin Black documents in IBM & The Holocaust: The Strategic Alliance between Nazi Germany and America's Most Powerful Corporation, personal data managed by IBM was an indispensable resource for the Nazi regime. IBM’s president at the time, Thomas J. Watson Sr., the namesake of today’s IBM Watson, went to great lengths to profit from dealings between IBM’s German subsidiary and the Nazi party. The firm was such an important ally that Hitler awarded Watson the Order of the German Eagle for his invaluable service to the Third Reich. IBM aided the Nazis’ record-keeping across several phases of the Holocaust, including identification of Jews, ghettoisation, deportation, and extermination (Black, 2015). Black writes that “Prisoners were identified by descriptive Hollerith cards, each with columns and punched holes detailing nationality, date of birth, marital status, number of children, reason for incarceration, physical characteristics, and work skills” (Black, 2001). These Hollerith cards were sorted in machines physically housed in concentration camps.

The precursors to these Hollerith cards were originally developed to track personal details for the first American census. The next American census, to be held in 2020, has already been a highly politicised affair with respect to the addition of a citizenship question (Ballhaus & Kendall, 2019). President Trump recently abandoned an effort to formally add a citizenship question to the census, vowing to seek this information elsewhere, and the US Census Bureau has already published work investigating the quality of alternate citizenship data sources for the 2020 Census (Brown et al., 2018). For stakeholders interested in upholding democratic ideals, focusing on the accuracy of this alternate citizenship data is myopic; that an alternate source of data is being investigated to potentially advance an overtly political goal is the crux of the matter.

Figure 3: A card showing the personal data of Symcho Dymant, a prisoner at Buchenwald Concentration Camp. The card includes many pieces of personal data, including name, birth date, condition, number of children, place of residence, religion, citizenship, residence of relatives, height, eye colour, description of his nose, mouth, ears, teeth, and hair. Source: US Holocaust Memorial Museum

This prospect may seem far-fetched and alarmist to some, but I do not think so. If the political tide were to turn, the same personal data used for a benign digital campaign could be employed in insidious and downright unscrupulous ways if it were ever expedient to do so. What if a door-to-door canvassing app instructed volunteers walking down a street to skip your home and not remind your family to vote because a campaign profiled you as supporters of the opposition candidate? What if a data broker classified you as Muslim, or an algorithmic analysis of your internet browsing history suggested that you are prone to dissent? Possibilities like these are precisely why a fixation on efficacy is parochial. Given the breadth and depth of personal data used for political purposes, the line between consulting data to inform a political decision and appealing to data – given the rhetorical persuasiveness it enjoys today – in order to weaponise a political idea is extremely thin.

A holistic appreciation of data-driven elections’ democratic effects demands more than simply measurement, and answering “Does it work?” is merely one component of grasping how campaigning transformed by technology and personal data is influencing our political processes and the societies they engender. As digital technologies continue to rank, prioritise, and exclude individuals even when – indeed, especially when – inaccurate, we ought to consider the larger context in which technological practices shape political outcomes in the name of efficiency. The infrastructure of politics is changing, charged with an algorithmic contagion, and a well-rounded perspective requires that we ask not only how these changes are affecting our ideas of who can participate in our democracies and how they do so, but also who derives value from this infrastructure and how they are incentivised, especially when benefits are enjoyed privately but costs sustained democratically. The quantitative tools underlying the ‘datafication’ of politics are neither infallible nor safe from exploitation, and issues of accuracy grow moot when data-intensive tactics are enlisted as pawns in political agendas. A new political paradigm is emerging whether or not it works.

References

Ballhaus, R., & Kendall, B. (2019, July 11). Trump Drops Effort to Put Citizenship Question on Census. The Wall Street Journal. Retrieved from https://www.wsj.com/articles/trump-to-hold-news-conference-on-census-citizenship-question-11562845502

Bhatti, Y., Olav Dahlgaard, J., Hedegaard Hansen, J., & Hansen, K. M. (2019). Is Door-to-Door Canvassing Effective in Europe? Evidence from a Meta-Study across Six European Countries. British Journal of Political Science, 49(1), 279–290. https://doi.org/10.1017/S0007123416000521

Black, E. (2015, March 17). IBM’s Role in the Holocaust -- What the New Documents Reveal. HuffPost. Retrieved from https://www.huffpost.com/entry/ibm-holocaust_b_1301691

Black, E. (2001). IBM & The Holocaust: The Strategic Alliance between Nazi Germany and America's Most Powerful Corporation. New York: Crown Books.

Blais, A. (2000). To Vote or Not to Vote: The Merits and Limits of Rational Choice Theory. Pittsburgh: University of Pittsburgh Press. https://doi.org/10.2307/j.ctt5hjrrf

Brown, J. D., Heggeness, M. L., Dorinski, S., Warren, L., & Yi, M. (2018). Understanding the Quality of Alternative Citizenship Data Sources for the 2020 Census [Discussion Paper No. 18-38]. Washington, DC: Center for Economic Studies. Retrieved from https://www2.census.gov/ces/wp/2018/CES-WP-18-38.pdf

Cantoni, E., & Pons, V. (2017). Do Interactions with Candidates Increase Voter Support and Participation? Experimental Evidence from Italy [Working Paper No. 16-080]. Boston: Harvard Business School. Retrieved from https://www.hbs.edu/faculty/Publication%20Files/16-080_43ffcfcb-74c2-4713-a587-ebde30e27b64.pdf

Davies, P. (2019, June 10). A New Crystal Ball to Predict Consumer and Investor Behavior. The Wall Street Journal. Retrieved from https://www.wsj.com/articles/a-new-crystal-ball-to-predict-consumer-and-investor-behavior-11560218820?mod=rsswn

Foos, F., & John, P. (2018). Parties Are No Civic Charities: Voter Contact and the Changing Partisan Composition of the Electorate. Political Science Research and Methods, 6(2), 283–298. https://doi.org/10.1017/psrm.2016.48

Gerber, A. S., & Rogers, T. (2009). Descriptive Social Norms and Motivation to Vote: Everybody’s Voting and so Should You. The Journal of Politics, 71(1), 178–191. https://doi.org/10.1017/S0022381608090117

Giné, X., & Mansuri, G. (2018). Together We Will: Experimental Evidence on Female Voting Behavior in Pakistan. American Economic Journal: Applied Economics, 10(1), 207–235. https://doi.org/10.1257/app.20130480

Green, D. P., McGrath, M. C., & Aronow, P. M. (2013). Field Experiments and the Study of Voter Turnout. Journal of Elections, Public Opinion and Parties, 23(1), 27–48. https://doi.org/10.1080/17457289.2012.728223

Guan, M., & Green, D. P. (2006). Noncoercive Mobilization in State-Controlled Elections: An Experimental Study in Beijing. Comparative Political Studies, 39(10), 1175–1193. https://doi.org/10.1177/0010414005284377

Hacking, I. (1999). Making Up People. In M. Biagioli (Ed.), The Science Studies Reader (pp. 161–171). New York: Routledge. Retrieved from http://www.icesi.edu.co/blogs/antro_conocimiento/files/2012/02/Hacking_making-up-people.pdf

John, P., & Brannan, T. (2008). How Different Are Telephoning and Canvassing? Results from a ‘Get Out the Vote’ Field Experiment in the British 2005 General Election. British Journal of Political Science, 38(3), 565–574. https://doi.org/10.1017/S0007123408000288

Kreiss, D., & McGregor, S. C. (2017). Technology Firms Shape Political Communication: The Work of Microsoft, Facebook, Twitter, and Google With Campaigns During the 2016 U.S. Presidential Cycle. Political Communication, 35(2), 155–177. https://doi.org/10.1080/10584609.2017.1364814

Matland, R. (2016). These Eyes: A Rejoinder to Panagopoulos on Eyespots and Voter Mobilization. Political Psychology, 37(4), 559–563. https://doi.org/10.1111/pops.12282

Matland, R. E., & Murray, G. R. (2013). An Experimental Test for ‘Backlash’ Against Social Pressure Techniques Used to Mobilize Voters. American Politics Research, 41(3), 359–386. https://doi.org/10.1177/1532673X12463423

Matland, R. E., & Murray, G. R. (2016). I Only Have Eyes for You: Does Implicit Social Pressure Increase Voter Turnout? Political Psychology, 37(4), 533–550. https://doi.org/10.1111/pops.12275

Panagopoulos, C. (2015). A Closer Look at Eyespot Effects on Voter Turnout: Reply to Matland and Murray. Political Psychology, 37(4). https://doi.org/10.1111/pops.12281

Panagopoulos, C., & van der Linden, S. (2016). Conformity to Implicit Social Pressure: The Role of Political Identity. Social Influence, 11(3), 177–184. https://doi.org/10.1080/15534510.2016.1216009

Pons, V. (2018). Will a Five-Minute Discussion Change Your Mind? A Countrywide Experiment on Voter Choice in France. American Economic Review, 108(6), 1322–1363. https://doi.org/10.1257/aer.20160524

Pons, V., & Liegey, G. (2019). Increasing the Electoral Participation of Immigrants: Experimental Evidence from France. The Economic Journal, 129(617), 481–508. https://doi.org/10.1111/ecoj.12584

Rosenberg, M., Confessore, N., & Cadwalladr, C. (2018, March 17). How Trump Consultants Exploited the Facebook Data of Millions. The New York Times. Retrieved from https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html

Schechner, S., & Peker, E. (2018, October 24). Apple CEO Condemns ‘Data-Industrial Complex’. The Wall Street Journal.

Turow, J., Delli Carpini, M. X., Draper, N. A., & Howard-Williams, R. (2012). Americans Roundly Reject Tailored Political Advertising [Departmental Paper No. 7-2012]. Annenberg School for Communication, University of Pennsylvania. Retrieved from http://repository.upenn.edu/asc_papers/398

Unpacking the “European approach” to tackling challenges of disinformation and political manipulation


This paper is part of Data-driven elections, a special issue of Internet Policy Review guest-edited by Colin J. Bennett and David Lyon.

Introduction

In recent years, the spread of disinformation on online platforms and micro-targeted data-driven political advertising has become a serious concern in many countries around the world, in particular as regards the impact these practices may have on informed citizenship and democratic systems. In April 2019, for the first time in the country’s modern history, Switzerland’s supreme court overturned a nationwide referendum on the grounds that the voters were not given complete information and that the vote “violated the freedom of the vote”. While in this case it was the government that had failed to provide correct information, the decision still comes as another warning about the conditions under which elections are held nowadays and as a confirmation of the role that accurate information plays in this process. There is limited and sometimes even conflicting scholarly evidence as to whether today people are exposed to more diverse political information or trapped in echo chambers, and whether they are more vulnerable to political disinformation and propaganda than before (see, for example: Bruns, 2017, and Dubois & Blank, 2018). Yet, many claim so, and cases of misuse of technological affordances and personal data for political goals have been reported globally.

The decision of Switzerland’s supreme court has particularly resonated in Brexit Britain, where the campaign ahead of the European Union (EU) membership referendum left too many people feeling “ill-informed” (Brett, 2016, p. 8). Even before the Brexit referendum took place, the House of Commons Treasury Select Committee complained about “the absence of ‘facts’ about the case for and against the UK’s membership on which the electorate can base their vote” (2016, p. 3). By this account, voters in the United Kingdom were not receiving complete or even truthful information, and there are also concerns that they might have been manipulated by the use of bots (Howard & Kollanyi, 2016) and by the unlawful processing of personal data (ICO, 2018a, 2018b).

The same concerns were raised in the United States during and after the 2016 presidential elections. Several studies have shown evidence of the exposure of US citizens to social media disinformation in the period around the elections (see: Guess et al., 2018, and Allcott & Gentzkow, 2017). In other parts of the world, such as in Brazil and in several Asian countries, the means and platforms for the transmission of disinformation were somewhat different, but the associated risks have been deemed even higher. Prominent international media, fact-checkers, and researchers reported extensively on the scope and spread of disinformation on the Facebook-owned and widely used messaging application WhatsApp in the 2018 presidential elections in Brazil. Freedom House warned that elections in some Asian countries, such as India, Indonesia, and Thailand, were also afflicted by falsified content.

Clearly, online disinformation and unlawful political micro-targeting represent a threat to elections around the globe. The extent to which certain societies are more resilient or more vulnerable to the impact of these phenomena depends on different factors, including, among other things, the status of journalism and legacy media, levels of media literacy, the political context and legal safeguards (CMPF, forthcoming). Different political and regulatory traditions play a role in shaping the responses to online disinformation and data-driven political manipulation. Accordingly, these range from doing nothing to criminalising the spread of disinformation, as is the case with Singapore’s law 1, which came into effect in October 2019. While there seems to be growing agreement that regulatory intervention is needed to protect democracy, concerns remain over the negative impact of inadequate or overly restrictive regulation on freedom of expression. In his recent reports (2018, 2019), UN Special Rapporteur on Freedom of Expression David Kaye warned against regulation that entrusts platforms with even more powers to decide on content removals in very short time frames and without public oversight. Whether certain content is illegal or problematic on other grounds is not always a straightforward decision and often depends on the context in which it is presented. Therefore, as highlighted by the Transatlantic High Level Working Group on Content Moderation Online and Freedom of Expression (2019), requiring platforms to make these content moderation decisions in an automated way, without built-in transparency, and without notice or timely recourse for appeal, poses risks to freedom of expression.

The European Commission (EC) has recognised the exposure of citizens to large-scale online disinformation (2018a) and the micro-targeting of voters based on the unlawful processing of personal data (2018b) as major challenges for European democracies. In response to these challenges, and to ensure citizens’ access to a variety of credible information and sources, the EC has put in place several measures which aim to create an overarching “European approach”. This paper analyses this approach to identify the key principles upon which it builds and to examine to what extent, if at all, they differ from the principles of “traditional” regulation of political advertising and media campaigns during electoral periods. The analysis further looks at how these principles are elaborated and whether they reflect the complexity of the challenges identified. The focus is on the EU as it is “articulating a more interventionist approach” to relations with the online platform companies (Flew et al., 2019, p. 45). Furthermore, due to the size of the European market, any relevant regulation can set the global standard, as has been the case with the General Data Protection Regulation (GDPR) in the area of data protection and privacy (Flew et al., 2019).

The role of (social) media in elections

The paper starts from the notion that a healthy democracy is dependent on pluralism and that the role of (social) media in elections and the transparency of data-driven political advertising are among the crucial components of any assessment of the state of pluralism in a given country. In this view, pluralism “implies all measures that ensure citizens' access to a variety of information sources, opinion, voices etc. in order to form their opinion without the undue influence of one dominant opinion forming power” (EC, 2007, p. 5; Valcke et al., 2009, p. 2). Furthermore, it implies the relevance of citizens' access to truthful and accurate information.

The media have long played a crucial role in election periods: serving, on one side, as wide-reaching platforms for parties and candidates to deliver their messages, and, on the other, helping voters to make informed choices. They set the agenda by prioritising certain issues over others and by deciding on the time and space to be given to candidates; they frame their reporting within a certain field of meaning, taking into account the characteristics of different types of media; and, if the law allows, they sell time and space for political advertising (Kelley, 1963). A democracy requires the protection of media freedom and editorial autonomy, but asks that the media be socially responsible. This responsibility implies respect for fundamental standards of journalism, such as impartiality and providing citizens with complete and accurate information. As highlighted on several occasions by the European Commission for Democracy through Law (the so-called Venice Commission) of the Council of Europe (2013, paras. 48, 49): “The failure of the media to provide impartial information about the election campaign and the candidates is one of the most frequent shortcomings that arise during elections”.

Access to the media has been seen as “one of the main resources sought by parties in the campaign period” and, to ensure a level playing field, “legislation regarding access of parties and candidates to the public media should be non-discriminatory and provide for equal treatment” (Venice Commission, 2010, para. 148). The key principles of media regulation during the electoral period are therefore media impartiality and equality of opportunity for contenders. Public service media are required to abide by higher standards of impartiality than private outlets, and audiovisual media are bound by rules more extensively than the printed press and online media. These stricter rules are justified by the perceived stronger effects of audiovisual media on voters (Schoenbach & Lauf, 2004) and by the fact that television channels benefit from the public and limited resource of the radio frequency spectrum (Venice Commission, 2009, paras. 24-28, 58).

In the Media Pluralism Monitor (MPM), a research tool supported by the European Commission and designed to assess risks to media pluralism in EU member states, the role of media in the democratic electoral process is one of 20 key indicators. It is seen as an aspect of political pluralism, and the variables against which the risks are assessed have been elaborated in accordance with the above-mentioned principles. The indicator assesses the existence and implementation of a regulatory and self-regulatory framework for the fair representation of different political actors and viewpoints on public service media and private channels, especially during election campaigns. The indicator also takes into consideration the regulation of political advertising – whether restrictions are imposed to allow equal opportunities for all political parties and candidates.

The MPM results (Brogi et al., 2018) showed that rules to ensure the fair representation of political viewpoints in news and informative programmes on public service media channels and services are imposed by law in all EU countries. It is, however, less common for such regulation and/or self-regulatory measures to exist for private channels. A similar approach is observed in relation to political advertising rules, which are more often and more strictly defined for public service than for commercial media. Most countries in the EU have a law or another statutory measure that imposes restrictions on political advertising during election campaigns to allow equal opportunities for all candidates. Even though political advertising is “considered as a legitimate instrument for candidates and parties to promote themselves” (Holtz-Bacha & Just, 2017, p. 5), some countries do not allow it at all. Where there is a complete ban on political advertising, public service media provide free airtime on principles of equal or proportionate access. Where paid political advertising is allowed, it is often restricted to the campaign period, and regulation seeks to set limits on, for example, campaign resources and spending, the amount of airtime that can be purchased, and the timeframe in which political advertising can be broadcast. Most countries also require transparency about how much was spent on advertising in the campaign, broken down by spending on different types of media. For traditional media, the regulatory framework requires that political advertising (like any other advertising) be properly identified and labelled as such.

Television remains the main source of news for citizens in the EU (Eurobarometer, 2018a, 2017). However, the continuous rise of online sources and platforms as resources for (political) news and views (Eurobarometer, 2018a), and as channels for more direct and personalised political communication, calls for a deeper examination of the related practices and the potential risks to be addressed. The ways people find and interact with (political) news, and the ways political messages are shaped and delivered to people, have been changing significantly with the global rise and popularity of online platforms and the features they offer. An increasing number of people, especially among younger populations, are using them as doors to news (Newman et al., 2018, p. 15; Shearer, 2018). Politicians are increasingly using the same doors to reach potential voters, and the online platforms have become relevant, if not central, to different stages of the whole process. This means that platforms are now increasingly performing functions long attributed to the media, and much more besides: for example, filtering and prioritising certain content offered to users, and selling time and space for political advertising based on data-driven micro-targeting. At the same time, a majority of EU countries still do not have specific requirements that would ensure transparency and fair play in campaigning, including political advertising, in the online environment. According to the available MPM data (Brogi et al., 2018; and preliminary data collected in 2019), only 11 countries (Belgium, Bulgaria, Denmark, Finland, France, Germany, Italy, Latvia, Lithuania, Portugal and Sweden) have legislation or guidelines requiring transparency of online political advertisements. In all cases, it is the general law on political advertising during the electoral period that also applies to the online dimension.

Political advertising and political communication more broadly take on different forms in the environment of online platforms, which may hold both promises and risks for democracy (see, for example, Valeriani & Vaccari, 2016; and Zuiderveen Borgesius et al., 2018). There is still limited evidence on the reach of online disinformation in Europe, but a study conducted by Fletcher et al. (2018) suggests that even if the overall reach of publishers of false news is not high, they achieve significant levels of interaction on social media platforms. Disinformation online comes in many different forms, including false context, imposter, manipulated, fabricated or extreme partisan content (Wardle & Derakhshan, 2017), but always with an intention to deceive (Kumar & Shah, 2018). There are also different motivations for spreading disinformation, including financial and political ones (Morgan, 2018), and different platform affordances affect whether disinformation spreads better as organic content or as paid-for advertising. Vosoughi et al. (2018) have shown that disinformation on Twitter organically travels faster and further than true information, owing to technological affordances but also to human nature, which is more inclined to spread surprising and emotional content, as disinformation often is. On Facebook, on the other hand, the successful spread of disinformation may be attributed to a significant degree to advertising, Chiou and Tucker (2018) claim. Accordingly, platforms have put in place different policies towards disinformation. Twitter has recently announced a ban on political advertising, while Facebook continues to run it and exempts politicians’ speech and political advertising from third-party fact-checking programmes.

Further to the different types of disinformation, and the different affordances of platforms and their policies, there are “many different actors involved and we’re learning much more about the different tactics that are being used to manipulate the online public sphere, particularly around elections”, warns Susan Morgan (2018, p. 40). Young Mie Kim and others (2018) have investigated the groups that stood behind divisive issue campaigns on Facebook in the weeks before the 2016 US elections, and found that most of these campaigns were run by groups which did not file reports with the Federal Election Commission. These groups, clustered by the authors as non-profits, astroturf/movement groups, and unidentifiable “suspicious” groups, sponsored four times more ads than those that did file reports with the Commission. In addition to the variety of groups playing a role in political advertising and political communication on social media today, a new set of tactics is emerging, including the use of automated accounts, so-called bots, and data-driven micro-targeting of voters (Morgan, 2018).

Bradshaw and Howard (2018) have found that governments and political parties in an increasing number of countries of different political regimes are investing significant resources in using social media to manipulate public opinion. Political bots, as they note, are used to promote or attack particular politicians, to promote certain topics, to fake a follower base, or to get opponents’ accounts and content removed by reporting it on a large scale. Micro-targeting, as another tactic, is commonly defined as a political advertising strategy that makes use of data analytics to build individual or small group voter models and to address them with tailored political messages (Bodó et al., 2017). These messages can be drafted with the intention to deceive certain groups and to influence their behaviour, which is particularly problematic in the election period when the decisions of high importance for democracy are made, the tensions are high and the time for correction or reaction is scarce.

The main fuel of contemporary political micro-targeting is data gathered from citizens’ online presentation and behaviour, including from their social media use. Social media has also been used as a channel for the distribution of micro-targeted campaign messages. This political advertising tactic came into the spotlight with the Cambridge Analytica case reported by journalist Carole Cadwalladr in 2018. Her investigation, based on information from whistleblower Christopher Wylie, revealed that the data analytics firm Cambridge Analytica, which worked with Donald Trump’s election team and the winning Brexit campaign, harvested the personal data of millions of people’s Facebook profiles without their knowledge and consent, and used it for political advertising purposes (Cadwalladr, 2018). In the EU, the role of social media in elections came high on the agenda of political institutions after the Brexit referendum in 2016. The focus has been in particular on the issue of ‘fake news’ or disinformation. The reform of the EU’s data protection rules, which resulted in the GDPR, started in 2012. The Regulation was adopted on 14 April 2016, and its scheduled date of application, 25 May 2018, coincided with the outbreak of the Cambridge Analytica case.

Perspective and methodology

Although European elections are primarily the responsibility of national governments, the EU has taken several steps to tackle the issue of online disinformation. In the Communication of 26 April 2018, the EC called these steps a “European approach” (EC, 2018a), with one of its key deliverables being the Code of Practice on Disinformation (2018), presented as a self-regulatory instrument that should encourage proactivity of online platforms in ensuring transparency of political advertising and restricting the automated spread of disinformation. The Commission’s follow-up Communication of September 2018, focused on securing free and fair European elections (EC, 2018f), suggests that, in the context of elections, the principles set out in the European approach for tackling online disinformation (EC, 2018a) should be seen as complementary to the GDPR (Regulation, 2016/679). The Commission also prepared specific guidance on the application of the GDPR in the electoral context (EC, 2018d). It further suggested considering the Recommendation on election cooperation networks (EC, 2018e), and transparency of political parties, foundations and campaign organisations on financing and practices (Regulation, 2018, p. 673). This paper provides an analysis of the listed legal and policy instruments that form and complement the EU’s approach to tackling disinformation and suspicious tactics of political advertising on online platforms. The Commission’s initiatives in the area of combating disinformation also contain a cybersecurity aspect. However, this subject is technically and politically too complex to be included in this specific analysis.

The EC considers online platforms as covering a wide range of activities, but the European approach to tackling disinformation is concerned primarily with “online platforms that distribute content, particularly social media, video-sharing services and search engines” (EC, 2018a). This paper employs the same focus and hence the same narrow definition of online platforms. The main research questions are: what are the key principles upon which the European approach to tackling disinformation and political manipulation builds; and to what extent, if at all, do they differ from the principles of “traditional” political advertising and media campaign regulation in the electoral period? The analysis further seeks to understand how these principles are elaborated and whether they reflect the complexity of the challenges identified. For this purpose, the ‘European approach’ is understood in a broad sense (EC, 2018f). Looking through the lens of pluralism, this analysis uses a generic inductive approach, a qualitative research approach that allows findings to emerge from the data without pre-defined coding categories (Liu, 2016). This methodological decision was made because this exploratory research sought not only to analyse the content of the above-listed documents, but also the context in which they came into existence and how they relate to one another.

Two birds with one stone: the European approach in creating fair and plural campaigning online

The actions currently contained in the EU’s approach to tackling online disinformation and political manipulation derive from regulation (the GDPR), EC-initiated self-regulation of platforms (the Code of Practice on Disinformation), and the Commission’s non-binding communications and recommendations to the member states. While some of the measures, such as data protection, have a long tradition and have only been evolving, others, such as the self-regulation of platforms, represent a new attempt to develop solutions to the problem. In general, the current European approach can be seen as primarily designed towards (i) preventing unlawful micro-targeting of voters by protecting personal data; and (ii) combating disinformation by increasing the transparency of political and issue-based advertising on online platforms.

Protecting personal data

The elections of May 2019 were the first European Parliament (EP) elections after major concerns about the legality and legitimacy of the vote in the 2016 US presidential election and the UK's Brexit referendum. They were also the first EP elections held under the GDPR, which became directly applicable across the EU as of 25 May 2018. As a regulation, the GDPR is directly binding, but it does provide flexibility for certain aspects to be adjusted by individual member states. For example, to balance the right to data protection with the right to freedom of expression, Article 85 of the GDPR provides for the exemption of, or derogation for, the processing of data for “journalistic purposes or the purpose of academic, artistic or literary expression”, which should be clearly defined by each member state. While the GDPR provides the tools necessary to address instances of unlawful use of personal data, including in the electoral context, its scope is still not fully and properly understood. Since this was the very first time the GDPR was applied in the European electoral context, the European Commission published in September 2018 the Guidance on the application of Union data protection law in the electoral context (EC, 2018d).

The data protection regime in the EU is not new, even though it has not been well harmonised and the data protection authorities (DPAs) have had limited enforcement powers. The GDPR aims to address these shortcomings, as it gives DPAs powers to investigate, to correct behaviour and to impose fines of up to 20 million euros or, in the case of a company, up to 4% of its worldwide turnover. In its Communication, the EC (2018d) particularly emphasises the strengthened powers of authorities and calls on them to use these sanctioning powers especially in cases of infringement in the electoral context. This is an important shift, as the European DPAs have historically been very reluctant to regulate political parties. The GDPR further aims at achieving cooperation between the national DPAs and harmonisation of the Regulation’s interpretation by establishing the European Data Protection Board (EDPB). The EDPB is made up of the heads of the national data protection authorities and of the European Data Protection Supervisor (EDPS) or their representatives. The role of the EDPS is to ensure that EU institutions and bodies respect people's right to privacy when processing their personal data. In March 2018, the EDPS published an Opinion on online manipulation and personal data, confirming the growing impact of micro-targeting in the electoral context and a significant shortfall in transparency and provision of fair processing information (EDPS, 2019).
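The fine ceiling mentioned above lends itself to a simple worked example. The sketch below assumes the "whichever is higher" rule that Article 83(5) GDPR applies to the most serious infringements; the function name is chosen purely for illustration:

```python
def max_gdpr_fine(worldwide_turnover_eur: float) -> float:
    """Upper bound of a GDPR administrative fine under Article 83(5):
    up to EUR 20 million or, for an undertaking, up to 4% of its total
    worldwide annual turnover, whichever is higher (assumed rule)."""
    return max(20_000_000.0, 0.04 * worldwide_turnover_eur)

# An undertaking with EUR 1 billion in annual turnover faces a ceiling of
# max(20m, 40m) = EUR 40 million; a smaller firm with EUR 100 million in
# turnover stays at the EUR 20 million floor, since 4% would be only 4m.
print(max_gdpr_fine(1_000_000_000))  # → 40000000.0
print(max_gdpr_fine(100_000_000))    # → 20000000.0
```

Note that the turnover-based limb is what makes the GDPR's sanctions meaningful for large platforms, for whom a flat 20 million euro cap would be negligible.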

The Commission guidance on the application of the GDPR in the electoral context (EC, 2018d) underlines that it “applies to all actors active in the electoral context”, including European and national political parties, European and national political foundations, platforms, data analytics companies and public authorities responsible for the electoral process. Any data processing should comply with the GDPR principles, such as fairness and transparency, and be for specified purposes only. The guidance provides relevant actors with additional explanation of the notions of “personal data” and of “sensitive data”, be it collected or inferred. Sensitive data may include political opinions, ethnic origin, sexual orientation and the like, and the processing of such data is generally prohibited unless one of the specific justifications provided for by the GDPR applies. This can be the case where the data subject has given explicit, specific, fully informed consent to the processing; when the information has manifestly been made public by the data subject; when the data relate to “the members or to former members of the body or to persons who have regular contact with” it; or when processing “is necessary for reasons of substantial public interest” (GDPR, Art. 9, para. 2). In a statement adopted in March 2019, the EDPB points out that derogations for special data categories should be interpreted narrowly. In particular, the derogation applying when a person makes his or her ‘political opinion’ public cannot be used to legitimise inferred data. Bennett (2016) also warns that the vagueness of several terms used to describe exceptions to the application of Article 9(1) might lead to confusion or inconsistencies in interpretation, as the processing of ‘political opinions’ becomes increasingly relevant for contemporary political campaigning.

The principles of fairness and transparency require that individuals (data subjects) be informed of the existence of the processing operation and its purposes (GDPR, Art. 5). The Commission’s guidance clearly states that data controllers (those who decide on the means and purposes of processing, such as political parties or foundations) have to inform individuals about key aspects of the processing of their personal data, including why they receive personalised messages from different organisations; what the source of the data is when the data are not collected directly from the person; how data from different sources are combined and used; and whether automated decision-making has been applied in the processing.

Despite the strengthened powers and an explicit call to act more in the political realm (EC, 2018d), to date we have not seen many investigations by DPAs into political parties under the GDPR. An exception is the UK Information Commissioner, Elizabeth Denham. In May 2017, she announced the launch of a formal investigation into the use of data analytics for political purposes following the wrongdoings exposed by journalists, in particular Carole Cadwalladr, during the EU referendum, involving parties, platforms and data analytics companies such as Cambridge Analytica. The report of November 2018 concludes:

that there are risks in relation to the processing of personal data by many political parties. Particular concerns include the purchasing of marketing lists and lifestyle information from data brokers without sufficient due diligence, a lack of fair processing and the use of third-party data analytics companies, with insufficient checks around consent (ICO, 2018a, p. 8).

As a result of the investigation, the ICO sent 11 letters with formal warnings to the parties about their practices; overall, it became the largest investigation conducted by a DPA on this matter, encompassing not only political parties but also social media platforms, data brokers and analytics companies.

Several cases have been reported where the national adaptation of the GDPR does not fully meet the requirements of Recital 56 GDPR, which establishes that personal data on people’s political opinions may be processed “for reasons of public interest” if “the operation of the democratic system in a member state requires that political parties compile” such personal data, and “provided that appropriate safeguards are established”. In November 2018 a question was raised in the European Parliament on the data protection law adapting Spanish legislation to the GDPR, which allows “political parties to use citizens’ personal data that has been obtained from web pages and other publicly accessible sources when conducting political activities during election campaigns”. As Sophia in 't Veld, the member of the European Parliament who posed the question, highlighted: “Citizens can opt out if they do not wish their data to be processed. However, even if citizens do object to receiving political messages, they could still be profiled on the basis of their political opinions, philosophical beliefs or other special categories of personal data that fall under the GDPR”. The European Commission was also urged to investigate the Romanian GDPR implementation over similar concerns. Further to the reported challenges with national adaptations of the GDPR, in November 2019 the EDPS issued its first ever reprimand to an EU institution. The ongoing investigation into the European Parliament was prompted by the Parliament’s use of a US-based political campaigning company, NationBuilder, to process personal data as part of its activities relating to the 2019 EU elections.

Combating disinformation

In contrast to the GDPR, which is sometimes praised as “the most consequential regulatory development in information policy in a generation” (Hoofnagle et al., 2019, p. 66), the EC has decided to tackle fake news and disinformation through self-regulation, at least in the first round. The European Council, a body composed of the leaders of the EU member states, first recognised the threat of online disinformation campaigns in 2015, when it asked the High Representative of the Union for Foreign Affairs and Security Policy to address the disinformation campaigns by Russia (EC, 2018c). The Council is not one of the EU's legislating institutions, but it defines the Union’s overall political direction and priorities. It therefore comes as no surprise that the issue of disinformation came high on the agenda of the EU, in particular after the UK referendum and US presidential elections in 2016. In April 2018 the EC (2018a) adopted the Communication on Tackling online disinformation: a European Approach. This is the central document that set the tone for future actions in this field. In the process of its drafting, the EC carried out consultations with experts and stakeholders, and used citizens’ opinions gathered through polling. The consultations included the establishment of a High-Level Expert Group on Fake News and Online Disinformation (HLEG) in early 2018, which two months later produced a Report (HLEG, 2018) advising the EC against simplistic solutions. Broader public consultations and dialogues with relevant stakeholders were also held, and a specific Eurobarometer (2018b) poll was conducted via telephone interviews in all EU member states. Respondents expressed a high level of concern about the spread of online disinformation in their country (85%) and saw it as a risk to democracy in general (83%).
These findings urged the EC to act, and the Communication on tackling online disinformation was the starting point and remains the key document for understanding the European approach to these pressing challenges. The Communication builds around four overarching principles and objectives: transparency, diversity of information, credibility of information, and cooperation (EC, 2018a).

Transparency, in this view, means that it should be clear to users where the information comes from, who the author is, and why they see certain content when an automated recommendation system is employed. Furthermore, a clearer distinction between sponsored and informative content should be made, and it should be clearly indicated who paid for an advertisement. The diversity principle is strongly related to strengthening so-called quality journalism, to rebalancing the disproportionate power relations between media and social media platforms, and to increasing media literacy levels. Credibility, according to the EC, is to be achieved by entrusting platforms to design and implement a system that would provide an indication of source and information trustworthiness. The fourth principle emphasises cooperation between authorities at national and transnational levels, and the cooperation of a broad range of stakeholders in proposing solutions to the emerging challenges. With the exception of emphasising media literacy and promoting cooperation networks of authorities, the Communication largely recommends that platforms design solutions which would reduce the reach of manipulative content and disinformation, and increase the visibility of trustworthy, diverse and credible content.

The key output of this Communication is the self-regulatory Code of Practice on Disinformation (CoP). The document was drafted by a working group composed of online platforms, advertisers and the advertising industry, and was reviewed by the Sounding Board, composed of academics, media and civil society organisations. The CoP was agreed by the online platforms Facebook, Google, Twitter and Mozilla, and by advertisers and the advertising industry, and was presented to the EC in October 2018. The Sounding Board (2018), however, presented a critical view of its content and the commitments laid out by the platforms, stating that it “contains no clear and meaningful commitments, no measurable objectives” and “no compliance or enforcement tool”. The CoP, as explained by the Commission, represents a transitional measure whereby private actors are entrusted to increase the transparency and credibility of the online information environment. Depending on the evaluation of their performance in the first 12 months, the EC is to determine further steps, including the possibility of self-regulation being replaced with regulation (EC, 2018c). The overall assessment of the Code’s effectiveness is expected to be presented in early 2020.

The CoP builds on the principles expressed in the Commission’s Communication (2018a) through the actions listed in Table 1. For the purposes of this paper the actions are not presented in the same way as in the CoP. They are instead slightly reorganised under the following three categories: disinformation, political advertising, and issue-based advertising.

Table 1: Commitments of the signatories of the Code of Practice on Online Disinformation selected and grouped under three categories: disinformation, political advertising, issue-based advertising. Source: composed by the author based on the Code of Practice on Online Disinformation

Disinformation:
- To disrupt advertising and monetisation incentives for accounts and websites which consistently misrepresent information about themselves
- Limiting the abuse of platforms by unauthentic users (misuse of automated bots)
- Implementing rating systems (on trustworthiness) and report systems (on false content)
- To invest in technology to prioritise “relevant, authentic and authoritative information” in search, feeds and other ranked channels
- Resources for users on how to recognise and limit the spread of false news

Political advertising:
- To clearly label paid-for communication as such
- To publicly disclose political advertising, including actual sponsor and amounts spent
- Enabling users to understand why they have been targeted by a given advertisement

Issue-based advertising:
- To publicly disclose such advertising, conditioned on developing a working definition of “issue-based advertising” which does not limit freedom of expression and excludes commercial advertising

In its statement on the first annual self-assessment reports by the signatories of the CoP, the Commission acknowledged that some progress had been achieved, but warned that it “varies a lot between signatories and the reports provide little insight on the actual impact of the self-regulatory measures taken over the past year as well as mechanisms for independent scrutiny”. The European Regulators Group for Audiovisual Media Services (ERGA) has been supporting the EC in monitoring the implementation of the commitments made by Google, Facebook and Twitter under the CoP, particularly in the area of political and issue-based advertising. In June 2019 ERGA released an interim Report resulting from the monitoring activities carried out in 13 EU countries, based on the information reported by the platforms and on the data available in their online archives of political advertising. While it stated “that Google, Twitter and Facebook made evident progress in the implementation of the Code’s commitments by creating an ad hoc procedure for the identification of political ads and of their sponsors and by making their online repository of relevant ads publicly available”, it also emphasised that the platforms had not met a request to provide access to the overall database of advertising for the monitored period, which “was a significant constraint on the monitoring process and emerging conclusions” (ERGA, 2019, p. 3). Furthermore, the analysis of the platforms’ repositories of political advertising (e.g., Ad Library) showed that the information was “not complete and that not all the political advertising carried on the platforms was correctly labelled as such” (ERGA, 2019, p. 3).

The EC still needs to provide a comprehensive assessment of the implementation of the commitments under the CoP after the initial 12-month period. However, it is already clear that the lack of transparency of the platforms’ internal operations and decision-making processes remains an issue and represents a risk. If platforms are not amenable to thorough public auditing, adequate assessment of how effectively self-regulatory commitments are implemented becomes impossible. The ERGA Report (2019) further warns that at this point it is not clear what options for micro-targeting were offered to political advertisers, nor whether all options are disclosed in the publicly available repositories of political advertising.

Further to the commitments laid down in the CoP, which relies on social media platforms to increase the transparency of political advertising online, the Commission Recommendation of 9 September 2018 (EC, 2018e) “encourages”, and asks member states to “encourage”, further transparency commitments by European and national political parties and foundations, in particular:

“information on the political party, political campaign or political support group behind paid online political advertisements and communications” [...] “information on any targeting criteria used in the dissemination of such advertisements and communications” [...] “make available on their websites information on their expenditure for online activities, including paid online political advertisements and communications” (EC, 2018e, p. 8).

The Recommendation (EC, 2018e) further advises member states to set up a national election network, involving national authorities with competence for electoral matters, including data protection commissioners, electoral authorities and audio-visual media regulators. This recommendation is further elaborated in the Action plan (EC, 2018c) but, because of practical obstacles, national cooperation between authorities has not yet become a reality in many EU countries.

Key principles and shortcomings of the European approach

This analysis has shown that the principles contained in the above-mentioned instruments, which form the basis of the European approach to combating disinformation and political manipulation, are: data protection; transparency; cooperation; mobilising the private sector; promoting diversity and credibility of information; raising awareness; and empowering the research community.

Data protection and transparency principles related to personal data collection, processing and use are contained in the GDPR. The requirement to increase transparency of political and issue-based advertising and of automated communication is currently directed primarily at platforms, which have committed themselves to label and publicly disclose the sponsors and content of political and issue-based advertising, as well as to identify and label automated accounts. In traditional media landscapes, media on the same territory generally broadcast the same political advertising and messages to their whole audiences. In the digital information environment, by contrast, political messages are targeted and shown only to specific profiles of voters, with limited ability to track which messages were targeted to whom. Increasing transparency at this level would require platforms to provide a user-friendly repository of political ads, including searchable information on actual sponsors and amounts spent. At the moment, platforms struggle to identify political and issue-based ads, to distinguish them from other types of advertising, and to verify ad buyers’ identities (Leerssen et al., 2019).

Furthermore, the European approach fails to impose similar transparency requirements on political parties to provide searchable and easy-to-navigate repositories of the campaign materials used. A campaign-monitoring research project during the 2019 European elections showed that the parties, groups and candidates participating in the elections were largely not transparent about their campaign materials: materials were not readily available on their websites or social media accounts, nor did they respond to direct requests from researchers (Šimunjak et al., 2019). This suggests that while it is relevant to require platforms to provide more transparency on political advertising, it is perhaps even more relevant to demand this transparency directly from political parties and candidates in elections.

Within the framework of transparency, the European approach also fails to emphasise the need for political parties to officially declare to the authorities, under a specific category, the amounts spent on digital (including social media) campaigning. At present, in some EU countries (for example Croatia; see Klaric, 2019), authorities with competences in electoral matters do not consider social media as media and accordingly do not require transparent reporting of spending on social media and other digital platforms. This represents a risk, as the monitoring of the latest EP elections has clearly shown that parties spent both extensive time and resources on their social media accounts (Novelli & Johansson, 2019).

The diversity and credibility principles stipulated in the Communication on tackling online disinformation and in the Action plan require platforms to indicate the trustworthiness of information, to label automated accounts, to close down fake accounts, and to prioritise quality journalism. At the same time, no clear definition of, or instructions on, the criteria for determining whether information or a source is trustworthy, or whether it represents quality journalism, are provided. Entrusting platforms with these choices, without the possibility of auditing their algorithms and decision-making processes, represents a potential risk for freedom of expression.

The signatories of the CoP have committed themselves to disrupt advertising and monetisation incentives for accounts and websites which consistently misrepresent information about themselves. But what about accounts that provide accurate information about themselves yet occasionally engage in campaigns that contain disinformation? For example, a political party may use data to profile and target individual voters, or a small group of voters, with messages that are not completely false but are exaggerated, taken out of context or framed with an intention to deceive and influence voters’ behaviour. As already noted, disinformation comes in many different forms, including false context and imposter, manipulated or fabricated content (Wardle & Derakhshan, 2017). While the work of fact-checkers and the flagging of false content are not useless here, in the current state of play they are far from sufficient to tackle the problems of disinformation, including in political advertising and especially in dark ads 5. The effectiveness of online micro-targeting depends largely on data and profiling. Therefore, if effectively implemented, the GDPR should be of use here by preventing the unlawful processing of personal data.

Another important aspect of the European approach is stronger sanctions when the rules are not respected. This entails increased powers for the authorities, first and foremost the DPAs, and increased fines under the GDPR. Data protection in the electoral context is difficult to ensure if cooperation between the different authorities with competence for electoral matters (such as data protection commissioners, electoral authorities and audio-visual media regulators) is not established and operational. While the European approach strongly recommends cooperation, it is not easily achievable at member state level, as it requires significant investment in capacity building and in providing channels for cooperation. In some cases, it may even require amendments to the legislative framework. Cooperation between regulators of the same type at the EU level is sometimes hampered by the fact that their competences differ across member states.

The CoP also contains a commitment to “empowering the research community”: the signatories commit themselves to support research on disinformation and political advertising by providing researchers access to data sets, or by collaborating with academics and civil society organisations in other ways. However, the CoP does not specify how this cooperation should work, the procedures for granting access and for what kind of data, or which measures researchers should put in place to ensure appropriate data storage, security and protection. Reflecting on the platforms’ progress under the Code, three Commissioners warned that the “access to data provided so far still does not correspond to the needs of independent researchers”.

Conclusions

This paper has given an overview of the developing European approach to combating disinformation and political manipulation during an electoral period. It provided an analysis of the key instruments contained in the approach and drew out the key principles upon which it builds: data protection; transparency; cooperation; mobilising the private sector; promoting diversity and credibility of information; raising awareness; empowering the research community.

The principles of legacy media regulation in the electoral period are impartiality and equality of opportunity for contenders. This entails balanced and non-partisan reporting, as well as equal or proportionate access to media for political parties (be it free or paid-for). If political advertising is allowed, it is usually subject to transparency and equal-conditions requirements: campaign advertising expenditure must be broken down by type of media and reported to the competent authorities, and the regulatory framework requires that political advertising be properly labelled as such.

In the online environment, the principles applied to legacy media require further elaboration as the problem of electoral disinformation cuts across a number of different policy areas, involving a range of public and private actors. Political disinformation is not a problem that can easily be compartmentalised into existing legal and policy categories. It is a complex and multi-layered issue that requires a more comprehensive and collaborative approach when designing potential solutions. The emerging EU approach reflects the necessity for that overall policy coordination.

The main fuel of online political campaigning is data. Therefore, the protection of personal data, and especially of “sensitive” data, from abuse becomes a priority of any action that aims to ensure free, fair and plural elections. The European approach further highlights the importance of transparency. It calls on platforms to clearly identify political advertisements and who paid for them, but fails to emphasise the importance of candidates and political parties providing a repository of all the material used in the campaign. Also lacking are a stronger requirement for political parties to report the amounts spent on different types of communication channels (including legacy, digital and social media) and a requirement for platforms to provide more comprehensive and workable data on sponsors and spending in political advertising.

In its communication of the European approach, the European Commission claims to address all actors active in the electoral context, including European and national political parties and foundations, online platforms, data analytics companies and public authorities responsible for the electoral process. However, the current focus seems to be primarily on the platforms, and in a way that enables them to shape the future direction of actions in the fight against disinformation and political manipulation.

As regards the principle of cooperation, many obstacles, such as differences in the competences and capacities of the relevant national authorities, have not been fully taken into account. Elections are primarily a national matter, so the protection of the electoral process, as well as the protection of media pluralism, falls primarily within the competence of member states. Yet, if the approach to tackling disinformation and political manipulation is to be truly European, there should be more harmonisation between authorities and approaches taken at national levels.

While being a significant step in the creation of a common EU answer to the challenges of disinformation and political manipulation, especially during elections, the European approach requires further elaboration, primarily to include additional layers of transparency. This entails transparency of political parties and of other actors on their actions in the election campaigns, as well as more transparency about internal processes and decision-making by platforms especially on actions of relevance to pluralism, elections and democracy. Furthermore, the attempt to propose solutions and relevant actions at the European level faces two constraints. On the one hand, it faces the power of global platforms shaped in the US tradition, which to a significant extent differs from the European approach in balancing freedom of expression and data protection. On the other hand, the EU approach confronts the resilience of national political traditions in member states, in particular if the measures are based on recommendations and other soft instruments.

References

Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211–236. https://doi.org/10.1257/jep.31.2.211

Bennett, C. J. (2016). Voter databases, micro-targeting, and data protection law: can political parties campaign in Europe as they do in North America? International Data Privacy Law, 6(4), 261–275. https://doi.org/10.1093/idpl/ipw021

Bodó, B., Helberger, N. & de Vreese, C. H. (2017). Political micro-targeting: a Manchurian candidate or just a dark horse?. Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.776

Bradshaw, S. & Howard, P. N. (2018). Challenging Truth and Trust: A Global Inventory of Organised Social Media Manipulation [Report]. Computational Propaganda Research Project, Oxford Internet Institute. Retrieved from https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2018/07/ct2018.pdf

Brett, W. (2016). It’s Good to Talk: Doing Referendums Differently. The Electoral Reform Society’s report. Retrieved from https://www.electoral-reform.org.uk/wp-content/uploads/2017/06/2016-EU-Referendum-its-good-to-talk.pdf

Brogi, E., Nenadic, I., Parcu, P. L., & Viola de Azevedo Cunha, M. (2018). Monitoring Media Pluralism in Europe: Application of the Media Pluralism Monitor 2017 in the European Union, FYROM, Serbia and Turkey [Report]. Centre for Media Pluralism and Media Freedom, European University Institute. Retrieved from https://cmpf.eui.eu/wp-content/uploads/2018/12/Media-Pluralism-Monitor_CMPF-report_MPM2017_A.pdf

Bruns, A. (2017, September 15). Echo chamber? What echo chamber? Reviewing the evidence. 6th Biennial Future of Journalism Conference (FOJ17), Cardiff, UK. Retrieved from https://eprints.qut.edu.au/113937/1/Echo%20Chamber.pdf

Cadwalladr, C. & Graham-Harrison, E. (2018, March 17) Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. The Guardian. Retrieved from https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election

Chiou, L., & Tucker, C. E. (2018). Fake News and Advertising on Social Media: A Study of the Anti-Vaccination Movement [Working Paper No. 25223]. Cambridge, MA: The National Bureau of Economic Research. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3209929; https://doi.org/10.3386/w25223

Centre for Media Pluralism and Media Freedom (CMPF). (forthcoming, 2020). Independent Study on Indicators to Assess Risks to Information Pluralism in the Digital Age. Florence: Media Pluralism Monitor Project.

Code of Practice on Disinformation (September 2018). Retrieved from https://ec.europa.eu/digital-single-market/en/news/code-practice-disinformation

Council Decision (EU, Euratom) 2018/994 of 13 July 2018 amending the Act concerning the election of the members of the European Parliament by direct universal suffrage, annexed to Council Decision 76/787/ECSC, EEC, Euratom of 20 September 1976. Retrieved from https://eur-lex.europa.eu/legal-content/en/TXT/?uri=CELEX:32018D0994&qid=1531826494620

Commission Recommendation (EU) 2018/234 of 14 February 2018 on enhancing the European nature and efficient conduct of the 2019 elections to the European Parliament (OJ L 45, 17.2.2018, p. 40)

Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications) (OJ L 201, 31.7.2002, p. 37)

Dubois, E., & Blank, G. (2018). The echo chamber is overstated: the moderating effect of political interest and diverse media. Information, Communication & Society, 21(5), 729–745. https://doi.org/10.1080/1369118X.2018.1428656

Eurobarometer (2018a). Standard 90: Media use in the EU. Retrieved from https://ec.europa.eu/commfrontoffice/publicopinion/index.cfm/Survey/getSurveyDetail/instruments/STANDARD/surveyKy/2215

Eurobarometer (2018b). Flash 464: Fake news and disinformation online. Retrieved from https://ec.europa.eu/commfrontoffice/publicopinion/index.cfm/survey/getsurveydetail/instruments/flash/surveyky/2183

Eurobarometer (2017). Standard 88: Media use in the EU. Retrieved from https://ec.europa.eu/commfrontoffice/publicopinion/index.cfm/Survey/getSurveyDetail/instruments/STANDARD/surveyKy/2143

European Commission (EC). (2018a). Tackling online disinformation: a European Approach, Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions. COM/2018/236. Retrieved from https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52018DC0236&from=EN

European Commission (EC). (2018b). Free and fair European elections – Factsheet, State of the Union. Retrieved from https://ec.europa.eu/commission/presscorner/detail/en/IP_18_5681

European Commission (EC). (2018c, December 5). Action Plan against Disinformation. European Commission contribution to the European Council (5 December). Retrieved from https://ec.europa.eu/commission/sites/beta-political/files/eu-communication-disinformation-euco-05122018_en.pdf

European Commission (EC). (2018d, September 12). Commission guidance on the application of Union data protection law in the electoral context: A contribution from the European Commission to the Leaders' meeting in Salzburg on 19-20 September 2018. Retrieved from https://ec.europa.eu/commission/sites/beta-political/files/soteu2018-data-protection-law-electoral-guidance-638_en.pdf

European Commission (EC). (2018e, September 12). Recommendation on election cooperation networks, online transparency, protection against cybersecurity incidents and fighting disinformation campaigns in the context of elections to the European Parliament. Retrieved from https://ec.europa.eu/commission/sites/beta-political/files/soteu2018-cybersecurity-elections-recommendation-5949_en.pdf

European Commission (EC). (2018f). Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: Securing free and fair European elections. COM(2018)637. Retrieved from https://ec.europa.eu/commission/sites/beta-political/files/soteu2018-free-fair-elections-communication-637_en.pdf

European Commission (EC). (2007). Media pluralism in the Member States of the European Union [Commission Staff Working Document No. SEC(2007)32]. Retrieved from https://ec.europa.eu/information_society/media_taskforce/doc/pluralism/media_pluralism_swp_en.pdf

European Data Protection Board (EDPB). (2019). Statement 2/2019 on the use of personal data in the course of political campaigns. Retrieved from https://edpb.europa.eu/our-work-tools/our-documents/ostalo/statement-22019-use-personal-data-course-political-campaigns_en

European Data Protection Supervisor (EDPS). (2018). Opinion 3/2018 on online manipulation and personal data. Retrieved from https://edps.europa.eu/sites/edp/files/publication/18-03-19_online_manipulation_en.pdf

European Regulators Group for Audiovisual Media Services (ERGA). (2019, June). Report of the activities carried out to assist the European Commission in the intermediate monitoring of the Code of practice on disinformation [Report]. Slovakia: European Regulators Group for Audiovisual Media Services. Retrieved from http://erga-online.eu/wp-content/uploads/2019/06/ERGA-2019-06_Report-intermediate-monitoring-Code-of-Practice-on-disinformation.pdf

Fletcher, R., Cornia, A., Graves, L., & Nielsen, R. K. (2018). Measuring the reach of “fake news” and online disinformation in Europe. Retrieved from https://www.press.is/static/files/frettamyndir/reuterfake.pdf

Flew, T., Martin, F., Suzor, N. P. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media and Policy, 10(1), 33–50. https://doi.org/10.1386/jdtv.10.1.33_1

Guess, A., Nyhan, B., & Reifler, J. (2018). Selective exposure to misinformation: evidence from the consumption of fake news during the 2016 US presidential campaign [Working Paper]. Retrieved from https://www.dartmouth.edu/~nyhan/fake-news-2016.pdf

High Level Expert Group on Fake News and Online Disinformation (HLEG). (2018). Final report [Report]. Retrieved from https://ec.europa.eu/digital-single-market/en/news/final-report-high-level-expert-group-fake-news-and-online-disinformation

Hoofnagle, C. J., van der Sloot, B., & Zuiderveen Borgesius, F. J. (2019). The European Union general data protection regulation: what it is and what it means. Information & Communications Technology Law, 28(1), 65–98. https://doi.org/10.1080/13600834.2019.1573501

Holtz-Bacha, C. & Just, M. R. (Eds.). (2018). Routledge Handbook of Political Advertising. New York: Routledge.

House of Commons Treasury Committee. (2016, May 27). The economic and financial costs and benefits of the UK’s EU membership. First Report of Session 2016–17. Retrieved from https://publications.parliament.uk/pa/cm201617/cmselect/cmtreasy/122/122.pdf

Howard, P. N. & Kollanyi, B. (2016). Bots, #StrongerIn, and #Brexit: Computational Propaganda during the UK-EU Referendum. ArXiv160606356 Phys. Retrieved from https://arxiv.org/abs/1606.06356

Information Commissioner’s Office (ICO). (2018a, July 11). Investigation into the use of data analytics in political campaigns [Report to Parliament]. Retrieved from https://ico.org.uk/media/action-weve-taken/2260271/investigation-into-the-use-of-data-analytics-in-political-campaigns-final-20181105.pdf

Information Commissioner’s Office (ICO). (2018b, July 11). Democracy disrupted? Personal information and political influence. Retrieved from https://ico.org.uk/media/action-weve-taken/2259369/democracy-disrupted-110718.pdf

Kim, Y. M., Hsu, J., Neiman, D., Kou, C., Bankston, L., Kim, S. Y., Heinrich, R., Baragwanath, R., & Raskutti, G. (2018). The Stealth Media? Groups and Targets behind Divisive Issue Campaigns on Facebook. Political Communication, 35(4), 515–541. https://doi.org/10.1080/10584609.2018.1476425

Kelley, S. Jr. (1962). Elections and the Mass Media. Law and Contemporary Problems, 27(2), 307–326. Retrieved from https://scholarship.law.duke.edu/cgi/viewcontent.cgi?article=2926&context=lcp

Klaric, J. (2019, March 28) Ovo je Hrvatska 2019.: za Državno izborno povjerenstvo teletekst je medij, Facebook nije. Telegram. Retrieved from https://www.telegram.hr/politika-kriminal/ovo-je-hrvatska-2019-za-drzavno-izborno-povjerenstvo-teletekst-je-medij-facebook-nije/

Kreiss, D. l., & McGregor, S. C. (2018). Technology Firms Shape Political Communication: The Work of Microsoft, Facebook, Twitter, and Google with Campaigns During the 2016 U.S. Presidential Cycle. Political Communication, 35(2), 155–177. https://doi.org/10.1080/10584609.2017.1364814

Valcke, P., Lefever, K., Kerremans, R., Kuczerawy, A., Sükosd, M., Gálik, M., … Füg, O. (2009). Independent Study on Indicators for Media Pluralism in the Member States – Towards a Risk-Based Approach [Report]. ICRI, K.U. Leuven; CMCS, Central European University, MMTC, Jönköping Business School; Ernst & Young Consultancy Belgium. Retrieved from https://ec.europa.eu/information_society/media_taskforce/doc/pluralism/pfr_report.pdf

Kumar, S., & Shah, N. (2018, April). False information on web and social media: A survey. arXiv:1804.08559 [cs]. Retrieved from https://arxiv.org/pdf/1804.08559.pdf

Leerssen, P., Ausloos, J., Zarouali, B., Helberger, N., & de Vreese, C. H. (2019). Platform ad archives: promises and pitfalls. Internet Policy Review, 8(4). https://doi.org/10.14763/2019.4.1421

Liu, L. (2016). Using Generic Inductive Approach in Qualitative Educational Research: A Case Study Analysis. Journal of Education and Learning, 5(2), 129–135. https://doi.org/10.5539/jel.v5n2p129

Morgan, S. (2018). Fake news, disinformation, manipulation and online tactics to undermine democracy. Journal of Cyber Policy, 3(1), 39–43. https://doi.org/10.1080/23738871.2018.1462395

Newman, N., Fletcher, R., Kalogeropoulos, A., Levy, D. A. L., & Nielsen, R. K. (2018). Digital News Report 2018. Oxford: Reuters Institute for the Study of Journalism. Retrieved from https://reutersinstitute.politics.ox.ac.uk/sites/default/files/digital-news-report-2018.pdf

Novelli, E., & Johansson, B. (Eds.). (2019). 2019 European Elections Campaign: Images, Topics, Media in the 28 Member States [Research Report]. Directorate-General of Communication of the European Parliament. Retrieved from https://op.europa.eu/hr/publication-detail/-/publication/e6767a95-a386-11e9-9d01-01aa75ed71a1/language-en

Regulation (EU, Euratom). 2018/673 amending Regulation (EU, Euratom) No 1141/2014 on the statute and funding of European political parties and European political foundations. Retrieved from https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32018R0673

Regulation (EU). 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (OJ L 119, 4.5.2016, p. 1)

Regulation (EU, Euratom). No 1141/2014 of the European Parliament and of the Council of 22 October 2014 on the statute and funding of European political parties and European political foundations, (OJ L 317, 4.11.2014, p.1).

Report of the Special Rapporteur to the General Assembly on online hate speech. (2019). (A/74/486). Retrieved from https://www.ohchr.org/Documents/Issues/Opinion/A_74_486.pdf

Report of the Special Rapporteur to the Human Rights Council on online content regulation. (2018). (A/HRC/38/35). Retrieved from https://documents-dds-ny.un.org/doc/UNDOC/GEN/G18/096/72/PDF/G1809672.pdf?OpenElement

Schoenbach, K., & Lauf, E. (2004). Another Look at the ‘Trap’ Effect of Television—and Beyond. International Journal of Public Opinion Research, 16(2), 169–182. https://doi.org/10.1093/ijpor/16.2.169

Shearer, E. (2018, December 10). Social media outpaces print newspapers in the U.S. as a news source. Pew Research Center. Retrieved from https://www.pewresearch.org/fact-tank/2018/12/10/social-media-outpaces-print-newspapers-in-the-u-s-as-a-news-source/

Šimunjak, M., Nenadić, I., & Žuvela, L. (2019). National report: Croatia. In E. Novelli & B. Johansson (Eds.), 2019 European Elections Campaign: Images, topics, media in the 28 Member States (pp. 59–66). Brussels: European Parliament.

Sounding Board. (2018). The Sounding Board’s Unanimous Final Opinion on the so-called Code of Practice on 24 September 2018. Retrieved from: https://ec.europa.eu/digital-single-market/en/news/code-practice-disinformation

The Transatlantic High Level Working Group on Content Moderation Online and Freedom of Expression. (2019). How governments and platforms have fallen short in trying to moderate content online (Co-Chairs Report No. 1 and Working Papers). Retrieved from https://www.ivir.nl/publicaties/download/TWG_Ditchley_intro_and_papers_June_2019.pdf

Valeriani, A., & Vaccari, C. (2016). Accidental exposure to politics on social media as online participation equalizer in Germany, Italy, and the United Kingdom. New Media & Society, 18(9). https://doi.org/10.1177/1461444815616223

Venice Commission. (2013). CDL-AD(2013)021 Opinion on the electoral legislation of Mexico, adopted by the Council for Democratic Elections at its 45th meeting (Venice, 13 June 2013) and by the Venice Commission at its 95th Plenary Session (Venice, 14-15 June 2013).

Venice Commission. (2010). CDL-AD(2010)024 Guidelines on political party regulation, by the OSCE/ODIHR and the Venice Commission, adopted by the Venice Commission at its 84th Plenary Session (Venice, 15-16 October 2010).

Venice Commission. (2009). CDL-AD(2009)031 Guidelines on media analysis during election observation missions, by the OSCE Office for Democratic Institutions and Human Rights (OSCE/ODIHR) and the Venice Commission, adopted by the Council for Democratic Elections at its 29th meeting (Venice, 11 June 2009) and the Venice Commission at its 79th Plenary Session (Venice, 12- 13 June 2009).

Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151. https://doi.org/10.1126/science.aap9559

Wakefield, J. (2019, February 18). Facebook needs regulation as Zuckerberg 'fails' - UK MPs. BBC. Retrieved from https://www.bbc.com/news/technology-47255380

Wardle, C., & Derakhshan, H. (2017). Information disorder: Toward an interdisciplinary framework for research and policymaking [Report No. DGI(2017)09]. Strasbourg: Council of Europe. Retrieved from https://firstdraftnews.org/wp-content/uploads/2017/11/PREMS-162317-GBR-2018-Report-de%CC%81sinformation-1.pdf?x56713

Zuiderveen Borgesius, F. J., Möller, J., Kruikemeier, S., Ó Fathaigh, R., Irion, K., Dobber, T., Bodó, B., & de Vreese, C. H. (2018). Online Political Microtargeting: Promises and Threats for Democracy. Utrecht Law Review, 14(1), 82–96. https://doi.org/10.18352/ulr.420

Footnotes

1. The so-called ‘fake news’ law was passed in May 2019, allowing ministers to issue orders to platforms like Facebook to put up warnings next to disputed posts or, in extreme cases, to take the content down. The law also allows for fines of up to SG$ 1 million (665,000 €) for companies that fail to comply, and individual offenders can face up to ten years in prison. Many have raised their voices against this law, including the International Political Science Association (IPSA), but it came into effect and is being used.

2. To which the author is affiliated.

3. The GDPR supplanted the Data Protection Directive (Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data).

4. The Council of Europe also uses the term ‘quality journalism’, but it is not fully clear what ‘quality’ entails, and who decides what ‘quality journalism’ is and what is not. The aim could be (and most likely is) to distinguish journalism that respects professional standards from less reliable, less structured forms of content production and delivery that are less bound by ethical and professional standards. Many argue that journalism already entails a requirement of quality, so the attributive adjective is unnecessary and may, in fact, be problematic.

5. Dark advertising is a type of online advertising visible only to the advert's publisher and the intended target group.
