A few days ago, remarks by Prime Minister Theresa May sparked a flurry of speculation that the criteria for a “border poll” in Northern Ireland had been met. Just today, a Queen’s University survey reported a high rate of “don’t know” responses among interviewees asked whether they would vote for a united Ireland after Brexit. Before getting too excited about the possibility of another referendum, however, it’s worth addressing the scandal engulfing the Brexit campaign and Cambridge Analytica – revelations which have called into question the integrity of referendums and the entire democratic process. What implications will these kinds of online campaigns have for a future border poll?

What exactly is Cambridge Analytica?

This is a good question, as there is a confusing web of organisations implicated in the scandal: Cambridge Analytica, SCL Group, AggregateIQ (AIQ) and Global Science Research.

At the top of the food chain is a company called SCL Group. The CEO of SCL was Alexander Nix, also known as the pretentious Etonian seen in the Channel 4 video claiming to have run “all” of Donald Trump’s digital campaign.

Although Cambridge Analytica and AggregateIQ are technically separate companies, a variety of arrangements connects both to SCL. SCL is the parent company of Cambridge Analytica, whose CEO was also Alexander Nix. A whistleblower told a parliamentary committee that, through a licensing agreement, all of AIQ’s work was licensed to SCL; however, AIQ has publicly denied that the two are related companies.

“It is important for people to understand that Cambridge Analytica is more of a concept or a brand than anything else because it does not have employees. It is all SCL. [Cambridge Analytica] is just the front-facing company for the United States.” – Former employee and whistleblower Chris Wylie

But buckle up, because there’s one more company: Global Science Research (GSR), founded by Aleksandr Kogan.

Aleksandr Kogan is a psychology professor at Cambridge University and St Petersburg State University. He has conducted research on personality traits and Facebook, and received grants from the Russian government to study Facebook users’ emotional states. Global Science Research entered into a commercial deal with SCL to harvest and process Facebook data (although they have denied this).

Phew! If it sounds confusing, that’s because it was designed to be confusing.

What is it that these companies actually do?

Well, SCL Group is a “private British behavioural research and strategic communication company.” They’re contractors for the UK’s Ministry of Defence and the US Department of Defense, and run counter-Russian-propaganda projects in the Baltics and eastern Europe for various NATO countries. They’re also alleged to have manipulated elections in developing countries and distributed violent or threatening material, including in Nigeria, Kenya and Myanmar. A cyberwarfare expert for the US Air Force called their work “cyber warfare for elections.” Former employee turned whistleblower Chris Wylie called them an example of “modern-day colonialism.”

Cambridge Analytica harvested tens of millions of Facebook users’ data in order to build a “psychological profiling” tool. This tool allowed campaigners to purchase targeted ads that often spread fake news and disinformation during both the Brexit campaign and the 2016 US presidential election.

So how exactly does this “cyber warfare for elections” work?

Well, the process worked in two stages: psychological profiling and targeted advertising. The type of psychological profiling done by Cambridge Analytica was different from traditional data collection, which might use where an individual lives or public voting records to estimate their likelihood of voting (as the 2012 Obama campaign did). Instead, Cambridge Analytica harvested data that seems pretty innocuous: the type of content and posts you like and engage with on Facebook.

Even though this data seems pretty innocent, access to a huge amount of it meant that algorithms could identify certain traits among users. Sometimes these patterns don’t make any obvious logical sense: for example, people who liked “I hate Israel” on Facebook also tended to like Nike shoes and KitKats. Wylie and other researchers created a tool that could infer people’s psychological or personality traits from their Facebook data, allowing the company to purchase the targeted ads each group would be most responsive to. A rough sketch of how this kind of trait inference works is shown below.
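To make that concrete, here is a minimal, illustrative sketch of the general technique: training a simple classifier to predict a personality trait from page likes. This is a toy reconstruction under stated assumptions (synthetic data, an assumed seed group of quiz-takers with survey-derived labels), not Cambridge Analytica’s actual code or model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Binary users x pages matrix: 1 = user liked that page (synthetic here).
n_users, n_pages = 1000, 50
likes = rng.integers(0, 2, size=(n_users, n_pages))

# Seed group: the first 200 users are assumed to have taken the
# personality quiz, yielding a "high neuroticism" label (1) or not (0).
# The labels below are fabricated from a few columns purely for the demo.
seed = slice(0, 200)
labels = (likes[seed, :5].sum(axis=1) > 2).astype(int)

# Train on the quiz-takers, then score everyone who never took the quiz.
model = LogisticRegression(max_iter=1000)
model.fit(likes[seed], labels)
scores = model.predict_proba(likes[200:])[:, 1]

print(f"users scored as likely high-neuroticism: {(scores > 0.8).sum()}")
```

With real data the labels would come from the app’s personality quiz and the feature matrix from the harvested likes; the mechanics are otherwise the same.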

For example, if the algorithm could identify people with a high degree of openness and a high degree of neuroticism, it could isolate a group prone to conspiratorial thinking. Then, going back to Facebook, the company could purchase targeted ads carrying fake news or conspiratorial messages, such as “Obama has moved a battalion into Texas because he’s planning on staying for a third term.” Those individuals would then organically share the content until everyone in their network was talking about it. So it wouldn’t matter whether you yourself had been profiled or shown the personalised ads: the trends would still reach you once everyone on your feed started talking about certain stories. A sketch of this audience-selection step follows.
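Again purely as an illustration, the audience-selection step reduces to a filter over per-user trait scores. The scores and the 0.7 cut-offs below are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in per-user trait scores (0-1), as produced by models like the
# one sketched above. Real scores would come from real harvested data.
openness = rng.random(800)
neuroticism = rng.random(800)

# The "high openness + high neuroticism" segment described above.
audience = np.where((openness > 0.7) & (neuroticism > 0.7))[0]
print(f"{audience.size} users selected for conspiratorial-themed targeting")
```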

The idea of this targeting is less about providing people with facts or campaign information, and more about starting “trends” – it’s no accident that the main creators of these tools were originally studying fashion. As Chris Wylie explained to The Guardian, “Trump is like a pair of Uggs, or Crocs, basically. So how do you get from people thinking ‘Ugh. Totally ugly’ to the moment when everyone is wearing them?” Targeting certain people meant that they would become “trend-setters.”

Wylie said: “Traditional marketing doesn’t misappropriate tens of millions of people’s data, and it is not or should not be targeted at people’s mental state like neuroticism and paranoia, or racial biases.”

OK . . . so what was actually illegal about this?

Well, there are a couple of issues: whether the Facebook data was harvested illegally, and whether it was then shared illegally.

Cambridge Analytica collaborated with Aleksandr Kogan (the Cambridge professor mentioned above) and his company Global Science Research to obtain the Facebook data. They created an app called thisisyourdigitallife, in which users were paid to take a personality test and agreed to have their Facebook data (sometimes including private messages) collected for “academic use.” Crucially, however, the app also collected the data of users’ Facebook friends, who had not consented and had no way of knowing their data was being taken. The back-of-the-envelope sketch below shows why that friend-harvesting step matters so much for scale.
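The figures here are illustrative assumptions rather than numbers from the reporting, but they show how a modest pool of consenting quiz-takers can balloon into tens of millions of profiles:

```python
# Rough arithmetic on why friend-harvesting multiplies reach.
# All three figures are assumptions for illustration only.
quiz_takers = 270_000      # assumed number of consenting app users
avg_friends = 150          # assumed average friend count per user
unique_share = 0.5         # assumed share of friends not already counted

profiles = quiz_takers * avg_friends * unique_share
print(f"~{profiles / 1e6:.0f} million profiles reached")  # ~20 million
```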

British data protection laws ban the sale or use of personal data without consent, including cases where consent is given for one purpose but the data is actually used for another. Therefore, both the collection of users’ friends’ data and its use in election campaigns would be violations of the law.

After The Observer published allegations that Cambridge Analytica had misused the data, Facebook sent the company a letter demanding that it be deleted. However, Chris Wylie has alleged that Facebook never followed up to ensure the data was actually deleted, and several people have claimed it is still available online.

So . . . what’s the problem with Facebook? And did they break any laws?

Well, it’s not yet clear whether Facebook has broken any laws. The UK Parliament has repeatedly requested that Mark Zuckerberg testify. Certainly there are questions about how Facebook protects and uses its users’ data, since the company was aware that Cambridge Analytica was harvesting data on a large scale (although it claims it believed this was for academic purposes).

Additionally, some people are raising ethical questions about one company having so much control over political messages and advertisements, and about whether it is contributing to political polarisation and the spread of fake news. This was also an issue during the 2016 US presidential election, when there were allegations that Russian companies and government actors purchased targeted ads to influence the outcome. For more about the issues facing Facebook, check out this excellent Wired feature.

What was the Leave campaign’s connection to all of this?

Well, a lot of this information is still being investigated. Some whistleblowers have alleged that there was a common plan across Vote Leave, BeLeave, the DUP and Veterans for Britain, all of whom contracted AggregateIQ during the Brexit campaign, spending nearly £5m on it between them. It is not known how much the various campaigns knew about the illegal activities of these companies.

However, British campaign finance laws stipulate that there can be no coordination between different campaigns unless they file their expenditure jointly. The Electoral Commission has referred the Vote Leave campaign to the authorities for breaches of electoral law after finding evidence of coordination between Vote Leave and the BeLeave campaign.

Did this actually affect the Brexit vote?

This is more difficult to prove. The vote was very close, and a lot of things could have affected the outcome. However, Vote Leave’s campaign director Dominic Cummings did say: “Without a doubt, the Vote Leave campaign owes a great deal of its success to the work of Aggregate IQ. We couldn’t have done it without them.”

Whistleblower Chris Wylie has also pointed out that Vote Leave spent 40% of its funding on AggregateIQ – a financial decision that would be remarkable if the campaign did not believe it would have an impact. Wylie has also noted that the tool achieved conversion rates as high as 10%, where most online ads convert at around 1%.

Even if this isn’t illegal, doesn’t this subvert the democratic process?

That’s a much more complicated question. If a psychological profiling tool was created with stolen data, what does that mean for the use of that tool? Should it invalidate the result of the election? Furthermore, targeted advertising and the spreading of misinformation are not technically illegal. But should they be? How should governments regulate big social media companies like Facebook to ensure they’re not being abused? Or should it be up to the companies to police themselves?

“We don’t allow car companies to make unsafe cars and just put terms and conditions on the outside… We have rules that require safety and to put people first… In the 21st century it is nearly impossible for people to be functional without the use of the internet, so there should be some degree of accountability and oversight.” – Chris Wylie

What could happen in the event of a border poll?

When the Good Friday Agreement was signed twenty years ago, the world may not have been a simpler place, but technology certainly was. The idea that Northern Ireland’s constitutional status should be decided by a popular referendum seemed like a good one. But the new technologies that connect us can also spread disinformation and propaganda, in ways that could subvert the entire democratic process and call into question the integrity of referendums. We should tread with caution.

What if, in the event of a border poll, a political party or organisation contracted the kinds of companies mentioned above to spread misinformation across Northern Ireland? What would happen to the political climate and the peace process if the region were bombarded with the fake news and fear-mongering tactics we saw during the Brexit referendum? Would people accept a result amid allegations of similar illegal campaigning? And what would an environment like that mean for peace afterwards, whatever the outcome?

These revelations, and questions around how we ensure a future poll campaign doesn’t result in a return to widespread violence, are worth reflecting on.