Facebook founder Mark Zuckerberg will be questioned by US lawmakers today about the “use and abuse of data” — following weeks of breaking news about a data misuse scandal dating back to 2014.
The Guardian published its first story linking Cambridge Analytica and Facebook user data in December 2015. The newspaper reported that the Ted Cruz campaign had paid UK academics to gather psychological profiles about the US electorate using “a massive pool of mainly unwitting US Facebook users built with an online survey”.
Post-publication, Facebook gave the newspaper only a terse statement — claiming it was “carefully investigating this situation”.
Yet more than a year passed with Facebook seemingly doing nothing to limit third-party access to user data or to offer more transparent signposting of how its platform could be — and was being — used for political campaigns.
Through 2015 Facebook had actually been ramping up its internal focus on elections as a revenue-generating opportunity — growing the headcount of staff working directly with politicians to encourage them to use its platform and tools for campaigning. So it can hardly claim it was unaware of the value of user data for political targeting.
Yet in November 2016 Zuckerberg publicly rubbished the idea that fake news spread via Facebook could influence political views — calling it a “pretty crazy idea”. This even as the company was embedding its own staff with political campaigns to help them spread election messages.
Another company was also in the political ad targeting business. In 2016 Cambridge Analytica signed a contract with the Trump campaign. According to former employee Chris Wylie — who last month supplied documentary evidence to the UK parliament — it licensed Facebook users’ data for this purpose.
The data was acquired and processed by Cambridge University professor Aleksandr Kogan whose personality quiz app, running on Facebook’s platform in 2014, was able to harvest personal data on tens of millions of users (a subset of which Kogan turned into psychological profiles for CA to use for targeting political messaging at US voters).
Cambridge Analytica has claimed it licensed data on no more than 30M Facebook users — and has also claimed it didn’t actually use any of the data for the Trump campaign.
But this month Facebook confirmed that data on as many as 87M users was pulled via Kogan’s app.
What’s curious is that since March 17, 2018 — when the Guardian and New York Times published fresh revelations about the Cambridge Analytica scandal, estimating that around 50M Facebook users could have been affected — Facebook has released a steady stream of statements and updates, including committing to a raft of changes to tighten app permissions and privacy controls on its platform.
The timing of this deluge is not accidental. Facebook itself admits that many of the changes it’s announced since mid-March were already in train — long-planned compliance measures responding to an incoming update to the European Union’s data protection framework, the GDPR.
If GDPR has a silver lining for Facebook — and a privacy regime which finally has teeth that can bite is not something you’d imagine the company would welcome — it’s that it can spin steps it’s having to take to comply with EU regulations as an alacritous and fine-grained response to a US political data scandal, and try to generate the impression it’s hypersensitive to (now highly politicized) data privacy concerns.
Reader, the truth is far less glamorous. GDPR has been in the works for years and — like the Guardian’s original Cambridge Analytica scoop — its final text also arrived in December 2015.
On the GDPR prep front, in 2016 — during its Cambridge Analytica ‘quiet period’ — Facebook told us it had assembled “the largest cross functional team” in the history of its family of companies to support compliance.
The company really has EU regulators to thank for forcing it to do so much of the groundwork now underpinning its response to this, its largest-ever data scandal.
Here’s a timeline of how the company has reacted since mid-March — when the story morphed into a major public scandal.
March 16, 2018: Just before the Guardian and New York Times publish fresh revelations about the Cambridge Analytica scandal, Facebook quietly drops the news that it has finally suspended CA/SCL. Why it didn’t do this years earlier remains a key question
March 17: In an update on the CA suspension Facebook makes a big show of rejecting the notion that any user data was ‘breached’. “People knowingly provided their information, no systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked,” it writes
March 19: Facebook says it has hired digital forensics firm Stroz Friedberg to perform an audit on the political consulting and marketing firm Cambridge Analytica. It subsequently confirms its investigators have left the company’s UK offices at the request of the national data watchdog which is running its own investigation into use of data analytics for political purposes. The UK’s information commissioner publicly warns the company its staff could compromise her investigation
March 21: Zuckerberg announces further measures relating to the scandal — including a historical audit, saying apps and developers that do not agree to a “thorough audit” will be banned, and committing to tell all users whose data was misused. “We will investigate all apps that had access to large amounts of information before we changed our platform to dramatically reduce data access in 2014, and we will conduct a full audit of any app with suspicious activity. We will ban any developer from our platform that does not agree to a thorough audit. And if we find developers that misused personally identifiable information, we will ban them and tell everyone affected by those apps. That includes people whose data Kogan misused here as well,” he writes on Facebook.
He also says developers’ access to user data will be removed if people haven’t used the app in three months, and that Facebook will reduce the data users give an app when they sign in — to just “your name, profile photo, and email address”.
Facebook will also require developers to not only get approval but also “sign a contract in order to ask anyone for access to their posts or other private data”, he says.
Another change he announces in the post: Facebook will start showing users a tool at the top of the News Feed “to make sure you understand which apps you’ve allowed to access your data” and with “an easy way to revoke those apps’ permissions to your data”.
He concedes that while Facebook already had a tool to do this in its privacy settings people may not have seen or known that it existed.
These sorts of changes are very likely related to GDPR compliance.
Another change the company announces on this day is that it will expand its bug bounty program to enable people to report misuse of data.
It confirms that some of the changes it’s announced were already in the works as a result of the EU’s GDPR privacy framework — but adds: “This week’s events have accelerated our efforts”
March 25: Facebook apologizes for the data scandal with a full page ad in newspapers in the US and UK
March 28: Facebook announces changes to privacy settings to make them easier to find and use. It also says terms of services changes aimed at improving transparency are on the way — also all likely to be related to GDPR compliance
March 29: Facebook says it will close down a 2013 feature called Partner Categories — ending the background linking of its user data holdings with third party data held by major data brokers. Also very likely related to GDPR compliance
At the same time, in an update on parallel measures it’s taking to fight election interference, Facebook says it will launch a public archive in the summer showing “all ads that ran with a political label”. It specifies this will show the ad creative itself; how much money was spent on each ad; the number of impressions it received; and the demographic information about the audience reached. Ads will be displayed in the archive for four years after they ran
April 1: Facebook confirms to us that it is working on a certification tool requiring marketers using its Custom Audience ad targeting platform to guarantee email addresses were rightfully attained and that users consented to their data being used for marketing purposes — apparently an attempt to tighten up its ad targeting system (again, GDPR is the likely driver)
April 3: Facebook releases the bulk app deletion tool Zuckerberg trailed as coming in the wake of the scandal — though this still doesn’t give users a select-all option, it makes the process a lot less tedious than it was.
It also announces it has culled a swathe of pages and accounts run on Facebook and Instagram by the IRA, the Russian troll farm. It adds that it will be updating its help center tool “in the next few weeks” to enable people to check whether they liked or followed one of these pages. It’s not clear whether it will also proactively push notifications to affected users
April 4: Facebook outs a rewrite of its T&Cs — again, likely a compliance measure to try to meet GDPR’s transparency requirements — making it clearer to users what information it collects and why. It doesn’t say why it took almost 15 years to come up with a plain English explainer of the user data it collects
April 4: Buried in an update on a range of measures to reduce data access on its platform — such as deleting Messenger users’ call and SMS metadata after a year, rather than retaining it — Facebook reveals it has disabled a search and account recovery tool after “malicious actors” abused the feature — warning that “most” Facebook users will have had their public info scraped by unknown entities.
The company also reveals a breakdown of the top ten countries affected by the Cambridge Analytica data leakage, and subsequently reveals 2.7M of the affected users are EU citizens
April 6: Facebook says it will require admins of popular pages and advertisers buying political or “issue” ads on “debated topics of national legislative importance” like education or abortion to verify their identity and location — in an effort to fight disinformation on its platform. Those that refuse, are found to be fraudulent or are trying to influence foreign elections will have their Pages prevented from posting to the News Feed or their ads blocked
April 9: Facebook says it will begin informing users if their data was passed to Cambridge Analytica from today by dropping a notification into the News Feed.
It also offers a tool where people can do a manual check
April 9: Facebook also announces an initiative aimed at helping social science researchers gauge the product’s impact on elections and political events.
The initiative is funded by the Laura and John Arnold Foundation, Democracy Fund, the William and Flora Hewlett Foundation, the John S. and James L. Knight Foundation, the Charles Koch Foundation, the Omidyar Network, and the Alfred P. Sloan Foundation.
Facebook says the researchers will be given access to “privacy-protected datasets” — though it does not detail how people’s data will be robustly anonymized — and says it will not have any right of review or approval over research findings prior to publication.
Zuckerberg claims the election research commission will be “independent” of Facebook and will define the research agenda, soliciting research on the effects of social media on elections and democracy
April 10: Per its earlier announcement, Facebook begins blocking apps from accessing user data 90 days after non-use. It also rolls out the earlier trailed updates to its bug bounty program