
10 million saw Facebook political ads posted from Russia-linked fake accounts


Why it matters to you

Around 10 million people saw at least one of the ads posted by the inauthentic accounts, and Facebook is developing new measures to curb future abuse.

On Monday, October 2, Facebook shared data with Congress on more than 3,000 political ads that came from fake Russian accounts during and after the 2016 presidential election. Now the social network is sharing that information with users, along with the steps it plans to take to curb similar attempts in the future.

Facebook says it found more than 3,000 ads from inauthentic accounts linked to the Internet Research Agency, a Russian group that operated between 2015 and 2017. Some 10 million people in the U.S. viewed at least one of those ads, with around 44 percent of those views happening before the Nov. 8, 2016, election. The ads, as well as the spread of fake news during the election, have brought the social media platform under scrutiny.

“The 2016 US election was the first where evidence has been widely reported that foreign actors sought to exploit the internet to influence voter behavior,” wrote Elliot Schrage, Facebook’s vice president of policy and communications. “We understand more about how our service was abused and we will continue to investigate to learn all we can. We know that our experience is only a small piece of a much larger puzzle.”

Facebook says its advertising guidelines are designed to prevent abuse without inhibiting free speech. For example, barring advertisers from running ads in other countries would prevent organizations like UNICEF and Oxfam from reaching global audiences. All of the ads in question violated policy because they came from inauthentic accounts, but Facebook says the content of some of them would have been approved had it come from authentic accounts.

“We strongly believe free speech and free elections depend upon each other,” Schrage wrote. “We’re fast developing both standards and greater safeguards against malicious and illegal interference on our platform. We’re strengthening our advertising policies to minimize and even eliminate abuse. Why? Because we are mindful of the importance and special place political speech occupies in protecting both democracy and civil society. We are dedicated to being an open platform for all ideas — and that may sometimes mean allowing people to express views we — or others — find objectionable. This has been the longstanding challenge for all democracies: how to foster honest and authentic political speech while protecting civic discourse from manipulation and abuse.”

Along with the data, Facebook shared a list of next steps it is taking to catch ads like those from the Internet Research Agency, which violated Facebook policy but ran anyway. The company will add 1,000 people to its ad review staff to manually review more ads, looking at content as well as context and targeted demographics. Ads that target certain demographics will automatically be flagged for manual review, Facebook says, a response both to the election ads and to the inappropriate user-generated targeting categories that previously slipped into the system. The platform currently uses a combination of algorithms and human reviewers to vet millions of ads every week.

The platform is also taking steps to help users better determine where an ad came from. In the name of transparency, Facebook is building a feature that will let users click on an ad targeted at them and also view versions targeted at other demographics. Expanded advertising policies are coming as well: Pages that want to run ads related to U.S. federal elections will be required to provide more documentation confirming the business or organization behind them. The social media giant is also reaching out to industry leaders and other governments to establish industry standards, continuing efforts like its partnership with Twitter, Microsoft, and YouTube to fight extremist content.

Since the U.S. election, the platform has also taken steps to curb fake news in elections in Germany and the U.K.




