Facebook kind of, sort of continues to crack down on fake news


Does Facebook really want to stop the spread of fake news, or does it just want to stop being blamed for it?

See the original article on CBC.ca from May 10, 2017.

Another election, another cycle of fake news. And Facebook is, once again, caught in the middle.

In the U.S. presidential election, the rise of fake news caught many off guard. But in the French presidential election, it was almost expected. And next up, as the U.K. heads back to the ballot box for its general election, we are again anticipating its arrival.

This is the new reality, it would seem, especially on social media, where misinformation spreads like wildfire. Facebook, meanwhile, is trying to figure out its role in all of this; it likes to maintain that it’s not a media company, conveniently excusing itself from any connection to the content that gets posted on its platform.

Chief Operating Officer Sheryl Sandberg has insisted that Facebook cannot be the “arbiter of truth” — it is a platform, not a publisher, she said.

Facebook taking action

But at the same time, Facebook has taken action to curb the spread of misinformation across its platform, employing a team of fact-checkers to flag false news reports and reduce their visibility in the main news feed. In a report on the recent French election, for example, Facebook's internal security researchers said they had taken action against approximately 30,000 accounts spreading misinformation.

Yet Facebook still can’t manage to get things straight: Is it responsible for the fake news posted on its platform, or not? Is the onus on the company, or on its users?

In anticipation of the June general election in the U.K., Facebook took out ads in multiple British newspapers outlining its measures to control the spread of fake news.

It might seem a sign that the company is starting to come to terms with its role in shaping the public consciousness, but a closer look at the ads shows it’s still Facebook being Facebook: a lukewarm approach to taking responsibility that still foists much of the liability onto its users.

Facebook’s newspaper ads offer ten tips for spotting fake news, including being skeptical of headlines: “if shocking claims in the headline sound unbelievable, they probably are.” Facebook also advises its 31 million registered British users to “investigate the source” and to “only share news that you know to be credible.”

Passing the onus

It’s all good advice. In fact, in this era of misinformation, developing a critical eye for where a story comes from has become a pillar of basic digital literacy. But the campaign, meant to show Facebook’s commitment to fighting fake news, also passes the onus back onto users.

It’s Facebook’s classic “we’re not a media company” argument in a different form, suggesting that the real responsibility lies with the people who read, click and share the news stories – fake or otherwise – that get disseminated across its platform.

Complicating all of this is that those page views, clicks and shares are profitable for Facebook, regardless of whether they are derived from fake news. Facebook trades in the attention economy, a transactional system that doesn’t discriminate based on the quality or validity of information, but rather, based on its mileage and virality.

All of which is to say, Facebook’s efforts to control the spread of fake news should be taken with a grain of salt. Ultimately, it’s up to us to be critical and to question what we read. But it doesn’t help when the vehicle for that news can’t get its story straight on the role it’s supposed to play.

Does Facebook really want to stop the spread of fake news, or does it just want to stop being blamed for it?