Shelly Palmer

Facebook: Platform, Publisher, or Ministry of Truth?

Facebook is being pilloried for its refusal to remove a paid advertisement from President Trump’s reelection campaign after the Biden campaign notified Facebook that the ad contained false statements about Joe Biden and his son.

On the surface, this seems simple, and Facebook’s decision not to remove the ad seems wrong. If someone is paying to run an ad that lies or makes false claims, some policy, regulation, or truth-in-advertising law should be in place to ensure the ad is removed. This is just common sense.

If the President Can Lie, Why Can’t I?

After all, if Facebook is OK with President Trump lying and making false claims in a paid ad, Facebook should be OK with me (or anyone) lying and making false claims in an ad too. If so, I’m going to quit my day job and start running ads on Facebook for “eat anything you like and lose weight” programs and “get rich quick” schemes. If Facebook doesn’t care about the truth, why should I?

Does Facebook have truth-in-advertising policies? Do those policies apply to everyone, or is President Trump “above the law”?

There Is No Soundbite Answer – You Need to Read This

The moment I learned about Facebook’s decision to let the Trump ad run, I reached out to my friends and clients at Facebook. I discussed this specific issue (and the size and scope of the problem) with several senior executives. The conversations were thoughtful, and to be fair, this issue is wildly more complex than it appears at first glance.

First and foremost, this is not a First Amendment issue. Often summarized as “freedom of expression,” the First Amendment says, “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.”

Facebook is not the government; it’s a company, and it can make any policies it likes. If Facebook creates a policy that states, “Employees who do not wear blue shirts will be fired,” the First Amendment will not protect workers who are “freely expressing themselves” by wearing green shirts from being fired. If Facebook says it will not allow the president of the United States to lie on its platform, the First Amendment does not apply. Facebook is not the government.

What Is True?

If Facebook has been notified that claims in an ad are false, how can Facebook possibly leave a false ad on its platform?

In this specific case, Facebook (its leadership, its AI, or its human workers) may believe that President Trump’s campaign is lying. Facebook may believe that the Biden campaign is right to call attention to what it believes is a lie. Facebook may believe that Joe and Hunter Biden are innocent of any wrongdoing.

But Facebook has no knowledge of the facts. Facebook only knows what has been reported. It cannot know with absolute certainty what is, or is not, true. No one can. Regardless of what Facebook may want to believe, there are others who will want to believe the exact opposite. Who gets to decide what is or is not actually true?

That said, Facebook has an enormous amount of power; it is the largest communications platform ever created; and with its size and influence, it must be held to the highest standards. But what does that even mean?

To help you think about the magnitude of the “what should be allowed on Facebook” problem, I offer the text of an email Facebook sent me as a follow-up to my conversations. I asked for this in writing, and I want you to read it and let me know what you think.

An Excerpt from Facebook’s Email to Me:

Our approach to free speech is grounded in Facebook’s fundamental belief in free expression and respect for the democratic process, as well as the fact that, in mature democracies with a free press, political speech is already arguably the most scrutinized speech there is.

We rely on third-party fact-checkers to help reduce the spread of false news and other types of viral misinformation, like memes or manipulated photos and videos. We don’t believe, however, that it’s an appropriate role for us to referee political debates and prevent a politician’s speech from reaching its audience and being subject to public debate and scrutiny.

That’s why Facebook exempts politicians from our third-party fact-checking program. We have had this policy on the books for over a year now, posted publicly on our site under our eligibility guidelines. This means that we will not send organic content or ads from politicians to our third-party fact-checking partners for review. However, when a politician shares previously debunked content including links, videos and photos, we plan to demote that content, display related information from fact-checkers, and reject its inclusion in advertisements. You can find more about the third-party fact-checking program and content eligibility here.

When it comes to our third-party fact-checking program, the only time we would demote content on a politician’s page is if they shared a link to an article, video, or photo created by someone else that has been otherwise debunked. We would also display related information from fact-checkers and reject its inclusion in advertisements.

But this is different from a politician’s own claim or statement—even if that claim has been debunked in another context.

Transparency:

While we aren’t sending ads from politicians to our third-party fact-checking partners, they still must comply with our Advertising Policies (which, on the whole, are stricter than our Community Standards), and their ads still go through review systems to check against those policies:

Misleading content in ads (general): Our policies continue to address misleading behavior:

How we are protecting elections:

Over the past two and a half years, we’ve developed smarter tools, greater transparency, and stronger partnerships to help us do just that. We’ve blocked millions of fake accounts so they can’t spread misinformation. We’re working with independent fact-checkers to reduce the spread of fake news.

In 2016, we were on the lookout for traditional cyber threats like hacking and stealing information. What happened was a much different kind of attack meant to sow discord around hot political issues. We’ve learned lessons from 2016 and have seen threats evolve, ensuring that our defenses stay ahead of those efforts, making it harder to use our platform for election interference.

We know that security is never finished and we can’t do this alone — that’s why we continue to work with policymakers and experts to make sure we are constantly improving.

Smarter Tools

Our teams are working to build innovative new tools, combining stronger artificial intelligence with expert investigations, to find and prevent abuse, including:

Specifically, we’ve introduced:

We are also improving our rapid response efforts. We now have more than 30,000 people working on safety and security, with 40 teams contributing to our work on elections globally. Our work builds on efforts and investments that began in 2017, and in early 2019 we established a dedicated team focused on the US 2020 elections. The team has conducted detailed risk assessments and threat analysis and continues to run scenario exercises so that we can anticipate and address emerging threats. Their work also includes proactive sweeps looking for impersonation of campaigns and candidate accounts and Pages.

Stronger Partnerships

We also continue to improve our coordination and cooperation with law enforcement, including the DNI, DHS, and FBI, as well as federal officials, state election officials, and other technology companies, to allow for better information sharing and threat detection.

We are also working with academics, civil society groups, and researchers, including the Atlantic Council’s Digital Forensic Research Laboratory, to get the best thinking on these issues.

We know we can’t do this alone, and these partnerships are an important piece of our comprehensive efforts to fight election interference.

We know there is more to do, and we are committed.

Security is an arms race, and as we continue to improve our defenses, our adversaries evolve their tactics. We will never be perfect, and we are up against determined opponents. But we are committed to doing everything we can to prevent people from using our platforms to interfere in elections.

Facebook also provided a link to a post and speech by Nick Clegg, Facebook’s VP of Global Affairs and Communications, which is certainly worth a look.

 

Take the Survey


Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it.
