As social media firms ramp up their fight against misinformation, politicians have been largely left exempt. To some, that’s a huge problem.
Facebook, Twitter and other social media platforms have decided to give politicians, including President Donald Trump, extra leeway under their rules, seeking to avoid stifling political debate and to keep "newsworthy" content online.
But Trump’s efforts to push falsehoods and conspiracy theories have prompted calls for platforms to rethink those guidelines to prevent the president and others from spreading false and misleading information.
Democratic presidential hopeful Joe Biden recently asked Facebook to take down “debunked” claims in a Trump ad on the leading social network, only to be rebuffed.
In a response to Biden, Facebook said statements by politicians, even if false, are “considered direct speech and ineligible for our third-party fact checking program.”
Senator and presidential candidate Kamala Harris meanwhile called on Twitter to ban Trump after the president violated the platform’s rules by accusing his critics of “treason” and warning that an attempt to impeach him amounted to a “coup.”
The candidates’ demands are typical of the conundrum social media firms face as they seek to remain open for public debate while curbing “hate speech,” abusive conduct, and patently false claims from politicians.
Facebook and Twitter have both steered away from removing "newsworthy" content, which may include false or misleading comments from political leaders. YouTube offers a similar exemption.
This policy “seems like a troubling compromise because it’s an invitation to political actors to say whatever they think is expedient whether it’s true or not,” said Paul Barrett, deputy director of the Stern Center for Business and Human Rights at New York University and author of a report on “Disinformation and the 2020 Election.”
Barrett’s report recommends that social networks take down “provably false” information, though he acknowledged that would leave big loopholes for politicians stretching the truth.
The report noted that a majority of deliberately deceptive or false information shared on social media comes not from Russia or other foreign sources but from within the United States, making it more complicated to take down.
"It's a real conundrum. I don't think there's an easy answer," Barrett said.
‘Vector for misinformation’
Facebook vice president Nick Clegg said last month the social network would treat speech from politicians “as newsworthy content that should, as a general rule, be seen and heard.”
Gaurav Laroia of the watchdog group Free Press said the exceptions allowed by Facebook mean the company "is allowing its platform to be a vector for misinformation in the lead-up to the 2020 election."
Facebook’s ad policies leave a gaping loophole for Trump, the biggest political ad spender on the platform, as he faces a congressional impeachment inquiry, according to Free Press.
Senator Elizabeth Warren, another presidential candidate, accused Facebook of buckling to pressure from the White House on political misinformation.
“Trump and (Facebook CEO Mark) Zuckerberg met at the White House two weeks ago. What did they talk about?” Warren tweeted.
“Facebook is now okay with running political ads with known lies.”
Facebook maintains it has not changed its stance but has merely clarified its policy of steering clear of the touchy subject of moderating political speech.
“Our approach is grounded in Facebook’s fundamental belief in free expression, respect for the democratic process, and the belief that, in mature democracies with a free press, political speech is already arguably the most scrutinized speech there is,” Facebook public policy director Katie Harbath said.
‘They need standards’
Michelle Amazeen, a Boston University professor specializing in political communication, said platforms have an economic interest in attracting political ads and have been “opaque” about misinformation policies.
She said social networks are capable of rejecting ads from Trump that contain debunked information.
“They need to have some standards,” Amazeen said.
Some messages, she said, are not just misleading but may incite violence or otherwise put lives in danger.
“If fact checkers have shown that a political ad is inaccurate, it should not be allowed to circulate,” she said.
Samuel Woolley, a University of Texas professor who has researched manipulation during the 2016 election campaign, agreed that social networks need to step up against misinformation, whatever the source.
“False information coming from a political leader is much more potent than if it were coming from a bot or fake account,” Woolley said.
Whether social networks have the capacity to rapidly detect and remove false information is an open question, but Woolley said it is their responsibility.
“Social media companies created this problem, they made clear decisions to scale at this rate, so I feel it’s up to them to use their massive resources to address this problem through technology and human labor,” he said.