Technology

Facebook to tighten live stream access after mosque attacks

Facebook on Friday said it is tightening live video streaming rules in response to the service being used to broadcast deadly attacks on mosques in New Zealand.

The Christchurch attacks — carried out by a self-avowed white supremacist who opened fire on worshippers at two mosques — claimed 50 lives.

Many people have “rightly questioned how online platforms such as Facebook were used to circulate horrific videos of the attack,” chief operating officer Sheryl Sandberg said in an online post.

“In the wake of the terrorist attack, we are taking three steps: strengthening the rules for using Facebook Live, taking further steps to address hate on our platforms, and supporting the New Zealand community,” she added.

Facebook is looking into barring people who have previously violated the social network’s community standards from livestreaming on its platform, according to Sandberg.

The social network is also investing in improved software to quickly identify edited versions of violent videos or images and prevent them from being shared or re-posted.

“While the original New Zealand attack video was shared Live, we know that this video spread mainly through people re-sharing it and re-editing it to make it harder for our systems to block it,” Sandberg said.

“People with bad intentions will always try to get around our security measures.”

Facebook identified more than 900 different videos showing portions of the streamed violence.


Hateful nationalism

The social network is using artificial intelligence tools to identify and remove hate groups in Australia and New Zealand, according to Sandberg.

Those groups will be banned from Facebook services, she said.

Facebook this week announced it would ban praise or support for white nationalism and white separatism as part of a stepped-up crackdown on hate speech.

The ban will be enforced starting next week on the leading online social network and its image-sharing service Instagram.

“It’s clear that these concepts are deeply linked to organized hate groups and have no place on our services,” the social network said in a statement.

Facebook policies already banned posts endorsing white supremacy as part of its prohibition against spewing hate at people based on characteristics such as race, ethnicity or religion.

The ban had not applied to some posts because the company reasoned they were expressions of broader concepts of nationalism or political independence, according to the social network.

Facebook said that conversations with academics and “members of civil society” in recent months led it to view white nationalism and separatism as linked to organized hate groups.

People who enter search terms associated with white supremacy will get results referring them to resources such as Life After Hate, an organization focused on helping people leave such groups, according to Facebook.

Amid pressure from governments around the world, Facebook has ramped up machine learning and artificial intelligence tools for finding and removing hateful content.

“We are deeply committed to strengthening our policies, improving our technology and working with experts to keep Facebook safe,” Sandberg said.

“We must all stand united against hate and work together to fight it wherever and whenever it occurs.”
