Violence on Facebook Live Presents Censorship Problem

Facebook Live has been used to broadcast more than 50 acts of violence on the platform.

Ethan Hurlburt, Columnist

Live streaming may be the future of social media, but it is gaining attention for the violent, disturbing acts broadcast through one popular live-streaming outlet: Facebook Live.

Facebook Live, as seen in a TV ad campaign, gives anyone with a phone a way to share live video with Facebook’s 1.8 billion users. But it has also created a real censorship problem.

“[Facebook] Live is like having a TV camera in your pocket. Anyone with a phone now has the power to broadcast to anyone in the world,” Mark Zuckerberg, the CEO of Facebook wrote.

In the year since its launch, the feature has been used to broadcast at least 50 acts of violence, according to the Wall Street Journal, including murders, suicides, and the beating of a special-needs teenager in Chicago earlier this year.

When Facebook introduced the feature in 2016, it allowed users to share moments, create content, and view live video. The problem is that Facebook didn’t adequately prepare for the potential misuse of Facebook Live, which has become an outlet for horrible, gruesome violence.

Facebook recognizes that live video offers a window into the lives of others, exposing the best moments as well as the worst, whether that means a funny clip or content that could be deemed unsuitable for viewers.

Addressing the issue last month, Zuckerberg stated, “Facebook is not just technology or media, but a community of people. That means we need Community Standards that reflect our collective values for what should and should not be allowed,” and that “in the last year, the complexity of the issues we’ve seen has outstripped our existing processes for governing the community.”

The company also made a clear statement on July 8, 2016: “In those situations, context and degree are everything. For instance, if a person witnessed a shooting, and used Facebook Live to raise awareness or find the shooter, we would allow it. However, if someone shared the same video to mock the victim or celebrate the shooting, we would remove the video.”

Last Wednesday, Facebook unveiled a new approach that uses artificial intelligence to detect when someone may be live streaming a potential suicide, so that the person can receive on-screen help. Resources like the National Suicide Prevention Lifeline and the Crisis Text Line are also available through Facebook Messenger to aid those in need.

Public suicide is not new, but technologies like artificial intelligence can identify posts containing suicidal or harmful thoughts, helping prevent haunting public acts from reaching far more people.

As with all content on Facebook, reporting tools are in place for people to flag content they believe violates Facebook’s Community Standards. Teams review the reports, and if the content violates those standards, it is removed.