This week, for my Law and Ethics class in my MA in Business Administration and Marketing program, I had to write a research paper about the recent spike in murders and suicides being broadcast to the world on Facebook Live, and about what responsibility Facebook should hold in these cases. Since this was also practice for my writing skills, I thought I would share it with everyone. Please feel free to comment at the end of the paper, but please no hateful comments about my opinions on the matter. Debates are okay, but please be respectful of others’ opinions as well.
Facebook Live Killings
With the continual growth of social media, and of Facebook in particular, there has been another, quite disturbing growth in our society: live broadcasts of murders and suicides. Livestreaming became popular only a few years ago, but it has already spread across the most-used social media websites, including Facebook, YouTube, and Twitter. But who holds responsibility when these platforms are used to broadcast a crime? This very question has sparked debate since July 2016, when a young woman broadcast her boyfriend’s death at the hands of the Minnesota police, including bloody images of his lifeless body (Crossley, 2017). Since this horrendous incident, many crimes have been broadcast for all to see, including drive-by attacks, murder plots, actual murders, confessions to crimes, suicides, sexual assaults, and more.
It is hard to determine who holds what kind of responsibility in these situations without first knowing how Facebook runs its livestream. Currently, all livestreaming platforms are self-regulated (Crossley, 2017). “The Communications Decency Act of 1996 contains a section that states, ‘No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider’” (Blumenthal, 2018). This is the loophole that has sparked controversy around the world over whether Facebook should be held responsible, to some degree, for the crimes it broadcasts. When this treatment is compared with how television stations, radio stations, and news publications are regulated, and when the 1996 Communications Decency Act is taken into account, it is easy to see why the debate drags on without any resolution.
“They are given protections that no one can sue them for any reason—that is Google and Facebook—that’s unlike the protection that your publication has or NBC News has or the New York Times has. They are completely shielded from any responsibility for the content that appears on their service” (Blumenthal, 2018).
I do not believe this is right; changes should be made so that social media platforms are held accountable for the content they broadcast to the world, just as other outlets are. Facebook, and other sites that offer livestreaming, have the power to do enormous good by stopping crimes while they are happening, or in some cases even before they happen. Instead, they rely on policies that keep them from reacting to these crimes in a timely manner, and many lives are lost in the process.
The biggest debate over whether Facebook should be held to the same standard as other media outlets is whether imposing this responsibility on the site would amount to a Good Samaritan law. Many Americans value their individual rights. They do not want to be told that they must intervene in a crime they witness or face jail time. They want to be able to make the choice, even though many might choose the selfish route of turning the other way and letting the crime continue. I feel that is what Facebook is doing, and it is not right. “Ironically, a society that draws such bold and certain lines to protect innocent life…refuses to impose even a minimal duty to rescue the same innocent life” (Ridolfi, 2000). Many people do not want their individual rights compromised, including the leaders of Facebook, but what they do not realize is that we are already called to serve as Good Samaritans in several settings, and that duty is considered acceptable and normal.
“A duty to rescue is not unlike the obligation to serve on jury duty, a civic responsibility also enforced by criminal law. The idea at its core is that people invested in society make the best citizens, and the best citizens make decisions that go beyond their own self-interest…As a matter of fact, we have imposed duties to act in many situations: when there is a contract, special status relationship, if someone creates a risk to another person then fails to act to prevent harm from occurring, and if someone comes to the aid of another person but then abandons the effort, leaving the victim in a worse position” (Ridolfi, 2000).
Social media sites need to be more proactive and thorough in reviewing the content they bring to the world. It is a major responsibility that Facebook seems determined to avoid. Three ways I feel Facebook could be regulated more effectively are by hiring an Ethics Officer or an Oversight Committee, by instituting new federal regulations for social media platforms, and by implementing stricter monitoring of Facebook Live by real people, not technology.
The first, and what I feel is the most important, step in regulating Facebook better is for the company to appoint a Chief Ethicist or an Oversight Committee. A Chief Ethicist would help the executives of the company work through difficult but critical situations. He or she would also be able to develop the ethical guidelines Facebook currently lacks and provide company-wide training on ethical decision-making (Heider, 2017). If the company chose an Oversight Committee instead, it would gain more hands-on regulation through a group of specialists who could provide guidance and strategic vision regarding Facebook’s role in society and the risks it poses to people and communities (Neidig, 2018). In fact, several of Facebook’s major investors are calling for this action to be taken immediately. I believe the company should implement both a Chief Ethicist and an Oversight Committee, because it does not seem to take seriously either the impact it has on everyone in the world or the right steps to take when a crime is being shown on its platform.
The next way Facebook can be more proactive would be to work with other social media platforms and the government to implement better federal regulations that hold these companies accountable for what is shown on their sites. “Of course, Facebook and YouTube could choose to restrict access to its platform or review every piece of video before posting. They choose not to do so” (Crossley, 2017). The main reason these companies do not review content this thoroughly is that doing so would undermine the immediacy that makes livestreaming attractive. Television broadcasts that air live still run a few seconds behind so that their content can be monitored, but Facebook does not do even this. If the company worked with the government on new regulations, a middle ground could be found, saving many of the lives now being lost to inefficient systems and algorithms.
Currently, in its self-regulated form, Facebook relies on its users to alert it to a crime being committed. If a certain number of users flag a post or livestream at the same time, the algorithms send the post to a global team of moderators, who review the content and decide whether it violates Facebook’s terms of service (Isaac, 2017). This process has caused serious delays in many of the livestreamed crime cases. For example, in the April 2017 case in Cleveland, in which a man uploaded a video announcing his plan to murder, then a video showing the murder, and finally a video confessing to the murder before taking his own life, Facebook received a report of the confession only after it had ended, and it took the company twenty-three minutes to disable the user’s account after reports of the murder footage came in (Crossley, 2017). Even with these outrageous delays, Zuckerberg denies the need for more regulation and continues to rely on his users to do his job for him. On May 3, 2017, the CEO announced plans to hire 3,000 additional moderators, on top of the 4,500 the company says it already has, saying, “We’re working to make these videos easier to report so we can take the right action sooner—whether that’s responding quickly when someone needs help or taking a post down” (Crossley, 2017). This is not the answer to stopping crimes from being broadcast. The only thing that will lessen the visibility of these crimes is federal regulation.
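To make concrete how fragile this user-driven flow is, here is a minimal sketch of a flag-threshold review queue of the kind described above. The threshold value, names, and queue are my own illustrative assumptions, not Facebook’s actual implementation.

```python
from collections import defaultdict
from queue import Queue

# Illustrative threshold; the real value Facebook uses is not public.
FLAG_THRESHOLD = 10

flag_counts = defaultdict(int)   # post_id -> number of user flags so far
review_queue = Queue()           # posts awaiting the global moderator team

def flag_post(post_id: str) -> None:
    """Record one user's flag; escalate to human review only once
    enough separate users have flagged the same post."""
    flag_counts[post_id] += 1
    if flag_counts[post_id] == FLAG_THRESHOLD:
        review_queue.put(post_id)   # moderators pull from this queue later
```

The structure makes the problem visible: until the threshold is reached, nothing happens at all, and the stream keeps airing. That is exactly the delay the Cleveland case exposed.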
Another thing the company can do to be proactive has already been addressed to some extent, but I think it should be pushed even further. Facebook needs to accept that algorithms and technology are falling short when it comes to stopping these crimes from being broadcast on its platform. It is only with human review that these livestreams can be stopped, and the right authorities alerted, before the crime is complete. I think that every piece of content, whether a post, a picture, a video, or a livestream, should be reviewed before it appears on the platform. That is the only way to make sure no posts violate the company’s policies and terms.
The company could take two steps in the direction of these proactive actions. The first would be to put in place an internal alert system. The company should not rely on users to alert it to inappropriate content; instead, it should have word alerts that trigger content to be sent to moderators. For example, if someone says the word “murder” or “AR-15” during a livestream, the content would immediately be sent to one of the many moderators the company is planning to hire, and that person could make a quick, sound judgment about what is being displayed and take the right actions to shut down the stream and disable the person’s account. I would rather the company err on the side of being too careful with its moderating than not careful enough, as it now seems to be. It would be worth it for the company to anger a few livestreamers by stopping their streams out of concern, rather than let a stream continue and potentially let someone be harmed in the process. The second step the company could take toward stopping crimes on its site would be to borrow a page from television broadcasting and put a delay on its livestreams. Even a delay of only a few seconds, as long as someone is moderating these livestreams, would in many cases let the intent of a crime be caught before the crime has happened, so that authorities could be informed. This could save lives, and I do not understand how profits could stop a company from doing whatever it can to save its own users’ lives.
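As an illustration of how the two proposals could work together, here is a minimal sketch of a word-alert filter sitting inside a short broadcast delay. The trigger list, five-second window, and escalation function are hypothetical choices made for this example only; a real system would need speech-to-text on the audio, a far larger and regularly reviewed word list, and trained human reviewers on the receiving end.

```python
import time
from collections import deque

ALERT_WORDS = {"murder", "ar-15"}   # hypothetical trigger list for this sketch
DELAY_SECONDS = 5                   # television-style broadcast delay

def escalate_to_moderator(segment: str) -> None:
    """Placeholder: hand the flagged segment to a human moderator,
    who can kill the stream before the buffered content ever airs."""
    print(f"MODERATOR ALERT: {segment!r}")

def relay(transcript_segments):
    """Hold each transcribed segment in a buffer for DELAY_SECONDS,
    alerting a human first whenever a trigger word appears."""
    buffer = deque()                # (release_time, segment) pairs
    for segment in transcript_segments:
        if any(word in segment.lower() for word in ALERT_WORDS):
            escalate_to_moderator(segment)
        buffer.append((time.monotonic() + DELAY_SECONDS, segment))
        # Air only segments whose delay window has fully elapsed.
        while buffer and time.monotonic() >= buffer[0][0]:
            yield buffer.popleft()[1]
```

The point of the delay is visible in the structure: the alert fires before the segment’s release time arrives, so a moderator gets a five-second head start on the audience.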
“Facebook is too immunized from competition to be left to adopt self-regulations. There is certainly room for some self-regulation in response to public pressure, but basically these companies are monopolies or near-monopolies in their areas of operation, and the only way to achieve desirable outcomes is through clear, effective regulation,” says Yochai Benkler, a Harvard law professor (Blumenthal, 2018).
In conclusion, if Facebook does not take more steps to address the situation it has created by broadcasting crimes, I think it should be held more accountable for the crimes shown on its platform. Zuckerberg should be more proactive and thorough in reviewing the content his company displays, and that can be done by hiring an Ethics Officer and an Oversight Committee, by working with the government to produce new regulations, and by relying more on real humans than on technology and algorithms to monitor the content displayed. Two safeguards the company could put in place to deter violence on its platform would be a word-alert system, covering posts, videos, and livestreams, that sends potentially harmful content directly to a human monitor, who could then make a quick and correct decision about whether the content violates policy thanks to the training received from the Ethics Officer; and a delay system for livestreams, so that no content goes out without being monitored to some degree. A five-second delay could be the difference between life and death in some cases. Two ways Facebook could encourage ethical use of its platform would be to follow through on producing new regulations with the help of the government, and to err on the side of safe rather than sorry when it comes to shutting down someone’s livestream or disabling their account. All of these actions could help stop crimes and save lives. This, above profit, should be worth it not only to Mark Zuckerberg and his company, but to everyone.
Resources
Blumenthal, P. (2018). Here’s How Facebook Could Be Regulated. Retrieved from Huffington Post: http://www.huffingtonpost.com/entry/facebook-regulation_us_59e5211ee4b0ca9f483a14bd
Crossley, P. (2017, May 16). When Facebook Livestreams a Murder, Who’s Responsible? Retrieved from ABC News: http://www.abc.net.au/news/2017-05-17/facebook-livestream-murder-suicide/8500586
Heider, D. (2017, January 9). Why Facebook Should Hire a Chief Ethicist. Retrieved from USA Today: http://www.usatoday.com/story/opinion/2017/01/08/facebook-ethics-fake-news-social-media-column/96212172/
Isaac, M. (2017, April 17). A Murder Posted on Facebook Prompts Outrage and Questions Over Responsibility. Retrieved from The New York Times: http://www.nytimes.com/2017/04/17/technology/facebook-live-murder-broadcast.html
Neidig, H. (2018, April 17). Investor Calls for Oversight Committee at Facebook. Retrieved from The Hill: http://thehill.com/policy/technology/383618-investor-calls-for-oversight-committee-at-facebook
Ridolfi, K. M. (2000). Law, Ethics, and the Good Samaritan: Should There Be a Duty to Rescue? Retrieved from Santa Clara Law: http://digitalcommons.law.scu.edu/facpubs/114