How Do Different Social Media Platforms Interpret First Amendment Rights?

Ever since social media first appeared, there has been an ongoing debate about how each company should interpret the idea of free speech. Because free speech abuse is legally actionable but difficult to define, there have been multiple lawsuits against social media companies for crossing the lines of propriety. Companies have adjusted their policies over time, but concerns remain about how to define policies that allow for freedom of expression without offending people or encouraging wrongful behavior.


This article will examine aspects of different platforms’ policies related to free speech, including their treatment of hate speech, obscenity, misinformation, and harassment. First, though, we will provide a general overview of the subject and briefly describe the recent history of policy creation in this area. This will provide a basis for comparing the various companies’ interpretations.

Overview


Two amendments to the US Code since the creation of the Internet have helped to protect social media companies from liability. In 1996, Congress passed the Communications Decency Act, whose Section 230 amended Title 47 of the US Code. Section 230 states that providers of interactive computer services are not to be treated as the publishers of the content that their users post, and it shields them from liability for good-faith efforts to remove or restrict access to objectionable material.

The other amendment that has enabled social media companies to largely avoid liability for hosting inappropriate content is the Digital Millennium Copyright Act of 1998. This Act amended Title 17 of the US Code to limit the liability of online service providers with regard to copyright infringement.

Despite these protections, Congress and other political leaders have repeatedly argued that social media companies abuse their privileges and that Section 230 should be either abolished or changed. They claim that the protections were written at a time when no one could foresee how the online world would change and when the available technologies were far more primitive than they are today.

Many people have argued that social media companies bear a large degree of responsibility for events such as the January 6 Capitol riot, other terrorist activity, and the sale of illicit goods, among other things.

We will now examine each platform’s policies with regard to specific aspects of free speech abuse and see how they compare to one another. In each case, we will give examples of cases brought against one or more of the companies for violating legal norms related to the standard in question.

Hate Speech Policies


Hate speech has historically been a difficult thing to take action against. In 2017, the Supreme Court decided a case that many believe to be a definitive ruling on the subject.

The case of Matal v. Tam involved Simon Tam, the founder of a band known as “The Slants,” who tried several times in vain to register the band’s name as a trademark, with the US Patent and Trademark Office claiming that the name was disparaging to people of Asian descent.

Through a series of appeals, Tam’s case eventually made its way to the Supreme Court, which unanimously agreed that the federal law prohibiting trademarks that disparage others is unconstitutional. The logical extension of this ruling is that the government may not discriminate against speech on the basis of the speaker’s viewpoint.

For this reason, hate speech is a particularly difficult subject to manage.

Social media policies define it in the following ways:

Twitter

Twitter’s policy says that “You may not promote violence against, threaten, or harass other people on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.”

Facebook

Facebook’s hate speech policy states the following: “We believe that people use their voice and connect more freely when they don’t feel attacked on the basis of who they are. That is why we don’t allow hate speech on Facebook. It creates an environment of intimidation and exclusion, and in some cases may promote offline violence.” However, Facebook does allow users to share other people’s hate speech in order to condemn it or raise awareness, as well as related humor and social commentary.

Instagram

Instagram has arguably the greatest degree of qualification in its stated policy against hate speech. The Instagram policy states that “We remove content that contains credible threats or hate speech…” but also states that “We do generally allow stronger conversation around people who are featured in the news or have a large public audience due to their profession or chosen activities.”

In other words, Instagram considers people with a large public profile to be far enough removed from ordinary users that comments directed at them are treated more as public commentary than as hate speech that could cause genuine harm.

Despite these policies, there have been a number of cases in which the companies have been accused of applying them inconsistently. In a controversial decision, Facebook and Instagram allowed posts urging violence against Russians following the beginning of the conflict in Ukraine in February 2022.

Obscenity Policies

Obscenity, similarly, is not protected under the First Amendment right to free speech. Although there is no strict definition of obscenity in US law, courts use a three-part test to determine whether material can be considered obscene. The test is often referred to as the Miller test, taking its name from the 1973 case of Miller v. California, in which the Supreme Court established the standard.

The components of the Miller test include the following:

  1. Determination that the average person would think that the work on the whole appeals to the prurient interest, according to “contemporary community standards.”
  2. Determination that the work describes sexual activity or excretory functions in a patently offensive way according to state law.
  3. Determination that the work as a whole lacks serious literary, artistic, political, or scientific value.

According to the Department of Justice, a work may be determined to be obscene only if it meets all three of these criteria.
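To make the conjunctive structure of the test concrete, here is a minimal illustrative sketch in Python. It is purely a model of the logic described above, not a legal tool; the three predicate names are hypothetical, and in reality each prong is a judgment made by a court, not a computation.

    # Illustrative sketch only: each prong of the Miller test is a legal
    # judgment made by a court, modeled here as a hypothetical boolean input.
    def is_obscene_under_miller(appeals_to_prurient_interest: bool,
                                patently_offensive_under_state_law: bool,
                                lacks_serious_value: bool) -> bool:
        # A work is obscene only if ALL three prongs are satisfied;
        # failing any single prong means the work is not legally obscene.
        return (appeals_to_prurient_interest
                and patently_offensive_under_state_law
                and lacks_serious_value)

For example, a work that appeals to the prurient interest and is patently offensive but has serious literary value fails the third prong, so the sketch returns False.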

Social media companies’ policies regarding obscenity are as follows:

Twitter


Twitter’s policy bans pornographic or excessively violent images in profiles and headers. However, it states that “you can share graphic violence and consensually produced adult content within your Tweets, provided that you mark this media as sensitive.”

Facebook

Facebook’s policy is slightly different from Twitter’s, but it also primarily bans graphic images. Facebook bans pornography and images of genitalia, although it states that it will “sometimes” allow such images for educational, humorous, or satirical purposes. Images of paintings are acceptable.

Instagram

Instagram’s policy is the most definitive of the three. No nudity is allowed, with exceptions for images of paintings and images of women breastfeeding.

Beyond these policies, federal law makes it illegal to distribute, transport, sell, ship, mail, produce with intent to distribute or sell, or engage in the business of selling or transferring obscene matter. A number of cases have therefore been brought against social media companies for violating these norms.

For example, in a 2021 case, the Texas Supreme Court ruled that Section 230 does not shield Facebook from claims that it facilitated sex trafficking recruitment on its platform.

Misinformation Policies

18 US Code Section 1038 makes it a crime to convey false or misleading information or to perpetrate hoaxes. Nonetheless, as it is often difficult to determine what counts as misleading information, there is continued dispute about what may and may not be allowed on social media. In addition, Section 230 generally shields the companies from liability.

Social media companies define misinformation in the following ways:

Twitter

Twitter’s misinformation policy states the following: “We define misleading content (‘misinformation’) as claims that have been confirmed to be false by external, subject-matter experts or include information that is shared in a deceptive or confusing manner.”

When Elon Musk took over the company in late 2022, many people heavily criticized it for effectively eliminating its misinformation policy, particularly with regard to policing Covid misinformation. After taking charge, Musk restored over 60,000 accounts that had previously been suspended for misinformation violations. Although the policy still exists in writing, many believe that it has only nominal value and is rarely enforced.

Facebook

Facebook defines misinformation in the following way: “We define misinformation as content with a claim that is determined to be false by an authoritative third party.” In this case, too, it is unclear what the company means by “authoritative third party,” and the policy is left open to interpretation.

Instagram

Instagram’s policy states that “We want you to trust what you see on Instagram…. In May of [2019], we began working with third-party fact-checkers in the US to help identify, review, and label false information.”

Covid has been a particularly sensitive subject worldwide, and in addition to Twitter, other social media platforms have come under increasing scrutiny. In what many people believe to be a major blow to science, a recent ruling about Covid-related disinformation has incensed health professionals.

In June of this year, a Federal judge in Louisiana issued a preliminary injunction barring a number of federal agencies from certain kinds of interactions with social media companies. This means that the government can no longer pressure the companies over Covid-related content, leaving the platforms essentially free to host whatever information users post about it. Opinion is sharply divided on the correctness of the ruling, with many people believing that it will lead to the further spread of false information about the disease (and others in the future) and greater health risks for the public.

Harassment Policies

18 U.S. Code Section 2261A prohibits stalking, including online actions intended to harass, injure, harm, or intimidate people. Nonetheless, what constitutes these behaviors is often not entirely clear.

Social media platforms take the following stances on the issue of harassment:

Twitter

Twitter’s harassment policy states that “You may not share abusive content, harass someone, or encourage other people to do so…we prohibit behavior and content that harasses, shames, or degrades others.” Twitter lays out categories describing ways in which harassment can occur, including “targeted harassment” and “encouraging or calling on others to harass an individual or group of people.”

Facebook

Facebook’s (now Meta’s) harassment policy states the following: “Bullying and harassment happen in many places and come in many different forms…. We do not tolerate this kind of behavior because it prevents people from feeling safe and respected on Facebook.” However, it also states that “We distinguish between public figures and private individuals because we want to allow discussion…” In this way, the company allows for some degree of subjective decision making with regard to what constitutes harassment.

Instagram

Instagram’s policy encourages users to block people whom they believe are harassing them: “If an account is established with the intent of bullying or harassing another person…please report it…. Once you’ve reported the abuse, consider blocking the person.”

People and groups have often taken advantage of gray areas in these policies and successfully used the platforms as tools to harass and threaten others. The landmark 2019 case of Force v. Facebook determined that Section 230 bars civil terrorism claims against social media companies and Internet service providers. The claim in the case was that Hamas had used the platform as a means of encouraging terrorist attacks on Israel.

The rationale behind the ruling was that the “recommender system” through which Facebook operates rightfully falls within the role of a distributor, not that of a publisher. Therefore, Facebook (and by extension, other social media companies) cannot be considered content creators with regard to the spread of terroristic messages.

There was an appeal to the Supreme Court in 2020, but the Court declined to hear it.

Conclusion

These debates are likely to continue for as long as social media companies operate. Policies will likely shift in whatever ways executives feel are necessary to appease their critics. However, controversial cases will continue to arise both nationally and internationally, as these topics are too subjective to be settled definitively.
