To what extent may social media platforms censor content? And what legal authority does the government have to regulate a company’s censorship of speech? Until recently, social media giants seemed to hold an unfettered ability to censor users. But perhaps not anymore!
Social media has become a major platform for speech and source of information in today’s society. With the tremendous rise of cancel culture, however, the companies that run these platforms have increasingly suppressed discussion of certain subjects with impunity.
Two states—Florida and Texas—have pushed back against social media censorship with laws specifically targeted at giants like Twitter, Facebook, Instagram, and YouTube. Given that social media is not bounded by state borders, these laws carry nationwide impact on such censorship policies.
Some social media platforms swiftly challenged the states’ authority to enact these laws. One federal appellate court struck down the Florida law, while another upheld Texas’s requirements in an extensively reasoned opinion that drew key distinctions from the Florida law.[1] The resulting circuit split dramatically increases the likelihood that the United States Supreme Court will take up the issue of social media censorship regulation, a question of intense interest to nonprofits, other organizations, and individuals alike.
Florida and Texas Censorship Laws
Generally, nonprofits and other users of social media have little to no recourse if YouTube, Facebook, or Twitter removes their content. They may be able to reach out to the corporation for further review, but this usually doesn’t result in a change or better understanding of the reason for the removal. The platform’s explanation of a post’s removal usually includes only a vague reference to a less-than-helpful policy. Users may experience Facebook or Twitter “jail” with little understanding of the charges against them and no assurances of due process.
It is true that these social media platforms should remove content that promotes violence and criminal activity. But many believe these platforms step beyond this reasonable boundary and suppress speech based on certain user viewpoints deemed as “offensive” or contrary to the platform’s viewpoints, around which there is compelling debate.
Florida’s Requirements
Florida’s law attempts to prevent social media’s viewpoint discrimination by combating censorship, including the deletion, alteration, or addition of an addendum to content and the suspension, deletion, or ban of users’ ability to publish content on the platform.[2] The law requires platforms to publish the standards, including detailed definitions, that they use to determine whether content is censored. The goal of this requirement is to provide users with a better understanding of what may and may not be said. Under Florida’s law, Facebook, YouTube, and Twitter will no longer be able to hide behind vague censorship policies. Further, the platforms must notify users any time these policies change and cannot change the policy more than once every thirty days.
Florida’s law also allows users to escape the algorithm that determines which posts they see by choosing to view posts sequentially or chronologically. The law additionally contains provisions that protect political candidates and news agencies from being censored or otherwise limited by the algorithm.
If a platform engages in viewpoint discrimination by inconsistently applying its standards or failing to properly notify a user their content was censored, the user may sue the platform for monetary damages and to prevent the platform from continuing its discriminatory actions.
Texas’s Requirements
Texas took a slightly different approach to combating viewpoint discrimination. Like Florida, the Texas law requires social media platforms to publish acceptable use policies informing users what is and is not allowed to be posted, so they are no longer subject to vague criteria.[3] The platform must additionally notify the user of the detailed reasons for the removal of any content and allow for an appeal. The platform must also explain how it enforces the policy to ensure the fair treatment of all content.
Notably, the Texas law requires the platform to publish a report detailing the number of times the platform censored content or took action against an account. This report will help promote accountability for the censorship choices made by social media platforms.
The only party eligible to enforce this law is the Texas attorney general, who may sue a platform to block it from further violating these requirements. Monetary damages are not available. This distinction from the Florida law proved quite important for the court evaluating the Texas law.
May the Government Get Involved?
Historically, three main legal considerations have prevented government from regulating social media censorship. First, the First Amendment’s freedom of speech applies to government action limiting speech, not the action of private speakers. Second, the First Amendment’s freedom of speech has been viewed as protecting the editorial choice of social media platforms. Third, lawmakers and courts have been hesitant about whether social media platforms qualify as “common carriers,” a legal classification that allows the government to impose anti-discrimination regulations.
Government v. Private Action
The First Amendment only protects against the government’s infringement on speech. It does nothing to protect against the limitation of speech by private actors like social media corporations. Any argument that claims the First Amendment protects an individual’s posts from being taken down is thus legally incorrect. Consequently, a nonprofit cannot sue Facebook claiming a freedom of speech violation if Facebook takes down the nonprofit’s post.
However, if the government steps in and regulates what Facebook, Twitter, or YouTube can or must say, these platforms can challenge these regulations using the First Amendment protections of speech. The government has avoided implementing regulations on content moderation policies because the policies may be viewed as speech.
Editorial Choice v. Unprotected Actions
Are the platforms engaged in speech when they determine whether specific content can be posted? The platform companies argue they are “speaking” in doing so. They liken their content review process to a newspaper’s “editorial choice,” which has repeatedly received speech protection from the Supreme Court.
The Supreme Court has been hesitant to require newspapers to publish pieces, choosing instead to view the newspaper’s “editorial choice” as protected speech under the First Amendment. Indeed, the Court struck down a law requiring a newspaper that criticized a political candidate to offer the candidate equal space in the paper to publish a reply. This requirement penalized the newspaper in the cost of printing and materials and the loss of space available for what the newspaper wanted to publish.[4] Similarly, the Court invalidated a requirement that a utility company allow a third party representing ratepayers to publish in space usually reserved for the utility’s newsletter. The requirement prevented the utility from using the space for its own speech and forced it to potentially publish speech with which it disagreed.[5] These and other cases illustrate a publisher’s freedom of “editorial choice” when speaking.
In addressing the Florida law, the 11th Circuit Court of Appeals determined that the platforms are constantly making editorial choices about what content to publish to their users based on their own views about what is valuable and appropriate. The appellate court identified YouTube’s message as creating a “welcoming community of viewers” and pointed to similar purposes for the moderation policies of the other platforms. This message, according to the 11th Circuit, is protected speech, and the platforms’ actions to tailor content to their standards are protected editorial choice. Florida’s attempt to regulate this editorial choice therefore violated the platforms’ freedom of speech.
Addressing the Texas law, and ruling later, the 5th Circuit Court of Appeals came to the opposite conclusion. The court determined that social media censorship is not editorial choice because the platforms are not editors but rather distributors of the speech of the broader public. In so ruling, the court recognized that platforms are different from newspapers. They do not curate content to convey a particular message; they are simply the distribution channel for the content of others. Additionally, platforms have very little connection to the message of any particular post. The average person is not likely to attribute the speech of a user to Twitter or Facebook, so a platform does not have to speak out against content with which it disagrees. The platform may choose to speak out or it may choose not to; the state regulations do not require the platform to speak. Finally, the court reasoned that platforms are not like newspapers because they can distribute an infinite amount of content online. They are not limited by the size of a newspaper or the space in a billing envelope.
While a broad definition of speech is generally preferred, the 11th Circuit’s characterization of creating a welcoming community as speech appears to be something of a stretch. The court provides a lengthy discussion of prior Supreme Court cases, but its application of these cases to modern-day social media is brief and almost conclusory. On the other hand, the 5th Circuit saw through this lack of applicability and gave convincing reasons, including quotes from social media executives, to demonstrate why social media is not an editor like a newspaper and is not engaged in speech when it distributes other people’s content. The 5th Circuit also found it quite notable that only injunctive relief is available under the Texas law, alleviating any concern that the regulation would “chill” speech, as it might if damages were available.
Common Carrier
“Common carrier” refers to a class of businesses, such as transportation or communication companies, that must offer their services without discrimination. This doctrine has its origins with ferry and wharf operators in England during the 1400s. By the founding of the United States, the common carrier doctrine had been applied to stagecoaches, barges, gristmills, and innkeepers. All of these industries were required to offer their services to all customers without charging discriminatory rates. During the 1800s, the doctrine was commonly applied to railroad companies, and during the following century to telegraph, telephone, and other communication corporations.
To combat unfair practices, states often regulate common carriers, imposing restrictions and requirements to ensure equal treatment for all customers. If social media platforms are common carriers, then states may enact restrictions like the ones in Florida and Texas to prevent viewpoint discrimination.
In addressing the Florida and Texas laws, the two appellate courts disagreed as to the limits of the common carrier doctrine. Both courts appeared to agree that states may regulate private companies as common carriers if the companies hold themselves out as serving the general public and their services are affected with a public interest. However, their differing findings on whether platforms are editors or distributors led the courts to differing conclusions. The 5th Circuit’s conclusion that platforms are distributors naturally led the court to find that platforms are similar to ferries, stagecoaches, and communication operations that merely transport someone or some message to another point. The 11th Circuit’s conclusion that platforms are editors naturally led it to conclude that platforms do not hold themselves out to transport any message from the general public.
What’s the Ultimate Legal Answer?
Will the Supreme Court take on these questions to provide definitive, controlling answers about whether social media platforms are editors or distributors of speech? The 5th Circuit’s reasoning that platforms are distributors of content rather than editors appears to correct the weaknesses in the 11th Circuit opinion, and that rationale may prove persuasive if and when the Supreme Court decides the issue.
Meanwhile, pending the high court’s response, both the Florida and Texas laws are judicially blocked from being enforced. Twitter and the other social media corporations thus remain free, at least for now, to continue censoring content with little explanation or due process.
[1] NetChoice, LLC v. Attorney General, State of Florida, 34 F.4th 1196 (11th Cir. 2022); NetChoice, LLC v. Paxton, 49 F.4th 439 (5th Cir. 2022).
[2] Florida S.B. 7072 (2021).
[3] Texas H.B. 20 (2021).
[4] Miami Herald Publishing Co. v. Tornillo, 418 U.S. 241 (1974).
[5] Pacific Gas & Electric Co. v. Public Utilities Commission of California, 475 U.S. 1 (1986).