‘Escalation of Online Hate’

Social Media Platforms Have First Amendment Right to Moderate Content: ADL, AJC

The American Jewish Committee and the Anti-Defamation League, in separate amicus briefs Thursday before the U.S. Supreme Court, came to the defense of NetChoice and the Computer & Communications Industry Association in their efforts to defeat the Texas and Florida social media laws (dockets 22-277 and 22-555) on grounds that their content-moderation and other restrictions violate the First Amendment.

The Justice Department, in its own brief, agreed that Texas and Florida “have failed to justify the content-moderation requirements under any potentially applicable form of First Amendment scrutiny.” Thursday was the deadline for filing amicus briefs in the combined NetChoice and CCIA challenges to the Florida and Texas laws.

The deadline drew some three dozen amicus brief submissions, the vast majority in support of defeating the statutes. “The great diversity of filings illustrate that a broad range of individuals, businesses, advocates, and scholars recognize the threat to democracy if the government is allowed to mandate what speech is seen online,” said CCIA President Matt Schruers in a statement Friday. “We look forward to defending online speech when we argue the case next year.”

Private online services “have a First Amendment right to engage in content moderation,” choosing what types of speech to permit, or not permit, on their websites, said the AJC’s brief. Online speech isn’t “hermetically sealed off from the real world, and in some tragic cases, online hate contributes to horrific offline violence,” it said.

Individuals who want to “incite or celebrate” offline violence against faith and ethnic groups “often use private social media services to do so,” said the AJC’s brief. Research shows that hateful or violent online rhetoric “can encourage unstable and potentially violent individuals to act on their extreme views,” it said. “Mass casualty events” like the shootings at the Tree of Life Synagogue in Pittsburgh in 2018, mosques in Christchurch, New Zealand, in 2019 and a supermarket in a predominantly Black neighborhood of Buffalo in 2022 “were each fueled by hateful online content,” it said.

Recognizing the relationship between online hate and offline violence, “many online services have chosen to moderate third-party content to mitigate this danger,” said the AJC’s brief. Different platforms have taken different approaches to this problem, “with varying degrees of success,” AJC said. It’s “essential that they retain the freedom to do so, and to respond quickly, decisively and efficiently, before violent online rhetoric materializes in the real world,” said the brief. The Texas and Florida laws under review “would impede that freedom, with potentially grave real-world consequences,” it said.

The ADL believes the Texas and Florida statutes “unconstitutionally deprive social media platforms of the content-moderation tools they urgently need to help stop the proliferation of hate and harassment online,” said its brief, submitted in support of no party. Without such tools, social media platforms “will be ill-equipped to halt the aforementioned escalation of online hate into offline violence, posing a grave threat to the safety and well-being of all, especially the most vulnerable,” it said.

The ADL asks SCOTUS to “make clear” that states, through the type of content-moderation restrictions at issue in the Texas and Florida laws, can’t “prohibit social media companies from taking the steps necessary to promote user safety and mitigate the risk of violence stemming from online hate and radicalization,” said its brief. The laws “strike at the heart of First Amendment freedoms” that SCOTUS “has long guaranteed,” it said.

The social media companies that belong to NetChoice “have chosen not to platform hateful, harassing speech that dehumanizes its targets,” said ADL’s brief. Under the Constitution, Florida and Texas can’t force these social media companies “to make a different choice,” it said: “Private actors choose what messages they wish to disseminate, as well as what messages they do not. The government cannot make that choice for them.”

A “contrary holding” by SCOTUS “would invite dire consequences,” said ADL’s brief. A well-established and growing body of research links “online hate to offline violence, both at the individual and group levels,” it said. Disabling social media platforms from combating and containing online hate and harassment “is certain to increase the amount of offline violence perpetrated against members of minority and marginalized communities,” it said.

Even hate “that stays online” can cause harm, said ADL’s brief. Americans “face exclusion” every day from online spaces “merely for existing as ethnic, religious, or other minorities, deeply chilling their participation in public discourse and depriving them of full citizenship,” it said. Platforms that choose to let online hate spread can also face significant commercial consequences, “as advertisers flee from spaces in which their brands appear side by side with Nazi iconography, for example,” it said.

While governments “can and should do more” to hold social media platforms accountable for “breeding hate and violence,” the Texas and Florida laws “strike precisely the wrong balance,” said ADL’s brief. They both “go well beyond what the Constitution permits,” it said.

Texas and Florida assert that the content-moderation requirements in their social media laws “serve an interest in ensuring that the public has access to diverse sources of information,” said DOJ’s brief. But SCOTUS has repeatedly rejected the suggestion that the government “has a valid interest in increasing the diversity of views presented by a particular private speaker,” even if that speaker “controls a powerful or dominant platform,” it said.

Because the only interest the states have asserted here is a “bare desire” to change the way private social media platforms are exercising their editorial discretion, SCOTUS “need not consider how the First Amendment might apply to different regulations justified by different interests,” said DOJ’s brief. The laws’ “individualized-explanation” requirements also violate the First Amendment because they impose “unjustified burdens on the platforms’ expressive activity,” it said.

Those requirements compel platforms to provide an individualized explanation “each time they choose to remove or otherwise moderate user content,” said DOJ’s brief. The requirements can’t withstand “even deferential scrutiny” because they impose a penalty “in the form of administrative costs and potential liability each time a platform engages in a form of expressive activity,” it said. Given the “millions of moderation decisions” the platforms make each day, that burden is “substantial and likely to chill protected activity,” it said.