Deplatformed: Social Media Censorship and the First Amendment

Host Ken White looks at the legal bases of the arguments made by critics of social media sites' moderation policies and shows why Twitter, Facebook, and YouTube bans are legally protected.

Featured Guest

Eric Goldman

Eric Goldman is Associate Dean for Research, Professor of Law, and Co-Director of the High Tech Law Institute at Santa Clara University School of Law.

Your Host

Ken White

Ken White is a First Amendment litigator and criminal defense attorney at Brown White & Osborn LLP.

This Episode
Published: August 28, 2019
Podcast: Make No Law: The First Amendment Podcast
Category: News & Current Events
Episode Notes

Politically conservative voices have been arguing recently that social media outlets such as Twitter, Facebook, and YouTube have been illegally censoring their views. They claim that, as a result of their political leanings, they are being “deplatformed”, that is, having their accounts suspended or removed. These allegations have led to congressional hearings, complaints from the President, and claims that these platforms are a serious threat to Americans’ freedom of speech. Critics and pundits argue that Twitter bans and videos pulled from YouTube amount to censorship and nothing less than an unconstitutional abridgement of their First Amendment rights. Are they right? Is there something to the argument that these services serve as the modern-day “public forum” and are therefore required to be neutral?

In this episode of Make No Law: The First Amendment Podcast from Popehat.com, host Ken White reviews the common arguments made by critics of these moderation policies by highlighting the legal foundations on which they’re made: the First Amendment right to free speech; Section 230 of the Communications Decency Act; and anti-discrimination law. With the help of professor Eric Goldman, Ken pulls these arguments apart, demonstrating that these companies are not breaking the law when they ban, block, or demonetize an individual due to their political beliefs.

Eric Goldman is a professor at Santa Clara University School of Law, where he teaches, among other subjects, Internet Law. He is also the author of the Technology and Marketing Law Blog.

Transcript

Make No Law: The First Amendment Podcast

Deplatformed: Social Media Censorship and the First Amendment

08/28/2019

Ken White: Around ten years ago I was expelled from an online etiquette forum. It was really the best thing for everyone involved. I thought that discussing etiquette included pointing out how we can fall short of it. They thought that was uncouth and so we went our separate ways.

I did find it ironic. I also remember being mildly annoyed, but what I did not feel was that in any sense my legal rights had been violated. After all, the forum was run by a private group of people, not by the government. They had every right to boot anyone they wanted from their website. That used to be our common understanding of the Internet. You don’t have a protected legal right to be on somebody’s private website.

But the last few years have seen more vigorous banning and other moderation on huge popular social media sites like Facebook and Twitter and YouTube. And with that heavier moderation has come more complaints. People claim that the big social media sites are unfair, haphazard, and politically biased in the moderation choices they make. They claim that they are singled out for conservative thought and that their speech is suppressed because the sites disagree with it.

Increasingly they claim that social media sites are violating their legal rights. These critics are encouraged in that view by politicians and commentators who assert that social media sites are somehow breaking the law. Are they right? Is there anything to this notion that if Facebook or Twitter banned me that they are violating my rights under the Constitution or under Federal Law? No.

Male Speaker: No. No. No. No, God please no. No.

Ken White: I am Ken White and this is Make No Law: The First Amendment Podcast from popehat.com, brought to you on the Legal Talk Network.

This is Episode 11, Deplatformed.

When social media critics argue explicitly or implicitly that Internet sites are violating their legal rights by banning them, they are usually relying on one of three legal concepts.

The First Amendment right to free speech; a federal law called Section 230 that protects websites from defamation suits; and anti-discrimination law.

Let’s talk about each of those. Spoiler alert, they do not stop Twitter from banning you.

First, let’s talk about the First Amendment, after all it’s the subject of this podcast. The First Amendment stops the government from punishing your speech. It prohibits laws that limit your speech, but it doesn’t stop private individuals from excluding you based on your speech. That’s a basic constitutional doctrine called State Action. It’s the notion that the Constitution only limits things the government does, not things that private individuals do.

Now, critics of social media have an answer for this. They point to some older cases suggesting that sometimes a private entity can look so much like the government or private property can look so much like public property that the First Amendment governs how it can be operated.

They point to a 1946 Supreme Court case called Marsh v. Alabama.

Grace Marsh was a Jehovah’s Witness. You may remember from our first episode about Fighting Words that the 1940s were a dangerous time for Jehovah’s Witnesses. They were very unpopular and frequently subjected to abuse and persecution in the United States.

Grace Marsh went to Chickasaw, Alabama to hand out Jehovah’s Witnesses literature and to preach. She stood on the sidewalk outside the town’s post office and tried to distribute her religious leaflets, but she was arrested and prosecuted for trespassing, because Chickasaw, Alabama wasn’t a normal town; it was a company town. Everything in Chickasaw, Alabama, from the houses to the streets and the sidewalks and the stores was owned by the Gulf Shipbuilding Corporation.

Grace Marsh argued that she had a free speech right to hand out leaflets peacefully on a sidewalk outside a post office, that this was classic First Amendment activity. The Town of Chickasaw argued that she was on private property, that even though it looked like a town and even though it had sidewalks and a post office like a town, it was really all the private property of Gulf Shipbuilding Corporation.

The Supreme Court agreed with Grace Marsh. The court said that the town’s apparently public spaces were accessible to everyone and that there was nothing evident that distinguished it from any other town.

Here is what Justice Black said writing for the court:

Justice Hugo Black: Since these facilities are built and operated primarily to benefit the public and since their operation is essentially a public function, it is subject to state regulation.

Ken White: So as you can imagine, people who are vexed at being kicked off of Facebook or Twitter have been citing Marsh v. Alabama. They argue that social media sites are like a company town, that they are thrown open for everyone, like a public forum, and so should be treated like one legally, but the problem with that argument is that the Supreme Court has been steadily retreating from Marsh v. Alabama since 1946.

Lawyers call this limiting a case to its facts. It’s when a court has conveyed that it would only decide a case the same way again if it was presented with exactly the same set of facts and that it’s not going to extend the rule from that case.

Instead, the Supreme Court has repeatedly ruled that a private entity only engages in State Action and therefore is only subject to the First Amendment when it performs a traditional exclusive public function.

The United States Supreme Court put the final nail in the coffin of any argument that the First Amendment binds social media companies in June of 2019 in a case called Manhattan Community Access Corporation v. Halleck. That case was about a public access cable channel that the government handed over to a private nonprofit company to operate. The question was did the First Amendment control how that private nonprofit company ran that public access channel.

The Supreme Court said no, and in doing so it reiterated that private entities only become state actors bound by the First Amendment when they perform traditional government functions. The court dismissed the precedent of Marsh v. Alabama, saying that Marsh involved a situation where a private actor was doing a classic government function, running a whole town government. But creating a public forum for speech, the court said, is not a classic government function, and therefore doing it doesn’t make somebody a state actor.

Here is Justice Kavanaugh writing for the court.

Justice Brett Kavanaugh: Providing some kind of forum for speech is not an activity that only governmental entities have traditionally performed; therefore, a private entity who provides a forum for speech is not transformed by that fact alone into a state actor. After all, private property owners and private lessees often open their property for speech. Grocery stores put up community bulletin boards, comedy clubs host open mic nights.

Ken White: As Judge Jacobs persuasively explained, it is not at all a near-exclusive function of the state to provide the forums for public expression, politics, information, or entertainment. That’s about as clear a signal as you can get from the Supreme Court that Twitter and Facebook and YouTube are not governed by the First Amendment, no matter how big they are and no matter how much people say that they are the modern town square.

Eric Goldman is a professor at Santa Clara University School of Law and the author of the Technology and Marketing Law Blog, which is absolutely indispensable if you are interested in following legal developments about the law of the Internet and the fortunes of people trying to sue websites. I asked him about the prospects for suing social media under the First Amendment at this point in light of these cases.

Eric Goldman: It’s been clear for literally decades that private actors are not state actors just because they are doing services over the Internet, and we got a reaffirmation of that principle from last week’s Halleck decision that really emphasized that private publishers are not state actors and any lawsuits based on State Action principles are going to fail.

Ken White: Now, Halleck was a 5-4 decision, which is pretty divided, but crucially, even the four-justice dissent in Halleck made it very clear that you are not a state actor under the First Amendment just because you open up a forum for public discussion.

Here is Eric Goldman again.

Eric Goldman: In fact, the dissent goes out of its way to say that if the facts were as the majority described it, then they would be on board with the majority’s opinion. So in terms of the basic principle that private publishers aren’t state actors, it’s possible that we could read it as a 9-0 opinion.

Ken White: So that’s why the First Amendment doesn’t protect you from being banned from Twitter.

By the way, sometimes you hear arguments that State Constitutions protect you from social media sites banning you. The most common reference you will hear is to the California Constitution and to a 1979 case called Pruneyard Shopping Center v. Robins.

In that case the California Supreme Court found that the State Constitution protected the right to collect signatures on petitions at a privately owned mall. The court said that under the California Constitution the mall had thrown its doors open to create a space where people congregated and socialized and that therefore was bound by the State Constitution’s Free Speech Clause.

But just like Marsh v. Alabama, the courts have been steadily retreating from Pruneyard for the last 40 years, constantly narrowing it and adding exception after exception. Now, like Marsh, it’s been limited to its own facts and on several occasions the California Supreme Court has come within one vote of overturning it entirely. There is absolutely no indication that it can be extended to social media sites.

The next common argument you hear from critics of social media is that federal law requires them to be neutral in moderating content on their sites. This theory has some powerful supporters, including United States Senator Ted Cruz. Here is Cruz grilling Facebook’s Mark Zuckerberg at a congressional hearing.

Mr. Chairman: Senator Cruz.

Ted Cruz: Thank you Mr. Chairman. Mr. Zuckerberg, welcome, thank you for being here. Mr. Zuckerberg, does Facebook consider itself a neutral public forum?

Mark Zuckerberg: Senator, we consider ourselves to be a platform for all ideas.

Ted Cruz: Let me ask the question again. Does Facebook consider itself to be a neutral public forum? Representatives of your company have given conflicting answers on this. Are you a First Amendment speaker expressing your views, or are you a neutral public forum allowing everyone to speak?

Mark Zuckerberg: Senator, here is how we think about this.

Ken White: What is Ted Cruz talking about? He is referring to a law called Section 230 of the Communications Decency Act of 1996. In 1996, in the infancy of the Internet, Congress and the courts were still grappling with how to handle content on websites. They were trying to decide how much the federal government should get involved in regulating obscene or abusive content online and they were trying to sort out who is responsible when an Internet user posts something defamatory on a website.

Now, if the New York Times runs a column falsely claiming someone was convicted of a crime, it’s clear the New York Times is on the hook for that defamation claim. But if someone posted that claim on some message board, on some primitive Internet site, like AOL or Prodigy, would they be legally responsible for the comment?

Congress saw here, in one of its very rare moments of technical competence, that it would be impossible for the Internet to thrive and grow if every website was responsible for every comment left by every random person.

The New York Times can carefully review 100 news stories a day, but AOL can’t possibly scrutinize a million comments a day.

Before Section 230, AOL’s choice was not to police those comments and therefore maybe incur huge liability or to shut off comments entirely. Congress was also concerned with obscenity and pornography. Part of the Communications Decency Act of 1996 dealt with obscene materials on the Internet and with what it called indecent materials made available to children.

Most of that part of the law was struck down by the courts under the First Amendment. But a significant portion of what Congress wanted to achieve was to encourage websites to do their own policing of content and to make tools available to parents to use to restrict their children’s access to adult content.

So, to address these issues, Congress passed Section 230. It’s been called the law that created the Internet as we know it, and it does two things. First, in subsection (c)(1) of the statute, it protects websites and web users from liability for things that other people post.

Here’s the language. ‘No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider’. Now what does that mean?

It means that if I post something on Twitter or Facebook, no one can sue Twitter or Facebook as if they had posted that thing. So if I falsely accuse someone of being a convicted felon in a tweet, I’m legally responsible, not Twitter. That is huge. It makes social media and the broader Internet possible.

Twitter has about 500 million tweets posted every day. Twitter can’t possibly police them all to make sure they’re not defamatory or illegal. If they had to do that they’d shut down. Section 230 says they don’t have to.

Here is the second crucial part of Section 230, subsection (c)(2). It protects websites and users when they moderate content. Here’s the language. ‘No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.’ What does that mean?

It means what it says, that a website can delete your post or ban you if it thinks you’ve posted something lewd, harassing or objectionable and you can’t sue them for it. Once again, this is essential to the operation of the Internet.

If every moderator’s decision could trigger a lawsuit, and if websites had to justify every post deletion or ban in court, then they would not be able to moderate at all, and every site would be an unmoderated sewer, flooded with the lowest common denominator of porn, threats, and trolls.

Now, let’s go back to Senator Cruz and his questions to Mark Zuckerberg.

Ted Cruz: It’s just a simple question. The predicate for Section 230 immunity under the CDA is that you are a neutral public forum. Do you consider yourself a neutral public forum? Are you engaged in political speech, which is your right under the First Amendment?

Ken White: If you listen to me read Section 230 to you, right now you’re saying, wait a minute, I didn’t hear anything requiring the sites to be neutral public forums, and you’re right. It says nothing of the kind. Nothing in Section 230 requires a website to be neutral or public in any way to get the benefits of the statute.

Do you want to run a private forum that only members of your club can use? It’s protected by Section 230. Do you want to run a site for Christians or furries or Republicans or Mets fans that only allows comments supporting your views? Section 230 protects you.

Section 230 was not passed to require sites to be neutral. To the contrary, it was passed to give websites freedom to moderate vigorously, under the theory that this was better than government intervention. And yet this idea that Section 230 requires sites to moderate in a neutral fashion has become very popular, even though it has no basis in reality or law.

Here’s Eric Goldman again.

Eric Goldman: Section 230 was designed to preserve online publishers’ curation and editorial freedom. It covers the same ground as the First Amendment, but by doing so statutorily, it provides defendants with a number of benefits, both substantive and procedural.

So anything that involves a claim over an online service’s editorial curation decisions about third-party content should be within Section 230’s wheelhouse.

Ken White: To put it another way, the decision not to allow something on your site is just as much a part of a publisher’s freedom as the decision to have it on your site.

Eric Goldman: It’s censorship to tell people what they can’t publish, and it’s censorship to tell people what they must publish. Those are two different sides of the same coin; they’re both ways of curbing editorial freedom. So anything that curbs editorial freedom, in my mind, is censorship. Trying to force services to be neutral, which means subverting their editorial judgment, is a form of censorship.

Ken White: If you’ve listened to politicians and commentators talk about Section 230 or about websites’ freedom to moderate, you’ve probably heard them use the terms “publisher” and “platform”. Social media, people like Ted Cruz say, has to decide which it is, a platform or a publisher. Here’s the problem: that’s not a thing. Publisher versus platform is not a thing.

Eric Goldman: I personally wish I could retire the term “platform”; I find it unhelpful to any discussion here. In the end we’re always talking about publishing third-party content. That’s what social media providers do, that’s what online services do: they publish third-party content, and so they’re publishers. And any effort to introduce other terminology like “platform” is usually a way to try to get around the obvious limits on restricting publication decisions.

“Well, we’re not restricting the publication decision, we’re restricting their behavior as a platform.” That semantic move is a way of basically advancing a pro-censorship agenda. So let’s make a pact between you and me and all your listeners: we’re never going to use the term “platform” again; we’ll only talk about publishing, if that’s what’s being done.

Ken White: The third and final argument people make for the claim that social media sites violate our rights by banning us is based on anti-discrimination law. This argument fails just like the others. The Constitution does prohibit the government from some forms of discrimination. Specifically, the Equal Protection Clause of the 14th Amendment limits the government’s ability to discriminate based on impermissible factors like race or religion or gender. But just like the First Amendment, the 14th Amendment does not restrict private actions by private actors; it requires state action.

There are also federal statutes prohibiting discrimination. For instance, the Civil Rights Act of 1964 prohibits discrimination in employment and in public accommodations, basically any business open to the public, but the Civil Rights Act prohibits discrimination based on race, color, religion or national origin. It does not prohibit discrimination based on political preference or speech about particular subjects.

So, if Twitter bans someone for their political beliefs, the way critics claim it does, that is not a violation of federal anti-discrimination law. Even though the law is clear, even though these arguments are, well, bad, don’t expect people to stop making them and don’t expect people to stop suing social media platforms based on creative variations on them.

Eric Goldman: So, we’re going to put the term “creative” in quotes, because at this point I don’t know that I’ve seen everything, but pretty much everything that people can think of in the first hour of thinking about it has already been tried, and it has failed. All the efforts to work around Section 230 or to work around the First Amendment have failed, and yet, because there’s such a desire for censorship, because there’s such a desire to tell online services what they can or can’t publish, we still see the same arguments recycled over and over again.

Ken White: In this series of podcasts I’ll be telling more stories behind important First Amendment decisions and addressing more current controversies about free speech topics. If there’s a case you want to hear about or a First Amendment issue you’d like addressed on the podcast, drop me a line at [email protected] .

Thanks for listening.

You can find documents and cases mentioned on this podcast at popehat.com or legaltalknetwork.com.

If you liked what you heard today, please remember to rate us in Apple Podcasts or Google Podcasts, and follow us on Twitter or Facebook.

Lastly, I’d like to thank our participants, producers and audio engineers for their participation.

My guest, Professor Eric Goldman, producers, Evan Dicharry and Kate Nutting. Our voice actors, Evan Dicharry as Justice Hugo Black; Chad Jolly as Justice Brett Kavanaugh; executive producer is Lawrence Coletti, and last but not least, music, sound design, editing and mixing by Adam Lockwood.

See you next time.

Ken White: The views expressed by the participants of this program are their own and do not represent the views of nor are they endorsed by Popehat, Legal Talk Network or their respective officers, directors, employees, agents, representatives, shareholders or subsidiaries. None of the content should be considered legal advice. As always, consult a lawyer, please.
