From the sanity perspective, it’s a bizarre decision, lacking both sound rationale and respect for precedent. The Fifth Circuit got it wrong.
Yesterday, the US Court of Appeals for the Fifth Circuit upheld Texas’ law banning major social media websites from using most forms of content moderation. The decision is at odds with a recent Eleventh Circuit ruling striking down Florida’s similar law (written by prominent conservative Trump appointee Judge Kevin Newsom). In May, the Supreme Court signaled that at least five justices believe the law to be unconstitutional, when it overturned a previous Fifth Circuit ruling lifting a trial court injunction against implementation of the Texas law. For reasons I summarized here, I agree with the Eleventh Circuit’s approach, and believe the Texas and Florida laws violate the First Amendment’s guarantee of freedom of speech. In this post, I argue that these laws also violate the Takings Clause of the Fifth Amendment.
From a more free-speech-focused perspective, the decision in NetChoice v. Paxton was batshit crazy.
A Texas statute named House Bill 20 generally prohibits large social media platforms from censoring speech based on the viewpoint of its speaker. The platforms urge us to hold that the statute is facially unconstitutional and hence cannot be applied to anyone at any time and under any circumstances.
In urging such sweeping relief, the platforms offer a rather odd inversion of the First Amendment. That Amendment, of course, protects every person’s right to “the freedom of speech.” But the platforms argue that buried somewhere in the person’s enumerated right to free speech lies a corporation’s unenumerated right to muzzle speech.
Judge Andrew Oldham, former counsel to Gov. Abbott and a Trump appointee, opens with a bit of snark about “a corporation’s unenumerated right to muzzle speech.” Not that he would reverse Citizens United, but muzzling speech, to Judge Oldham, is entirely different from speaking, because reasons. But some wag might ask a question about Judge Oldham’s swipe against muzzlers: aren’t the platforms private businesses, such that the First Amendment isn’t applicable to them at all?
The implications of the platforms’ argument are staggering. On the platforms’ view, email providers, mobile phone companies, and banks could cancel the accounts of anyone who sends an email, makes a phone call, or spends money in support of a disfavored political party, candidate, or business. What’s worse, the platforms argue that a business can acquire a dominant market position by holding itself out as open to everyone—as Twitter did in championing itself as “the free speech wing of the free speech party.” Then, having cemented itself as the monopolist of “the modern public square,” Packingham v. North Carolina (2017), Twitter unapologetically argues that it could turn around and ban all pro-LGBT speech for no other reason than its employees want to pick on members of that community, Oral Arg. at 22:39–22:52.
Today we reject the idea that corporations have a freewheeling First Amendment right to censor what people say. Because the district court held otherwise, we reverse its injunction and remand for further proceedings.
Private businesses don’t “censor” so much as decline to host. They can’t, and don’t, stop people from saying any damn thing they want to say. They just make them do so elsewhere, which is really what the problem is, and why concurring Judge Edith Jones tried to tie her view to PruneYard and FAIR.
Functioning as conduits for both makers and recipients of speech, the platforms’ businesses are closer analytically to the holdings of the Supreme Court in PruneYard and FAIR than to Miami Herald, Pacific Gas & Electric, and Hurley. It follows from the first two cases that in arbitrarily excluding from their platforms the makers of speech and preventing disfavored speech from reaching potential audiences (“censoring,” in the comprehensive statutory term), they are not themselves “speaking” for First Amendment purposes.
The notion here is that when private physical property, a mall in PruneYard, replaces public property as the “public square,” such that the owners of the private property get to decide whom to allow on so that their voices are heard, the owners assume the trappings of government and become subject to the First Amendment’s prohibition against censorship.
In particular, it is ludicrous to assert, as NetChoice does, that in forbidding the covered platforms from exercising viewpoint-based “censorship,” the platforms’ “own speech” is curtailed. But for their advertising such “censorship”—or for the censored parties’ voicing their suspicions about such actions—no one would know about the goals of their algorithmic magic. It is hard to construe as “speech” what the speaker never says, or when it acts so vaguely as to be incomprehensible. Further, the platforms bestride a nearly unlimited digital world in which they have more than enough opportunity to express their views in many ways other than “censorship.” The Texas statute regulates none of their verbal “speech.” What the statute does, as Judge Oldham carefully explains, is ensure that a multiplicity of voices will contend for audience attention on these platforms. That is a pro-speech, not anti-free speech result.
By forcing platforms to host speech they choose not to host, whether it’s pedo Nazis or transgender racism hunters, these private entities are being compelled to speak, not because they are personally saying something, but because their interface is being forced to leave unmolested a lengthy explanation of the virtues of QAnon. Hundreds of them, perhaps.
The irony is that the argument was made to the court that this will not merely preclude the platforms from “censoring” speech such as “Trump really won the election with space aliens,” but also speech from people who argue that the law prohibiting man/boy love is wrong.
The Platforms do not directly engage with any of these concerns. Instead, their primary contention—beginning on page 1 of their brief and repeated throughout and at oral argument—is that we should declare HB 20 facially invalid because it prohibits the Platforms from censoring “pro-Nazi speech, terrorist propaganda, [and] Holocaust denial[s].” Red Br. at 1. Far from justifying pre-enforcement facial invalidation, the Platforms’ obsession with terrorists and Nazis proves the opposite. The Supreme Court has instructed that “[i]n determining whether a law is facially invalid,” we should avoid “speculat[ing] about ‘hypothetical’ or ‘imaginary’ cases.” Wash. State Grange, 552 U.S. at 449–50. Overbreadth doctrine has a “tendency . . . to summon forth an endless stream of fanciful hypotheticals,” and this case is no exception. United States v. Williams, 553 U.S. 285, 301 (2008). But it’s improper to exercise the Article III judicial power based on “hypothetical cases thus imagined.”
The reason such “fanciful hypotheticals” are raised is to present the point as clearly, if sometimes a bit hyperbolically, as possible: bad shit will happen if a bad law is allowed. Judge Oldham wasn’t buying, there being no “case or controversy” before the court about these make-believe “terrorists and Nazis,” and he won’t suffer such flights of fantasy before the law is even enforced. Because, you know, there are no such things as “terrorists and Nazis” who post crap on the interwebs, but Trump had a big boat parade that only lost one boat, so how could he even lose?
I’m disinclined to presume that any judge appointed by Trump is a bad judge, or that any judge not appointed by Trump is good (or better). But this is a mind-numbingly dumb decision, and Edith Jones should be ashamed of herself.