The Supreme Court’s 2024 decision in Moody v. NetChoice confirmed a long-debated legal principle: algorithmic content moderation by social media platforms constitutes speech protected by the First Amendment.
This ruling builds on arguments first advanced in legal scholarship around 2013, when scholars began contending that platforms' editorial decisions, such as prioritizing or suppressing content, should be treated as expressive activity under the First Amendment.
Critics of this interpretation raise concerns: the ruling's logic casts doubt on laws aimed at regulating how platforms curate content, a prospect some find unsettling. Responses to the decision vary:
- Calls to revise First Amendment jurisprudence wholesale, well beyond its treatment of editorial choices.
- Proposals to reclassify social media platforms as state actors or common carriers.
In the forthcoming book Content Moderation and the First Amendment, the author evaluates these responses, ultimately advocating for a more targeted approach: excluding editorial judgments made by monopolistic platforms from First Amendment protections.
The author also explores an alternative possibility: extending First Amendment coverage to AI-generated content produced without human involvement. The author cautions against this approach, however, warning that it would transform free speech law.
Looking ahead, the author predicts that debates over algorithmic moderation will intensify as social media’s influence grows and artificial general intelligence (AGI) becomes more plausible. These developments, rather than traditional ideological divides, may shape future legal and public reactions to platform speech.
This shifting legal landscape, the author argues, risks destabilizing current debates over free speech and platform accountability.