Many commentators have called March’s California jury verdict—finding Meta and Google liable for designing addictive platforms that harm children—social media’s “big tobacco moment.” The comparison is apt, but not in the way most people mean it.
The tobacco litigation story is usually told triumphantly: a malicious industry held accountable, victims vindicated, and a dangerous product now regulated. What that story leaves out is directly relevant to what happens next with social media.
The tobacco litigation succeeded not because cigarettes were addictive, but because the industry committed fraud. For decades, tobacco companies knew about nicotine’s addictive properties and the link between smoking and cancer—and they actively concealed that knowledge. The lawsuits that worked targeted that concealment directly. Once the concealment was exposed and disclosure became mandatory, the personal responsibility narrative reasserted itself: adults who smoke know the risks, and they choose to smoke regardless.
The processed food industry traced an almost identical arc. In the 1970s, consumer advocates petitioned the Federal Trade Commission to restrict advertising of junk foods to children. The industry fought back hard. A Washington Post editorial called the proposal a measure to “shield children from their parents’ weaknesses.” Decades later, a bill formally protecting fast food companies from obesity lawsuits passed the House. It stalled in the Senate, but the industry managed to pass similar laws in states across the country. The message was that obesity was a matter of willpower. Despite well-documented socio-environmental determinants of diet, the personal responsibility narrative stuck.
Last month’s verdict is being hailed as a break in that pattern, but I am not convinced it is. The arc that tobacco and processed food traced suggests a predictable trajectory for social media.
Meta’s internal research documenting harms to teenage girls, which was suppressed and then exposed, was the company’s big tobacco moment. The litigation that followed reflects that reckoning. But as the tobacco and processed food stories demonstrate, after exposure come disclosure and warnings—and, above all, a reassertion of personal responsibility. The underlying product remains as it was.
The fixes already being floated in the wake of the verdict follow that pattern exactly. Age verification, parental controls, push notification settings, and various disclosures all place the burden of protection on individual users (or their parents), while leaving the design choices a jury just found unreasonably dangerous exactly where they are.
It all goes back to the notice-and-consent model, the idea that informed individuals can and should manage their own exposure to harm. This framework, which has dominated American consumer protection law for decades, works well for industries that want to avoid liability without changing their business models. It works less well for the people it’s supposed to protect, who are being asked to fend for themselves against platforms that were engineered—by very smart people with very large budgets—to be hard to put down.
The obvious counterargument is that redesigning these platforms would hurt everyone to help a