The people who want to end Section 230 may have botched their case
But even a ruling against the plaintiffs in Gonzalez vs. Google could hurt internet freedoms, observers say
After months of anticipation, on Tuesday the US Supreme Court heard oral arguments in a case with profound implications for the future of the internet. The good news, for people who like the internet mostly the way it is, is that court watchers broadly agree the plaintiffs’ arguments landed with a thud among the justices. The bad news is that how the justices rule against the plaintiffs could still introduce significant uncertainty for platforms.
We’ve talked a few times here about Gonzalez vs. Google, a case that made it to the Supreme Court by offering justices a new way to limit the protections afforded platforms by Section 230 of the Communications Decency Act. Section 230, of course, is the law that protects platforms from liability for what their users post in most cases; it means that if you defame someone in the comments section here on Platformer, we can’t be sued for libel.
The plaintiffs in Gonzalez said OK, sure, under Section 230 platforms can’t be held liable for what their users post. But perhaps they can be held liable for what they recommend. The question comes out of a case led by Reynaldo Gonzalez, whose daughter died in a 2015 ISIS attack. Gonzalez sued Google for allegedly aiding ISIS with recruitment by suggesting its content in YouTube recommendations.
A really weird thing about the case is that Gonzalez does not allege that anyone involved in the murder of his daughter actually saw any of those recommendations, much less joined ISIS on account of them. The fact that the justices would agree to hear the case and look past decades of case law despite such a flimsy set of facts set platforms’ collective teeth on edge in recent months.
And certainly that could still happen. But my sense, both from the justices’ comments and how they were received by court watchers, is that such a possibility now seems significantly more remote.
“Most of the justices appeared sufficiently spooked by the possibility that they could destroy how the modern-day internet operates that they are likely to find a way to prevent that outcome,” wrote Ian Millhiser at Vox. “As Justice Elena Kagan warned at one point during the Gonzalez argument, the justices are ‘not the nine greatest experts on the internet.’ So it makes sense for them to approach a case that could fundamentally change how foundational websites operate with a degree of humility.”
Even Clarence Thomas, the justice who has suggested that platforms should be conceived of as common carriers and stop routinely moderating content, expressed skepticism that a suggested YouTube video really amounts to an editorial recommendation.
“Thomas said that recommendations were vital to making internet platforms useful,” noted Adam Liptak in the New York Times. “‘If you’re interested in cooking,’ he said, ‘you don’t want thumbnails on light jazz.’ He later added, ‘I see these as suggestions and not really recommendations.’”
Part of the plaintiffs’ trouble is that their attorney, Eric Schnapper, struggled to make his case effectively. He seemed to surprise justices by, among other things, suggesting that search engines such as Google should lose liability protections, and that users could be held legally responsible for retweeting posts that are later found to be illegal.
“It was obvious to anyone listening/watching that the Petitioner's arguments could not have gone worse,” tweeted Kate Klonick, a law professor at St. John’s University who studies platform governance. “He sighed heavily, couldn't answer direct Qs, 3 justices stated how confused they were, he flubbed softball after softball. It was shocking.”
Perhaps it shouldn’t have been, though, given how many alternative lawyers were conflicted out of representing Gonzalez. Platforms pay so many high-powered lawyers at so many DC firms that plaintiffs had to recruit an 80-year-old employment discrimination law specialist who once argued a case before an appointee of Franklin Roosevelt. (And Schnapper has to argue another, related, equally tricky case tomorrow: Twitter vs. Taamneh, which concerns whether platforms can be sued under the Anti-Terrorism Act for failing to identify and remove ISIS content.)
In any case: Tim Wu, a former antitrust adviser to President Biden who advised the government on the Gonzalez case — which the administration supported — called Schnapper’s argument a “fiasco.” Schnapper “seems a nice guy and has a distinguished record in civil rights litigation, but having him argue this case was like sending out a pitcher to quarterback the Super Bowl,” Wu tweeted.
Wu’s remarks are notable because he is at least somewhat sympathetic to the idea of limits being placed on 230. But those who oppose major changes to the status quo were also cheered by the justices’ skepticism.
“I did not hear 5 votes in favor of the plaintiffs’ position,” wrote Eric Goldman, a law professor at Santa Clara University who opposes the plaintiffs’ arguments. “Indeed, the justices didn’t really engage with the plaintiffs’ core arguments much after their initial dismantling, which I take as a sign of their lack of persuasiveness. For that reason, I have a little optimism that Google will win the votes.”
Assuming that’s true, then, the question is how Google will win the votes. One ominous possibility, suggested by the frequency of justices’ questions about it today, is that the court will preserve liability protections only for algorithms that are “neutral,” assuming that term can even be defined. “I’m concerned that SCOTUS will attempt to solve this problem with a seven-part balancing test for neutral algorithms,” tweeted Jeff Kosseff, who wrote the defining book on 230.
Good luck with that, though: “Algorithms are never neutral and always discriminate,” Goldman noted.
But “230 could lose even if Google wins,” he added. “The court’s exact reasoning will make a huge difference, and there are many ways it could go sideways.”
One final point worth noting: some justices raised the obvious but necessary question in a case that comes down to how a statute ought to be interpreted: shouldn’t all this be left to Congress? Particularly when removing these liability protections seems certain to trigger a wave of litigation?
Here’s Robert Barnes at the Washington Post:
Kagan and Justice Brett M. Kavanaugh suggested a ruling on behalf of the Gonzalez family could unleash a wave of lawsuits. Kavanaugh did not seem persuaded when Deputy Solicitor General Malcolm L. Stewart, representing the Justice Department and siding in part with the plaintiffs, said few lawsuits “would have much likelihood of prevailing.”
“Isn’t it better … to keep it the way it is,” Kavanaugh replied. “For us … to put the burden on Congress to change that and they can consider the implications and make these predictive judgments?”
This is the right question. The last session of Congress saw at least 25 separate bills attempt to repeal or modify Section 230, and not a single one made it to the president’s desk for signature. There are certainly some cases urgent enough that congressional inaction should be remedied by the court. But with its weak set of facts now joined by inept oral arguments, it’s clear that Gonzalez isn’t one of them — and that 230, for all the harms it can and does cause, should at least for the moment be left alone.
Talk about this edition with us in Discord: This link will get you in for the next week.
Governing
The Supreme Court rejected an appeal from an Ohio man who claimed his constitutional rights were violated when he was arrested for making satirical posts about his local police department on Facebook. (Lawrence Hurley / NBC)
The Federal Trade Commission is launching an Office of Technology and staffing up a team dedicated to keeping Big Tech in check. (Cristiano Lima / Washington Post)
Elon Musk’s Twitter is being sued by landlords, consultants, and vendors who say the company hasn’t paid its bills — with complaints totaling more than $14 million. (Tim Higgins and Alexa Corse / Wall Street Journal)
China’s strict censorship rules have made it difficult for Chinese tech companies to innovate on generative AI. (Li Yuan / New York Times)
The European Commission will propose a new law before summer to fix enforcement gaps in the General Data Protection Regulation. (Clothilde Goujard / Politico)
Meta is updating its terms of service for UK Facebook, Instagram and WhatsApp users to move them outside of the European Union’s jurisdiction and to US agreements. (Thomas Seal / Bloomberg)
TikTok will let researchers at US-based nonprofit universities access public data pending approval from the company’s US Data Security division. (Mia Sato / The Verge)
Industry
Microsoft is going to start testing Bing Chat tones, enabling users to get answers that are either more creative or more focused on their search queries. The company also expanded the number of daily searches available to testers from 50 to 60 after placing limits on the service last week. (Sergiu Gatlan / BleepingComputer)
Bing insists that it can call the cops and report all sorts of personal details if it’s threatened. (James West / Mother Jones)
Microsoft plans to roll out ads in the new Bing search. But you probably guessed that already. (Sheila Dang / Reuters)
Major news outlets including the Wall Street Journal and CNN are criticizing OpenAI for training ChatGPT on articles the company didn’t pay for. The first step toward either a licensing deal or a lawsuit. (Gerry Smith / Bloomberg)
OpenAI released a portion of its guidelines related to controversial topics and said the chatbot’s biases are bugs, not features. The company promised to enable more customization of the AI’s responses. (OpenAI)
Sci-fi publication Clarkesworld Magazine is temporarily suspending short story submissions, citing a flood of AI-generated stories. (Michael Kan / PCMag)
Startups are getting into the search race, using chatbots in a variety of ways that extend beyond the web. Good news for Google’s antitrust case! (Will Douglas Heaven / MIT Technology Review)
Call centers are using ChatGPT-powered virtual assistants to help workers, but they’re still prone to mistakes. (Lisa Bannon / Wall Street Journal)
More than 200 e-books in Amazon’s Kindle store listed ChatGPT as an author or co-author in February. It seems kind of surprising that authors are admitting to using it when so far there’s no requirement that they do? (Greg Bensinger / Reuters)
A Wharton professor who successfully incorporated ChatGPT into his classes reflects on how students can use generative AI more effectively. (Ethan Mollick / One Useful Thing)
TikTok announced a new version of its creator fund after its initial one received backlash from creators over low payouts. (Kaya Yurieff / The Information)
Microsoft President Brad Smith said the company’s deal with Nintendo to bring Call of Duty games to Nintendo systems is now official. (Taylor Soper / GeekWire)
Amazon is expanding a partnership with artificial intelligence startup Hugging Face, which is developing a ChatGPT rival. (Dina Bass / Bloomberg)
Meta is launching a subscription service called Meta Verified that will allow users to pay for verification badges. If Monday hadn’t been a holiday we would have written a whole column about this! Perhaps another time soon. (Kurt Wagner / Bloomberg)
Meta is rolling out a new office design to block noise, and it looks a lot like a cubicle. (Chip Cutter and Meghan Bobrowsky / Wall Street Journal)
Meta is reportedly working on a powerful smart glasses assistant. (Janko Roettgers / Lowpass)
WhatsApp is rolling out a picture-in-picture feature on iOS so calls won’t be interrupted when users need to access other apps. (Filipe Espósito / 9To5Mac)
Apple has captured the Gen Z market, as younger consumers fear being socially ostracized for not having an iPhone. Or, perhaps more to the point, not having iMessage. (Patrick McGee / Financial Times)
Tech workers and CEOs are becoming early adopters of ChatGPT as they experiment with ways to make their work more efficient. (Karen Hao, Chip Cutter and Benoît Morenne / Wall Street Journal)
HR teams at Big Tech firms use AI to identify top performers and, increasingly, decide who gets laid off. (Pranshu Verma / Washington Post)
A human beat a top-ranked AI system at the board game Go, reversing a 2016 computer victory that was seen as a milestone in AI. (Richard Waters / Financial Times)
Those good tweets
For more good tweets every day, follow Casey’s Instagram stories.
Talk to us
Send us tips, comments, questions, and Gonzalez predictions: casey@platformer.news and zoe@platformer.news.