YouTube defends its structure
One man runs both product and trust and safety at the company. Should he?

One of the core questions around here is what the rules of the internet ought to be. Today I want to shift gears and talk a bit about who, in practice, sets those boundaries: the executives in charge of policy, product, and trust and safety at the major platforms, and the differences in the way companies like Google, YouTube, Twitter, and Facebook structure their organizations. From the outside, the companies may all look roughly equivalent; in practice, though, they can work quite differently.
I’ve been thinking about this all morning, after reading my old boss Nilay Patel ask YouTube’s head of product about the structure of his organization. Neal Mohan has run product at YouTube since 2015 and is a top deputy to CEO Susan Wojcicki; he oversees both product questions (how do we get watch time to go up?) and trust and safety questions (are our recommendations radicalizing people?). And a question I have been puzzling over for some time now is whether it’s smart to ask one person to be responsible for both.
Here’s how it came up on Patel’s podcast, Decoder:
PATEL: I want to make sure we spend some time at the end here on trust and safety. As far as I can tell, it is unique that the trust and safety organization reports to you as the head of product. Why is that?
NEAL MOHAN: First, it’s hard to imagine a YouTube product experience without taking into account how the community guidelines impact our viewers and our creators. We spent the last hour talking about all of these products and features, and the decisions that go behind creating them. I would argue that a core part of that overall product experience for our entire ecosystem — viewers, creators, advertisers — are our community guidelines and how they work. That’s the first thing that I would say.
The second thing is a little bit more subtle, but I think in some ways is even more critical in terms of how YouTube works, which is [that] when we talk about content moderation or community guidelines, we tend to focus on [a] very, very small portion of what happens there, which is videos that stay up or come down on the platform. That is not by any means the be-all and end-all of how content manifests itself for our users.
What I mean by that is it’s not just about videos that come down or stay up, it’s also about how our recommendation systems work. It’s about how the ranking system works. It’s about how our search systems work. It’s also about how creators, 99.9 percent of whom are looking to do the right thing, are rewarded.
This is, on its face, a reasonable explanation for YouTube’s structure. In Mohan’s view, his job is to take care of users, and that means both showing them lots of interesting videos and preventing terrible videos from reaching their eyeballs. (It goes unstated that also embedded in this job are the critical business functions of making available lots of advertising inventory, and — more recently — generating subscription revenue.)
Of course, you can imagine some conflicts that might arise from this structure. Major policy changes can affect core YouTube metrics like watch time. Filling more recommendation slots with high-quality news may prove less engaging to viewers than, say, low-quality conspiracy theories. Because it’s a private company, we don’t know how YouTube resolves these trade-offs when they come up in practice. Nor do we know the effects of these trade-offs, in terms of what people end up watching in aggregate, except via ballpark estimates based on public view counts.
We do know that YouTube’s parent company, Google, structures itself differently. At YouTube, the head of trust and safety (Matt Halprin) reports to the head of product (Mohan). At Google, the head of trust and safety (Kristie Canegallo) reports to its chief legal officer and head of global affairs (Kent Walker). Twitter uses a similar model: its head of trust and safety (Del Harvey) reports to its chief legal officer (Vijaya Gadde).
Does it matter? Some YouTube critics I know believe that the company is sometimes slow to implement policy changes because those changes get caught up in endless product reviews. Last August, Google banned searches for QAnon in its shopping product; YouTube didn’t move against QAnon until October, and even then only banned some videos related to the conspiracy movement.
If there were any benefit to one tech monolith owning another, this line of thinking goes, surely it would be that the two would act in lockstep to remove dangerous conspiracy content from their platforms. In practice, though, YouTube acts independently of its parent company. Maybe that’s a good thing — perhaps Google Drive and YouTube should have very different content policies. Still, it’s worth asking whether YouTube’s organization chart contributes to delays like its QAnon failures.
This morning in the Sidechannel server, I asked folks about all this. (One of my favorite things about Sidechannel is that it is overrun with trust and safety and policy workers.) To my surprise, you mostly came to YouTube’s defense. Having your trust and safety team report to the legal department creates roadblocks of its own, you said. One big reason is that to get anything done, the legal team usually has to ask the product and engineering teams to intervene anyway. YouTube’s structure allows the company to skip this step.
In any case, you told me, YouTube’s approach has the benefit of being more straightforward than Facebook’s, which is more complicated than I thought before I started writing today. Facebook doesn’t have a trust and safety team per se; instead it has a team called “integrity,” the head of which (Guy Rosen) reports to the vice president of product management (Naomi Gleit), who reports to the VP of central products (Javier Olivan). Meanwhile, the vice president of content policy (Monika Bickert) reports to the VP of global public policy (Joel Kaplan), who reports to the VP of global affairs and communications (Nick Clegg), who reports to the COO (Sheryl Sandberg).
The potential benefit of this approach is that you can build products that enhance platform integrity without facing as much pressure over how you’re affecting core engagement metrics, at least in theory. (Rosen has told me that his product teams get a “budget” of engagement that they can sacrifice in order to improve the health of Facebook overall.) The disadvantage is that, on the other side of the organization, the people writing policies ultimately report to the PR and lobbying teams, which has led to some storied distortions in policy enforcement over the years.
Ultimately, all of these organizations have to make similar tradeoffs among business demands, user interests, regulatory pressure, and legal requirements. No matter how they are structured, the hardest decisions eventually roll up to their CEOs, who are almost always involved in those calls. And however the organization chart reads, much of this work is cross-functional, with people collaborating on these issues across teams.
But if you believe that incentives are destiny, and that most companies eventually ship their org charts, then the structure of YouTube and other platforms probably deserves more scrutiny than it has gotten to date. Not because there are clear “right” and “wrong” structures, but because every path creates pitfalls of its own.
The perfect org chart won’t create a perfect internet. But scrutinizing the structures and systems that have built the rules we live under today might help us create a better one.
I’d love to hear from you on this one: where do trust and safety and content policy belong on a platform’s org chart? Reply to me (I’ll keep it anonymous) or leave a comment. If I get enough smart thoughts from you, I’ll share them in an upcoming edition.
Elsewhere: Mohan went on Decoder to promote the news that creators can now make up to $10,000 a month using YouTube’s TikTok clone, Shorts.
The Ratio
Today in news that could affect public perception of the big tech companies.
⬆️ Trending up: Twitter announced a partnership with Reuters and the Associated Press to add more context to news and trends on the service. They’ll also help out with Twitter’s nascent Birdwatch fact-checking service. (Sarah Perez / TechCrunch)
⬇️ Trending down: Twitter verified a fake account for the author Cormac McCarthy. There has got to be a better way of verifying people than however Twitter is doing it. (James Vincent / The Verge)
Governing
⭐ A National Labor Relations Board hearing officer recommended that Amazon stage a do-over of the high-profile union election in Bessemer, AL. The reason: Amazon unduly interfered in the first election, in which labor organizers lost the fight to organize. Here are Spencer Soper, Matt Day, and Josh Eidelson in Bloomberg:
Meyers’s recommendation means “there’s a strong likelihood” of a new election, said former NLRB member Wilma Liebman, who chaired the agency under President Barack Obama. Regional directors usually adopt the recommendations of hearing officers in such cases, Liebman said, and this case’s high profile could make the regional director that much more hesitant to overrule Meyers.
Meyers also found that the presence of “vote no” paraphernalia at mandatory employee meetings risked giving workers the impression their stance on the union vote was being tracked by the company. However, she recommended dismissing some of the union’s other allegations, including that Amazon had interrogated employees, manipulated the timing of a traffic light to interfere with organizers who were handing out fliers and hired off-duty police officers to surveil workers.
Related: Amazon illegally confiscated pro-union materials and prevented them from being distributed in a break room. (Lauren Kaori Gurley / Vice)
A look at the struggles of the Ahmadi religious minority group in Pakistan as they attempt to shield their identities on Facebook, which can amplify persecution from the religious majority in their communities. “Ahmadi users describe the exhaustion of having to dodge the constant nudges from Facebook to share more and more: untagging themselves from photos, deleting event and page invites, avoiding check-ins with friends, ducking out of livestreams.” (Alizeh Kohari / Rest of World)
An Australian court ruled that artificial intelligence can be considered an inventor on patent filings. Gulp. (Simon Sharwood / The Register)
⭐ China’s top propaganda agencies want to limit the role of ranking algorithms in promoting content on the country’s biggest consumer apps — a move that could have profound consequences. If ByteDance has to promote propaganda over what its users want, what becomes of Douyin — or TikTok? (Tracy Qu / South China Morning Post)
Tencent said it would introduce new restrictions on its games for younger users after an article in a state-owned newspaper called video games “opium for the mind.” In the meantime, its stock price sank as investors braced for China to open another front in its tech war. (Chong Koh Ping / Wall Street Journal)
Here’s another good effort to understand what’s going on with China’s tech crackdown — a policy the authors call “progressive authoritarianism.” “After 40 years of allowing the market to play an expanding role in driving prosperity, China’s leaders have remembered something important — they’re Communists.” (Tom Hancock and Tom Orlik / Bloomberg)
A look at the fraught, weakened position of tech giants in Hong Kong at the moment. “If servers or data centers are physically in Hong Kong, then they can be monitored, raided, or seized, giving the government critical leverage over the platforms to force compliance with takedown requests.” (Lokman Tsui / Rest of World)
Industry
⭐ Desperate Facebook users are buying Oculus headsets to get the priority customer service they need to have their accounts restored. It’s the latest indictment of Facebook’s awful customer service, and a fantastic story from NPR’s Shannon Bond:
Sherman contacted Oculus with his headset's serial number and heard back right away. He plans to return the unopened device, and while he's glad the strategy worked, he doesn't think it's fair.
"The only way you can get any customer service is if you prove that you've actually purchased something from them," he said.
Facebook is building a team of AI researchers to study whether they can analyze encrypted data for reasons that may include ad targeting. “The research could allow Facebook to target ads based on encrypted messages on its WhatsApp messenger, or to encrypt the data it collects on billions of users without hurting its ad-targeting capabilities, outside experts say.” (Sarah Krouse and Sylvia Varnham O'Regan / The Information)
Fear-mongering neighborhood vigilante app Citizen introduced a $20-a-month service to have someone call 911 for you. “The company says it exists for instances where a user ‘may not want to be seen calling 911.’” OK. (Brian Heater / TechCrunch)
After Snapchat suspended two anonymous messaging apps that used its developer kit, a new one called Sendit is growing quickly on the platform. Sendit has more than 3.5 million installs, speaking to the enduring appeal of anonymous messaging, even though it inevitably leads to harassment, abuse, and teen suicide. (Sarah Perez / TechCrunch)
Twitter added third-party sign-ins via Apple and Google. Presumably not by Twitter’s own choice. (Mitchell Clark / The Verge)
Spotify is testing a less restrictive ad-supported tier that costs $0.99 a month. It would remove limits on skipping tracks and let users choose the tracks they listen to rather than relying on shuffled playlists. (Jon Porter / The Verge)
Amazon’s once-ballyhooed Prime Air delivery service appears to be on the rocks. In the United Kingdom, where much of the effort was once based, more than 100 people have lost their jobs in recent years. (Andrew Kersley / Wired)
Emojipedia was purchased by the smartphone software company Zedge. Emojipedia is a useful database of emoji information; good luck figuring out what Zedge does (I tried!). (Rob Price / Insider)
A profile of Fuck You Pay Me, a service co-founded by a former Facebook data scientist and intended to serve as a Glassdoor for creators. “Creators can leave reviews of brands they have worked with, share ad rates, and give and get other crucial information for negotiating sponsored content deals. The aim: to have creators be paid more equitably.” (Taylor Lorenz)
Ad prices are surging across every major social platform, according to this survey of the big networks. They’re up 89 percent year over year at Facebook, 92 percent at TikTok and 108 percent at Google. (Lauren Johnson / Insider)
A look at what Alibaba and Tencent hope to get out of a new push to make their platforms interoperable. If China continues to push platforms into interoperability, the next big beneficiary could be ByteDance, the author argues. (Rui Ma / Rest of World)
Those good tweets

Talk to me
Send me tips, comments, questions, and YouTube Shorts: casey@platformer.news.