YouTube opens its doors to deepfakes
Newer AI companies are heavily restricting the use of their tools. Social networks are taking a different approach.
Today, let’s talk about platforms’ early moves to moderate how people create and distribute media made with generative artificial intelligence. Announcements made by YouTube today about its own synthetic media policies suggest that the leverage in moderating deepfakes may not lie where we expected.
I’ll get to the YouTube announcements in a minute. But first, I think it’s helpful to frame how we’ve been approaching digital content moderation up until this point.
I.
In the first era of digital media — from the rise of Facebook onward — we put the primary responsibility for moderating content onto the user and the platform. US users carry the legal liability for most of what they post, thanks to Section 230; similar laws shield platforms from legal responsibility in other big markets. Platforms have some legal responsibilities — they often have to remove terrorist content, for example, or CSAM — but most of the moderation they do serves business interests. Most people don’t like spending time or money in places full of hate speech and other harmful content, and so platforms remove it.
The AI era of digital media has introduced a third character into the moderation stack: the tool. Generative AI tools can create realistic depictions of human beings, mimic their voices, and animate them on video. They can create erotica of various kinds. They can, if left unchecked, offer detailed instructions on how to build weapons.
We can debate how new this really is. Adobe Photoshop can also create realistic depictions of human beings, and erotica of various kinds. Talented actors can mimic voices and create convincing likenesses on video. You can get pretty far in building a weapon just by Googling.
Still, in the first era of digital media, we saw relatively little pressure on tools like these to perform content moderation during the act of creation. If you draw a naked human form using Photoshop, Adobe won’t interrupt you to ask what you’re doing. We don’t generally place legal prohibitions on actors mimicking people. Google won’t delete your account based on your search activity alone.
There are some good reasons for this. One, historically we’ve mostly agreed that what you do on your computer is your own business, as long as it’s not hurting anyone. Two, and maybe more importantly, we’ve been able to count on platforms intervening to stop the spread of harmful material. A deepfake of Joe Biden that I create on my computer for my own enjoyment isn’t going to hurt anyone. That same deepfake, posted on Instagram, might cause chaos. And so we generally put the most pressure to remove material on the platform, rather than the tool.
Then AI came along, and flipped things upside-down.
Take the biggest generative AI platforms: OpenAI’s ChatGPT and DALL-E; Anthropic’s Claude; Google’s Bard; Midjourney; Stability AI. All restrict the use of their tools to produce sexual and violent images, even if those images never leave the user’s computer. Some go further, preventing the tool from creating images of corporate logos or other copyrighted material. OpenAI banned the use of ChatGPT to create political messages. DALL-E and other AI tools also generally prohibit you from creating images of politicians. In my experience, chatbots are also extremely skittish about answering even basic questions about sexual health.
These restrictions reflect a general sense of caution in most AI companies. They saw the beating that social media companies took over the past half-decade, both among lawmakers and in their public reputation, after they under-invested in content moderation. At the same time, most of the AI executives I’ve spoken to are genuinely concerned about the potential for misuse that their tools represent.
That’s a good thing. At the same time, it means that people often can’t use these tools as they used their pre-AI equivalents. It’s a downgrade in usability, and the AI platforms generally do a bad job of educating users about their policies and rehabilitating users who violate them. (One employee of a big AI company told me that 99 percent of people who got banned from the platform had been trying to create text-based erotica.)
In the old days, digital tools were permissive, and digital platforms were restrictive. Today, digital tools are restrictive. And the platforms?
Well, that brings us to YouTube.
II.
On Tuesday, YouTube announced its “approach to responsible AI innovation.” It is not a list of prohibitions on the ways you can use AI on YouTube. Rather, the policies serve as a general green light for people to post synthetic media widely on YouTube.
The “responsible” part of the innovation comes in new labels that, at some point “over the coming months,” will be required on synthetic media. In most cases, the label will appear as metadata underneath the video. And “for certain [unspecified] types of content about sensitive topics,” the label will appear overlaid on the video itself.
There are circumstances under which YouTube will remove a manipulated video: if it depicts shocking violence, for example, or if it poses a risk of “egregious harm.”
For the most part, though, you’ll be able to post synthetic media on YouTube as you wish. You can post your deepfake of Joe Biden, or of a lesser-known YouTube creator, or of a person at your college, and be in compliance with YouTube’s guidelines.
If the subject of your deepfake doesn’t like what you have done with their face and voice, they can fill out a form. The responsibility for addressing harm here falls not on the platform, or the user, or the tool, but on the victim.
And under what circumstances will YouTube respond to your complaint? “We’ll consider a variety of factors when evaluating these requests,” is all that the blog post’s authors, YouTube vice presidents of product management Jennifer Flannery O’Connor and Emily Moxley, have to say about that. (A spokeswoman told me that if you are an average citizen and get deepfaked and don’t like what you see on YouTube, that alone could be enough to get the video removed under the company’s privacy policy. But it all seems a bit slippery, at least for the moment.)
There are benefits to permitting a wide variety of speech, particularly political speech, and the policy YouTube described today could accomplish that. It’s carving out meaningful permissions for satire and parody, which our elected officials ought to be able to handle.
At the same time, the better that synthetic media gets, the weaker this policy feels. YouTube has not always strongly enforced its policies around harassment and cyberbullying, particularly in cases where creators are involved in conflicts, and I worry that this policy creates a new battlefield.
One class of creators who will get special protections under the policy are major-label musicians, who will be able to request takedowns of videos that mimic their voices even if used in parodies and satires. (The Verge has a good piece on this today.) We know that YouTube is negotiating with record labels to turn artists’ voices into a generative AI creative tool, presumably with some sort of revenue-sharing deal, and in the meantime this feels like a gift designed to buy some goodwill.
III.
The policy YouTube announced today feels like a first draft, and it surely will evolve before being rolled out to users. Right now, many of the most egregious harms from generative AI are still theoretical. Others are covered by existing policies. The new threat models are still under construction.
Still, it seems noteworthy that the most accessible tools we have for creating synthetic media are more restrictive than the tools we have for distributing it. Meta’s synthetic media policy, which would fit on the back of a napkin, is roughly as permissive as YouTube’s. (TikTok is the only one of the three to ban synthetic videos of non-public figures.)
Perhaps all this will balance out over time. For the moment, though, it seems strange that the likeliest source of harms from AI comes not from the leading AI companies, which block misuse at the point of creation, but from open-source tools used to create material that is then hosted and distributed by tech giants.
If social networks hope to avoid a repeat of the post-2016 tech backlash, they should proceed carefully.
Threads DMs
Threads heads were briefly outraged on Tuesday when Instagram chief Adam Mosseri suggested the company would not build direct messages for the app. In a Threads post, I speculated that DMs probably would arrive eventually, if only because of the adage that all software expands until it includes messaging. Happily, Mosseri replied to me a bit later to say that the company might add DMs eventually, but that for now it is focused on building out features for Instagram DMs.
This is a tricky problem, and one of the more obvious product downsides in launching Threads by leveraging the Instagram network. If Threads winds up feeling like a complement to Instagram, perhaps directing Threads users back there for messages (or mirroring the inbox as a tab inside Threads) is the right approach.
So far, though, Threads feels like its own thing to me. It is heavily text-based, with a strong concentration of journalists and pundits and politicians. The photo- and video-heavy experience of Instagram, which is led by a different category of creators, has yet to see similar traction on Threads.
Historically, Meta products that are on track to hit 1 billion users all get their own standalone messaging app: Facebook, Instagram, and (of course) WhatsApp. Efforts to unify those into a single inbox seem to have stalled — probably for good reason.
All of which is to say: if Threads continues on its current trajectory, I suspect at some point Meta will decide building a separate DM experience for the app will be the path of least resistance.
Talk about this edition with us in Discord: This link will get you in for the next week.
Governing
In an important case, a federal judge ruled that lawsuits charging Alphabet, Meta, ByteDance and Snap with harming children can proceed. (Jonathan Stempel and Nate Raymond / Reuters)
Epic v. Google: Testimony from a Google executive revealed the company agreed to pay Samsung $8 billion over four years to make its search engine, voice assistant, and Play Store the default on Samsung’s mobile devices. (Malathi Nayak and Leah Nylen / Bloomberg)
A new filing shows that Donald Trump’s Truth Social and its parent Trump Media and Technology Group have lost at least $31.5 million since launch, taking in $3.7 million in net sales. (Alex Weprin / The Hollywood Reporter)
Google filed a lawsuit against two men, alleging they filed fraudulent takedown notices for hundreds of thousands of URLs, harming the company and its customers. (Andy Maxwell / TorrentFreak)
Almost three dozen venture capital firms signed a set of commitments on how the startups they fund should responsibly develop AI. (Shirin Ghaffary / Bloomberg)
Microsoft and Google will not challenge the EU’s Digital Markets Act, a law that requires them to allow users to switch between competing services more easily. (Foo Yun Chee and Supantha Mukherjee / Reuters)
Adobe will face a challenge from European regulators over its $20 billion Figma acquisition, as regulators reportedly prepare to file charges that the deal is anti-competitive. (Javier Espinoza / Financial Times)
The EU’s AI Act has created divisions among lawmakers about what practices to ban, how to conduct fundamental rights impact assessments, and potential exemptions for national security practices. (Natasha Lomas / TechCrunch)
X is failing to remove posts related to the Israel-Hamas conflict that violate its published community guidelines on misinformation and hate speech, a report by the Center for Countering Digital Hate found. (Jess Weatherbed / The Verge)
Internet browsing data is being collected in greater detail than previously thought, a new report by the Irish Council for Civil Liberties found. Data on people in sensitive professions, like judges and military personnel, is being used for targeting. (Cristina Criddle / Financial Times)
Nepal is banning TikTok, saying that the platform is “disturbing social harmony and family structures”, citing cases of cyberbullying, financial extortion, and sexual exploitation. (Shan Li and Krishna Pokharel / The Wall Street Journal)
Industry
ByteDance’s revenue surged by more than 40 percent to $29 billion in Q2, closing the gap with Meta through advertising and e-commerce. (Cory Weinberg, Juro Osawa and Jing Yang / The Information)
Shein and TikTok Shop are trying to ditch their “made in China” reputation by courting international sellers and diversifying their supply chain. (Peiyue Wu and Daniela Dib / Rest of World)
Amazon and Snap reached a deal that lets users buy Amazon products from ads on Snapchat, similar to Amazon’s ads on Instagram and Facebook. (Sylvia Varnham O’Regan and Theo Wayt / The Information)
WhatsApp chat and media backups stored on Android will soon count towards Google cloud storage limits after a temporary reprieve. (Abner Li / 9to5Google)
Google DeepMind developers say its GraphCast AI model is predicting the weather better than traditional forecasting methods. (Clive Cookson / Financial Times)
A profile of Geoffrey Hinton, the “godfather of AI”, who argues we urgently need to address the ethical implications and potential dangers of the technology. (Joshua Rothman / The New Yorker)
A coder reflects on the challenges and joys of coding, the changing programming landscape, and the potential impact of AI on the future of the profession. (James Somers / The New Yorker)
Those good posts
For more good posts every day, follow Casey’s Instagram stories.
Talk to us
Send us tips, comments, questions, and deepfakes of your friends: casey@platformer.news and zoe@platformer.news.