Videos like the fake interview above, created with OpenAI’s new app, Sora, show how easily public perceptions can be manipulated by tools that can produce an alternate reality with a series of simple prompts.
In the two months since Sora arrived, deceptive videos have surged on TikTok, X, YouTube, Facebook and Instagram, according to experts who track them. The deluge has raised alarm over a new generation of disinformation and fakes.
Most of the major social media companies have policies that require disclosure of artificial intelligence use and broadly prohibit content intended to deceive. But those guardrails have proved woefully inadequate for the kind of technological leaps OpenAI’s tools represent.
While many videos are silly memes or cute but fake images of babies and pets, others are meant to stoke the kind of vitriol that often characterizes political debate online. They have already figured in foreign influence operations, like Russia’s ongoing campaign to denigrate Ukraine.
Researchers who have tracked deceptive uses said the onus was now on companies to do more to ensure people know what is real and what isn’t.
“Could they do better in content moderation for mis- and disinformation? Yes, they’re clearly not doing that,” said Sam Gregory, executive director of Witness, a human rights organization focused on the threats of technology. “Could they do better in proactively looking for A.I.-generated information and labeling it themselves? The answer is yes, as well.”
The video about food stamps was one of several that appeared as the government shutdown in the United States was dragging on, leaving real recipients of the aid, called the Supplemental Nutrition Assistance Program, or SNAP, scrambling to feed their families.
Fox News fell for a similar fake video, treating it as an example of public outrage over the abuse of food stamps in an article that has since been removed from its website. A Fox spokeswoman confirmed that the article had been removed, but did not elaborate.
The fakes have been used to mock not only poor people, but also President Trump. One video on TikTok showed the White House with what sounded like a voice-over of Mr. Trump berating his cabinet over the release of documents involving Jeffrey Epstein, the disgraced financier and convicted sex offender. According to NewsGuard, a company that tracks disinformation, the video, which was not labeled A.I., was viewed by more than three million people in a matter of days.
Until now, the platforms have relied largely on creators to disclose that the content they are posting is not real, but creators don’t always do so. And though platforms like YouTube and TikTok have ways to detect that a video was made using artificial intelligence, they don’t always flag it to viewers right away.
“They should have been prepared,” Nabiha Syed, executive director of the Mozilla Foundation, the tech-safety nonprofit behind the Firefox browser, said of the social media companies.
The companies behind the A.I. tools say they are trying to make clear to users what content is generated by computers. Sora and Google’s rival tool, Veo, both stamp a visible watermark on the videos they produce; Sora, for example, puts a “Sora” label on each video. Both companies also embed invisible metadata, readable by computers, that establishes the origin of each fake.
The idea is to inform people that what they are seeing is not real and to give the platforms that host the videos digital signals to detect them automatically.
Some platforms are using that technology. TikTok, apparently in response to concerns over how convincing the fake videos are, announced last week that it would tighten its rules on disclosing the use of A.I. It also promised new tools to let users decide how much synthetic content, as opposed to genuine content, they want to see.
YouTube uses Sora’s invisible watermark to attach a small label indicating that the A.I. videos are “altered or synthetic.”
“Viewers increasingly want more transparency about whether the content they’re seeing is altered or synthetic,” said Jack Malon, a YouTube spokesman.
Labels, however, can sometimes show up after thousands or even millions of people have already watched the videos. Sometimes they don’t appear at all.
People with malicious intent have discovered that it is easy to get around the disclosure rules. Some simply ignore them. Others manipulate the videos to remove the identifying watermarks. The Times found dozens of examples of Sora videos appearing on YouTube without the automated label.
Several companies have sprung up offering to remove logos and watermarks. And editing or re-sharing videos can strip the metadata embedded in the original file indicating that it was made with A.I.
Even when the logos remain visible, users could miss them when scrolling quickly on their phones.
Nearly two-thirds of more than 3,000 users who commented on the TikTok video about food stamps responded as if it were real, according to an analysis of the comments by The New York Times, which used A.I. tools to help classify the content in the comments.
Inauthentic accounts, including those run by foreign or malicious agents, play a huge role in distributing and promoting content on social media, but it was not clear whether they did so in the conversations around these videos.
“There’s kind of this individual vigilance model,” Mr. Gregory said. “That doesn’t work if your whole timeline is stuff that you have to apply closer vigilance to. It bears no resemblance to how we interact with our things.”
In a statement, OpenAI said it prohibits deceptive or misleading uses of Sora and takes action against violators of its policies. The company said its app was just one among dozens of similar tools capable of making increasingly lifelike videos — many of which do not employ any safeguards or restrictions on use.
“A.I.-generated videos are created and shared across many different tools, so addressing deceptive content requires an ecosystem-wide effort,” the company said.
(The New York Times has sued OpenAI and Microsoft, claiming copyright infringement of news content related to A.I. systems. The companies have denied those claims.)
A spokesman for Meta, which owns Facebook and Instagram, said it was not always possible to label every video generated by A.I., especially as the technology is fast evolving. The company, the spokesman said, is working to improve systems to label content.
X and TikTok did not respond to requests for comment about the flood of A.I. fakes.
Alon Yamin, chief executive of Copyleaks, a company that detects A.I. content, said the social media platforms had no financial incentive to restrict the spread of the videos as long as users kept clicking on them.
“In the long term, once 90 percent of the traffic for the content in your platform becomes A.I., it begs some questions about the quality of the platform and the content,” Mr. Yamin said. “So maybe longer term, there might be more financial incentives to actually moderate A.I. content. But in the short term, it’s not a major priority.”
The advent of realistic videos has been a boon for disinformation, fraud and foreign influence operations. Sora videos have already appeared in recent Russian disinformation campaigns on TikTok and X. One video, with its watermarks crudely obscured, sought to exploit a ballooning corruption scandal among Ukraine’s political leadership. Others depicted frontline soldiers weeping.
Two former officials of a now-disbanded State Department office that fought foreign influence operations, James P. Rubin and Darjan Vujica, argued in a new article in Foreign Affairs that advancements in A.I. were intensifying efforts to undermine democratic countries and divide societies.
They cited A.I. videos in India that denigrated Muslims to stoke religious tensions. A recent one on TikTok appeared to show a man preparing biryani rice on the street with water from the gutter. Though the video had a Sora watermark and the creator said it was generated by A.I., it was widely shared on X and Facebook, spread by accounts that commented on it as if it were real.
“They are making things, and will continue to make things, much worse,” Mr. Vujica said in an interview, referring to the new generation of A.I.-made videos. “The barrier to using deepfakes as part of disinformation has collapsed, and once disinformation is spread, it’s hard to correct the record.”
Steven Lee Myers covers misinformation and disinformation from San Francisco. Since joining The Times in 1989, he has reported from around the world, including Moscow, Baghdad, Beijing and Seoul.