Fact File: How Sora’s AI videos made going viral easy, and tips on how to spot them

[byline]

Entertainment, emotion, surprise — these are some of the ingredients that make videos go viral. But seeing is no longer believing, and many of the gripping videos now filling social media feeds bear the watermark of an AI app that lets users create seemingly real videos that are anything but.

Videos generated with the Sora 2 text-to-video app, developed by OpenAI, have fooled many internet users since the app launched in September. From fake ICE arrests to the Louvre heist, videos from Sora and other artificial intelligence models claiming to show real events are racking up millions of views online.

And while digital hoaxes once took a lot of time to create and a little luck to go viral, the ease with which they can now be generated and shared has raised alarm about the lack of ethical guardrails and about how these videos fuel distrust online. But, as one expert says, some simple digital literacy skills could help protect people from being fooled.

FROM HUNDREDS OF HOURS TO THE CLICK OF A BUTTON

Clips about current events aren’t the only ones going viral; many feature dangerous human interactions with animals. For example, multiple videos — including one with a visible Sora watermark — show dogs racing to save babies as they are snatched up and lifted into the air by eagles.

The videos are remarkably similar to a fake video that went viral long before generative AI captured our collective attention online.

“Golden Eagle Snatches Kid,” posted to YouTube in 2012, has drawn more than 47 million views. Today, the video by four 3D animation and design students at a Montreal digital art school feels like a relic of a time before internet users could generate viral content at the click of a button.

“Back in the day … there were not a lot of videos like this,” said Félix Marquis-Poulin, a 3D character modeller who worked on the video as a student at the School of Digital Arts, Animation and Design at the University of Quebec at Chicoutimi, or École NAD-UQAC.

The video was the result of a class assignment challenging students to create an online hoax. Receiving 100,000 views would earn them an A-plus, so Marquis-Poulin and his classmates brainstormed a video that would be engaging enough to get people’s attention. They thought the combination of a baby and eagle was a sure bet, since baby and animal videos were popular on YouTube at the time.

“We picked up a story that was ending positively, but that was creating this zone of danger and that was kind of believable,” Marquis-Poulin said.

The students had about seven weeks to work on their assignment, and its creation involved hundreds of hours of pre-production, shooting and post-production. The result was an organic viral video that received three million views on its first day, Marquis-Poulin said.

He said he views the copycat AI baby-and-eagle videos as “junk.”

“It’s like plastic waste,” he said. “(Generative AI is) new technology that is interesting, that is an amazing tool, but at some point it gets in this cycle of production where you can’t get rid of it, and it just accumulates, accumulates, accumulates.”

GROWING DISTRUST ONLINE

The launch of the Sora 2 app generated immediate controversy, as some users chose to create fake videos featuring celebrities and public figures both dead and alive.

Family members of the late human rights activist Malcolm X and actor Robin Williams spoke out after Sora-produced videos featuring them went viral.

After a request from the estate of Martin Luther King Jr., OpenAI prevented Sora from producing videos featuring his likeness. Some users had created videos depicting King saying rude and racist things, which OpenAI acknowledged as “disrespectful.”

Sora has safeguards in place to give users control over whether others can use their likeness. It also blocks attempts to generate harmful content and embeds a watermark and metadata that let viewers trace videos back to Sora.

“Wherever there are guardrails, there are ways around them,” said Mike Zajko, a sociologist and associate professor at the University of British Columbia’s Okanagan Campus. For example, it’s possible to remove watermarks from AI tools, and it’s relatively simple to do.

Zajko said generative AI tools are creating distrust among internet users who can no longer believe what they see.

“One of the best skills to cultivate is more old-fashioned digital literacy … to use search engines or video image lookup tools to try to see whether or not these things actually happened,” he said.

He added that while platforms like Sora generally exaggerate how groundbreaking their tools are, “it has been clear for a while that video generation is kind of the next frontier.”

“I do think that this is a significant step forward and we’re going to continue to see further developments in video generation platforms. But fundamentally they are, like all (generative AI), repurposing, remixing, regurgitating cultural products that already exist. So they’re still not good for creativity, but they’re great at reworking existing bits of human culture,” he said.

INDUSTRY IMPACT

Marquis-Poulin said that while he hasn’t seen generative AI replace many jobs in his industry yet, “we are seeing it coming.”

He wants to see more regulation and takes issue with the fact that many AI models train on the work of artists without their consent. However, he said there could be a place for AI in the visual effects industry by replacing the more tedious technical aspects of the work.

“It’s kind of creating intense debate in the industry because some people are really interested in using it already … and some others are just like, ‘No, let’s never touch this. It’s going to destroy us, it’s going to replace us,'” he said.

HOW TO SPOT AI-GENERATED VIDEOS

1. Look for the watermark

Text-to-video generation models like Sora often include visible watermarks on their videos. However, these can be removed. In one fake video depicting workers being arrested by U.S. immigration authorities, the creator placed an icon over the spot where the Sora watermark would appear. Platforms like TikTok and YouTube sometimes note in a video’s description that the content was AI-generated.

2. Look for what’s missing or wrong

Missing body parts, text that doesn’t make sense and other unusual features can help identify AI content. For example, one AI-generated video of the CN Tower on fire included cars without license plates and an unnatural, glossy look. Another video, supposedly showing the destruction of Israel’s intelligence agency Mossad, spelled the agency’s name incorrectly.

3. Listen for unnatural speech

Listen for unnatural or stilted speech, mispronunciation of common words, awkward pauses and flat tones. Use a search engine to look up a phrase from the video. A Canadian Press search using a line from a video featuring former chief public health officer Dr. Theresa Tam led to a video with an identical script featuring the chief medical officer for England; both videos were fakes.

4. Fact check information

If a video makes a specific claim, use a search engine to see if legitimate sources — like government reports, peer-reviewed studies or trustworthy news outlets and fact-checking websites — are talking about the same thing. If the claim mentions a specific person, asking them directly can help debunk the false information.

5. Find the original source

Some online tools can help verify the authenticity of videos. Taking a screenshot of a video and running it through reverse-image search tools like Google Lens can help find the original source and determine whether the video is real.

This report by The Canadian Press was first published Nov. 21, 2025.

News from © The Canadian Press. All rights reserved.
This material may not be published, broadcast, rewritten or redistributed.


The Canadian Press


The Canadian Press is Canada's trusted news source and leader in providing real-time, bilingual multimedia stories across print, broadcast and digital platforms.