IShowSpeed vs Sora 2 Deepfakes

The explosive collision between viral fame and unrestrained AI creativity has hit another boiling point. Internet sensation Darren Jason Watkins Jr., better known as IShowSpeed, recently found himself at the center of a deepfake controversy after discovering dozens of bizarre and invasive videos made with OpenAI’s Sora 2 tool – all starring his face, voice, and personality doing things he never actually did.
During a recent livestream, the 20-year-old YouTuber – who commands over 44.9 million subscribers – stumbled into an uncanny nightmare: videos depicting him kissing fans, racing wild animals, and even traveling to places he has never been. 
One particularly unsettling clip showed a “coming out” confession, falsely claiming Speed had revealed he was gay. What started as lighthearted disbelief quickly turned into anger and unease.
“I’m turning this s**t off,” Speed exclaimed on stream, visibly rattled. “Why does this look too real? Bro, that’s like, my face. Why do I keep coming out?!” His frustration grew as he realized how difficult it would be to manually track and remove each fake video.
Ironically, IShowSpeed had previously allowed Sora 2 to use his likeness – likely under the assumption that it would be harmless or even fun. But as the stream spiraled, he lashed out at his own audience for encouraging him to opt in. “Whoever told me to make it public, you’re not here for my safety, bro,” he said. “I’m f***ed, chat.”
The final straw came when he stumbled upon a deepfake showing him holding a newborn baby with a caption describing the child as trans. The shock was evident; Speed abruptly ended the stream, calling the whole experience “violating” and “creepy.”
This disturbing incident unfolded just weeks after OpenAI faced backlash for Sora 2’s portrayal of historical figures, particularly a deepfake video of Dr. Martin Luther King Jr., which his estate deemed disrespectful. OpenAI responded by temporarily pausing the creation of images involving King, promising to tighten safeguards for historical figures. However, celebrities and internet personalities remain fair game under the app’s opt-in system – a loophole that is increasingly being tested by both trolls and overzealous fans.
Sora 2, released on October 1, allows users to generate 20-second HD videos featuring realistic motion and sound. It can combine faces, voices, and even stylistic cues from existing footage. In the weeks following its release, social media platforms were flooded with videos featuring not only celebrities like IShowSpeed, but also fictional characters from franchises such as Pokémon, Mario, One Piece, and Demon Slayer. The result has been an ethical minefield of copyright infringement, digital impersonation, and emotional violation.
OpenAI’s CEO, Sam Altman, tried to address the chaos in an October 4 blog post, promising new controls for rightsholders. “We’ll give creators more granular control,” he wrote, adding that they would be able to specify exactly how – or whether – their likenesses and characters can be used. Altman described Sora 2’s deepfakes as “interactive fan fiction,” but legal experts are less forgiving. “A lot of these videos will infringe copyright,” said Stanford Law professor Mark Lemley. “OpenAI is opening itself up to quite a lot of lawsuits by doing this.”
Companies like Nintendo and Disney have already responded aggressively. Nintendo vowed to take “necessary action” against intellectual property violations, while Disney and Universal have filed lawsuits against AI art platform Midjourney for using copyrighted characters without authorization. Even The Pokémon Company stepped in after a U.S. government video bizarrely featured Ash Ketchum and the series’ theme song. “Our company was not involved in the creation or distribution of this content,” a spokesperson said.
For IShowSpeed, the line between meme and violation has blurred. The livestream highlighted a broader cultural moment – where digital likenesses can be copied, remixed, and weaponized in seconds. What used to be satire or fan art has evolved into something more invasive, especially when realism crosses ethical boundaries.
The incident echoes a similar plea from filmmaker Zelda Williams, daughter of Robin Williams, who recently begged fans to stop sending AI-generated videos of her late father. “Please, just stop,” she wrote on Instagram. “It’s not what he’d want.”
As AI tools become more sophisticated, the question of consent grows murkier. Should opting in once mean eternal access? What happens when likeness becomes community property? For creators like IShowSpeed, who live online, the stakes are personal and immediate. The same technology that amplifies fame can just as easily distort identity.
And while some viewers found the situation hilarious, others saw it as a glimpse into a dystopian future where truth is negotiable and privacy is a relic. One viewer summarized the growing concern perfectly: “We’re entering an era where we can’t trust anything – videos, news, even science.” It’s a digital Pandora’s box, and no one seems able to close it.
As the dust settles, IShowSpeed’s outburst feels less like an overreaction and more like an early warning. Deepfakes are no longer an experiment – they are entertainment, harassment, and misinformation rolled into one powerful new medium. And until regulation catches up, every creator online is a potential target.
Whether you laugh, cringe, or worry, one thing’s clear: the world just witnessed another reminder that AI creativity, without consent or restraint, can easily become chaos disguised as innovation.