Meta debuts AI filmmaker in challenge to OpenAI’s Sora

Meta is showcasing artificial intelligence models that can generate realistic videos from text instructions, competing with rival offerings from OpenAI and Runway aimed at filmmakers and content creators.

Movie Gen is a suite of storytelling models that can be used for a range of tasks, such as generating videos up to 16 seconds long, editing video, matching sounds to videos and personalising videos with specific images.

The Instagram and Facebook owner plans to offer video-generation tools to the Hollywood filmmakers, artists and influencers who make content on its social networks. OpenAI announced its own video-generation model, Sora, in February, and has been showing it off to the film industry, although it has not yet been released as a product. 

While Meta released some examples of videos generated by its models on Friday, it said it did not anticipate the models being integrated into its platforms for users until next year at the earliest. 

“Right now . . . if you were using a video editing or video-generation feature in Instagram, that would probably not meet your expectations of how fast you would want something like that to be,” said Connor Hayes, vice-president of generative AI products at Meta. “But broadly speaking, you could imagine these models being really powerful for things like Reels creation, Reels editing across the family of apps, and that’s the direction that we’re looking at for where we can apply it.” Reels is Instagram’s video creation and sharing feature.

The video-generation push is part of an effort by tech companies to make tools that can be used more broadly in the entertainment industry, including advertising, as they look for ways to monetise their AI advancements. Runway, an AI video-generation start-up, signed a deal last month with entertainment company Lionsgate to train a custom model on its library of films, including Twilight and The Hunger Games.

Meta claimed its videos surpassed those produced by rival models, such as Sora and Runway, for “overall quality, motion, naturalness and consistency”, citing blind human evaluations.

Meta said its models were trained on “a combination of licensed and publicly available data sets”, but declined to give further details. It has previously used public content from its platforms, such as Facebook and Instagram, to train its AI.

The realistic nature of AI-generated videos — and the ability to replicate people’s likenesses within them — has raised concerns among workers, including actors and production staff, about how such tools may affect their jobs in future.

“While there are many exciting use cases for these foundation models, it’s important to note that generative AI isn’t a replacement for the work of artists and animators,” Meta said, emphasising that it would continue to seek feedback from filmmakers and creators. 

Meta said it would watermark any videos generated by the model in order to avoid copyright concerns and issues that might arise with deepfakes. “These are many of the challenges that we’re going to have to work through before we can responsibly put a product out there, and that’s also a big part of why this is purely a research announcement right now,” Hayes added.