OpenAI’s Sora Turns AI Prompts Into Photorealistic Videos

We already know that OpenAI’s chatbots can pass the bar exam without going to law school. Now, just in time for the Oscars, a new OpenAI app called Sora hopes to master cinema without going to film school. For now a research product, Sora is going out to a few select creators and a number of security experts who will red-team it for safety vulnerabilities. OpenAI plans to make it available to all wannabe auteurs at some unspecified date, but it decided to preview it in advance.

Other companies, from giants like Google to startups like Runway, have already revealed text-to-video AI projects. But OpenAI says that Sora is distinguished by its striking photorealism—something I haven’t seen in its competitors—and its ability to produce longer clips than the brief snippets other models typically do, up to one minute. The researchers I spoke to won’t say how long it takes to render all that video, but when pressed, they described it as more in the “going out for a burrito” ballpark than “taking a few days off.” If the hand-picked examples I saw are to be believed, the effort is worth it.

OpenAI didn’t let me enter my own prompts, but it shared four instances of Sora’s power. (None approached the purported one-minute limit; the longest was 17 seconds.) The first came from a detailed prompt that sounded like an obsessive screenwriter’s setup: “Beautiful, snowy Tokyo city is bustling. The camera moves through the bustling city street, following several people enjoying the beautiful snowy weather and shopping at nearby stalls. Gorgeous sakura petals are flying through the wind along with snowflakes.”

AI-generated video made with OpenAI’s Sora. Courtesy of OpenAI

The result is a convincing view of what is unmistakably Tokyo, in that magic moment when snowflakes and cherry blossoms coexist. The virtual camera, as if affixed to a drone, follows a couple as they slowly stroll through a streetscape. One of the passersby is wearing a mask. Cars rumble by on a riverside roadway to their left, and to the right shoppers flit in and out of a row of tiny shops.

It’s not perfect. Only when you watch the clip a few times do you realize that the main characters—a couple strolling down the snow-covered sidewalk—would have faced a dilemma had the virtual camera kept running. The sidewalk they occupy seems to dead-end; they would have had to step over a small guardrail to a weird parallel walkway on their right. Despite this mild glitch, the Tokyo example is a mind-blowing exercise in world-building. Down the road, production designers will debate whether it’s a powerful collaborator or a job killer. Also, the people in this video—who are entirely generated by a digital neural network—aren’t shown in close-up, and they don’t do any emoting. But the Sora team says that in other instances they’ve had fake actors showing real emotions.

The other clips are also impressive, notably one asking for “an animated scene of a short fluffy monster kneeling beside a red candle,” along with some detailed stage directions (“wide eyes and open mouth”) and a description of the desired vibe of the clip. Sora produces a Pixar-esque creature that seems to have DNA from a Furby, a Gremlin, and Sulley from Monsters, Inc. I remember that when the latter film came out, Pixar made a huge deal of how difficult it was to create the ultra-complex texture of a monster’s fur as the creature moved around. It took all of Pixar’s wizards months to get it right. OpenAI’s new text-to-video machine … just did it.

Expect Sora to have the same restrictions on content as Dall-E 3: no violence, no porn, no appropriating real people or the style of named artists. Also as with Dall-E 3, OpenAI will provide a way for viewers to identify the output as AI-created. Even so, OpenAI says that safety and veracity are an ongoing problem that’s bigger than one company. “The solution to misinformation will involve some level of mitigations on our part, but it will also need understanding from society and for social media networks to adapt as well,” says Aditya Ramesh, lead researcher and head of the Dall-E team.

AI-generated video made with the prompt “several giant wooly mammoths approach treading through a snowy meadow, their long wooly fur lightly blows in the wind as they walk, snow covered trees and dramatic snow capped mountains in the distance, mid afternoon light with wispy clouds and a sun high in the distance creates a warm glow, the low camera view is stunning capturing the large furry mammal with beautiful photography, depth of field.” Courtesy of OpenAI

Another potential issue is whether the content of the video Sora produces will infringe on the copyrighted work of others. “The training data is from content we’ve licensed and also publicly available content,” says Bill Peebles, a research lead on Sora. Of course, a number of lawsuits against OpenAI hinge on the question of whether “publicly available” copyrighted content is fair game for AI training.

It will be a very long time, if ever, before text-to-video threatens actual filmmaking. No, you can’t make coherent movies by stitching together 120 of the minute-long Sora clips, since the model won’t respond to prompts in the exact same way—continuity isn’t possible. But the time limit is no barrier for Sora and programs like it to transform TikTok, Reels, and other social platforms. “In order to make a professional movie, you need so much expensive equipment,” says Peebles. “This model is going to empower the average person making videos on social media to make very high-quality content.”

For now, OpenAI is faced with the huge task of making sure that Sora isn’t a misinformation train wreck. But after that, the long countdown begins until the next Christopher Nolan or Celine Song gets a statuette for wizardry in prompting an AI model. The envelope, please!

Source: Wired