OpenAI unveiled Sora, an AI model for converting text to video
OpenAI has announced a new video generation model called Sora, which can create realistic and imaginative videos of up to a minute long from a text description.
Here's What We Know
According to OpenAI, Sora can generate complex scenes with multiple characters, precisely placing objects and figures within the frame. The model can also produce different types of motion.
Introducing Sora, our text-to-video model. Sora can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions. https://t.co/7j2JN27M3W
Prompt: "Beautiful, snowy... pic.twitter.com/ruTEWn87vf
- OpenAI (@OpenAI) February 15, 2024
The developers highlighted the model's ability to render detailed backgrounds, individual objects and characters. It can also generate character faces with vivid and varied emotions.
Prompt: "A movie trailer featuring the adventures of the 30 year old space man wearing a red wool knitted motorcycle helmet, blue sky, salt desert, cinematic style, shot on 35mm film, vivid colours." pic.twitter.com/0JzpwPUGPB
- OpenAI (@OpenAI) February 15, 2024
Prompt: "Several giant wooly mammoths approach treading through a snowy meadow, their long wooly fur lightly blows in the wind as they walk, snow covered trees and dramatic snow capped mountains in the distance, mid afternoon light with wispy clouds and a sun high in the distance... pic.twitter.com/Um5CWI18nS
- OpenAI (@OpenAI) February 15, 2024
OpenAI claims that the model has a certain "understanding" of the physical laws of the real world. However, it sometimes struggles to render complex scenes accurately and to model cause-and-effect relationships.
In addition to synthesising videos from scratch, Sora can refine and extend existing videos, and it can fill in missing frames in a video sequence.
Sora is currently only available to "red teamers" who are evaluating the model for potential harms and risks. OpenAI has also opened up access to some visual artists, designers and filmmakers to get feedback.
Source: OpenAI