Sora (text-to-video model)
Sora is an upcoming text-to-video model developed by OpenAI. The model generates short video clips from user prompts and can also extend existing short videos. As of November 2024, Sora remains unreleased to the public, and OpenAI has announced no official release date.
History
Several other text-to-video models had been created prior to Sora, including Meta's Make-A-Video, Runway's Gen-2, and Google's Lumiere, the last of which, as of February 2024, was also still in its research phase. OpenAI, the company behind Sora, had released DALL·E 3, the third of its DALL·E text-to-image models, in September 2023.
The team that developed Sora named it after the Japanese word for "sky" to signify its "limitless creative potential". On February 15, 2024, OpenAI first previewed Sora by releasing multiple high-definition clips it had created, including an SUV driving down a mountain road, an animation of a "short fluffy monster" next to a candle, two people walking through Tokyo in the snow, and fake historical footage of the California gold rush. The company stated that the model could generate videos up to one minute long, and it shared a technical report highlighting the methods used to train the model. OpenAI CEO Sam Altman also posted a series of tweets responding to Twitter users' prompts with Sora-generated videos of those prompts.
OpenAI has stated that it plans to make Sora available to the public but that it would not be soon; it has not specified when. The company provided limited access to a small "red team", including experts in misinformation and bias, to perform adversarial testing on the model. The company also shared Sora with a small group of creative professionals, including video makers and artists, to seek feedback on its usefulness in creative fields.
Capabilities and limitations
The technology behind Sora is an adaptation of the technology behind DALL·E 3. According to OpenAI, Sora is a diffusion transformer – a denoising latent diffusion model with a Transformer as the denoiser. A video is generated in latent space by denoising 3D "patches", then transformed to standard space by a video decompressor. Re-captioning is used to augment training data: a video-to-text model creates detailed captions for the training videos.
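OpenAI has not published Sora's code, so the following is only a toy NumPy sketch of the pipeline described above: a video latent is split into 3D spacetime patches, iteratively denoised, and reassembled. The function names, patch sizes, and the trivial stand-in denoiser are all invented for illustration; a real diffusion transformer would be a learned Transformer network, and the video decompressor is omitted.

```python
import numpy as np

def patchify(latent, pt=2, ph=2, pw=2):
    """Split a latent video of shape (T, H, W, C) into flattened 3D spacetime patches."""
    T, H, W, C = latent.shape
    p = latent.reshape(T // pt, pt, H // ph, ph, W // pw, pw, C)
    p = p.transpose(0, 2, 4, 1, 3, 5, 6)          # group patch axes together
    return p.reshape(-1, pt * ph * pw * C)        # (num_patches, patch_dim)

def unpatchify(tokens, shape, pt=2, ph=2, pw=2):
    """Inverse of patchify: reassemble patch tokens into a (T, H, W, C) latent."""
    T, H, W, C = shape
    p = tokens.reshape(T // pt, H // ph, W // pw, pt, ph, pw, C)
    p = p.transpose(0, 3, 1, 4, 2, 5, 6)
    return p.reshape(T, H, W, C)

def toy_denoiser(tokens, t):
    # Stand-in for the diffusion transformer. Here it simply predicts that the
    # current tokens are pure noise; a real model would be a trained network
    # conditioned on the timestep t and a text embedding.
    return tokens

def sample(shape, steps=10, rng=None):
    """Generate a latent video by starting from noise and iteratively denoising."""
    rng = rng or np.random.default_rng(0)
    x = patchify(rng.standard_normal(shape))      # start from Gaussian noise
    for t in range(steps):
        eps_hat = toy_denoiser(x, t)
        x = x - eps_hat / steps                   # remove a fraction of predicted noise
    return unpatchify(x, shape)                   # a real system would then decode to pixels
```

The round trip `unpatchify(patchify(latent), latent.shape)` is lossless, which is the property that lets the Transformer operate on a flat sequence of patch tokens while the diffusion process still acts on the full spacetime volume.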
OpenAI trained the model using publicly available videos as well as copyrighted videos licensed for the purpose, but did not reveal the number or the exact sources of the videos. When previewing the model, OpenAI acknowledged some of Sora's shortcomings, including difficulty simulating complex physics, understanding causality, and distinguishing left from right. One example shows a group of wolf pups seemingly multiplying and converging, creating a hard-to-follow scenario. OpenAI also stated that, in adherence to the company's existing safety practices, Sora will restrict text prompts for sexual, violent, hateful, or celebrity imagery, as well as content featuring pre-existing intellectual property.
Tim Brooks, a researcher on Sora, stated that the model figured out how to create 3D graphics from its dataset alone, while Bill Peebles, also a Sora researcher, said that the model automatically created different video angles without being prompted. According to OpenAI, Sora-generated videos are tagged with C2PA metadata to indicate that they were AI-generated.
Reception
Will Douglas Heaven of the MIT Technology Review called the demonstration videos "impressive", but noted that they must have been cherry-picked and may not be representative of Sora's typical output. American academic Oren Etzioni expressed concerns over the technology's ability to create online disinformation for political campaigns. For Wired, Steven Levy similarly wrote that it had the potential to become "a misinformation train wreck" and opined that its preview clips were "impressive" but "not perfect" and that it "show[ed] an emergent grasp of cinematic grammar" due to its unprompted shot changes. Levy added, "[i]t will be a very long time, if ever, before text-to-video threatens actual filmmaking." Lisa Lacy of CNET called its example videos "remarkably realistic – except perhaps when a human face appears close up or when sea creatures are swimming".
Filmmaker Tyler Perry announced he would be putting a planned $800 million expansion of his Atlanta studio on hold, expressing concern about Sora's potential impact on the film industry.
See also
VideoPoet – Text-to-video model by Google
Dream Machine (text-to-video model)