OpenAI Announces Sora: Groundbreaking New Text-to-Video AI Model

OpenAI has just announced its latest creation, Sora, and it's blowing people's minds across X right now. It's an artificial intelligence system that generates videos from text prompts, but unlike previous text-to-video tools, its output videos are actually really good. Sora represents a major leap forward in text-to-video generation technology.

In this post, we'll go over the highlights from the recent announcement and break down the most important details, like availability and features.

What is Sora?

Sora is an exciting new AI model from OpenAI that can generate realistic and creative video content from simple text prompts. It is an example of a class of AI models known as text-to-video generators. Sora takes a written description of a video scene and renders an actual video depicting it. The system is able to produce videos up to 1 minute in length that match the requested visuals with a high degree of accuracy. For example, it can reliably render videos with multiple characters engaged in specific motions and activities described by the user. The generated videos maintain consistency in things like characters, backgrounds, and other elements even as the camera angle changes.

Key Points

  • Sora is an AI model trained by OpenAI to create realistic and imaginative video scenes from text instructions.
  • It can generate videos up to 1 minute long with good visual quality and faithfulness to prompts.
  • The model understands physical properties of scenes and characters, allowing complex and coherent generations.
  • Sora represents some of the most advanced text-to-visual generation capabilities seen to date in an AI system.

Key Capabilities & Features

  • Generate intricate scenes with multiple characters and elements
  • Simulate realistic motion and physics
  • Maintain visual consistency, with characters and elements persisting properly despite camera angle changes
  • Produce multiple camera angle shots within a single generated video
  • Animate still images by extending them into videos
  • Fill in missing frames of existing video footage

Example Sora Videos

In the announcement, OpenAI included many example videos generated with Sora. The examples cover a wide range of subjects and styles; we'll share a few of our favorites below. To view all of the example videos, check out the official blog post from OpenAI. Every one of these videos was fully generated by AI!

Prompt: “Several giant wooly mammoths approach treading through a snowy meadow, their long wooly fur lightly blows in the wind as they walk, snow covered trees and dramatic snow capped mountains in the distance, mid afternoon light with wispy clouds and a sun high in the distance creates a warm glow, the low camera view is stunning capturing the large furry mammal with beautiful photography, depth of field.”

Prompt: “A movie trailer featuring the adventures of the 30 year old space man wearing a red wool knitted motorcycle helmet, blue sky, salt desert, cinematic style, shot on 35mm film, vivid colors.”

Prompt: “A gorgeously rendered papercraft world of a coral reef, rife with colorful fish and sea creatures.”

So, How Does Sora Work?

Time for the technical stuff, for those wondering how Sora actually manages to generate these videos. Sora uses a diffusion architecture: generation starts from pure noise, which is progressively refined into crisp, coherent video over many steps. It also employs a transformer backbone, which allows the model to scale effectively. Sora is trained on large datasets of captioned images and videos, which helps it ground visual concepts and match words to the objects they describe.
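Sora's exact architecture isn't public, so as a rough illustration only, here's a toy sketch of the core diffusion idea described above: start from pure noise and repeatedly subtract a predicted noise estimate until a clean signal emerges. The `TARGET` signal and the "perfect" noise predictor are stand-ins for what a trained network would learn; nothing here is Sora's actual code.

```python
import random

random.seed(0)

# Hypothetical "clean" target signal (a stand-in for one video frame's pixels).
TARGET = [i / 15.0 for i in range(16)]

def predict_noise(x):
    """Toy stand-in for the learned denoiser. In a real diffusion model,
    a neural network (reportedly a transformer in Sora's case) predicts
    the noise present in x, conditioned on the text prompt."""
    return [xi - ti for xi, ti in zip(x, TARGET)]  # pretend perfect prediction

# Start from pure noise and progressively refine it over many steps.
x = [random.gauss(0.0, 1.0) for _ in TARGET]
steps = 50
for k in range(steps):
    eps = predict_noise(x)
    # Remove a fraction of the predicted noise at each step.
    x = [xi - ei / (steps - k) for xi, ei in zip(x, eps)]

max_err = max(abs(xi - ti) for xi, ti in zip(x, TARGET))
print(f"max error after denoising: {max_err:.2e}")
```

The real model does this on compressed video "patches" with a learned network in place of our perfect predictor, but the loop structure (noise in, gradual refinement out) is the same basic recipe.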

OpenAI does note that Sora still has some weaknesses in accurately modeling physics and complex interactions between multiple elements. But its capabilities clearly showcase the rapid progress of AI in understanding and simulating the real world through video. Compared with the AI video generators that came before it, Sora's output is noticeably sharper and shows fewer artifacts.

When Will Sora Be Available?

Access to Sora is being initially granted to select groups for testing and feedback, such as red team researchers, visual artists, designers, and filmmakers. This is mostly to assess risks, get creative professional input, and identify positive use cases.

Sadly, no timeline has been mentioned yet for when Sora may become publicly available in OpenAI's products or APIs. The phrasing indicates it is still in an early research stage. Once deployed to the public, we expect it will likely follow pricing models similar to the ChatGPT Plus subscription. There could potentially be a free tier with usage limits, and paid subscription plans with higher limits.

OpenAI states “We’ll be taking several important safety steps ahead of making Sora available in OpenAI’s products.” This suggests extensive testing and safeguards will be put in place before any potential full launch.

So, in summary: Sora is not publicly or freely available right now. We're sorry to disappoint! But we'll be sure to update this page as soon as OpenAI releases any statement or update regarding an official or beta release of Sora.