Title: Video generation with Sora | OpenAI API
URL Source: https://platform.openai.com/docs/guides/video-generation
Published Time: Wed, 04 Mar 2026 01:52:45 GMT

Video generation with Sora
==========================

Create, iterate on, and manage videos with the Videos API.
Overview
--------

Sora is OpenAI's newest frontier in generative media: a state-of-the-art video model capable of creating richly detailed, dynamic clips with audio from natural language or images. Built on years of research into multimodal diffusion and trained on diverse visual data, Sora brings a deep understanding of 3D space, motion, and scene continuity to text-to-video generation.

The Videos API (https://platform.openai.com/api/reference/resources/videos) (in preview) exposes these capabilities to developers for the first time, enabling programmatic creation, extension, and remixing of videos. It provides five endpoints, each with distinct capabilities:

- Create video: Start a new render job from a prompt, with optional reference inputs or a remix ID.
- Get video status: Retrieve the current state of a render job and monitor its progress.
- Download video: Fetch the finished MP4 once the job is completed.
- List videos: Enumerate your videos with pagination for history, dashboards, or housekeeping.
- Delete video: Remove an individual video ID from OpenAI's storage.
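The five endpoints above map onto a small set of HTTP routes. As a rough sketch (the paths follow the endpoint descriptions in this guide; the helper names are illustrative, not part of any SDK):

```python
# Illustrative helpers for the Videos API routes described above.
# BASE_URL is the standard OpenAI API host used in this guide's curl example.
BASE_URL = "https://api.openai.com/v1"


def create_video_url() -> str:
    """POST here to start a new render job (also used by GET to list videos)."""
    return f"{BASE_URL}/videos"


def video_status_url(video_id: str) -> str:
    """GET here to check a job's status; DELETE here to remove the video."""
    return f"{BASE_URL}/videos/{video_id}"


def video_content_url(video_id: str) -> str:
    """GET here to download the finished MP4 once the job is completed."""
    return f"{BASE_URL}/videos/{video_id}/content"
```

Each helper returns the URL for one of the five operations; the create/list pair share a route and differ only in HTTP method, as do status and delete.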
Models
------

The second-generation Sora model comes in two variants, each tailored for different use cases.

### Sora 2

sora-2 is designed for speed and flexibility. It's ideal for the exploration phase, when you're experimenting with tone, structure, or visual style and need quick feedback rather than perfect fidelity. It generates good-quality results quickly, making it well suited for rapid iteration, concepting, and rough cuts. sora-2 is often more than sufficient for social media content, prototypes, and scenarios where turnaround time matters more than ultra-high fidelity.

### Sora 2 Pro

sora-2-pro produces higher-quality results. It's the better choice when you need production-quality output. sora-2-pro takes longer to render and is more expensive to run, but it produces more polished, stable results. It's best for high-resolution cinematic footage, marketing assets, and any situation where visual precision is critical.

Generate a video
----------------

Generating a video is an asynchronous process: when you call the POST /videos endpoint, the API returns a job object with a job id and an initial status. You can either poll the GET /videos/{video_id} endpoint until the status transitions to completed, or, for a more efficient approach, use webhooks (see the webhooks section below) to be notified automatically when the job finishes. Once the job has reached the completed state, you can fetch the final MP4 file with GET /videos/{video_id}/content.

### Start a render job

Start by calling POST /videos with a text prompt and the required parameters. The prompt defines the creative look and feel (subjects, camera, lighting, and motion) while parameters like size and seconds control the video's resolution and length.
Create a video

```javascript
import OpenAI from 'openai';

const openai = new OpenAI();

let video = await openai.videos.create({
  model: 'sora-2',
  prompt: "A video of the words 'Thank you' in sparkling letters",
});

console.log('Video generation started: ', video);
```

```python
from openai import OpenAI

openai = OpenAI()

video = openai.videos.create(
    model="sora-2",
    prompt="A video of a cool cat on a motorcycle in the night",
)

print("Video generation started:", video)
```

```bash
curl -X POST "https://api.openai.com/v1/videos" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: multipart/form-data" \
  -F prompt="Wide tracking shot of a teal coupe driving through a desert highway, heat ripples visible, hard sun overhead." \
  -F model="sora-2-pro" \
  -F size="1280x720" \
  -F seconds="8"
```

The response is a JSON object with a unique id and an initial status such as queued or in_progress, meaning the render job has started.

```json
{
  "id": "video_68d7512d07848190b3e45da0ecbebcde004da08e1e0678d5",
  "object": "video",
  "created_at": 1758941485,
  "status": "queued",
  "model": "sora-2-pro",
  "progress": 0,
  "seconds": "8",
  "size": "1280x720"
}
```

### Guardrails and restrictions

The API enforces several content restrictions:

- Only content suitable for audiences under 18 is allowed (a setting to bypass this restriction will be available in the future).
- Prompts involving copyrighted characters or copyrighted music will be rejected.
- Real people, including public figures, cannot be generated.
- Input images containing human faces are currently rejected.

Make sure prompts, reference images, and transcripts respect these rules to avoid failed generations.

### Effective prompting

For best results, describe shot type, subject, action, setting, and lighting.
For example:

- _“Wide shot of a child flying a red kite in a grassy park, golden hour sunlight, camera slowly pans upward.”_
- _“Close-up of a steaming coffee cup on a wooden table, morning light through blinds, soft depth of field.”_

This level of specificity helps the model produce consistent results without inventing unwanted details. For more advanced prompting techniques, refer to our dedicated Sora 2 prompting guide (https://platform.openai.com/cookbook/examples/sora/sora2_prompting_guide).

### Monitor progress

Video generation takes time. Depending on the model, API load, and resolution, a single render may take several minutes. To manage this efficiently, you can poll the API for status updates or get notified via a webhook.

#### Poll the status endpoint

Call GET /videos/{video_id} with the id returned from the create call. The response shows the job’s current status, progress percentage (if available), and any errors. Typical states are queued, in_progress, completed, and failed. Poll at a reasonable interval (for example, every 10–20 seconds), use exponential backoff if necessary, and give users feedback that the job is still in progress.

Poll the status endpoint

```javascript
import OpenAI from 'openai';

const openai = new OpenAI();

async function main() {
  const video = await openai.videos.createAndPoll({
    model: 'sora-2',
    prompt: "A video of the words 'Thank you' in sparkling letters",
  });

  if (video.status === 'completed') {
    console.log('Video successfully completed: ', video);
  } else {
    console.log('Video creation failed. Status: ', video.status);
  }
}

main();
```

```python
import asyncio

from openai import AsyncOpenAI

client = AsyncOpenAI()


async def main() -> None:
    video = await client.videos.create_and_poll(
        model="sora-2",
        prompt="A video of a cat on a motorcycle",
    )

    if video.status == "completed":
        print("Video successfully completed: ", video)
    else:
        print("Video creation failed. Status: ", video.status)


asyncio.run(main())
```

Response example:

```json
{
  "id": "video_68d7512d07848190b3e45da0ecbebcde004da08e1e0678d5",
  "object": "video",
  "created_at": 1758941485,
  "status": "in_progress",
  "model": "sora-2-pro",
  "progress": 33,
  "seconds": "8",
  "size": "1280x720"
}
```

#### Use webhooks for notifications

Instead of polling job status repeatedly with GET, register a webhook (https://platform.openai.com/api/docs/guides/webhooks) to be notified automatically when a video generation completes or fails. Webhooks can be configured in your webhook settings page (https://platform.openai.com/settings/project/webhooks).

When a job finishes, the API emits one of two event types: video.completed or video.failed. Each event includes the ID of the job that triggered it. Example webhook payload:

```json
{
  "id": "evt_abc123",
  "object": "event",
  "created_at": 1758941485,
  "type": "video.completed", // or "video.failed"
  "data": { "id": "video_abc123" }
}
```

### Retrieve results

#### Download the MP4

Once the job reaches status completed, fetch the MP4 with GET /videos/{video_id}/content. This endpoint streams the binary video data and returns standard content headers, so you can either save the file directly to disk or pipe it to cloud storage.
Download the MP4

```javascript
import fs from 'fs';
import OpenAI from 'openai';

const openai = new OpenAI();

let video = await openai.videos.create({
  model: 'sora-2',
  prompt: "A video of the words 'Thank you' in sparkling letters",
});

console.log('Video generation started: ', video);

let progress = video.progress ?? 0;

while (video.status === 'in_progress' || video.status === 'queued') {
  video = await openai.videos.retrieve(video.id);
  progress = video.progress ?? 0;

  // Simple ASCII progress visualization for terminal output
  const barLength = 30;
  const filledLength = Math.floor((progress / 100) * barLength);
  const bar = '='.repeat(filledLength) + '-'.repeat(barLength - filledLength);
  const statusText = video.status === 'queued' ? 'Queued' : 'Processing';

  process.stdout.write(`\r${statusText}: [${bar}] ${progress.toFixed(1)}%`);

  await new Promise((resolve) => setTimeout(resolve, 2000));
}

// Move past the progress line and show completion
process.stdout.write('\n');

if (video.status === 'failed') {
  console.error('Video generation failed');
  process.exit(1);
}

console.log('Video generation completed: ', video);
console.log('Downloading video content...');

const content = await openai.videos.downloadContent(video.id);
const buffer = Buffer.from(await content.arrayBuffer());
fs.writeFileSync('video.mp4', buffer);

console.log('Wrote video.mp4');
```

```bash
curl -L "https://api.openai.com/v1/videos/video_abc123/content" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  --output video.mp4
```

```python
import sys
import time

from openai import OpenAI

openai = OpenAI()

video = openai.videos.create(
    model="sora-2",
    prompt="A video of a cool cat on a motorcycle in the night",
)

print("Video generation started:", video)

bar_length = 30

while video.status in ("in_progress", "queued"):
    # Refresh status
    video = openai.videos.retrieve(video.id)
    progress = getattr(video, "progress", 0) or 0

    filled_length = int((progress / 100) * bar_length)
    bar = "=" * filled_length + "-" * (bar_length - filled_length)
    status_text = "Queued" if video.status == "queued" else "Processing"

    sys.stdout.write(f"\r{status_text}: [{bar}] {progress:.1f}%")
    sys.stdout.flush()

    time.sleep(2)

# Move to the next line after the progress loop
sys.stdout.write("\n")

if video.status == "failed":
    message = getattr(
        getattr(video, "error", None), "message", "Video generation failed"
    )
    print(message)
    sys.exit(1)

print("Video generation completed:", video)
print("Downloading video content...")

content = openai.videos.download_content(video.id, variant="video")
content.write_to_file("video.mp4")

print("Wrote video.mp4")
```

You now have the final video file ready for playback, editing, or distribution. Download URLs are valid for a maximum of 1 hour after generation; if you need long-term storage, copy the file to your own storage system promptly.

#### Download supporting assets

For each completed video, you can also download a thumbnail and a spritesheet. These lightweight assets are useful for previews, scrubbers, or catalog displays. Use the variant query parameter to specify what you want to download; the default, variant=video, returns the MP4.

```bash
# Download a thumbnail
curl -L "https://api.openai.com/v1/videos/video_abc123/content?variant=thumbnail" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  --output thumbnail.webp

# Download a spritesheet
curl -L "https://api.openai.com/v1/videos/video_abc123/content?variant=spritesheet" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  --output spritesheet.jpg
```

Use image references
--------------------

You can guide a generation with an input image, which acts as the first frame of your video.
This is useful when the output video needs to preserve the look of a brand asset, a character, or a specific environment. Include an image file as the input_reference parameter in your POST /videos request. The image must match the target video’s resolution (size). Supported file formats are image/jpeg, image/png, and image/webp.

```bash
curl -X POST "https://api.openai.com/v1/videos" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: multipart/form-data" \
  -F prompt="She turns around and smiles, then slowly walks out of the frame." \
  -F model="sora-2-pro" \
  -F size="1280x720" \
  -F seconds="8" \
  -F input_reference="@sample_720p.jpeg;type=image/jpeg"
```

| Input image generated with OpenAI GPT Image (https://platform.openai.com/api/docs/guides/image-generation) | Generated video using Sora 2 (converted to GIF) |
| --- | --- |
| Download this image (https://cdn.openai.com/API/docs/images/sora/woman_skyline_original_720p.jpeg) | Prompt: _“She turns around and smiles, then slowly walks out of the frame.”_ |
| Download this image (https://cdn.openai.com/API/docs/images/sora/monster_original_720p.jpeg) | Prompt: _“The fridge door opens. A cute, chubby purple monster comes out of it.”_ |

Remix completed videos
----------------------

Remix lets you take an existing video and make targeted adjustments without regenerating everything from scratch. Provide the remix_video_id of a completed job along with a new prompt that describes the change, and the system reuses the original’s structure, continuity, and composition while applying the modification. This works best when you make a single, well-defined change: smaller, focused edits preserve more of the original’s fidelity and reduce the risk of introducing artifacts.
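The remix endpoint is addressed by the ID of the completed source video, with a JSON body containing only the new prompt. A small illustrative helper that assembles such a request (hypothetical, not an SDK function):

```python
import json

BASE_URL = "https://api.openai.com/v1"


def build_remix_request(video_id: str, prompt: str) -> tuple[str, str]:
    """Return (url, json_body) for a remix call against a completed video."""
    if not video_id:
        raise ValueError("remix requires the ID of a completed video")
    url = f"{BASE_URL}/videos/{video_id}/remix"
    body = json.dumps({"prompt": prompt})
    return url, body
```

POST the returned body to the returned URL with your usual HTTP client and Authorization header, as in the curl example that follows.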
```bash
curl -X POST "https://api.openai.com/v1/videos/video_abc123/remix" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "Shift the color palette to teal, sand, and rust, with a warm backlight."
  }'
```

Remix is especially valuable for iteration because it lets you refine without discarding what already works. By constraining each remix to one clear adjustment, you keep the visual style, subject consistency, and camera framing stable while still exploring variations in mood, palette, or staging. This makes it far easier to build polished sequences through small, reliable steps.

| Original video | Remix-generated video |
| --- | --- |
| _(video)_ | Prompt: _“Change the color of the monster to orange.”_ |
| _(video)_ | Prompt: _“A second monster comes out right after.”_ |

Maintain your library
---------------------

Use GET /videos to enumerate your videos. The endpoint supports optional query parameters for pagination and sorting.

```bash
# default
curl "https://api.openai.com/v1/videos" \
  -H "Authorization: Bearer $OPENAI_API_KEY" | jq .
```

```bash
# with params
curl "https://api.openai.com/v1/videos?limit=20&after=video_123&order=asc" \
  -H "Authorization: Bearer $OPENAI_API_KEY" | jq .
```

Use DELETE /videos/{video_id} to remove videos you no longer need from OpenAI’s storage.

```bash
curl -X DELETE "https://api.openai.com/v1/videos/[REPLACE_WITH_YOUR_VIDEO_ID]" \
  -H "Authorization: Bearer $OPENAI_API_KEY" | jq .
```
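If you prune your library on a schedule, list and delete combine naturally: page through GET /videos, select candidates, then issue deletes. A sketch of the selection step (field names follow the video objects shown earlier; the list shape and the helper itself are assumptions for illustration):

```python
def stale_video_ids(videos: list[dict], cutoff: int) -> list[str]:
    """Pick completed videos created before `cutoff` (Unix seconds) for deletion.

    `videos` is assumed to be the list of video objects returned by GET /videos,
    each carrying `id`, `status`, and `created_at` as in the examples above.
    """
    return [
        video["id"]
        for video in videos
        if video.get("status") == "completed" and video.get("created_at", 0) < cutoff
    ]
```

Feed the returned IDs to DELETE /videos/{video_id}; jobs still queued or in progress are skipped so you never delete a render mid-flight.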