Create cinematic AI videos with ByteDance Seedance 2.0. Combine images, videos, audio, and text with multi-modal control, precise reference capabilities, and natural language prompting.
Drop your images, videos, or audio here
PNG, JPG, MP4, WAV — up to 12 files
The global API release is coming soon. Stay tuned for the latest updates on availability and the official launch announcement.
A truly controllable multi-modal AI video model by ByteDance. Reference anything, edit anything, create anything.
Upload up to 9 images, 3 videos (15s total), and 3 audio files. Combine text, images, videos, and audio freely for unprecedented creative control.
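The upload limits above (9 images, 3 videos with 15 seconds combined, 3 audio files, 12 files total) can be checked before submission. This is an illustrative sketch only; the function and its field names are not part of any official Seedance tool.

```python
# Hypothetical pre-upload check for Seedance 2.0's stated input limits:
# up to 9 images, 3 videos (15 s combined), 3 audio files, 12 files total.
# Names here are illustrative, not from an official SDK.

def check_upload_limits(images, videos, audio):
    """images/audio: file counts; videos: list of durations in seconds."""
    errors = []
    if images > 9:
        errors.append("too many images (max 9)")
    if len(videos) > 3:
        errors.append("too many videos (max 3)")
    if sum(videos) > 15:
        errors.append("combined video length exceeds 15 s")
    if audio > 3:
        errors.append("too many audio files (max 3)")
    if images + len(videos) + audio > 12:
        errors.append("more than 12 files total")
    return errors

print(check_upload_limits(9, [5, 5, 5], 0))  # within every limit -> []
print(check_upload_limits(8, [10, 8], 3))    # 18 s of video, 13 files total
```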
Reference motion, effects, camera movements, characters, scenes, and sounds from any uploaded content. Simply describe what you want in natural language.
Maintain perfect consistency for faces, clothing, text, scenes, and visual styles across your entire video. No more character drift or style inconsistencies.
Upload a reference video to replicate complex choreography, cinematic camera movements, and action sequences. Just show what you want.
Smoothly extend existing videos, merge multiple clips, or edit specific segments. Replace characters, add elements, or modify actions seamlessly.
Automatically generate context-aware sound effects and background music. Sync video to uploaded audio or music beats for timed creative content.
From viral content to professional productions, ByteDance Seedance 2.0 empowers creators across industries.
Create compelling promotional content by referencing successful ad templates with your own products and branding.
Create animated explanations, historical reconstructions, and interactive learning materials that bring lessons to life.
Craft unique narratives using multi-modal inputs. Reference film techniques, replicate cinematic styles across scenes.
Generate scroll-stopping content by referencing trending templates and effects. Replicate viral formats with your own twist.
Upload reference choreography and apply it to any character. Perfect for dance covers and action sequences.
Extend existing videos seamlessly, merge clips, or edit segments without regenerating everything from scratch.
Reference film clips to replicate camera movements, transitions, and visual effects before actual production.
Transform property photos into immersive virtual tours with dynamic walkthroughs and atmospheric presentations.
Upload audio and create perfectly beat-synced videos. Generate sound effects that match your visual content.
Three simple steps to transform your ideas into cinematic AI-generated videos.
Upload images, videos, or audio files as references. Combine up to 12 files across different modalities to express your creative vision.
Use natural language to describe what you want. Reference specific assets by tagging them, like 'Use @image1 as the first frame with @video1's camera movement.'
Generate a video between 4 and 15 seconds long. Extend, edit, or refine your creation by uploading the result and making targeted adjustments.
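The @-tag prompting convention from step two can be sketched as a small check that every asset a prompt references was actually uploaded. The tag format follows the site's own examples (@image1, @video1); the helper itself is a hypothetical illustration, not an official tool.

```python
import re

# Illustrative sketch of the @-tag prompting convention. Tag names
# (@image1, @video1, ...) follow the examples above; this validation
# helper is hypothetical, not part of any official Seedance tooling.

def referenced_tags(prompt):
    """Extract asset tags like @image1 or @video2 from a prompt."""
    return re.findall(r"@(image\d+|video\d+|audio\d+)", prompt)

uploads = {"image1", "video1"}
prompt = "Use @image1 as the first frame with @video1's camera movement."

tags = referenced_tags(prompt)
missing = [t for t in tags if t not in uploads]
print(tags)     # ['image1', 'video1']
print(missing)  # [] -- every referenced asset was uploaded
```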
Flexible plans for every creator. API pricing details will be announced upon global launch.
Everything you need to know about ByteDance Seedance 2.0 and its upcoming API release.
Seedance 2.0 is ByteDance's latest AI video generation model launched in February 2026. It's a truly multi-modal model that can combine images, videos, audio, and text prompts to generate high-quality cinematic videos with precise reference capabilities and natural language control.
The global API release was originally planned for February 24, 2026, but has been indefinitely postponed due to copyright concerns raised by the Motion Picture Association (MPA). ByteDance is implementing enhanced copyright protection and deepfake defense measures. No new release date has been announced yet. Stay tuned for updates.
Currently, Seedance 2.0 is available on ByteDance's Jimeng (即梦) platform for paying members in China. Several third-party platforms like Kie.ai, PiAPI, and Fal.ai offer early API access. We'll update this page when the official global API launches.
Seedance 2.0's key differentiator is its true multi-modal capability. You can upload up to 9 images, 3 videos, and 3 audio files simultaneously, then use natural language to describe how these elements should be combined. Features like "Reference Anything" allow you to replicate motion, camera movements, and visual styles from uploaded content — something most competing models cannot do.
Seedance 2.0 can generate videos in multiple aspect ratios (16:9, 9:16, 1:1, etc.) with up to 4K resolution. Video length ranges from 4 to 15 seconds per generation, with the ability to extend and chain videos for longer content.
Upload any image, video, or audio file, and use natural language tags (like @image1, @video2) to tell the model what to reference. You can replicate a character's appearance from one image, apply camera movements from a video clip, and sync to an uploaded music track — all in a single generation.
Yes, ByteDance plans to offer the Seedance 2.0 API through BytePlus for global developers. The API will support RESTful endpoints for text-to-video, image-to-video, and multi-modal generation. Pricing and documentation will be available upon launch. Meanwhile, some third-party platforms already offer unofficial API access.
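Since no official schema has been published, any request shape is speculative. The sketch below shows what a multi-modal generation payload might look like once the BytePlus API ships; the model name, field names, and URLs are all assumptions for illustration only.

```python
import json

# HYPOTHETICAL request payload for a future Seedance 2.0 API call.
# Every field name and value here is an assumption -- no official
# schema or endpoint has been published yet.

payload = {
    "model": "seedance-2.0",
    "prompt": "Use @image1 as the first frame with @video1's camera movement.",
    "references": [
        {"tag": "image1", "type": "image", "url": "https://example.com/frame.png"},
        {"tag": "video1", "type": "video", "url": "https://example.com/move.mp4"},
    ],
    "aspect_ratio": "16:9",   # supported ratios: 16:9, 9:16, 1:1, etc.
    "duration_seconds": 10,   # generations run 4-15 s each
}

body = json.dumps(payload)    # serialized request body
print(len(payload["references"]))  # 2 reference assets attached
```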
bytedance2.xyz is an independent information resource dedicated to tracking ByteDance's Seedance 2.0 AI video model. We are not officially affiliated with or endorsed by ByteDance Ltd. All product information is sourced from public announcements and documentation.
ByteDance's revolutionary AI video generation model is preparing for global API launch. Enhanced copyright protection and safety measures are being implemented to ensure the best experience.
Check back regularly for the latest updates on availability.