What is Wan 2.7?
Wan 2.7 is Alibaba's latest AI video generation model, built on a 27-billion-parameter Mixture-of-Experts (MoE) diffusion transformer, with only 14 billion parameters active per inference pass. It launched on cloud platforms such as Together AI in early April 2026, with open weights expected under Apache 2.0 in mid-to-late Q2 2026. The model supports text-to-video, image-to-video, reference-to-video with voice cloning, and instruction-based video editing.
Is Wan 2.7 AI free to try?
Yes. New accounts receive free credits to explore text-to-video, image-to-video, and audio-driven generation. No payment required to get started — create your first clip and see the quality firsthand.
What is the MoE architecture and why does it matter?
Mixture-of-Experts (MoE) splits the model into specialized sub-networks (experts) and activates only the most relevant ones on each inference pass. Wan 2.7 has 27B total parameters but only 14B active at any moment. This means you get close to the capacity of a 27B model at roughly the compute cost of a 14B dense model: better quality without a proportional increase in resource cost.
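Top-k expert routing can be sketched in a few lines. This is a toy illustration of the general MoE mechanism, not Wan 2.7's actual router; the expert count, vector sizes, and gating scheme below are invented for the example.

```python
import math
import random

def moe_layer(x, experts, gate, k=2):
    """Route vector x through only the top-k experts.

    experts: list of weight matrices (each a list of rows), the sub-networks.
    gate:    one scoring vector per expert; a higher dot product with x
             means a better match for this input.
    Compute cost scales with k, not with the total number of experts.
    """
    def dot(a, b):
        return sum(ai * bi for ai, bi in zip(a, b))

    def matvec(m, v):
        return [dot(row, v) for row in m]

    scores = [dot(g, x) for g in gate]                     # score every expert
    top_k = sorted(range(len(experts)), key=lambda i: scores[i])[-k:]
    weights = [math.exp(scores[i]) for i in top_k]
    total = sum(weights)
    weights = [w / total for w in weights]                 # softmax over top-k only
    # Only the k selected experts actually run; the rest stay idle,
    # which is why active parameters are a fraction of total parameters.
    out = [0.0] * len(x)
    for w, i in zip(weights, top_k):
        y = matvec(experts[i], x)
        out = [o + w * yi for o, yi in zip(out, y)]
    return out

random.seed(0)
d, n_experts = 4, 6
experts = [[[random.gauss(0, 1) for _ in range(d)] for _ in range(d)]
           for _ in range(n_experts)]
gate = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n_experts)]
result = moe_layer([1.0, 0.5, -0.5, 0.2], experts, gate, k=2)  # 2 of 6 experts run
```

With k=2 of 6 experts firing, only a third of the expert parameters participate in this forward pass, which is the same principle behind Wan 2.7's 14B-active-of-27B design.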
What inputs does Wan 2.7 accept?
Wan 2.7 accepts text prompts (for T2V), single or paired reference images for first-and-last-frame control (I2V), up to 9 reference images for the 9-grid I2V workflow, audio URLs (MP3 or WAV) for native audio conditioning, and existing video clips for instruction-based editing. All input modes are available through the same platform.
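As a sketch of what these input modes look like in practice, here are illustrative request payloads. The field names (`first_frame_url`, `last_frame_url`, `audio_url`), the model identifier, and the overall schema are assumptions for illustration only; consult the platform's API reference for the real parameters.

```python
import json

# Text-to-video: a prompt alone drives generation.
text_to_video = {
    "model": "wan-2.7",  # assumed model identifier
    "prompt": "A lighthouse at dusk, waves crashing, cinematic lighting",
}

# Image-to-video with first-and-last-frame control: the model
# generates the motion between the two supplied frames.
image_to_video = {
    "model": "wan-2.7",
    "prompt": "The product slowly rotates under studio light",
    "first_frame_url": "https://example.com/start.png",  # assumed field name
    "last_frame_url": "https://example.com/end.png",     # assumed field name
}

# Audio-driven generation: a publicly accessible MP3/WAV URL
# conditions motion and lip-sync during diffusion.
audio_driven = {
    "model": "wan-2.7",
    "prompt": "A presenter speaks directly to camera",
    "audio_url": "https://example.com/voiceover.mp3",    # assumed field name
}

payload = json.dumps(image_to_video)  # what would be sent in a request body
```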
How does native audio work in Wan 2.7?
Unlike tools that dub audio after the video is generated, Wan 2.7 conditions on audio during the diffusion process itself. You provide a publicly accessible audio URL, and the model synchronizes character motion and lip movement to that track during generation — resulting in precise lip-sync without any post-processing step.
What is first-and-last-frame control?
First-and-last-frame control lets you specify the exact starting image, the exact ending image, or both for a clip. Wan 2.7 then generates all the motion between those two frames. This gives creators precise control over narrative arc and visual transitions — ideal for product reveals, cinematic cuts, and branded content.
What resolutions and aspect ratios does Wan 2.7 support?
Wan 2.7 generates video at 720p or native 1080p, at 24 fps. Supported aspect ratios include 16:9 (widescreen), 9:16 (vertical, for Reels and Shorts), and 1:1 (square). All exports are watermark-free MP4 files.
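Reading "1080P" as the length of the frame's shorter side (an assumption here, since the FAQ does not spell it out), a small helper maps each supported aspect ratio to pixel dimensions:

```python
def frame_size(ratio, short_side=1080):
    """Map an aspect-ratio string like '16:9' to (width, height) in pixels,
    keeping the shorter side of the frame at `short_side`.
    Note: the short-side convention is an assumption for illustration;
    the actual encoder output may differ.
    """
    w, h = (int(n) for n in ratio.split(":"))
    if w >= h:
        # Landscape or square: height is the short side.
        return (short_side * w // h, short_side)
    # Portrait (e.g. 9:16 vertical): width is the short side.
    return (short_side, short_side * h // w)

for ratio in ("16:9", "9:16", "1:1"):
    print(ratio, frame_size(ratio))  # dimensions at the 1080p tier
```

For example, under this convention a 9:16 vertical clip at the 1080p tier comes out at 1080x1920, while the 720p tier would give 720x1280.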
Is Wan 2.7 open source?
Earlier versions of the Wan series (2.1 and 2.2) were open-sourced under Apache 2.0. Wan 2.7 was released via cloud APIs first (Together AI, April 2026), with open weights expected under Apache 2.0 in mid-to-late Q2 2026. Once available, Wan 2.7 will support self-hosted local deployment — including on consumer GPUs like the RTX 4090.
Do pay-as-you-go credits expire?
No. Credits purchased via one-time top-up packages never expire. They remain in your account indefinitely — add more whenever you need them.
What is the refund policy?
All new subscription plans include a 7-day money-back guarantee. If you have a concern about a renewal charge, contact support within 72 hours and we will review it with you directly.