WAN 2.7 I2V
Alibaba · AI video
WAN 2.7 Image-to-Video (I2V) is Alibaba's Wan 2.7 model for animating a still image with a text prompt (camera motion, character animation, and environmental effects) while optionally controlling the end frame, continuing a short clip, or syncing to audio. The upstream model on Replicate documents 720p and 1080p output and durations from 2 to 15 seconds.
For a long time this model was known in the community by the codename Happy Horse, the same underlying stack as Wan 2.7 I2V. It has ranked at the top of independent image-to-video leaderboards: Artificial Analysis lists a model called HappyHorse-1.0 (sometimes written Happy Horse 1.0) at number one on its image-to-video chart, above entries such as Dreamina Seedance 2.0 720p. That leaderboard entry is the Wan 2.7–family image-to-video model, not a separate product. See https://artificialanalysis.ai/video/leaderboard/image-to-video for the current rankings and methodology.
On VidMachine, WAN 2.7 I2V is wired through Replicate as wan-video/wan-2.7-i2v. We always request 720p output for predictable quality and cost. Credits are charged per second of target scene length at the rate shown on Pricing. Add it to your project's video model priority as a primary or fallback alongside other Replicate-backed video models.
Compared with WAN 2.5 on VidMachine, WAN 2.7 I2V targets newer Wan 2.7 motion and control features (such as optional last-frame guidance for continuous transitions) at a higher per-second credit rate. Choose it when you need the 2.7 stack or the longer duration options, up to 15 seconds.
Key features and benefits
Image-to-video with prompt
You provide a starting image and a prompt describing motion and action. The model treats the image as the first frame and generates video that extends it. Prompt expansion can be enabled upstream for short prompts to improve interpretation at the cost of a bit more latency.
First-and-last-frame and transitions
When your scene uses a continuous transition, VidMachine can pass the next scene's start frame as an optional last frame so the clip can end closer to that composition—matching the provider's first-and-last-frame mode. This helps bridge shots without a hard cut when the next frame already exists.
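The decision above can be sketched as a small helper: pass a last frame only when the screenplay marks the transition as continuous and the next scene already has a start frame. The function and field names here are hypothetical, chosen for illustration rather than taken from VidMachine's actual code.

```javascript
// Hypothetical helper: choose an optional last frame for a scene.
// Returns the next scene's start-frame URL only when the transition is
// continuous and that frame already exists; otherwise returns null.
function lastFrameFor(scene, nextScene) {
  if (scene.transition === "continuous" && nextScene && nextScene.startFrameUrl) {
    return nextScene.startFrameUrl;
  }
  return null; // hard cut, or no next frame to align with
}
```

With a hard cut, or when the next scene has not been rendered yet, no last frame is sent and the model ends the clip freely.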
720p on VidMachine
The Replicate model allows 720p or 1080p. VidMachine always sets resolution to 720p for this integration so billing and output stay consistent across projects.
Duration and audio
Scene target duration is clamped to 2–15 seconds to match the API. The provider also documents optional user audio for synchronized generation and auto-generated audio when no file is supplied; VidMachine's main pipeline uses image plus prompt (and optional last frame) like other scene generators.
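The clamping described above is simple range math. A minimal sketch, assuming a `clampDuration` helper that is illustrative rather than VidMachine's actual code:

```javascript
// Clamp a scene's target duration to the 2–15 second range the
// Wan 2.7 I2V API accepts.
function clampDuration(targetSeconds) {
  const MIN_SECONDS = 2;
  const MAX_SECONDS = 15;
  return Math.min(MAX_SECONDS, Math.max(MIN_SECONDS, targetSeconds));
}
```

A 1-second scene is raised to 2 seconds and a 20-second scene is cut to 15; anything in between passes through unchanged.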
Technical specifications
Provider: Alibaba Wan (via Replicate wan-video/wan-2.7-i2v)
Input: First-frame image URL; text prompt; optional last-frame image URL
Output resolution (VidMachine): 720p only
Duration: 2–15 seconds (from scene target, clamped)
Credits: Per second of target duration (see Pricing)
Use cases and applications
Use WAN 2.7 I2V when you have a polished start frame—from WAN 2.7 Image, another image model, or an upload—and want strong motion control with optional end-frame alignment for continuous scenes.
Short social clips, product motion, and concept-art animation fit well; prefer shorter durations (around 2–5 seconds) when you want the most coherent motion, as the provider notes degradation can increase on very long clips.
On VidMachine, combine WAN 2.7 I2V with WAN 2.7 Image in the same Alibaba ecosystem: generate stills, then animate them with this video model in your priority list.
Why this model
Pick WAN 2.7 I2V if you want the newer Wan image-to-video stack with up to 15 seconds and optional last-frame control, and you are fine with 720p output and the listed per-second credits.
If you want lower per-second cost and the existing WAN 2.5 integration, keep WAN 2.5 in your priority list. If you need 1080p from Alibaba on VidMachine today, note that WAN 2.5 still exposes 720p/1080p upstream, whereas this model is fixed to 720p here.
How VidMachine uses it
Select WAN 2.7 I2V in video model priority. For each scene we call Replicate with your start frame URL, prompt, clamped duration, resolution 720p, and optional last frame when the screenplay requests a continuous transition and the next scene has a start frame.
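Assembled together, the request input might look like the sketch below. The field names (`image`, `last_image`, and so on) are assumptions for illustration; consult the model's input schema on Replicate for the authoritative names.

```javascript
// Illustrative sketch of the input payload VidMachine-style code might
// send to Replicate for wan-video/wan-2.7-i2v. Field names are assumed.
function buildI2VInput({ startFrameUrl, prompt, durationSeconds, lastFrameUrl }) {
  const input = {
    image: startFrameUrl,
    prompt,
    duration: Math.min(15, Math.max(2, durationSeconds)), // clamp to the 2–15 s API range
    resolution: "720p", // VidMachine pins 720p for this integration
  };
  if (lastFrameUrl) {
    input.last_image = lastFrameUrl; // only for continuous transitions
  }
  return input;
}
```

The payload would then be passed to the Replicate client's prediction call for wan-video/wan-2.7-i2v.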
Credits are deducted after successful generation, using the same per-second formula as other video models, at the WAN 2.7 I2V rate defined in lib/credits/calculations.js.
What you should know
Why is output only 720p on VidMachine?
We pin resolution to 720p for this integration so quality and pricing stay predictable. The upstream model also supports 1080p on Replicate if you use it outside VidMachine.
How are credits calculated?
Video generation uses credits per second of the scene's target duration, rounded up like other models. See the Pricing page for the current WAN 2.7 I2V rate.
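The formula above reduces to ceiling the target duration and multiplying by the per-second rate. A minimal sketch, where the rate value is a placeholder (the real number lives on the Pricing page):

```javascript
// Hedged sketch of the per-second credit formula: round the target
// duration up to whole seconds, then multiply by the model's rate.
function videoCredits(targetSeconds, ratePerSecond) {
  return Math.ceil(targetSeconds) * ratePerSecond;
}
```

For example, a 4.2-second scene at a hypothetical rate of 10 credits per second would bill as 5 seconds, i.e. 50 credits.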
Does it support text-to-video without an image?
No. Like other I2V models on VidMachine, a starting frame is required.
What is Happy Horse, and how does it relate to WAN 2.7 I2V?
Happy Horse was a widely used codename for this model before its release under the Wan 2.7 name. Benchmark sites and leaderboards (including Artificial Analysis) may still label it HappyHorse-1.0 or Happy Horse 1.0; it is the same image-to-video line as Wan 2.7 I2V. On Artificial Analysis's image-to-video leaderboard at https://artificialanalysis.ai/video/leaderboard/image-to-video it has appeared at the top of the chart, ahead of models such as Seedance 2.0 in the same ranking.