VidMachine


Happy Horse 1.0

Alibaba · AI video
Happy Horse 1.0 is an Alibaba video generation model that supports text-to-video from a prompt and image-to-video from a first-frame image; when both are provided, the image anchors the first frame and the prompt steers the motion. It is published as its own model line, distinct from the WAN-family codenames that appear in older community discussions.
Compared with Alibaba's WAN 2.7 image-to-video stack, Happy Horse 1.0 is a separate product with its own parameters and motion characteristics. Evaluators typically compare them on motion style, optional controls (such as end-frame or audio paths), and which resolutions and durations each API exposes.
Hosted variants commonly offer high-definition output tiers (for example 720p and 1080p-class presets) and clip lengths within a documented minimum–maximum range. Exact defaults depend on the cloud or API surface you use.

Key features and benefits

Text-to-video or image-to-video

Provide a prompt for text-to-video, or a first-frame image for image-to-video. With both, the prompt guides how the still animates.
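The input rules above can be sketched as a small mode-selection helper. This is illustrative only: the function name and mode labels are assumptions, and real endpoints may expose separate text-to-video and image-to-video routes instead.

```python
def generation_mode(prompt=None, first_frame_url=None):
    """Pick the generation mode implied by which inputs are present.

    Hypothetical helper, not part of any documented Happy Horse API.
    """
    if first_frame_url:
        # With a start frame present, the prompt (if any) guides
        # how the still animates.
        return "image-to-video"
    if prompt:
        return "text-to-video"
    raise ValueError("Provide a prompt, a first-frame image, or both.")
```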

High-definition output

Many deployments list both 720p and 1080p-style presets so teams can balance sharpness with compute. Confirm supported labels on your provider.

Aspect ratio

Text-to-video endpoints often default to vertical formats suited to short-form platforms. Image-to-video output typically follows the input frame's aspect ratio, so use a vertical start frame when you need portrait video.
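Since image-to-video output typically inherits the start frame's ratio, it can help to check that ratio before uploading. A minimal sketch using only the standard library; the 9:16 example is just the vertical short-form case mentioned above:

```python
from math import gcd

def aspect_ratio(width: int, height: int) -> str:
    """Reduce pixel dimensions to a ratio label, e.g. 1080x1920 -> '9:16'."""
    d = gcd(width, height)
    return f"{width // d}:{height // d}"

# A vertical start frame yields portrait output on I2V endpoints
# that follow the source image's ratio.
print(aspect_ratio(1080, 1920))  # 9:16
```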

Duration

Typical integrations allow several seconds per clip within a bounded range (for example, roughly three to fifteen seconds on some APIs). Shorter generations often yield tighter temporal coherence.

Technical specifications

Provider: Alibaba (Happy Horse 1.0)
Primary inputs: Text prompt; optional first-frame image URL
Typical resolutions: 720p and 1080p tiers commonly listed on hosted variants
Aspect ratio: Often 9:16 for text-to-video; image-to-video follows the source image
Typical duration: Provider-dependent range (often several seconds up to ~15 s on documented APIs)
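The specifications above can be folded into a request payload. Every field name here ("model", "prompt", "image_url", "resolution", "duration") and the model identifier are assumptions for illustration; substitute your provider's documented schema.

```python
def build_payload(prompt, image_url=None, resolution="720p", duration=5):
    """Assemble a hypothetical generation request and validate it
    against the illustrative limits listed above."""
    if resolution not in ("720p", "1080p"):
        raise ValueError("Unsupported resolution tier")
    if not 3 <= duration <= 15:  # assumed bounds; confirm your API's range
        raise ValueError("Duration outside assumed range")
    payload = {
        "model": "happy-horse-1.0",  # assumed identifier
        "prompt": prompt,
        "resolution": resolution,
        "duration": duration,
    }
    if image_url:
        payload["image_url"] = image_url  # switches to image-to-video
    return payload
```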

Use cases and applications

Use Happy Horse 1.0 for social-ready clips, motion tests from still key art, and rapid iteration when Alibaba's Happy Horse motion profile fits your creative direction.
Pair image-to-video with images from any upstream generator or photography—motion quality depends on the strength of the start frame and prompt.

Why this model

Pick Happy Horse 1.0 when you want this Alibaba motion stack specifically and your API exposes the resolutions and durations you need.
If you need WAN 2.7-specific controls (such as structured last-frame conditioning or documented lip-sync audio lanes), compare Alibaba's WAN 2.7 image-to-video offerings against Happy Horse 1.0 on their published specifications before committing.

What you should know

Is Happy Horse 1.0 the same as WAN 2.7 I2V?
No. They are different model products in Alibaba's ecosystem with different capabilities and API surfaces. Treat naming overlap from older community posts as unreliable; read the provider's model card for each.
What resolution should I use?
Choose based on quality needs and what your host lists—720p-class output is common for efficiency; 1080p when you need extra detail and the endpoint supports it at your target duration.
Does it support text-only generation?
Yes, where the API offers text-to-video; a prompt alone is enough. Image-to-video additionally requires a starting frame.