Quick Start
Disaggregation is controlled by a single flag: `--disagg-role`. Each component is launched independently, just like LLM prefill/decode (PD) disaggregation.
| `--disagg-role` | What it runs |
|---|---|
| `monolithic` | (Default) Standard single-server mode |
| `encoder` | All stages with the default `RoleType.ENCODER` affinity: `InputValidationStage`, `TextEncodingStage` (plus `ImageEncodingStage` / `ImageVAEEncodingStage` for image-conditioned pipelines), `LatentPreparationStage`, `TimestepPreparationStage`, and any model-specific "before denoising" stage (e.g. `QwenImageLayeredBeforeDenoisingStage`, `GlmImageBeforeDenoisingStage`) |
| `denoiser` | `DenoisingStage` and its subclasses (`CausalDMDDenoisingStage`, `DmdDenoisingStage`, `LTX2AVDenoisingStage`, `LTX2RefinementStage`, `Hunyuan3DShapeDenoisingStage`, …): the DiT forward loop plus the scheduler steps that drive it |
| `decoder` | `DecodingStage` (VAE decode) and its subclasses (`LTX2AVDecodingStage`, `HeliosDecodingStage`, …) |
| `server` | DiffusionServer head node + HTTP server (no GPU) |
Each stage declares its role via the `role_affinity` property on `PipelineStage` (default `ENCODER`). When `--disagg-role` is not `monolithic`, the pipeline only instantiates stages whose affinity matches, so the table above is the source of truth for what actually runs in each process.
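The affinity filter can be sketched as follows. This is a toy model: the real `RoleType` and `PipelineStage` classes live in the serving codebase, and only the names taken from the table above are assumed here.

```python
from enum import Enum

class RoleType(Enum):
    ENCODER = "encoder"
    DENOISER = "denoiser"
    DECODER = "decoder"

class PipelineStage:
    # Default affinity is ENCODER, matching the table above.
    role_affinity = RoleType.ENCODER

class TextEncodingStage(PipelineStage):
    pass  # inherits the default ENCODER affinity

class DenoisingStage(PipelineStage):
    role_affinity = RoleType.DENOISER

class DecodingStage(PipelineStage):
    role_affinity = RoleType.DECODER

def stages_for_role(stage_classes, disagg_role):
    """Instantiate only the stages whose affinity matches the process role."""
    if disagg_role == "monolithic":
        return [cls() for cls in stage_classes]
    return [cls() for cls in stage_classes
            if cls.role_affinity == RoleType(disagg_role)]

all_stages = [TextEncodingStage, DenoisingStage, DecodingStage]
print([type(s).__name__ for s in stages_for_role(all_stages, "denoiser")])
```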
Single-Machine Example (Verified)
The following commands have been tested end-to-end on an 8×H200 machine with `Wan-AI/Wan2.1-T2V-1.3B-Diffusers`. Each role runs on a separate GPU via `--base-gpu-id`; the server head node requires no GPU.
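A sketch of the four launch commands, one per role. The `python -m serve` entry point here is hypothetical (substitute your actual launcher); the flags are the ones documented in this section.

```shell
# Hypothetical entry point; replace `serve` with your actual launcher module.
MODEL=Wan-AI/Wan2.1-T2V-1.3B-Diffusers
HEAD=127.0.0.1:5555

# Head node (no GPU): routes control messages and serves HTTP.
python -m serve --model "$MODEL" --disagg-role server --scheduler-port 5555 &

# One process per role, each pinned to its own GPU.
python -m serve --model "$MODEL" --disagg-role encoder  --base-gpu-id 0 --disagg-server-addr "$HEAD" &
python -m serve --model "$MODEL" --disagg-role denoiser --base-gpu-id 1 --disagg-server-addr "$HEAD" &
python -m serve --model "$MODEL" --disagg-role decoder  --base-gpu-id 2 --disagg-server-addr "$HEAD" &
```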
Tested result (8×H200): Encoder 2.3 s (TextEncoding) → Denoiser 312.8 s (50 steps, layerwise offload) → Decoder 7.1 s (VAE decode). Total ~322 s for 81-frame 1024×1024 video.
Tip: `--base-gpu-id` controls which physical GPU the role uses. The encoder and decoder can share a GPU (e.g. both `--base-gpu-id 0`) to save resources, but make sure the combined GPU memory is sufficient.
Multi-Machine Example
The exact same CLI pattern applies: just replace `127.0.0.1` with actual IPs and add RDMA flags for direct transfer:
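A multi-machine sketch under the same hypothetical `python -m serve` entry point as above; the IPs and the `mlx5_0` device name are placeholders.

```shell
# Machine A (head, 10.0.0.1): no GPU needed.
python -m serve --model Wan-AI/Wan2.1-T2V-1.3B-Diffusers \
    --disagg-role server --scheduler-port 5555 &

# Machine B (10.0.0.2): denoiser with RDMA flags for direct GPU-to-GPU transfer.
python -m serve --model Wan-AI/Wan2.1-T2V-1.3B-Diffusers \
    --disagg-role denoiser --base-gpu-id 0 \
    --disagg-server-addr 10.0.0.1:5555 \
    --disagg-p2p-hostname 10.0.0.2 \
    --disagg-ib-device mlx5_0 &
```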
ZMQ handles startup order gracefully; instances and the head can start in any order.
Multiple Instances per Role
Use semicolons in `--*-urls` to register multiple instances:
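For example, two denoiser instances could be registered on the head node like this. This assumes the `--*-urls` wildcard expands to per-role flags such as `--denoiser-urls`, and the entry point and addresses are placeholders:

```shell
# Hypothetical head-node invocation registering two denoiser instances.
python -m serve --disagg-role server --scheduler-port 5555 \
    --denoiser-urls "10.0.0.2:6000;10.0.0.3:6000"
```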
Port Convention
Result endpoints are derived deterministically from the head node’s--scheduler-port (default: 5555):
| Socket | Port |
|---|---|
| DS frontend (ROUTER) | scheduler_port |
| Encoder result (PULL) | scheduler_port + 1 |
| Denoiser result (PULL) | scheduler_port + 2 |
| Decoder result (PULL) | scheduler_port + 3 |
Instances derive these endpoints from the head node's `--disagg-server-addr`. No manual endpoint configuration is needed.
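The derivation in the table above can be expressed as a small helper (the function name is illustrative):

```python
def result_endpoints(scheduler_port: int = 5555) -> dict:
    """Derive per-role result (PULL) ports from the head node's scheduler port."""
    return {
        "frontend": scheduler_port,      # DS frontend (ROUTER)
        "encoder": scheduler_port + 1,   # Encoder result (PULL)
        "denoiser": scheduler_port + 2,  # Denoiser result (PULL)
        "decoder": scheduler_port + 3,   # Decoder result (PULL)
    }

print(result_endpoints())  # with the default scheduler port 5555
```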
Transfer Mechanism
Tensor data between roles (encoder→denoiser, denoiser→decoder) is transferred via a P2P transfer engine. The DiffusionServer only routes lightweight control messages (alloc/push/ready); actual tensor data flows directly between instances. `mooncake-transfer-engine` is required for disaggregated diffusion: it provides RDMA for direct GPU-to-GPU data movement.
Transfer Flow
- Sender (encoder/denoiser) stages tensors: async copy to the transfer buffer (GPU or CPU pinned, depending on GPUDirect support), overlapped with metadata JSON serialization.
- Sender sends a `transfer_staged` control message to the DiffusionServer (metadata only, no tensor data).
- DiffusionServer sends `transfer_alloc` to the receiver → receiver allocates a buffer slot → replies `transfer_allocated`.
- DiffusionServer sends `transfer_push` to the receiver with the sender's address info.
- Receiver pulls data via the transfer engine (Mooncake RDMA or mock), then sends `transfer_ready`.
- Receiver loads tensors asynchronously on a dedicated transfer stream, overlapped with the previous request's compute.
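The control-plane handshake above can be sketched as a message trace. This is a toy model: the real implementation is asynchronous, and the tensor bytes move out of band over the transfer engine rather than through these messages.

```python
def transfer_handshake(trace):
    """Toy model of the control-plane handshake; appends (actor, message) events."""
    trace.append(("sender", "transfer_staged"))       # metadata only, no tensor data
    trace.append(("server", "transfer_alloc"))        # server asks receiver to allocate
    trace.append(("receiver", "transfer_allocated"))  # receiver reserved a buffer slot
    trace.append(("server", "transfer_push"))         # server forwards sender's address
    # ... receiver pulls tensor bytes directly from the sender here (RDMA or mock) ...
    trace.append(("receiver", "transfer_ready"))
    return trace

events = transfer_handshake([])
print([msg for _, msg in events])
```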
RDMA Flags
| Flag | Default | Description |
|---|---|---|
| `--disagg-p2p-hostname` | `127.0.0.1` | RDMA-reachable hostname/IP of this instance |
| `--disagg-ib-device` | None | InfiniBand device (e.g., `mlx5_0`, `mlx5_roce0`) |
| `--disagg-transfer-pool-size` | 256 MiB | Pinned memory pool per instance |
Set `--disagg-p2p-hostname` to the actual IP on each machine. For multi-machine setups, `--disagg-ib-device` specifies the RDMA NIC.
Per-Role Parallelism
| Flag | Description |
|---|---|
| `--encoder-tp` | Encoder tensor parallelism |
| `--denoiser-tp` / `--denoiser-sp` / `--denoiser-ulysses` / `--denoiser-ring` | Denoiser parallelism |
| `--decoder-tp` | Decoder tensor parallelism |
In disaggregated mode, these per-role flags take the place of a single global `--num-gpus`.
Other Options
| Flag | Default | Description |
|---|---|---|
| `--disagg-timeout` | 600 | Timeout (seconds) for pending requests |
| `--disagg-dispatch-policy` | `round_robin` | `round_robin` or `max_free_slots` |
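The two dispatch policies can be sketched as follows. This is illustrative only; the instance bookkeeping in the real scheduler is richer, and the `free_slots` field is an assumed stand-in for whatever load signal it tracks.

```python
from itertools import count

def make_round_robin():
    """Cycle through instances in registration order."""
    counter = count()
    def pick(instances):
        return instances[next(counter) % len(instances)]
    return pick

def max_free_slots(instances):
    """Pick the instance with the most free request slots (least loaded)."""
    return max(instances, key=lambda inst: inst["free_slots"])

instances = [
    {"url": "10.0.0.2:6000", "free_slots": 1},
    {"url": "10.0.0.3:6000", "free_slots": 4},
]
rr = make_round_robin()
print(rr(instances)["url"], rr(instances)["url"])  # alternates between the two
print(max_free_slots(instances)["url"])            # the least-loaded instance
```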
Python API
For programmatic single-machine deployment, `launch_pool_disagg_server()` is available:
Architecture
Request State Machine
Requests that error out, or that exceed `--disagg-timeout`, end in FAILED or TIMED_OUT.