Quick Reference
Use these paths:

- `--model-path`: the base or original model
- `--transformer-path`: a quantized transformers-style transformer component directory that already contains its own `config.json`
- `--transformer-weights-path`: quantized transformer weights provided as a single safetensors file, a sharded safetensors directory, a local path, or a Hugging Face repo ID
Some repos can also be loaded directly via `--model-path`, but that is a compatibility path. If a repo contains multiple candidate checkpoints, pass `--transformer-weights-path` explicitly.
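A minimal sketch combining these flags (the launch entry point is an assumption and may differ in your setup; the flags and repo IDs come from this document):

```shell
# Entry point is an assumption -- substitute your actual SGLang
# diffusion launch command. The path flags come from this doc.
python -m sglang.launch_server \
  --model-path black-forest-labs/FLUX.1-dev \
  --transformer-path BBuf/flux1-dev-modelopt-fp8-sglang-transformer
```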
Quant Families
Here, `quant_family` means a checkpoint and loading family with shared CLI usage and loader behavior. It is not just the numeric precision or a kernel backend.
| quant_family | checkpoint form | canonical CLI | supported models | extra dependency | platform / notes |
|---|---|---|---|---|---|
| `fp8` | Quantized transformer component folder, or safetensors with `quantization_config` metadata | `--transformer-path` or `--transformer-weights-path` | ALL | None | Component-folder and single-file flows are both supported |
| `modelopt-fp8` | Converted ModelOpt FP8 transformer directory or repo with `config.json` | `--transformer-path` | FLUX.1, FLUX.2, Wan2.2 | None | Serialized config stays `quant_method=modelopt` with `quant_algo=FP8`; `dit_layerwise_offload` is supported and `dit_cpu_offload` stays disabled |
| `modelopt-nvfp4` | Mixed transformer directory/repo with `config.json`, or raw NVFP4 safetensors export/repo | `--transformer-path` for mixed overrides; `--transformer-weights-path` for raw exports | FLUX.1, FLUX.2, Wan2.2 | None | Mixed override repos keep the base model separate; raw exports such as `black-forest-labs/FLUX.2-dev-NVFP4` still use the weights-path flow |
| `nunchaku-svdq` | Pre-quantized Nunchaku transformer weights, usually named `svdq-{int4\|fp4}_r{rank}-…` | `--transformer-weights-path` | Model-specific support such as Qwen-Image, FLUX, and Z-Image | `nunchaku` | SGLang can infer precision and rank from the filename and supports both int4 and nvfp4 |
| `msmodelslim` | Pre-quantized msmodelslim transformer weights | `--model-path` | Wan2.2 family | None | Currently only compatible with the Ascend NPU family; supports both w8a8 and w4a4 |
Validated ModelOpt Checkpoints
This section is the canonical support matrix for the six diffusion ModelOpt checkpoints currently wired up in SGLang docs and B200 CI coverage. Published checkpoints keep the serialized quantization config as `quant_method=modelopt`; the FP8 vs NVFP4 split below is a documentation label derived from `quant_algo`.
Five of the six repos live under `BBuf/*`. The FLUX.2 NVFP4 entry keeps the official `black-forest-labs/FLUX.2-dev-NVFP4` repo.
| Quant Algo | Base Model | Preferred CLI | HF Repo | Current Scope | Notes |
|---|---|---|---|---|---|
| FP8 | black-forest-labs/FLUX.1-dev | `--transformer-path` | BBuf/flux1-dev-modelopt-fp8-sglang-transformer | single-transformer override, deterministic latent/image comparison, H100 benchmark, torch-profiler trace | SGLang converter keeps a validated BF16 fallback set for modulation and FF projection layers; use `--model-id FLUX.1-dev` for local mirrors |
| FP8 | black-forest-labs/FLUX.2-dev | `--transformer-path` | BBuf/flux2-dev-modelopt-fp8-sglang-transformer | single-transformer override load and generation path | published SGLang-ready transformer override |
| FP8 | Wan-AI/Wan2.2-T2V-A14B-Diffusers | `--transformer-path` | BBuf/wan22-t2v-a14b-modelopt-fp8-sglang-transformer | primary transformer quantized, `transformer_2` kept BF16 | primary-transformer-only path; keep `transformer_2` on the base checkpoint, and do not describe this as dual-transformer full-model FP8 unless that path is validated separately |
| NVFP4 | black-forest-labs/FLUX.1-dev | `--transformer-path` | BBuf/flux1-dev-modelopt-nvfp4-sglang-transformer | mixed BF16+NVFP4 transformer override, correctness validation, 4x RTX 5090 benchmark, torch-profiler trace | use `build_modelopt_nvfp4_transformer.py`; validated builder keeps selected FLUX.1 modules in BF16 and sets `swap_weight_nibbles=false` |
| NVFP4 | black-forest-labs/FLUX.2-dev | `--transformer-weights-path` | black-forest-labs/FLUX.2-dev-NVFP4 | packed-QKV load path | official raw export repo; validated packed export detection and runtime layout handling |
| NVFP4 | Wan-AI/Wan2.2-T2V-A14B-Diffusers | `--transformer-path` | BBuf/wan22-t2v-a14b-modelopt-nvfp4-sglang-transformer | primary transformer quantized with ModelOpt NVFP4, `transformer_2` kept BF16 | primary-transformer-only path; keep `transformer_2` on the base checkpoint; current B200/Blackwell bring-up uses `SGLANG_DIFFUSION_FLASHINFER_FP4_GEMM_BACKEND=cudnn` |
The B200 CI coverage above corresponds to `multimodal-gen-test-1-b200`.
ModelOpt FP8
Usage Examples
Converted ModelOpt FP8 checkpoints should be loaded as transformer component overrides. If the repo or local directory already contains `config.json`, use `--transformer-path`.
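For example (the launch entry point is an assumption; the flags and repo IDs come from the validated-checkpoint matrix in this document):

```shell
# Entry point is an assumption -- substitute your actual SGLang
# diffusion launch command. Base model and FP8 override come from this doc.
python -m sglang.launch_server \
  --model-path Wan-AI/Wan2.2-T2V-A14B-Diffusers \
  --transformer-path BBuf/wan22-t2v-a14b-modelopt-fp8-sglang-transformer
```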
Notes
- `--transformer-path` is the canonical flag for converted ModelOpt FP8 transformer component repos or directories that already carry `config.json`.
- If the override repo or local directory contains its own `config.json`, SGLang reads the quantization config from that override instead of relying on the base model config.
- `--transformer-weights-path` still works when you intentionally point at raw weight files or a directory that should be metadata-probed as weights first.
- `dit_layerwise_offload` is supported for ModelOpt FP8 checkpoints.
- `dit_cpu_offload` still stays disabled for ModelOpt FP8 checkpoints.
- The layerwise offload path now preserves the non-contiguous FP8 weight stride expected by the runtime FP8 GEMM path.
- On disk, the quantization config stays `quant_method=modelopt` with `quant_algo=FP8`; the `modelopt-fp8` label in this document is a support family name, not a serialized config key.
- To build the converted checkpoint yourself from a ModelOpt diffusers export, use `python -m sglang.multimodal_gen.tools.build_modelopt_fp8_transformer`.
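The converter's module path comes from this document, but its CLI options are not listed here, so inspect them before converting:

```shell
# Module path is documented above; argument names are not, so list them first.
python -m sglang.multimodal_gen.tools.build_modelopt_fp8_transformer --help
```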
ModelOpt NVFP4
Usage Examples
For mixed ModelOpt NVFP4 transformer overrides that already contain `config.json`, keep the base model and quantized transformer separate and use `--transformer-path`.

For raw NVFP4 exports, individual safetensors files, or repos that should be treated as weights first, use `--transformer-weights-path`.

FLUX.2 NVFP4-style repos can also be loaded directly as a compatibility path via `--model-path`.

For dual-transformer pipelines such as Wan2.2, the override targets only the primary `transformer`, since only `transformer` was quantized.
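Illustrative invocations for these flows (the launch entry point is an assumption; flags, repo IDs, and the environment variable come from this document):

```shell
# Entry point is an assumption -- substitute your actual launch command.

# Mixed override repo that carries its own config.json:
python -m sglang.launch_server \
  --model-path black-forest-labs/FLUX.1-dev \
  --transformer-path BBuf/flux1-dev-modelopt-nvfp4-sglang-transformer

# Raw NVFP4 export loaded as weights:
python -m sglang.launch_server \
  --model-path black-forest-labs/FLUX.2-dev \
  --transformer-weights-path black-forest-labs/FLUX.2-dev-NVFP4

# Wan2.2: only the primary transformer is overridden; current
# B200/Blackwell bring-up sets the FlashInfer FP4 GEMM backend:
SGLANG_DIFFUSION_FLASHINFER_FP4_GEMM_BACKEND=cudnn \
python -m sglang.launch_server \
  --model-path Wan-AI/Wan2.2-T2V-A14B-Diffusers \
  --transformer-path BBuf/wan22-t2v-a14b-modelopt-nvfp4-sglang-transformer
```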
Notes
- Use `--transformer-path` for mixed ModelOpt NVFP4 transformer repos or local directories that already include `config.json`.
- Use `--transformer-weights-path` for raw NVFP4 exports, individual safetensors files, or repo layouts that should be treated as weights first.
- For dual-transformer pipelines such as Wan2.2-T2V-A14B-Diffusers, the primary `--transformer-path` override targets only `transformer`. Use a per-component override such as `--transformer-2-path` only when you intentionally want a non-default `transformer_2`.
- On Blackwell, the validated Wan2.2 ModelOpt NVFP4 path currently prefers FlashInfer FP4 GEMM via `SGLANG_DIFFUSION_FLASHINFER_FP4_GEMM_BACKEND=cudnn`.
- This environment-variable override is a current workaround for NVFP4 cases where the default sglang JIT/CUTLASS `sm100` path rejects a large-M shape at `can_implement()`. The intended long-term fix is to add a validated CUTLASS fallback for those shapes rather than rely on the override.
- Direct `--model-path` loading is a compatibility path for FLUX.2 NVFP4-style repos or local directories.
- If `--transformer-weights-path` is provided explicitly, it takes precedence over the compatibility `--model-path` flow.
- For local directories, SGLang first looks for `*-mixed.safetensors`, then falls back to loading from the directory.
- To force the generic diffusion ModelOpt FP4 path onto a specific FlashInfer backend, set `SGLANG_DIFFUSION_FLASHINFER_FP4_GEMM_BACKEND`. Supported values include `flashinfer_cudnn`, `flashinfer_cutlass`, and `flashinfer_trtllm`.
- On disk, the quantization config stays `quant_method=modelopt` with `quant_algo=NVFP4`; the `modelopt-nvfp4` label here is again a documentation family name rather than a serialized config key.
Nunchaku (SVDQuant)
Install
Install the runtime dependency first, e.g. `pip install nunchaku` (consult the Nunchaku project's install instructions for the wheel matching your torch/CUDA version).

File Naming and Auto-Detection
For Nunchaku checkpoints,--model-path should still point to the original
base model, while --transformer-weights-path points to the quantized
transformer weights.
If the basename of `--transformer-weights-path` contains the pattern `svdq-(int4|fp4)_r{rank}`, SGLang will automatically:
- enable SVDQuant
- infer `--quantization-precision`
- infer `--quantization-rank`
| checkpoint name fragment | inferred precision | inferred rank | notes |
|---|---|---|---|
| `svdq-int4_r32` | int4 | 32 | Standard INT4 checkpoint |
| `svdq-int4_r128` | int4 | 128 | Higher-quality INT4 checkpoint |
| `svdq-fp4_r32` | nvfp4 | 32 | `fp4` in the filename maps to CLI value `nvfp4` |
| `svdq-fp4_r128` | nvfp4 | 128 | Higher-quality NVFP4 checkpoint |
| filename | precision | rank | typical use |
|---|---|---|---|
| `svdq-int4_r32-qwen-image.safetensors` | int4 | 32 | Balanced default |
| `svdq-int4_r128-qwen-image.safetensors` | int4 | 128 | Quality-focused |
| `svdq-fp4_r32-qwen-image.safetensors` | nvfp4 | 32 | RTX 50-series / NVFP4 path |
| `svdq-fp4_r128-qwen-image.safetensors` | nvfp4 | 128 | Quality-focused NVFP4 |
| `svdq-int4_r32-qwen-image-lightningv1.0-4steps.safetensors` | int4 | 32 | Lightning 4-step |
| `svdq-int4_r128-qwen-image-lightningv1.1-8steps.safetensors` | int4 | 128 | Lightning 8-step |
If the checkpoint name does not match this pattern, pass `--enable-svdquant`, `--quantization-precision`, and `--quantization-rank` explicitly.
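The auto-detection rule above can be sketched in Python. This is an illustrative re-implementation of the documented rule, not SGLang's actual loader code:

```python
import re

# Pattern documented above: svdq-(int4|fp4)_r{rank} in the checkpoint basename.
_SVDQ_PATTERN = re.compile(r"svdq-(int4|fp4)_r(\d+)")

def infer_svdquant_settings(basename: str):
    """Return (precision, rank) inferred from a checkpoint basename,
    or None if the name does not match (flags must then be passed manually)."""
    m = _SVDQ_PATTERN.search(basename)
    if m is None:
        return None
    # In filenames the NVFP4 variant is written as "fp4",
    # but the CLI value is "nvfp4".
    precision = "nvfp4" if m.group(1) == "fp4" else "int4"
    return precision, int(m.group(2))

print(infer_svdquant_settings("svdq-fp4_r32-qwen-image.safetensors"))    # ('nvfp4', 32)
print(infer_svdquant_settings("svdq-int4_r128-qwen-image.safetensors"))  # ('int4', 128)
```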
Usage Examples
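A sketch of the auto-detected flow (the launch entry point and base-model repo ID are assumptions; the flag and filename pattern come from this document):

```shell
# Entry point and base model are assumptions -- adapt to your setup.
# The svdq-int4_r32 filename prefix triggers SVDQuant auto-detection.
python -m sglang.launch_server \
  --model-path Qwen/Qwen-Image \
  --transformer-weights-path svdq-int4_r32-qwen-image.safetensors
```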
The recommended flow is to rely on filename auto-detection rather than passing the SVDQuant flags manually.

Notes
- `--transformer-weights-path` is the canonical flag for Nunchaku checkpoints. Older config names such as `quantized_model_path` are treated as compatibility aliases.
- Auto-detection only happens when the checkpoint basename matches `svdq-(int4|fp4)_r{rank}`.
- The CLI values are `int4` and `nvfp4`. In filenames, the NVFP4 variant is written as `fp4`.
- Lightning checkpoints usually expect a matching `--num-inference-steps`, such as `4` or `8`.
- Current runtime validation only allows Nunchaku on NVIDIA CUDA Ampere (SM8x) or SM12x GPUs. Hopper (SM90) is currently rejected.
ModelSlim
MindStudio-ModelSlim (msModelSlim) is an offline model quantization and compression tool from MindStudio, optimized for Ascend hardware.
Installation

Install msModelSlim by following the instructions in the ModelSlim repo.

Multimodal_sd quantization
Download the original floating-point weights of the model. For example, for Wan2.2-T2V-A14B, obtain the original weights from the Wan2.2-T2V-A14B repository, then install any model-specific dependencies (see the modelscope model card).
Note: You can find pre-quantized validated models on modelscope/Eco-Tech.
Run quantization using the one-click quantization flow (recommended). For more detailed quantization examples and model support information, see the examples section in the ModelSlim repo. Note: SGLang does not support quantized embeddings; disable this option when quantizing with msmodelslim.
Auto-Detection and different formats

For msmodelslim checkpoints it is enough to specify only `--model-path`; quantization is detected automatically per layer by parsing the `quant_model_description.json` config. For Wan2.2, only the Diffusers weight-storage format is supported, whereas modelslim saves the quantized model in the original Wan2.2 format; to convert, use the `python/sglang/multimodal_gen/tools/wan_repack.py` script. After that, copy all files from the original Diffusers checkpoint (except the `transformer`/`transformer_2` folders) into the converted checkpoint directory.
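A sketch of the repack step. The script path comes from this document, but its argument names do not, so the ones below are hypothetical placeholders:

```shell
# Argument names are hypothetical placeholders -- inspect the script's
# real interface before running.
python python/sglang/multimodal_gen/tools/wan_repack.py --help
```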
Usage Example
With the auto-detected flow, point `--model-path` at the msmodelslim-quantized checkpoint directory.
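For example (the launch entry point and the local directory are assumptions; the auto-detected `--model-path` flow comes from this document):

```shell
# Entry point and local path are assumptions -- adapt to your setup.
# Quantization is detected per layer from quant_model_description.json.
python -m sglang.launch_server \
  --model-path /path/to/msmodelslim-quantized-wan22
```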
Available Quantization Methods:

- `W4A4_DYNAMIC`: linear with online quantization of activations
- `W8A8`: linear with offline quantization of activations
- `W8A8_DYNAMIC`: linear with online quantization of activations
- `mxfp8`: linear, in progress
