DeepSeek V3.2 Usage#
DeepSeek-V3.2-Exp equips DeepSeek-V3.1-Terminus with DeepSeek Sparse Attention (DSA) through continued training. With DSA, a fine-grained sparse attention mechanism powered by a lightning indexer, DeepSeek-V3.2 achieves efficiency improvements in long-context scenarios.
For reporting issues or tracking upcoming features, please refer to this Roadmap.
Installation#
Docker#
# H200/B200
docker pull lmsysorg/sglang:latest
# MI350/MI355
docker pull lmsysorg/sglang:dsv32-rocm
# NPUs
docker pull lmsysorg/sglang:dsv32-a2
docker pull lmsysorg/sglang:dsv32-a3
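A typical way to use these images is to run the server directly inside the container. The invocation below is a sketch using the H200/B200 image; adjust GPU visibility, shared memory, cache mounts, and ports for your environment:
# Example: serve DeepSeek-V3.2-Exp from the prebuilt image (adjust flags for your cluster)
docker run --gpus all --ipc=host --shm-size 32g -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    lmsysorg/sglang:latest \
    python -m sglang.launch_server --model deepseek-ai/DeepSeek-V3.2-Exp --tp 8 --dp 8 --enable-dp-attention --host 0.0.0.0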
Build From Source#
# Install SGLang
git clone https://github.com/sgl-project/sglang
cd sglang
pip3 install pip --upgrade
pip3 install -e "python"
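A quick optional check that the build is importable:
python3 -c "import sglang; print(sglang.__version__)"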
Launch DeepSeek V3.2 with SGLang#
To serve DeepSeek-V3.2-Exp on 8xH200/B200 GPUs:
# Launch with TP + DP
python -m sglang.launch_server --model deepseek-ai/DeepSeek-V3.2-Exp --tp 8 --dp 8 --enable-dp-attention
# Launch with EP + DP
python -m sglang.launch_server --model deepseek-ai/DeepSeek-V3.2-Exp --tp 8 --ep 8 --dp 8 --enable-dp-attention
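Once the server is up (it listens on port 30000 by default), a quick request against the native generate endpoint verifies that it is serving. This is only a smoke-test sketch:
curl http://localhost:30000/generate \
    -H "Content-Type: application/json" \
    -d '{"text": "The capital of France is", "sampling_params": {"max_new_tokens": 32, "temperature": 0}}'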
Configuration Tips#
- DP Attention: For the DeepSeek V3.2 model, the kernels are customized for the `dp_size=8` use case, so DP attention is enabled by default for better stability and performance. Launching with pure TP is still under development.
- Short-sequence MHA prefill (adaptive): For short prefill sequences (default threshold: 2048 tokens), the NSA backend uses standard MHA automatically (no extra flags). On H200 (SM90) this path uses the FlashAttention variable-length kernel; on B200 (SM100) it uses TRT-LLM ragged MHA. MHA uses `MHA_ONE_SHOT` for best performance. `MHA_ONE_SHOT` computes multi-head attention over all tokens (both the cached prefix and the newly extended tokens) in a single kernel invocation, avoiding the overhead of chunked KV cache processing. This achieves optimal throughput for short sequences whose total length fits within the chunk capacity limit.
- Choices of Attention Kernels: The attention backend is automatically set to the `nsa` backend for the DeepSeek V3.2 model. This backend implements different kernels for sparse prefilling/decoding, which can be selected with the `--nsa-prefill-backend` and `--nsa-decode-backend` server arguments (an example launch command is shown below). The available NSA prefill/decode kernels are:
  - `flashmla_sparse`: `flash_mla_sparse_fwd` kernel from the `flash_mla` library. Runs on both Hopper and Blackwell GPUs. Requires bf16 q and kv inputs.
  - `flashmla_kv`: `flash_mla_with_kvcache` kernel from the `flash_mla` library. Runs on both Hopper and Blackwell GPUs. Requires bf16 q and fp8 k_cache inputs.
  - `fa3`: `flash_attn_with_kvcache` kernel from the `flash_attn` library. Runs only on Hopper GPUs. Requires bf16 q and kv inputs.
  - `tilelang`: `tilelang` implementation that can run on GPU, HPU and NPU.
  - `alter`: Alter kernel on AMD HPUs. Can only be used as a decode kernel.

Based on performance benchmarks, the default configurations on H200 and B200 are set as follows:

- H200: `flashmla_sparse` prefill attention (short-seq prefill uses MHA via FlashAttention varlen), `fa3` decode attention, `bf16` kv cache dtype.
- B200: `flashmla_auto` prefill attention (short-seq prefill uses MHA via TRT-LLM ragged), `flashmla_kv` decode attention, `fp8_e4m3` kv cache dtype.

`flashmla_auto` automatically selects either the `flashmla_sparse` or `flashmla_kv` kernel for prefill based on KV cache dtype, hardware, and heuristics. When FP8 KV cache is enabled and `total_kv_tokens < total_q_tokens * 512`, it uses the `flashmla_sparse` kernel; otherwise, it falls back to the `flashmla_kv` kernel. The heuristics may need to be tuned if the performance of either kernel changes significantly.
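For example, to pin the NSA kernels explicitly instead of relying on these defaults, pass the backend flags at launch. This is a sketch for H200; kernel/GPU compatibility is listed above:
# Explicitly select NSA prefill/decode kernels (H200 example)
python -m sglang.launch_server --model deepseek-ai/DeepSeek-V3.2-Exp --tp 8 --dp 8 --enable-dp-attention --nsa-prefill-backend flashmla_sparse --nsa-decode-backend fa3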
Multi-token Prediction#
SGLang implements Multi-Token Prediction (MTP) for DeepSeek V3.2 based on EAGLE speculative decoding. With this optimization, decoding speed can be improved significantly at small batch sizes. Please refer to this PR for more information.
Example usage:
python -m sglang.launch_server --model deepseek-ai/DeepSeek-V3.2-Exp --tp 8 --dp 8 --enable-dp-attention --speculative-algorithm EAGLE --speculative-num-steps 3 --speculative-eagle-topk 1 --speculative-num-draft-tokens 4
- The best configuration for `--speculative-num-steps`, `--speculative-eagle-topk`, and `--speculative-num-draft-tokens` can be searched with the bench_speculative.py script for a given batch size. The minimum configuration is `--speculative-num-steps 1 --speculative-eagle-topk 1 --speculative-num-draft-tokens 2`, which can still achieve a speedup at larger batch sizes.
- The default value of `--max-running-requests` is set to `48` for MTP. For larger batch sizes, increase this value beyond the default (see the example below).
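For instance, a minimum-MTP launch with a raised request cap could look like the following; the value 128 is illustrative and should be tuned for your workload:
python -m sglang.launch_server --model deepseek-ai/DeepSeek-V3.2-Exp --tp 8 --dp 8 --enable-dp-attention --speculative-algorithm EAGLE --speculative-num-steps 1 --speculative-eagle-topk 1 --speculative-num-draft-tokens 2 --max-running-requests 128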
Function Calling and Reasoning Parser#
The usage of the function calling and reasoning parsers is the same as for DeepSeek V3.1. Please refer to the Reasoning Parser and Tool Parser documents.
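A launch sketch with both parsers enabled is shown below. The parser names are assumptions carried over from the DeepSeek V3.1 documentation, so verify them against the linked documents:
# Parser names (deepseekv31, deepseek-v3) assumed from the DeepSeek V3.1 docs
python -m sglang.launch_server --model deepseek-ai/DeepSeek-V3.2-Exp --tp 8 --dp 8 --enable-dp-attention --tool-call-parser deepseekv31 --reasoning-parser deepseek-v3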
PD Disaggregation#
Prefill command:
python -m sglang.launch_server \
--model-path deepseek-ai/DeepSeek-V3.2-Exp \
--disaggregation-mode prefill \
--host $LOCAL_IP \
--port $PORT \
--tp 8 \
--dp 8 \
--enable-dp-attention \
--dist-init-addr ${HOST}:${DIST_PORT} \
--trust-remote-code \
--disaggregation-bootstrap-port 8998 \
    --mem-fraction-static 0.9
Decode command:
python -m sglang.launch_server \
--model-path deepseek-ai/DeepSeek-V3.2-Exp \
--disaggregation-mode decode \
--host $LOCAL_IP \
--port $PORT \
--tp 8 \
--dp 8 \
--enable-dp-attention \
--dist-init-addr ${HOST}:${DIST_PORT} \
--trust-remote-code \
    --mem-fraction-static 0.9
Router command:
python -m sglang_router.launch_router --pd-disaggregation \
--prefill $PREFILL_ADDR 8998 \
--decode $DECODE_ADDR \
--host 127.0.0.1 \
    --port 8000
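Once the prefill server, decode server, and router are all up, client requests go to the router endpoint (port 8000 in the command above) rather than to the individual servers. A minimal smoke-test sketch using the OpenAI-compatible route:
curl http://127.0.0.1:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "deepseek-ai/DeepSeek-V3.2-Exp", "messages": [{"role": "user", "content": "Hello"}], "max_tokens": 32}'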
If you need more advanced or production-ready deployment methods, such as RBG- or LWS-based deployment, please refer to references/multi_node_deployment/rbg_pd/deepseekv32_pd.md. You can also find startup commands for DeepEP-based EP parallelism in that document.
Benchmarking Results#
Accuracy Test with gsm8k#
A simple accuracy benchmark can be run with the gsm8k dataset:
python3 benchmark/gsm8k/bench_sglang.py --num-shots 8 --num-questions 1319 --parallel 1319
The result is 0.956, which matches our expectation:
Accuracy: 0.956
Invalid: 0.000
Latency: 25.109 s
Output throughput: 5226.235 token/s
To test long-context accuracy, run gsm8k with `--num-shots 20`. The results are very close to the 8-shot results:
Accuracy: 0.956
Invalid: 0.000
Latency: 29.545 s
Output throughput: 4418.617 token/s
Accuracy Test with gpqa-diamond#
Long-context accuracy can be benchmarked on the GPQA-Diamond dataset with long output tokens and thinking enabled:
python3 -m sglang.test.run_eval --port 30000 --eval-name gpqa --num-examples 198 --max-tokens 120000 --repeat 8 --thinking-mode deepseek-v3
The mean accuracy over 8 runs is 0.797, which matches the 79.9 reported in the official tech report.
Repeat: 8, mean: 0.797
Scores: ['0.808', '0.798', '0.808', '0.798', '0.783', '0.788', '0.803', '0.793']
Accuracy Test with aime 2025#
Prepare the environment by installing NeMo-Skills in the Docker container or your own virtual environment:
pip install git+https://github.com/NVIDIA/NeMo-Skills.git --ignore-installed blinker
Modify the jinja chat_template by replacing
{% set thinking = false %}
with
{% set thinking = true %}
and save it to chat_template_thinking.jinja.
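If the template is available locally as a file (here assumed to be chat_template.jinja exported from the model repo), the replacement can be done with a one-line sed:
sed 's/{% set thinking = false %}/{% set thinking = true %}/' chat_template.jinja > chat_template_thinking.jinja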
Launch the SGLang server with the modified chat-template file:
python -m sglang.launch_server --model deepseek-ai/DeepSeek-V3.2-Exp --tp 8 --dp 8 --enable-dp-attention --chat-template chat_template_thinking.jinja
Run the following script to evaluate AIME 2025:
#! /bin/bash
export NEMO_SKILLS_DISABLE_UNCOMMITTED_CHANGES_CHECK=1
ns prepare_data aime25
PORT=30000
BACKEND=sglang
MODEL="deepseek-ai/DeepSeek-V3.2-Exp"
MODEL_NAME="dsv32-fp8"
echo "Starting AIME25 evaluation with model $MODEL on port $PORT using backend $BACKEND..."
ns eval \
--benchmarks=aime25:4 \
--server_type=$BACKEND \
--model=$MODEL \
--server_address=http://localhost:${PORT}/v1 \
--output_dir=nemo_skills_aime25_${MODEL_NAME}_output_${BACKEND}_$(date +%Y%m%d_%H%M%S) \
++max_concurrent_requests=512 \
++server.api_key=dummy \
++inference.tokens_to_generate=64000
Test results:
| evaluation_mode | num_entries | avg_tokens | gen_seconds | symbolic_correct | no_answer |
|---|---|---|---|---|---|
| pass@1[avg-of-4] | 30 | 14410 | 1758 | 85.83% ± 4.19% | 0.00% |
| majority@4 | 30 | 14410 | 1758 | 90.00% | 0.00% |
| pass@4 | 30 | 14410 | 1758 | 93.33% | 0.00% |
Note that problem #3 (id aime25-2) is marked as incorrect by NeMo-Skills but is actually correct: NeMo-Skills fails to match the predicted_answer 016 with the expected_answer 16. Adding 1/30 = 3.33% back to the results, the pass@1[avg-of-4] score matches the reference value of 89.3.
DSA long-sequence context parallel optimization (experimental)#
SGLang provides an experimental context-parallel (CP) optimization for DSA prefill on long sequences, enabled with the `--enable-nsa-prefill-context-parallel` flag.
Example usage:
# Launch with EP + DP
python -m sglang.launch_server --model deepseek-ai/DeepSeek-V3.2-Exp --tp 8 --ep 8 --dp 2 --enable-dp-attention --enable-nsa-prefill-context-parallel --max-running-requests 32
Context-parallel Tips#
`CP_size` reuses `atten_tp_size`, which is equal to `TP_size / DP_size`.

Some features are not yet supported:

- Multi-batch prefill: currently, only single-request processing is supported during prefill.
- Disaggregation: P/D disaggregation is not yet supported.
- Cross-machine support: currently only tested on a single machine (TP=8, EP=8).
- Other args: currently only `moe_dense_tp_size=1`, `kv_cache_dtype="bf16"`, and `moe_a2a_backend="deepep"` are supported.
- DP_size: since `CP_size` reuses `atten_tp_size = TP_size / DP_size`, `TP_size` must be divisible by `DP_size` and `TP_size / DP_size` must be greater than 1 (to ensure `CP_size > 1`). For example, the launch command above uses `--tp 8 --dp 2`, giving `CP_size = 4`.

Detailed design reference: https://github.com/sgl-project/sglang/pull/12065