Launch GLM-4.5 / GLM-4.6 with SGLang

To serve GLM-4.5 / GLM-4.6 FP8 models on 8xH100/H200 GPUs:

python3 -m sglang.launch_server --model zai-org/GLM-4.6-FP8 --tp 8
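Once the server is up, you can verify it responds through the OpenAI-compatible API. A minimal sketch, assuming the server listens on the default port 30000:

import openai

# Point the client at the local SGLang server (default port 30000).
client = openai.Client(base_url="http://127.0.0.1:30000/v1", api_key="*")
response = client.chat.completions.create(
    model="zai-org/GLM-4.6-FP8",
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=64,
)
print(response.choices[0].message.content)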

Configuration Tips

  • --max-mamba-cache-size: increase this value to enlarge the mamba cache and raise the maximum number of concurrently running requests. The trade-off is less space for the KV cache, so tune it to your workload; see the example below.
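
A launch command with the flag set (the value here is purely illustrative; choose one that fits your concurrency and memory budget):

python3 -m sglang.launch_server --model zai-org/GLM-4.6-FP8 --tp 8 --max-mamba-cache-size 2048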

EAGLE Speculative Decoding

Description: SGLang supports EAGLE speculative decoding for GLM-4.5 / GLM-4.6 models.

Usage: add the arguments --speculative-algorithm, --speculative-num-steps, --speculative-eagle-topk, and --speculative-num-draft-tokens to enable this feature. For example:

python3 -m sglang.launch_server \
  --model-path zai-org/GLM-4.6-FP8 \
  --tp-size 8 \
  --tool-call-parser glm45 \
  --reasoning-parser glm45 \
  --speculative-algorithm EAGLE \
  --speculative-num-steps 3 \
  --speculative-eagle-topk 1 \
  --speculative-num-draft-tokens 4 \
  --mem-fraction-static 0.9 \
  --served-model-name glm-4.6-fp8 \
  --enable-custom-logit-processor
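
Because --reasoning-parser glm45 separates the model's thinking from its final answer, a quick request can inspect both. A minimal sketch, assuming the server above is running; note that the model name must match --served-model-name:

import openai

client = openai.Client(base_url="http://127.0.0.1:30000/v1", api_key="*")
response = client.chat.completions.create(
    model="glm-4.6-fp8",  # must match --served-model-name above
    messages=[{"role": "user", "content": "What is 17 * 23?"}],
    max_tokens=512,
)
message = response.choices[0].message
# The reasoning parser returns the thinking in a separate field.
print(getattr(message, "reasoning_content", None))
print(message.content)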

Thinking Budget for GLM-4.5 / GLM-4.6

In SGLang, a thinking budget can be implemented with a CustomLogitProcessor.

Launch a server with the --enable-custom-logit-processor flag enabled.
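
For example, a minimal launch command mirroring the earlier ones, serving the checkpoint that the sample request below targets:

python3 -m sglang.launch_server \
  --model-path zai-org/GLM-4.6 \
  --tp-size 8 \
  --reasoning-parser glm45 \
  --enable-custom-logit-processor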

Sample Request:

import openai
from rich.pretty import pprint
from sglang.srt.sampling.custom_logit_processor import Glm4MoeThinkingBudgetLogitProcessor


client = openai.Client(base_url="http://127.0.0.1:30000/v1", api_key="*")
response = client.chat.completions.create(
    model="zai-org/GLM-4.6",
    messages=[
        {
            "role": "user",
            "content": "Question: Is Paris the Capital of France?",
        }
    ],
    max_tokens=1024,
    extra_body={
        # Serialize the processor so the server can reconstruct it.
        "custom_logit_processor": Glm4MoeThinkingBudgetLogitProcessor().to_str(),
        "custom_params": {
            # Cap the model's thinking phase at 512 tokens.
            "thinking_budget": 512,
        },
    },
)
pprint(response)
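
Here, thinking_budget caps the number of tokens the model may spend in its thinking phase: a larger budget leaves more room for reasoning on hard problems, while a smaller one cuts latency.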