SGLang Model Gateway#
SGLang Model Gateway is a high-performance model-routing gateway for large-scale LLM deployments. It centralizes worker lifecycle management, balances traffic across heterogeneous protocols (HTTP, gRPC, OpenAI-compatible), and provides enterprise-ready control over history storage, MCP tooling, and privacy-sensitive workflows. The gateway is deeply optimized for the SGLang serving runtime, but can route to any OpenAI-compatible backend.
Overview#
Unified control plane for registering, monitoring, and orchestrating regular, prefill, and decode workers across heterogeneous model fleets.
Multi-protocol data plane that routes traffic across HTTP, PD (prefill/decode), gRPC, and OpenAI-compatible backends with shared reliability primitives.
Industry-first gRPC pipeline with native Rust tokenization, reasoning parsers, and tool-call execution for high-throughput, OpenAI-compatible serving; supports both single-stage and PD topologies.
Inference Gateway Mode (--enable-igw) dynamically instantiates multiple router stacks (HTTP regular/PD, gRPC) and applies per-model policies for multi-tenant deployments.
Conversation & responses connectors centralize chat history inside the router so the same context can be reused across models and MCP loops without leaking data to upstream vendors (memory, none, Oracle ATP, PostgreSQL).
Enterprise privacy: agentic multi-turn /v1/responses, native MCP client (STDIO/HTTP/SSE/Streamable), and history storage all operate within the router boundary.
Reliability core: retries with jitter, worker-scoped circuit breakers, token-bucket rate limiting with queuing, background health checks, and cache-aware load monitoring.
Comprehensive observability: 40+ Prometheus metrics, OpenTelemetry distributed tracing, structured logging, and request ID propagation.
Architecture#
Control Plane#
Worker Manager discovers capabilities (/get_server_info, /get_model_info), tracks load, and registers/removes workers in the shared registry.
Job Queue serializes add/remove requests and exposes status (/workers/{worker_id}) so clients can track onboarding progress.
Load Monitor feeds cache-aware and power-of-two policies with live worker load statistics.
Health Checker continuously probes workers and updates readiness, circuit breaker state, and router metrics.
Tokenizer Registry manages dynamically registered tokenizers with async loading from HuggingFace or local paths.
Data Plane#
HTTP routers (regular & PD) implement /generate, /v1/chat/completions, /v1/completions, /v1/responses, /v1/embeddings, /v1/rerank, /v1/classify, /v1/tokenize, /v1/detokenize, and associated admin endpoints.
gRPC router streams tokenized requests directly to SRT gRPC workers and runs fully in Rust: the tokenizer, reasoning parser, and tool parser all reside in-process. It supports both single-stage and PD routing, including embeddings and classification.
OpenAI router proxies OpenAI-compatible endpoints to external vendors (OpenAI, xAI, etc.) while keeping chat history and multi-turn orchestration local.
Storage and Privacy#
Conversation and response history is stored at the router tier (memory, none, Oracle ATP, or PostgreSQL). The same history can power multiple models or MCP loops without sending data to upstream vendors.
/v1/responses agentic flows, MCP sessions, and conversation APIs share the same storage layer, enabling compliance for regulated workloads.
Installation#
Docker#
Pre-built Docker images are available on Docker Hub with multi-architecture support (x86_64 and ARM64):
docker pull lmsysorg/sgl-model-gateway:latest
Prerequisites#
Rust and Cargo
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source "$HOME/.cargo/env"
rustc --version
cargo --version
Python with pip and virtualenv tooling available.
Rust Binary#
cd sgl-model-gateway
cargo build --release
Python Package#
pip install maturin
# Fast development mode
cd sgl-model-gateway/bindings/python
maturin develop
# Production build
maturin build --release --out dist --features vendored-openssl
pip install --force-reinstall dist/*.whl
Quick Start#
Regular HTTP Routing#
# Rust binary
./target/release/sgl-model-gateway \
--worker-urls http://worker1:8000 http://worker2:8000 \
--policy cache_aware
# Python launcher
python -m sglang_router.launch_router \
--worker-urls http://worker1:8000 http://worker2:8000 \
--policy cache_aware
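Once the router is up, clients send OpenAI-compatible requests straight to it. A minimal sketch, assuming the router listens on its default port 30000 and the workers serve meta-llama/Llama-3.1-8B-Instruct:
curl http://localhost:30000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/Llama-3.1-8B-Instruct",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'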
gRPC Routing#
python -m sglang_router.launch_router \
--worker-urls grpc://127.0.0.1:20000 \
--model-path meta-llama/Llama-3.1-8B-Instruct \
--reasoning-parser deepseek-r1 \
--tool-call-parser json \
--host 0.0.0.0 --port 8080
Deployment Modes#
Co-launch Router and Workers#
Launch the router and a fleet of SGLang workers in one process:
python -m sglang_router.launch_server \
--model meta-llama/Meta-Llama-3.1-8B-Instruct \
--dp-size 4 \
--host 0.0.0.0 \
--port 30000
Comprehensive example with router arguments (prefixed with --router-):
python -m sglang_router.launch_server \
--host 0.0.0.0 \
--port 8080 \
--model meta-llama/Llama-3.1-8B-Instruct \
--tp-size 1 \
--dp-size 8 \
--grpc-mode \
--log-level debug \
--router-prometheus-port 10001 \
--router-tool-call-parser llama \
--router-model-path meta-llama/Llama-3.1-8B-Instruct \
--router-policy round_robin \
--router-log-level debug
Separate Launch (HTTP)#
Run workers independently and point the router at their HTTP endpoints:
# Worker nodes
python -m sglang.launch_server --model meta-llama/Meta-Llama-3.1-8B-Instruct --port 8000
python -m sglang.launch_server --model meta-llama/Meta-Llama-3.1-8B-Instruct --port 8001
# Router node
python -m sglang_router.launch_router \
--worker-urls http://worker1:8000 http://worker2:8001 \
--policy cache_aware \
--host 0.0.0.0 --port 30000
gRPC Launch#
Use SRT gRPC workers to unlock the highest throughput and access native reasoning/tool pipelines:
# Workers expose gRPC endpoints
python -m sglang.launch_server \
--model meta-llama/Llama-3.1-8B-Instruct \
--grpc-mode \
--port 20000
# Router
python -m sglang_router.launch_router \
--worker-urls grpc://127.0.0.1:20000 \
--model-path meta-llama/Llama-3.1-8B-Instruct \
--reasoning-parser deepseek-r1 \
--tool-call-parser json \
--host 0.0.0.0 --port 8080
The gRPC router supports both regular HTTP-equivalent serving and PD (prefill/decode) serving. Provide --tokenizer-path or --model-path (HuggingFace ID or local directory) whenever connection mode resolves to gRPC.
Prefill-Decode Disaggregation#
Split prefill and decode workers for PD-aware caching and balancing:
python -m sglang_router.launch_router \
--pd-disaggregation \
--prefill http://prefill1:30001 9001 \
--decode http://decode1:30011 \
--prefill-policy cache_aware \
--decode-policy power_of_two
Prefill entries accept an optional bootstrap port. PD mode merges prefill metadata with decode outputs and streams results back to the client.
OpenAI Backend Proxy#
Proxy OpenAI-compatible endpoints while keeping history and MCP sessions local:
python -m sglang_router.launch_router \
--backend openai \
--worker-urls https://api.openai.com \
--history-backend memory
OpenAI backend mode expects exactly one --worker-urls entry per router instance.
Multi-Model Inference Gateway#
Enable IGW mode to route multiple models through a single router:
./target/release/sgl-model-gateway \
--enable-igw \
--policy cache_aware \
--max-concurrent-requests 512
# Register workers dynamically
curl -X POST http://localhost:30000/workers \
-H "Content-Type: application/json" \
-d '{
"url": "http://worker-a:8000",
"model_id": "mistral",
"priority": 10,
"labels": {"tier": "gold"}
}'
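With several models registered, clients can address a specific one through the standard model field of the OpenAI-compatible endpoints; a sketch assuming the "mistral" worker registered above:
curl http://localhost:30000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mistral",
    "messages": [{"role": "user", "content": "Summarize this deployment."}]
  }'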
API Reference#
Inference Endpoints#
| Method | Path | Description |
|---|---|---|
| POST | /generate | SGLang generate API |
| POST | /v1/chat/completions | OpenAI-compatible chat completions (streaming/tool calls) |
| POST | /v1/completions | OpenAI-compatible text completions |
| POST | /v1/embeddings | Embedding generation (HTTP and gRPC) |
| POST | /v1/rerank | Reranking requests |
| POST | /v1/classify | Text classification |
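For example, an embeddings request follows the usual OpenAI-compatible shape (the model name below is illustrative):
curl http://localhost:30000/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{
    "model": "intfloat/e5-mistral-7b-instruct",
    "input": "The quick brown fox"
  }'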
Tokenization Endpoints#
The gateway provides HTTP endpoints for text tokenization with batch support, designed to mirror the SGLang Python tokenization API.
| Method | Path | Description |
|---|---|---|
| POST | /v1/tokenize | Tokenize text to token IDs (single or batch) |
| POST | /v1/detokenize | Convert token IDs back to text (single or batch) |
| POST | /v1/tokenizers | Register a new tokenizer (async, returns job status) |
| GET | /v1/tokenizers | List all registered tokenizers |
| GET | /v1/tokenizers/{id} | Get tokenizer info by UUID |
| GET | /v1/tokenizers/{id}/status | Check async tokenizer loading status |
| DELETE | /v1/tokenizers/{id} | Remove a tokenizer from the registry |
Tokenize Request#
{
"model": "meta-llama/Llama-3.1-8B-Instruct",
"prompt": "Hello, world!"
}
Batch Tokenize Request#
{
"model": "meta-llama/Llama-3.1-8B-Instruct",
"prompt": ["Hello", "World", "How are you?"]
}
Tokenize Response#
{
"tokens": [15339, 11, 1917, 0],
"count": 4,
"char_count": 13
}
Detokenize Request#
{
"model": "meta-llama/Llama-3.1-8B-Instruct",
"tokens": [15339, 11, 1917, 0],
"skip_special_tokens": true
}
Detokenize Response#
{
"text": "Hello, world!"
}
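The tokenize and detokenize payloads above are sent as ordinary POST requests, for example:
curl -X POST http://localhost:30000/v1/tokenize \
  -H "Content-Type: application/json" \
  -d '{"model": "meta-llama/Llama-3.1-8B-Instruct", "prompt": "Hello, world!"}'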
Add Tokenizer (Async)#
curl -X POST http://localhost:30000/v1/tokenizers \
-H "Content-Type: application/json" \
-d '{"name": "llama3", "source": "meta-llama/Llama-3.1-8B-Instruct"}'
Response:
{
"id": "550e8400-e29b-41d4-a716-446655440000",
"status": "pending",
"message": "Tokenizer registration queued"
}
Check status:
curl http://localhost:30000/v1/tokenizers/550e8400-e29b-41d4-a716-446655440000/status
Parser Endpoints#
The gateway provides admin endpoints for parsing reasoning content and function calls from LLM outputs.
| Method | Path | Description |
|---|---|---|
| POST | | Separate reasoning content from normal text |
| POST | | Parse function/tool calls from text |
Separate Reasoning Request#
{
"text": "<think>Let me analyze this step by step...</think>The answer is 42.",
"parser": "deepseek-r1"
}
Response#
{
"normal_text": "The answer is 42.",
"reasoning_text": "Let me analyze this step by step..."
}
Function Call Parsing#
{
"text": "{\"name\": \"get_weather\", \"arguments\": {\"city\": \"NYC\"}}",
"parser": "json"
}
Classification API#
The /v1/classify endpoint provides text classification using sequence classification models (e.g., Qwen2ForSequenceClassification, BertForSequenceClassification).
Request#
curl http://localhost:30000/v1/classify \
-H "Content-Type: application/json" \
-d '{
"model": "jason9693/Qwen2.5-1.5B-apeach",
"input": "I love this product!"
}'
Response#
{
"id": "classify-a1b2c3d4-5678-90ab-cdef-1234567890ab",
"object": "list",
"created": 1767034308,
"model": "jason9693/Qwen2.5-1.5B-apeach",
"data": [
{
"index": 0,
"label": "positive",
"probs": [0.12, 0.88],
"num_classes": 2
}
],
"usage": {
"prompt_tokens": 6,
"completion_tokens": 0,
"total_tokens": 6
}
}
Response Fields#
| Field | Description |
|---|---|
| label | Predicted class label (from the model's id2label config) |
| probs | Probability distribution over all classes (softmax of logits) |
| num_classes | Number of classification classes |
Notes#
Classification reuses the embedding backend: the scheduler returns logits, which are converted to probabilities via softmax.
Labels come from the model's HuggingFace config (id2label field); models without this mapping use generic labels (LABEL_0, LABEL_1, etc.).
Both HTTP and gRPC routers support classification.
Conversation and Response APIs#
| Method | Path | Description |
|---|---|---|
| POST | /v1/responses | Create background responses (agentic loops) |
| GET | /v1/responses/{response_id} | Retrieve stored response |
| POST | /v1/responses/{response_id}/cancel | Cancel background response |
| DELETE | /v1/responses/{response_id} | Delete response |
| GET | /v1/responses/{response_id}/input_items | List response input items |
| POST | /v1/conversations | Create conversation |
| GET | /v1/conversations/{conversation_id} | Get conversation |
| POST | /v1/conversations/{conversation_id} | Update conversation |
| DELETE | /v1/conversations/{conversation_id} | Delete conversation |
| GET | /v1/conversations/{conversation_id}/items | List conversation items |
| POST | /v1/conversations/{conversation_id}/items | Add items to conversation |
| GET | /v1/conversations/{conversation_id}/items/{item_id} | Get conversation item |
| DELETE | /v1/conversations/{conversation_id}/items/{item_id} | Delete conversation item |
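A minimal background response request, assuming the gateway follows the OpenAI Responses request schema (model and input values illustrative):
curl -X POST http://localhost:30000/v1/responses \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/Llama-3.1-8B-Instruct",
    "input": "Plan a three-step rollout for the new feature.",
    "background": true
  }'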
Worker Management APIs#
| Method | Path | Description |
|---|---|---|
| POST | /workers | Queue worker registration (returns 202 Accepted) |
| GET | /workers | List workers with health, load, and policy metadata |
| GET | /workers/{worker_id} | Inspect specific worker or job queue entry |
| PUT | /workers/{worker_id} | Queue worker update |
| DELETE | /workers/{worker_id} | Queue worker removal |
Add Worker#
curl -X POST http://localhost:30000/workers \
-H "Content-Type: application/json" \
-d '{"url":"grpc://0.0.0.0:31000","worker_type":"regular"}'
List Workers#
curl http://localhost:30000/workers
Response:
{
"workers": [
{
"id": "2f3a0c3e-3a7b-4c3f-8c70-1b7d4c3a6e1f",
"url": "http://0.0.0.0:31378",
"model_id": "mistral",
"priority": 50,
"cost": 1.0,
"worker_type": "regular",
"is_healthy": true,
"load": 0,
"connection_mode": "Http"
}
],
"total": 1,
"stats": {
"prefill_count": 0,
"decode_count": 0,
"regular_count": 1
}
}
Admin and Health Endpoints#
| Method | Path | Description |
|---|---|---|
| GET | /liveness | Health check (always returns OK) |
| GET | /readiness | Readiness check (checks healthy worker availability) |
| GET | /health | Alias for liveness |
| GET | /health_generate | Health generate test |
| | | Engine-level metrics from workers |
| GET | /v1/models | List available models |
| GET | /get_model_info | Get model information |
| GET | /get_server_info | Get server information |
| POST | /flush_cache | Clear all caches |
| GET | /get_loads | Get all worker loads |
| POST | /wasm | Upload WASM module |
| GET | /wasm | List WASM modules |
| DELETE | /wasm/{module_uuid} | Remove WASM module |
Load Balancing Policies#
| Policy | Description | Usage |
|---|---|---|
| random | Uniform random selection | --policy random |
| round_robin | Cycles through workers in order | --policy round_robin |
| power_of_two | Samples two workers and picks the lighter one | --policy power_of_two |
| cache_aware | Combines cache locality with load balancing (default) | --policy cache_aware |
| | Divides workers into load buckets with dynamic boundaries | |
Cache-Aware Policy Tuning#
--cache-threshold 0.5 \
--balance-abs-threshold 32 \
--balance-rel-threshold 1.5 \
--eviction-interval-secs 120 \
--max-tree-size 67108864
| Parameter | Default | Description |
|---|---|---|
| --cache-threshold | 0.3 | Minimum prefix match ratio for cache hit |
| --balance-abs-threshold | 64 | Absolute load difference before rebalancing |
| --balance-rel-threshold | 1.5 | Relative load ratio before rebalancing |
| --eviction-interval-secs | 120 | Cache eviction cadence in seconds |
| --max-tree-size | 67108864 | Maximum nodes in cache tree |
Reliability and Flow Control#
Retries#
Configure exponential backoff retries:
python -m sglang_router.launch_router \
--worker-urls http://worker1:8000 http://worker2:8001 \
--retry-max-retries 5 \
--retry-initial-backoff-ms 50 \
--retry-max-backoff-ms 30000 \
--retry-backoff-multiplier 1.5 \
--retry-jitter-factor 0.2
| Parameter | Default | Description |
|---|---|---|
| --retry-max-retries | 5 | Maximum retry attempts |
| --retry-initial-backoff-ms | 50 | Initial backoff duration (ms) |
| --retry-max-backoff-ms | 5000 | Maximum backoff duration (ms) |
| --retry-backoff-multiplier | 2.0 | Exponential backoff multiplier |
| --retry-jitter-factor | 0.1 | Random jitter factor (0.0-1.0) |
| --disable-retries | false | Disable retries entirely |
Retryable Status Codes: 408, 429, 500, 502, 503, 504
Circuit Breaker#
Per-worker circuit breakers prevent cascading failures:
python -m sglang_router.launch_router \
--worker-urls http://worker1:8000 http://worker2:8001 \
--cb-failure-threshold 5 \
--cb-success-threshold 2 \
--cb-timeout-duration-secs 30 \
--cb-window-duration-secs 60
| Parameter | Default | Description |
|---|---|---|
| --cb-failure-threshold | 5 | Consecutive failures to open circuit |
| --cb-success-threshold | 2 | Successes to close from half-open |
| --cb-timeout-duration-secs | 30 | Time before half-open attempt |
| --cb-window-duration-secs | 60 | Failure counting window |
| --disable-circuit-breaker | false | Disable circuit breaker |
Circuit Breaker States:
Closed: Normal operation, requests allowed
Open: Failing, requests rejected immediately
Half-Open: Testing recovery, limited requests allowed
Rate Limiting and Queuing#
python -m sglang_router.launch_router \
--worker-urls http://worker1:8000 http://worker2:8001 \
--max-concurrent-requests 256 \
--rate-limit-tokens-per-second 512 \
--queue-size 128 \
--queue-timeout-secs 30
Requests beyond the concurrency limit wait in a FIFO queue. Returns:
429 Too Many Requests when the queue is full
408 Request Timeout when the queue timeout expires
Health Checks#
--health-check-interval-secs 30 \
--health-check-timeout-secs 10 \
--health-success-threshold 2 \
--health-failure-threshold 3 \
--health-check-endpoint /health
Reasoning Parser Integration#
The gateway includes built-in reasoning parsers for models that use Chain-of-Thought (CoT) reasoning with explicit thinking blocks.
Supported Parsers#
| Parser ID | Model Family | Think Tokens |
|---|---|---|
| deepseek-r1 | DeepSeek-R1 | <think> ... </think> |
| | Qwen-3 | |
| | Qwen-3 Thinking | |
| | Kimi K2 | Unicode think tokens |
| | GLM-4.5/4.6/4.7 | |
| | Step-3 | |
| | MiniMax | |
Usage#
python -m sglang_router.launch_router \
--worker-urls grpc://127.0.0.1:20000 \
--model-path deepseek-ai/DeepSeek-R1 \
--reasoning-parser deepseek-r1
The gRPC router automatically:
Detects reasoning blocks in streaming output
Separates reasoning content from normal text
Applies incremental streaming parsing with buffer management
Handles partial token detection for correct streaming behavior
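A quick way to see the parser in action is a non-streaming chat completion against the router started above (default port 30000 assumed). The thinking block is expected to come back separately from the final answer; the exact response field that carries it (for example reasoning_content) depends on the deployment and is an assumption here:
curl http://localhost:30000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-ai/DeepSeek-R1",
    "messages": [{"role": "user", "content": "What is 6 * 7?"}]
  }'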
Tool Call Parsing#
The gateway supports parsing function/tool calls from LLM outputs in multiple formats.
Supported Formats#
| Parser | Format | Description |
|---|---|---|
| json | JSON | Standard JSON tool calls |
| pythonic | Pythonic | Python function call syntax |
| xml | XML | XML-formatted tool calls |
Usage#
python -m sglang_router.launch_router \
--worker-urls grpc://127.0.0.1:20000 \
--model-path meta-llama/Llama-3.1-8B-Instruct \
--tool-call-parser json
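Tool definitions are supplied via the standard OpenAI tools field; the configured parser then extracts any calls the model emits. A short sketch against the router above (default port 30000, function schema illustrative):
curl http://localhost:30000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/Llama-3.1-8B-Instruct",
    "messages": [{"role": "user", "content": "What is the weather in NYC?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {"city": {"type": "string"}},
          "required": ["city"]
        }
      }
    }]
  }'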
Tokenizer Management#
Tokenizer Sources#
The gateway supports multiple tokenizer backends:
HuggingFace: Load from HuggingFace Hub by model ID
Local: Load from a local tokenizer.json file or directory
Tiktoken: Auto-detect OpenAI GPT models (gpt-4, davinci, etc.)
Configuration#
# HuggingFace model
--model-path meta-llama/Llama-3.1-8B-Instruct
# Local tokenizer
--tokenizer-path /path/to/tokenizer.json
# With chat template override
--chat-template /path/to/template.jinja
Tokenizer Caching#
Two-level caching for optimal performance:
| Cache | Type | Description |
|---|---|---|
| L0 | Exact match | Whole-string caching for repeated prompts |
| L1 | Prefix match | Prefix boundary matching for incremental prompts |
--enable-l0-cache \
--l0-max-entries 10000 \
--enable-l1-cache \
--l1-max-memory 52428800 # 50MB
MCP Integration#
The gateway provides native Model Context Protocol (MCP) client integration for tool execution.
Supported Transports#
| Transport | Description |
|---|---|
| STDIO | Local process execution |
| SSE | Server-Sent Events (HTTP) |
| Streamable | Bidirectional streaming |
Configuration#
python -m sglang_router.launch_router \
--mcp-config-path /path/to/mcp-config.yaml \
--worker-urls http://worker1:8000
MCP Configuration File#
servers:
- name: "filesystem"
command: "npx"
args: ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
protocol: "stdio"
required: false
- name: "github"
url: "https://api.github.com/mcp"
token: "ghp_xxxxx"
protocol: "sse"
required: false
- name: "custom-tools"
url: "https://tools.example.com/mcp"
protocol: "streamable"
required: true
pool:
max_connections: 100
idle_timeout: 300
proxy:
http: "http://proxy.internal:8080"
https: "https://proxy.internal:8443"
no_proxy: "localhost,127.0.0.1,*.internal"
inventory:
enable_refresh: true
tool_ttl: 300
refresh_interval: 300
Service Discovery (Kubernetes)#
Enable automatic worker discovery via Kubernetes pod selectors:
python -m sglang_router.launch_router \
--service-discovery \
--selector app=sglang-worker role=inference \
--service-discovery-namespace production \
--service-discovery-port 8000
PD Mode Discovery#
--pd-disaggregation \
--prefill-selector app=sglang component=prefill \
--decode-selector app=sglang component=decode \
--service-discovery
Prefill pods can expose bootstrap ports via the sglang.ai/bootstrap-port annotation. RBAC must allow get, list, and watch on pods.
History and Data Connectors#
| Backend | Description | Usage |
|---|---|---|
| memory | In-memory storage (default) | --history-backend memory |
| none | No persistence | --history-backend none |
| oracle | Oracle Autonomous Database | --history-backend oracle |
| postgres | PostgreSQL Database | --history-backend postgres |
Oracle Configuration#
# Connection descriptor
export ATP_DSN="(description=(address=(protocol=tcps)(port=1522)(host=adb.region.oraclecloud.com))(connect_data=(service_name=service_name)))"
# Or TNS alias (requires wallet)
export ATP_TNS_ALIAS="sglroutertestatp_high"
export ATP_WALLET_PATH="/path/to/wallet"
# Credentials
export ATP_USER="admin"
export ATP_PASSWORD="secret"
export ATP_POOL_MIN=4
export ATP_POOL_MAX=32
python -m sglang_router.launch_router \
--backend openai \
--worker-urls https://api.openai.com \
--history-backend oracle
PostgreSQL Configuration#
export POSTGRES_DB_URL="postgres://user:password@host:5432/dbname"
python -m sglang_router.launch_router \
--backend openai \
--worker-urls https://api.openai.com \
--history-backend postgres
WASM Middleware#
The gateway supports WebAssembly (WASM) middleware modules for custom request/response processing. This enables organization-specific logic for authentication, rate limiting, billing, logging, and more—without modifying or recompiling the gateway.
Overview#
WASM middleware runs in a sandboxed environment with memory isolation, no network/filesystem access, and configurable resource limits.
| Attach Point | When Executed | Use Cases |
|---|---|---|
| OnRequest | Before forwarding to workers | Auth, rate limiting, request modification |
| OnResponse | After receiving worker response | Logging, response modification, error handling |

| Action | Description |
|---|---|
| | Proceed without modification |
| | Reject request with HTTP status code |
| | Modify headers, body, or status |
Examples#
Complete working examples are available in examples/wasm/:
| Example | Description |
|---|---|
| | API key authentication for protected routes |
| | Per-client rate limiting (requests/minute) |
| | Request tracking headers and response modification |
The interface definition is located at src/wasm/interface.
Building Modules#
# Prerequisites
rustup target add wasm32-wasip2
cargo install wasm-tools
# Build
cargo build --target wasm32-wasip2 --release
# Convert to component format
wasm-tools component new \
target/wasm32-wasip2/release/my_middleware.wasm \
-o my_middleware.component.wasm
Deploying Modules#
# Enable WASM support
python -m sglang_router.launch_router \
--worker-urls http://worker1:8000 \
--enable-wasm
# Upload module
curl -X POST http://localhost:30000/wasm \
-H "Content-Type: application/json" \
-d '{
"modules": [{
"name": "auth-middleware",
"file_path": "/absolute/path/to/auth.component.wasm",
"module_type": "Middleware",
"attach_points": [{"Middleware": "OnRequest"}]
}]
}'
# List modules
curl http://localhost:30000/wasm
# Remove module
curl -X DELETE http://localhost:30000/wasm/{module_uuid}
Runtime Configuration#
| Parameter | Default | Description |
|---|---|---|
| | 1024 pages (64 MB) | Maximum WASM memory |
| | 1000 | Execution timeout |
| | 1 MB | Stack size limit |
| | 10 | Cached modules per worker |
Note: Rate limiting state is per-worker thread and not shared across gateway replicas. For production, consider implementing rate limiting at a shared layer (e.g., Redis).
Language Bindings#
SGLang Model Gateway provides official language bindings for Python and Go, enabling integration with different technology stacks and organizational requirements.
Python Bindings#
The Python bindings provide a PyO3-based wrapper around the Rust gateway library. This is a straightforward binding that calls the gateway server startup from Python.
Installation#
# From PyPI
pip install sglang-router
# Development build
cd sgl-model-gateway/bindings/python
pip install maturin && maturin develop --features vendored-openssl
Usage#
The Python bindings are used throughout this documentation. See the Quick Start and Deployment Modes sections for detailed examples.
Key components:
RouterArgs dataclass with 50+ configuration options
Router.from_args() for programmatic startup
CLI commands: smg launch, smg server, python -m sglang_router.launch_router
Go Bindings#
The Go bindings provide a high-performance gRPC client library for organizations with Go-based infrastructure. This is ideal for:
Integration with internal Go services and tooling
High-performance client applications
Building custom OpenAI-compatible proxy servers
Architecture#
┌─────────────────────────────────────────┐
│ High-Level Go API │
│ (client.go - OpenAI-style interface) │
├─────────────────────────────────────────┤
│ gRPC Layer │
├─────────────────────────────────────────┤
│ Rust FFI Layer │
│ (Tokenization, Parsing, Conversion) │
└─────────────────────────────────────────┘
Key Features:
Native Rust tokenization via FFI (thread-safe, lock-free)
Full streaming support with context cancellation
Configurable channel buffer sizes for high concurrency
Built-in tool call parsing and chat template application
Installation#
# Build the FFI library first
cd sgl-model-gateway/bindings/golang
make build && make lib
# Then use in your Go project
go get github.com/sgl-project/sgl-go-sdk
Requirements: Go 1.24+, Rust toolchain
Examples#
Complete working examples are available in bindings/golang/examples/:
| Example | Description |
|---|---|
| simple | Non-streaming chat completion |
| streaming | Streaming chat completion with SSE |
| oai_server | Full OpenAI-compatible HTTP server |
# Run examples
cd sgl-model-gateway/bindings/golang/examples/simple && ./run.sh
cd sgl-model-gateway/bindings/golang/examples/streaming && ./run.sh
cd sgl-model-gateway/bindings/golang/examples/oai_server && ./run.sh
Testing#
cd sgl-model-gateway/bindings/golang
# Unit tests
go test -v ./...
# Integration tests (requires running SGLang server)
export SGL_GRPC_ENDPOINT=grpc://localhost:20000
export SGL_TOKENIZER_PATH=/path/to/tokenizer
go test -tags=integration -v ./...
Comparison#
| Feature | Python | Go |
|---|---|---|
| Primary Use | Gateway server launcher | gRPC client library |
| CLI Support | Full CLI (smg, sglang-router) | Library only |
| K8s Discovery | Native support | N/A (client library) |
| PD Mode | Built-in | N/A (client library) |
When to Use Python: Launching and managing the gateway server, service discovery, PD disaggregation.
When to Use Go: Building custom client applications, integration with Go microservices, OpenAI-compatible proxy servers.
Security and Authentication#
Router API Key#
python -m sglang_router.launch_router \
--api-key "your-router-api-key" \
--worker-urls http://worker1:8000
Clients must supply Authorization: Bearer <key> for protected endpoints.
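For example (key value illustrative):
curl http://localhost:30000/v1/models \
  -H "Authorization: Bearer your-router-api-key"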
Worker API Keys#
# Add worker with explicit key
curl -H "Authorization: Bearer router-key" \
-X POST http://localhost:8080/workers \
-H "Content-Type: application/json" \
-d '{"url":"http://worker:8000","api_key":"worker-key"}'
Security Configurations#
No Authentication (default): Use only in trusted environments
Router-only Authentication: Clients authenticate to router
Worker-only Authentication: Router open, workers require keys
Full Authentication: Both router and workers protected
TLS (HTTPS) for Gateway Server#
Enable TLS to serve the gateway over HTTPS:
python -m sglang_router.launch_router \
--worker-urls http://worker1:8000 \
--tls-cert-path /path/to/server.crt \
--tls-key-path /path/to/server.key
| Parameter | Description |
|---|---|
| --tls-cert-path | Path to server certificate (PEM format) |
| --tls-key-path | Path to server private key (PEM format) |
Both parameters must be provided together. The gateway uses rustls with the ring crypto provider for TLS termination. If TLS is not configured, the gateway falls back to plain HTTP.
mTLS for Worker Communication#
Enable mutual TLS (mTLS) for secure communication with workers in HTTP mode:
python -m sglang_router.launch_router \
--worker-urls https://worker1:8443 https://worker2:8443 \
--client-cert-path /path/to/client.crt \
--client-key-path /path/to/client.key \
--ca-cert-path /path/to/ca.crt
| Parameter | Description |
|---|---|
| --client-cert-path | Path to client certificate for mTLS (PEM format) |
| --client-key-path | Path to client private key for mTLS (PEM format) |
| --ca-cert-path | Path to CA certificate for verifying worker TLS (PEM format, repeatable) |
Key Points:
Client certificate and key must be provided together
Multiple CA certificates can be added by repeating the --ca-cert-path flag
Uses the rustls backend when TLS is configured
Single HTTP client is created for all workers (assumes single security domain)
TCP keepalive (30 seconds) is enabled for long-lived connections
Full TLS Configuration Example#
Gateway HTTPS + Worker mTLS + API Key authentication:
python -m sglang_router.launch_router \
--worker-urls https://worker1:8443 https://worker2:8443 \
--tls-cert-path /etc/certs/server.crt \
--tls-key-path /etc/certs/server.key \
--client-cert-path /etc/certs/client.crt \
--client-key-path /etc/certs/client.key \
--ca-cert-path /etc/certs/ca.crt \
--api-key "secure-api-key" \
--policy cache_aware
Observability#
Prometheus Metrics#
Enable with --prometheus-host/--prometheus-port (defaults to 0.0.0.0:29000).
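A minimal sketch of enabling and scraping the metrics endpoint:
python -m sglang_router.launch_router \
  --worker-urls http://worker1:8000 \
  --prometheus-host 0.0.0.0 \
  --prometheus-port 29000

curl http://localhost:29000/metrics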
Metric Categories (40+ metrics)#
| Layer | Prefix | Metrics |
|---|---|---|
| HTTP | smg_http_* | e.g., smg_http_requests_total, smg_http_request_duration_seconds, smg_http_responses_total, smg_http_rate_limit_total |
| Router | smg_router_* | e.g., smg_router_requests_total |
| Inference | smg_router_* | e.g., smg_router_ttft_seconds, smg_router_tpot_seconds, smg_router_tokens_total, smg_router_generation_duration_seconds |
| Worker | smg_worker_* | e.g., smg_worker_pool_size, smg_worker_connections_active, smg_worker_requests_active, smg_worker_health_checks_total |
| Circuit Breaker | smg_worker_cb_* | e.g., smg_worker_cb_state, smg_worker_cb_transitions_total |
| Retry | smg_worker_retries_* | e.g., smg_worker_retries_total, smg_worker_retries_exhausted_total |
| Discovery | | |
| MCP | smg_mcp_* | e.g., smg_mcp_tool_calls_total, smg_mcp_tool_duration_seconds, smg_mcp_servers_active |
| Database | | |
Key Inference Metrics (gRPC mode)#
| Metric | Type | Description |
|---|---|---|
| smg_router_ttft_seconds | Histogram | Time to first token |
| smg_router_tpot_seconds | Histogram | Time per output token |
| smg_router_tokens_total | Counter | Total tokens (input/output) |
| smg_router_generation_duration_seconds | Histogram | End-to-end generation time |
Duration Buckets#
1ms, 5ms, 10ms, 25ms, 50ms, 100ms, 250ms, 500ms, 1s, 2.5s, 5s, 10s, 15s, 30s, 45s, 60s, 90s, 120s, 180s, 240s
OpenTelemetry Tracing#
Enable distributed tracing with OTLP export:
python -m sglang_router.launch_router \
--worker-urls http://worker1:8000 \
--enable-trace \
--otlp-traces-endpoint localhost:4317
Features#
OTLP/gRPC exporter (default port 4317)
W3C Trace Context propagation for HTTP and gRPC
Batch span processing (500ms delay, 64 span batch size)
Custom filtering to reduce noise
Trace context injection into upstream worker requests
Service name:
sgl-router
Logging#
python -m sglang_router.launch_router \
--worker-urls http://worker1:8000 \
--log-level debug \
--log-dir ./router_logs
Structured tracing with optional file sink. Log levels: debug, info, warn, error.
Request ID Propagation#
--request-id-headers x-request-id x-trace-id x-correlation-id
Responses include x-request-id header for correlation.
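For example, a client can supply one of the configured headers and read it back from the response (a sketch; whether the supplied value is reused or a fresh ID is generated depends on configuration):
curl -i http://localhost:30000/v1/models \
  -H "x-request-id: demo-123"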
Production Recommendations#
This section provides guidance for deploying SGLang Model Gateway in production environments.
Security Best Practices#
Always enable TLS in production:
python -m sglang_router.launch_router \
--worker-urls https://worker1:8443 https://worker2:8443 \
--tls-cert-path /etc/certs/server.crt \
--tls-key-path /etc/certs/server.key \
--client-cert-path /etc/certs/client.crt \
--client-key-path /etc/certs/client.key \
--ca-cert-path /etc/certs/ca.crt \
--api-key "${ROUTER_API_KEY}"
Security Checklist:
Enable TLS for gateway HTTPS termination
Enable mTLS for worker communication when workers are on untrusted networks
Set --api-key to protect router endpoints
Use Kubernetes Secrets or a secrets manager for credentials
Rotate certificates and API keys periodically
Restrict network access with firewalls or network policies
High Availability#
Scaling Strategy:
The gateway supports running multiple replicas behind a load balancer for high availability. However, there are important considerations:
| Component | Shared Across Replicas | Impact |
|---|---|---|
| Worker Registry | No (independent) | Each replica discovers workers independently |
| Radix Cache Tree | No (independent) | Cache hits may decrease by 10-20% |
| Circuit Breaker State | No (independent) | Each replica tracks failures independently |
| Rate Limiting | No (independent) | Limits apply per-replica, not globally |
Recommendations:
Prefer horizontal scaling over vertical scaling: Deploy multiple smaller gateway replicas rather than one large instance with excessive CPU and memory. This provides:
Better fault tolerance (single replica failure doesn’t take down the gateway)
More predictable resource usage
Easier capacity planning
Use Kubernetes Service Discovery: Let the gateway automatically discover and manage workers:
python -m sglang_router.launch_router \
  --service-discovery \
  --selector app=sglang-worker \
  --service-discovery-namespace production
Accept cache efficiency trade-off: With multiple replicas, the cache-aware routing policy’s radix tree is not synchronized across replicas. This means:
Each replica builds its own cache tree
Requests from the same user may hit different replicas
Expected cache hit rate reduction: 10-20%
This is often acceptable given the HA benefits
Configure session affinity (optional): If cache efficiency is critical, configure your load balancer for session affinity based on a consistent hash of the request (e.g., user ID or API key).
Example HA Architecture:
┌─────────────────┐
│ Load Balancer │
│ (L4/L7) │
└────────┬────────┘
┌──────────────┼──────────────┐
│ │ │
┌─────▼─────┐ ┌─────▼─────┐ ┌─────▼─────┐
│ Gateway │ │ Gateway │ │ Gateway │
│ Replica 1 │ │ Replica 2 │ │ Replica 3 │
└─────┬─────┘ └─────┬─────┘ └─────┬─────┘
│ │ │
└──────────────┼──────────────┘
│
┌──────────────┼──────────────┐
│ │ │
┌─────▼─────┐ ┌─────▼─────┐ ┌─────▼─────┐
│ Worker │ │ Worker │ │ Worker │
│ Pod 1 │ │ Pod 2 │ │ Pod N │
└───────────┘ └───────────┘ └───────────┘
Performance#
Use gRPC mode for high throughput:
gRPC mode provides the highest performance for SGLang workers:
# Start workers in gRPC mode
python -m sglang.launch_server \
--model meta-llama/Llama-3.1-8B-Instruct \
--grpc-mode \
--port 20000
# Configure gateway for gRPC
python -m sglang_router.launch_router \
--worker-urls grpc://worker1:20000 grpc://worker2:20000 \
--model-path meta-llama/Llama-3.1-8B-Instruct \
--policy cache_aware
Performance Benefits of gRPC:
Native Rust tokenization (no Python overhead)
Streaming with lower latency
Built-in reasoning parser execution
Tool call parsing in the gateway
Reduced serialization overhead
Tuning Recommendations:
| Parameter | Recommendation | Reason |
|---|---|---|
| --policy | cache_aware | Best for repeated prompts, ~30% latency reduction |
| --max-concurrent-requests | 2-4x worker count | Prevent overload while maximizing throughput |
| --queue-size | 2x max-concurrent | Buffer for burst traffic |
| --queue-timeout-secs | Based on max generation length | Prevent stuck requests |
Kubernetes Deployment#
Pod Labeling for Service Discovery:
For the gateway to discover workers automatically, label your worker pods consistently:
# Worker Deployment (Regular Mode)
apiVersion: apps/v1
kind: Deployment
metadata:
name: sglang-worker
namespace: production
spec:
replicas: 4
selector:
matchLabels:
app: sglang-worker
component: inference
template:
metadata:
labels:
app: sglang-worker
component: inference
model: llama-3-8b
spec:
containers:
- name: worker
image: lmsysorg/sglang:latest
ports:
- containerPort: 8000
name: http
- containerPort: 20000
name: grpc
Gateway configuration for discovery:
python -m sglang_router.launch_router \
--service-discovery \
--selector app=sglang-worker component=inference \
--service-discovery-namespace production \
--service-discovery-port 8000
PD (Prefill/Decode) Mode Labeling:
# Prefill Worker
metadata:
labels:
app: sglang-worker
component: prefill
annotations:
sglang.ai/bootstrap-port: "9001"
# Decode Worker
metadata:
labels:
app: sglang-worker
component: decode
Gateway configuration for PD discovery:
python -m sglang_router.launch_router \
--service-discovery \
--pd-disaggregation \
--prefill-selector app=sglang-worker component=prefill \
--decode-selector app=sglang-worker component=decode \
--service-discovery-namespace production
RBAC Requirements:
The gateway needs permissions to watch pods:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: sglang-gateway
namespace: production
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: sglang-gateway
namespace: production
subjects:
- kind: ServiceAccount
name: sglang-gateway
namespace: production
roleRef:
kind: Role
name: sglang-gateway
apiGroup: rbac.authorization.k8s.io
Monitoring with PromQL#
Configure Prometheus to scrape the gateway metrics endpoint (default: :29000/metrics).
Essential Dashboards:
1. Request Rate and Latency:
# Request rate by endpoint
sum(rate(smg_http_requests_total[5m])) by (path, method)
# P50 latency
histogram_quantile(0.50, sum(rate(smg_http_request_duration_seconds_bucket[5m])) by (le))
# P99 latency
histogram_quantile(0.99, sum(rate(smg_http_request_duration_seconds_bucket[5m])) by (le))
# Error rate
sum(rate(smg_http_responses_total{status=~"5.."}[5m])) / sum(rate(smg_http_responses_total[5m]))
2. Worker Health:
# Healthy workers
sum(smg_worker_pool_size)
# Active connections per worker
smg_worker_connections_active
# Worker health check failures
sum(rate(smg_worker_health_checks_total{result="failure"}[5m])) by (worker_id)
3. Circuit Breaker Status:
# Circuit breaker states (0=closed, 1=open, 2=half-open)
smg_worker_cb_state
# Circuit breaker transitions
sum(rate(smg_worker_cb_transitions_total[5m])) by (worker_id, from_state, to_state)
# Workers with open circuits
count(smg_worker_cb_state == 1)
4. Inference Performance (gRPC mode):
# Time to first token (P50)
histogram_quantile(0.50, sum(rate(smg_router_ttft_seconds_bucket[5m])) by (le, model))
# Time per output token (P99)
histogram_quantile(0.99, sum(rate(smg_router_tpot_seconds_bucket[5m])) by (le, model))
# Token throughput
sum(rate(smg_router_tokens_total[5m])) by (model, direction)
# Generation duration P95
histogram_quantile(0.95, sum(rate(smg_router_generation_duration_seconds_bucket[5m])) by (le))
5. Rate Limiting and Queuing:
# Rate limit rejections
sum(rate(smg_http_rate_limit_total{decision="rejected"}[5m]))
# Queue depth (if using concurrency limiting)
smg_worker_requests_active
# Retry attempts
sum(rate(smg_worker_retries_total[5m])) by (worker_id)
# Exhausted retries (failures after all retries)
sum(rate(smg_worker_retries_exhausted_total[5m]))
6. MCP Tool Execution:
# Tool call rate
sum(rate(smg_mcp_tool_calls_total[5m])) by (server, tool)
# Tool latency P95
histogram_quantile(0.95, sum(rate(smg_mcp_tool_duration_seconds_bucket[5m])) by (le, tool))
# Active MCP server connections
smg_mcp_servers_active
Alerting Rules Example:
groups:
- name: sglang-gateway
rules:
- alert: HighErrorRate
expr: |
sum(rate(smg_http_responses_total{status=~"5.."}[5m]))
/ sum(rate(smg_http_responses_total[5m])) > 0.05
for: 5m
labels:
severity: critical
annotations:
summary: "High error rate on SGLang Gateway"
- alert: CircuitBreakerOpen
expr: count(smg_worker_cb_state == 1) > 0
for: 2m
labels:
severity: warning
annotations:
summary: "Worker circuit breaker is open"
- alert: HighLatency
expr: |
histogram_quantile(0.99, sum(rate(smg_http_request_duration_seconds_bucket[5m])) by (le)) > 30
for: 5m
labels:
severity: warning
annotations:
summary: "P99 latency exceeds 30 seconds"
- alert: NoHealthyWorkers
expr: sum(smg_worker_pool_size) == 0
for: 1m
labels:
severity: critical
annotations:
summary: "No healthy workers available"
Configuration Reference#
Core Settings#
| Parameter | Type | Default | Description |
|---|---|---|---|
| --host | str | 127.0.0.1 | Router host |
| --port | int | 30000 | Router port |
| --worker-urls | list | [] | Worker URLs (HTTP or gRPC) |
| --policy | str | cache_aware | Routing policy |
| --max-concurrent-requests | int | -1 | Concurrency limit (-1 disables) |
| | int | 600 | Request timeout (seconds) |
| | int | 256MB | Maximum request payload |
Prefill/Decode#
| Parameter | Type | Default | Description |
|---|---|---|---|
| --pd-disaggregation | flag | false | Enable PD mode |
| --prefill | list | [] | Prefill URLs + optional bootstrap ports |
| --decode | list | [] | Decode URLs |
| --prefill-policy | str | None | Override policy for prefill nodes |
| --decode-policy | str | None | Override policy for decode nodes |
| --worker-startup-timeout-secs | int | 600 | Worker init timeout |
Kubernetes Discovery#
| Parameter | Type | Description |
|---|---|---|
| --service-discovery | flag | Enable discovery |
| --selector | list | Label selectors (key=value) |
| --prefill-selector, --decode-selector | list | PD mode selectors |
| --service-discovery-namespace | str | Namespace to watch |
| --service-discovery-port | int | Worker port (default 80) |
| | str | Annotation for bootstrap ports |
TLS Configuration#
| Parameter | Type | Description |
|---|---|---|
| --tls-cert-path | str | Server certificate for gateway HTTPS (PEM) |
| --tls-key-path | str | Server private key for gateway HTTPS (PEM) |
| --client-cert-path | str | Client certificate for worker mTLS (PEM) |
| --client-key-path | str | Client private key for worker mTLS (PEM) |
| --ca-cert-path | str | CA certificate for verifying workers (PEM, repeatable) |
Troubleshooting#
Workers Never Ready#
Increase --worker-startup-timeout-secs or ensure health probes respond before router startup.
Load Imbalance / Hot Workers#
Inspect smg_router_requests_total by worker and tune cache-aware thresholds (--balance-*, --cache-threshold).
Circuit Breaker Flapping#
Increase --cb-failure-threshold or extend the timeout/window durations. Consider temporarily disabling retries.
Queue Overflow (429)#
Increase --queue-size or reduce client concurrency. Ensure --max-concurrent-requests matches downstream capacity.
Memory Growth#
Reduce --max-tree-size or lower --eviction-interval-secs for more aggressive cache pruning.
Debugging#
python -m sglang_router.launch_router \
--worker-urls http://worker1:8000 \
--log-level debug \
--log-dir ./router_logs
gRPC Connection Issues#
Ensure workers are started with --grpc-mode and verify --model-path or --tokenizer-path is provided to the router.
Tokenizer Loading Failures#
Check HuggingFace Hub credentials (HF_TOKEN environment variable) for private models. Verify local paths are accessible.
SGLang Model Gateway continues to evolve alongside the SGLang runtime. Keep CLI flags, integrations, and documentation aligned when adopting new features or contributing improvements.