This document describes how to set up the SGLang environment and run LLM inference on Intel GPUs. For more context about Intel GPU support within the PyTorch ecosystem, see “Getting Started on Intel GPU”.
Specifically, SGLang is optimized for Intel® Arc™ Pro B-Series Graphics and
Intel® Arc™ B-Series Graphics.
Optimized Model List
The following LLMs have been optimized on Intel GPUs, and more are on the way:
Note: The model identifiers listed in the table above have been verified on Intel® Arc™ B580 Graphics.
Installation
Install From Source
Currently, SGLang XPU only supports installation from source. Please refer to “Getting Started on Intel GPU” to install the XPU dependencies.
# Create and activate a conda environment
conda create -n sgl-xpu python=3.12 -y
conda activate sgl-xpu
# Set PyTorch XPU as primary pip install channel to avoid installing the larger CUDA-enabled version and prevent potential runtime issues.
pip3 install torch==2.11.0+xpu torchao torchvision torchaudio==2.11.0+xpu --index-url https://download.pytorch.org/whl/xpu
pip3 install xgrammar --no-deps # xgrammar will introduce CUDA-enabled triton which might conflict with XPU
# Clone the SGLang code
git clone https://github.com/sgl-project/sglang.git
cd sglang
git checkout <YOUR-DESIRED-VERSION>
# Use dedicated toml file
cd python
cp pyproject_xpu.toml pyproject.toml
# Install SGLang dependencies and build the SGLang main package
pip install --upgrade pip setuptools
pip install -v . --extra-index-url https://download.pytorch.org/whl/xpu
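After installation, it is worth confirming that PyTorch can see the XPU device before launching anything. A minimal sanity check using PyTorch's standard torch.xpu APIs, run inside the sgl-xpu environment created above:
# Verify that PyTorch detects the Intel GPU (XPU) device
python3 -c "import torch; print(torch.xpu.is_available()); print(torch.xpu.get_device_name(0))"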
Install Using Docker
The SGLang XPU Dockerfile is provided to facilitate the installation.
Replace <secret> below with your HuggingFace access token.
# Clone the SGLang repository
git clone https://github.com/sgl-project/sglang.git
cd sglang/docker
# Build the docker image
docker build -t sglang-xpu:latest -f xpu.Dockerfile .
# Initiate a docker container
docker run \
-it \
--privileged \
--ipc=host \
--network=host \
--group-add $(getent group video | cut -d: -f3) \
--device /dev/dri \
-v /dev/dri/by-path:/dev/dri/by-path \
-v /dev/shm:/dev/shm \
-v ~/.cache/huggingface:/root/.cache/huggingface \
-p 30000:30000 \
-e "HF_TOKEN=<secret>" \
sglang-xpu:latest /bin/bash
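Inside the container, you can confirm that the GPU was passed through correctly before starting the server. A quick check, assuming the image ships the XPU-enabled PyTorch installed above:
# Confirm the GPU render nodes are visible inside the container
ls /dev/dri
# Confirm PyTorch can enumerate the XPU device(s)
python3 -c "import torch; print(torch.xpu.device_count())"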
Launching the Serving Engine
Example command to launch SGLang serving:
# --tp 2: run with tensor parallelism across multiple GPUs
# --attention-backend intel_xpu: use the Intel-optimized XPU attention backend
# --page-size: the intel_xpu attention backend supports 32, 64, or 128
sglang serve \
--model-path <MODEL_ID_OR_PATH> \
--trust-remote-code \
--disable-overlap-schedule \
--device xpu \
--host 0.0.0.0 \
--tp 2 \
--attention-backend intel_xpu \
--page-size 128
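Once the server reports it is ready, a quick liveness check (a sketch, assuming the default port 30000):
# Probe the server's health endpoint
curl http://localhost:30000/health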
Benchmarking with Requests
You can benchmark the performance via the bench_serving script. Run the following command in another terminal.
python -m sglang.bench_serving \
--dataset-name random \
--random-input-len 1024 \
--random-output-len 1024 \
--num-prompts 1 \
--request-rate inf \
--random-range-ratio 1.0
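The command above measures single-request latency (--num-prompts 1). For a throughput-oriented run, you might raise the prompt count and set a finite request rate; the values below are illustrative placeholders, not tuned recommendations:
python -m sglang.bench_serving \
--dataset-name random \
--random-input-len 1024 \
--random-output-len 1024 \
--num-prompts 128 \
--request-rate 8 \
--random-range-ratio 1.0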
Detailed explanations of the parameters can be looked up with the following command:
python -m sglang.bench_serving -h
Additionally, requests can be formed with the OpenAI Completions API and sent via the command line (e.g., using curl) or via your own script, as in the sketch below.
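For example, a minimal completion request with curl (assuming the server launched above on the default port 30000; the prompt and max_tokens values are placeholders):
# Send a completion request to the OpenAI-compatible endpoint
curl http://localhost:30000/v1/completions \
-H "Content-Type: application/json" \
-d '{"model": "<MODEL_ID_OR_PATH>", "prompt": "The capital of France is", "max_tokens": 32}'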