# Contributing to SGLang Diffusion

This guide outlines the requirements for contributing to the SGLang Diffusion module (`sglang.multimodal_gen`).
## On AI-Assisted ("Vibe Coding") PRs

Vibe-coded PRs are welcome: we judge code quality, not how it was produced. The bar is the same for all PRs:

- **No over-commenting.** If the name says it all, skip the docstring.
- **No over-catching.** Don't guard against errors that virtually never happen in practice.
- **Test before submitting.** AI-generated code can be subtly wrong; verify correctness end-to-end.
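To make the "no over-catching" rule concrete, here is a purely illustrative Python sketch (the function names and the `config` parameter are hypothetical, not from the codebase) contrasting a defensively over-wrapped lookup with the preferred direct form:

```python
def resolve_scope_overly_defensive(config: dict) -> str:
    # Over-catching: the try/except guards against failures that a plain
    # dict lookup never raises in practice, and silently hides real bugs.
    try:
        scope = config.get("scope")
        if scope is None:
            scope = "cli"
        return str(scope)
    except Exception:
        return "cli"


def resolve_scope(config: dict) -> str:
    # Preferred: one line, and it fails loudly if config is the wrong type.
    return config.get("scope", "cli")
```

Both return the same result on valid input; the second is shorter and surfaces genuine errors instead of masking them.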
## Commit Message Convention

We follow a structured commit message format to maintain a clean history.

Format:

```
[diffusion] <scope>: <subject>
```

Examples:

```
[diffusion] cli: add --perf-dump-path argument
[diffusion] scheduler: fix deadlock in batch processing
[diffusion] model: support Stable Diffusion 3.5
```
Rules:

- **Prefix**: always start with `[diffusion]`.
- **Scope** (optional): `cli`, `scheduler`, `model`, `pipeline`, `docs`, etc.
- **Subject**: imperative mood, short and clear (e.g., "add feature", not "added feature").
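The convention above can be expressed as a small regular expression. This checker is a hypothetical sketch for illustration, not part of the repo's tooling:

```python
import re

# Matches "[diffusion] <scope>: <subject>" where the scope part is optional.
COMMIT_RE = re.compile(
    r"^\[diffusion\]"            # mandatory prefix
    r"(?: (?P<scope>[a-z]+):)?"  # optional scope, e.g. cli, scheduler, model
    r" (?P<subject>\S.*)$"       # short imperative subject
)


def is_valid_commit(message: str) -> bool:
    """Return True if the first line of a commit message follows the convention."""
    return COMMIT_RE.match(message) is not None
```

For example, `is_valid_commit("[diffusion] cli: add --perf-dump-path argument")` returns `True`, while a message missing the `[diffusion]` prefix returns `False`.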
## Performance Reporting

For PRs that impact latency, throughput, or memory usage, please provide a performance comparison report.
### How to Generate a Report

1. **Baseline**: run the benchmark (for a single generation task):

   ```shell
   $ sglang generate --model-path <model> --prompt "A benchmark prompt" --perf-dump-path baseline.json
   ```

2. **New**: run the same benchmark on your branch, without modifying any `server_args` or `sampling_params`:

   ```shell
   $ sglang generate --model-path <model> --prompt "A benchmark prompt" --perf-dump-path new.json
   ```

3. **Compare**: run the compare script, which prints a Markdown table to the console:

   ```shell
   $ python python/sglang/multimodal_gen/benchmarks/compare_perf.py baseline.json new.json [new2.json ...]
   ### Performance Comparison Report
   ...
   ```

4. **Paste**: paste the table into the PR description.
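Conceptually, the comparison step reads the two perf dumps and emits one Markdown row per shared metric. The sketch below is an illustrative approximation, not the actual `compare_perf.py`; it assumes (hypothetically) that each dump is a flat JSON object mapping metric names to numbers, which may differ from the real dump schema:

```python
import json


def compare(baseline_path: str, new_path: str) -> str:
    """Render a Markdown table comparing metrics shared by two perf dumps."""
    with open(baseline_path) as f:
        base = json.load(f)
    with open(new_path) as f:
        new = json.load(f)

    lines = [
        "| Metric | Baseline | New | Delta |",
        "| --- | --- | --- | --- |",
    ]
    for name in sorted(base.keys() & new.keys()):
        delta = (new[name] - base[name]) / base[name] * 100
        lines.append(
            f"| {name} | {base[name]:.3f} | {new[name]:.3f} | {delta:+.1f}% |"
        )
    return "\n".join(lines)
```

A negative delta on a latency metric would indicate an improvement; pasting such a table into the PR description is exactly what step 4 asks for.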
## CI-Based Change Protection

Consider adding tests to the `pr-test` or `nightly-test` suites to safeguard your changes, especially for PRs that:

- support a new model
  - add a test case for the new model to `testcase_configs.py`
- support or fix important features
- significantly improve performance
Please run the corresponding test case, then update or add the baseline in `perf_baselines.json` by following the instructions printed to the console, if applicable.

See test for examples.
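As a rough illustration of what registering a test case might look like, here is a purely hypothetical entry; the actual schema of `testcase_configs.py` is defined in the repo and may differ in every field name:

```python
# Hypothetical test-case registration; every key and value here is
# illustrative, not the real testcase_configs.py schema.
NEW_MODEL_TESTCASE = {
    "name": "stable-diffusion-3.5-basic",  # hypothetical test-case name
    "model_path": "<model>",               # model under test
    "prompt": "A benchmark prompt",        # prompt used for the run
    "suite": "nightly-test",               # pr-test or nightly-test
}
```

Consult the existing entries in `testcase_configs.py` for the authoritative format before adding your own.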