
Commit 206e257

Move requirements into their own directory (vllm-project#12547)
Signed-off-by: Harry Mellor <[email protected]>
1 parent e02883c commit 206e257


50 files changed (+125, -128 lines)
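In short, the root-level `requirements-<name>.txt` files (and `docs/requirements-docs.txt`) now live under `requirements/` as `requirements/<name>.txt`, and every CI script, Dockerfile, and doc that referenced them is updated to the new paths. A minimal before/after sketch for a local checkout (paths taken from the diff below; which files you actually need depends on your target platform):

```bash
# Old layout (before this commit):
pip install -r requirements-dev.txt
pip install -r requirements-cuda.txt

# New layout (after this commit):
pip install -r requirements/dev.txt
pip install -r requirements/cuda.txt
```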

.buildkite/nightly-benchmarks/scripts/run-nightly-benchmarks.sh (+1, -1)

@@ -426,7 +426,7 @@ main() {

 pip install -U transformers

- pip install -r requirements-dev.txt
+ pip install -r requirements/dev.txt
 which genai-perf

 # check storage

.buildkite/run-cpu-test.sh (+1, -1)

@@ -35,7 +35,7 @@ function cpu_tests() {
 # Run basic model test
 docker exec cpu-test-"$BUILDKITE_BUILD_NUMBER"-"$NUMA_NODE" bash -c "
 set -e
- pip install -r vllm/requirements-test.txt
+ pip install -r vllm/requirements/test.txt
 pytest -v -s tests/models/decoder_only/language -m cpu_model
 pytest -v -s tests/models/embedding/language -m cpu_model
 pytest -v -s tests/models/encoder_decoder/language -m cpu_model

.buildkite/test-pipeline.yaml (+1, -1)

@@ -35,7 +35,7 @@ steps:
 fast_check: true
 no_gpu: True
 commands:
- - pip install -r requirements-docs.txt
+ - pip install -r ../../requirements/docs.txt
 - SPHINXOPTS=\"-W\" make html
 # Check API reference (if it fails, you may have missing mock imports)
 - grep \"sig sig-object py\" build/html/api/inference_params.html

.github/workflows/publish.yml (+1, -1)

@@ -50,7 +50,7 @@ jobs:
 # matrix:
 # os: ['ubuntu-20.04']
 # python-version: ['3.9', '3.10', '3.11', '3.12']
- # pytorch-version: ['2.4.0'] # Must be the most recent version that meets requirements-cuda.txt.
+ # pytorch-version: ['2.4.0'] # Must be the most recent version that meets requirements/cuda.txt.
 # cuda-version: ['11.8', '12.1']

 # steps:

.github/workflows/scripts/build.sh (+1, -1)

@@ -9,7 +9,7 @@ PATH=${cuda_home}/bin:$PATH
 LD_LIBRARY_PATH=${cuda_home}/lib64:$LD_LIBRARY_PATH

 # Install requirements
- $python_executable -m pip install -r requirements-build.txt -r requirements-cuda.txt
+ $python_executable -m pip install -r requirements/build.txt -r requirements/cuda.txt

 # Limit the number of parallel jobs to avoid OOM
 export MAX_JOBS=1

.pre-commit-config.yaml (+2, -2)

@@ -44,8 +44,8 @@ repos:
 rev: 0.6.2
 hooks:
 - id: pip-compile
- args: [requirements-test.in, -o, requirements-test.txt]
- files: ^requirements-test\.(in|txt)$
+ args: [requirements/test.in, -o, requirements/test.txt]
+ files: ^requirements/test\.(in|txt)$
 - repo: local
 hooks:
 - id: mypy-local
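With the rename, the pip-compile hook now watches `requirements/test.in` and `requirements/test.txt`. Run from the repository root, it is roughly equivalent to the sketch below (assuming a pip-compile-style CLI such as the one from pip-tools or uv; the hook's exact wrapper may differ):

```bash
# Regenerate the pinned test requirements from the .in file (assumed invocation):
pip-compile requirements/test.in -o requirements/test.txt
```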

.readthedocs.yaml (+1, -1)

@@ -18,4 +18,4 @@ formats: []
 # Optionally declare the Python requirements required to build your docs
 python:
 install:
- - requirements: docs/requirements-docs.txt
+ - requirements: requirements/docs.txt

Dockerfile (+12, -12)

@@ -55,10 +55,10 @@ RUN --mount=type=cache,target=/root/.cache/uv \
 uv pip install --index-url https://download.pytorch.org/whl/nightly/cu126 "torch==2.7.0.dev20250121+cu126" "torchvision==0.22.0.dev20250121"; \
 fi

- COPY requirements-common.txt requirements-common.txt
- COPY requirements-cuda.txt requirements-cuda.txt
+ COPY requirements/common.txt requirements/common.txt
+ COPY requirements/cuda.txt requirements/cuda.txt
 RUN --mount=type=cache,target=/root/.cache/uv \
- uv pip install -r requirements-cuda.txt
+ uv pip install -r requirements/cuda.txt

 # cuda arch list used by torch
 # can be useful for both `dev` and `test`

@@ -76,14 +76,14 @@ FROM base AS build
 ARG TARGETPLATFORM

 # install build dependencies
- COPY requirements-build.txt requirements-build.txt
+ COPY requirements/build.txt requirements/build.txt

 # This timeout (in seconds) is necessary when installing some dependencies via uv since it's likely to time out
 # Reference: https://github.com/astral-sh/uv/pull/1694
 ENV UV_HTTP_TIMEOUT=500

 RUN --mount=type=cache,target=/root/.cache/uv \
- uv pip install -r requirements-build.txt
+ uv pip install -r requirements/build.txt

 COPY . .
 ARG GIT_REPO_CHECK=0

@@ -151,11 +151,11 @@ FROM base as dev
 # Reference: https://github.com/astral-sh/uv/pull/1694
 ENV UV_HTTP_TIMEOUT=500

- COPY requirements-lint.txt requirements-lint.txt
- COPY requirements-test.txt requirements-test.txt
- COPY requirements-dev.txt requirements-dev.txt
+ COPY requirements/lint.txt requirements/lint.txt
+ COPY requirements/test.txt requirements/test.txt
+ COPY requirements/dev.txt requirements/dev.txt
 RUN --mount=type=cache,target=/root/.cache/uv \
- uv pip install -r requirements-dev.txt
+ uv pip install -r requirements/dev.txt
 #################### DEV IMAGE ####################

 #################### vLLM installation IMAGE ####################

@@ -230,9 +230,9 @@ COPY examples examples
 # some issues w.r.t. JIT compilation. Therefore we need to
 # install build dependencies for JIT compilation.
 # TODO: Remove this once FlashInfer AOT wheel is fixed
- COPY requirements-build.txt requirements-build.txt
+ COPY requirements/build.txt requirements/build.txt
 RUN --mount=type=cache,target=/root/.cache/uv \
- uv pip install -r requirements-build.txt
+ uv pip install -r requirements/build.txt

 #################### vLLM installation IMAGE ####################

@@ -249,7 +249,7 @@ ENV UV_HTTP_TIMEOUT=500

 # install development dependencies (for testing)
 RUN --mount=type=cache,target=/root/.cache/uv \
- uv pip install -r requirements-dev.txt
+ uv pip install -r requirements/dev.txt

 # install development dependencies (for testing)
 RUN --mount=type=cache,target=/root/.cache/uv \

Dockerfile.arm (+5, -5)

@@ -26,18 +26,18 @@ WORKDIR /workspace
 ARG PIP_EXTRA_INDEX_URL="https://download.pytorch.org/whl/cpu"
 ENV PIP_EXTRA_INDEX_URL=${PIP_EXTRA_INDEX_URL}
 RUN --mount=type=cache,target=/root/.cache/pip \
- --mount=type=bind,src=requirements-build.txt,target=requirements-build.txt \
+ --mount=type=bind,src=requirements/build.txt,target=requirements/build.txt \
 pip install --upgrade pip && \
- pip install -r requirements-build.txt
+ pip install -r requirements/build.txt

 FROM cpu-test-arm AS build

 WORKDIR /workspace/vllm

 RUN --mount=type=cache,target=/root/.cache/pip \
- --mount=type=bind,src=requirements-common.txt,target=requirements-common.txt \
- --mount=type=bind,src=requirements-cpu.txt,target=requirements-cpu.txt \
- pip install -v -r requirements-cpu.txt
+ --mount=type=bind,src=requirements/common.txt,target=requirements/common.txt \
+ --mount=type=bind,src=requirements/cpu.txt,target=requirements/cpu.txt \
+ pip install -v -r requirements/cpu.txt

 COPY . .
 ARG GIT_REPO_CHECK=0

Dockerfile.cpu (+5, -5)

@@ -29,18 +29,18 @@ WORKDIR /workspace
 ARG PIP_EXTRA_INDEX_URL="https://download.pytorch.org/whl/cpu"
 ENV PIP_EXTRA_INDEX_URL=${PIP_EXTRA_INDEX_URL}
 RUN --mount=type=cache,target=/root/.cache/pip \
- --mount=type=bind,src=requirements-build.txt,target=requirements-build.txt \
+ --mount=type=bind,src=requirements/build.txt,target=requirements/build.txt \
 pip install --upgrade pip && \
- pip install -r requirements-build.txt
+ pip install -r requirements/build.txt

 FROM cpu-test-1 AS build

 WORKDIR /workspace/vllm

 RUN --mount=type=cache,target=/root/.cache/pip \
- --mount=type=bind,src=requirements-common.txt,target=requirements-common.txt \
- --mount=type=bind,src=requirements-cpu.txt,target=requirements-cpu.txt \
- pip install -v -r requirements-cpu.txt
+ --mount=type=bind,src=requirements/common.txt,target=requirements/common.txt \
+ --mount=type=bind,src=requirements/cpu.txt,target=requirements/cpu.txt \
+ pip install -v -r requirements/cpu.txt

 COPY . .
 ARG GIT_REPO_CHECK=0

Dockerfile.hpu (+1, -1)

@@ -4,7 +4,7 @@ COPY ./ /workspace/vllm

 WORKDIR /workspace/vllm

- RUN pip install -v -r requirements-hpu.txt
+ RUN pip install -v -r requirements/hpu.txt

 ENV no_proxy=localhost,127.0.0.1
 ENV PT_HPU_ENABLE_LAZY_COLLECTIVES=true

Dockerfile.neuron (+1, -1)

@@ -36,7 +36,7 @@ RUN --mount=type=bind,source=.git,target=.git \

 RUN python3 -m pip install -U \
 'cmake>=3.26' ninja packaging 'setuptools-scm>=8' wheel jinja2 \
- -r requirements-neuron.txt
+ -r requirements/neuron.txt

 ENV VLLM_TARGET_DEVICE neuron
 RUN --mount=type=bind,source=.git,target=.git \

Dockerfile.openvino (+1, -1)

@@ -16,7 +16,7 @@ RUN --mount=type=bind,source=.git,target=.git \

 RUN python3 -m pip install -U pip
 # install build requirements
- RUN PIP_EXTRA_INDEX_URL="https://download.pytorch.org/whl/cpu" python3 -m pip install -r /workspace/requirements-build.txt
+ RUN PIP_EXTRA_INDEX_URL="https://download.pytorch.org/whl/cpu" python3 -m pip install -r /workspace/requirements/build.txt
 # build vLLM with OpenVINO backend
 RUN PIP_EXTRA_INDEX_URL="https://download.pytorch.org/whl/cpu" VLLM_TARGET_DEVICE="openvino" python3 -m pip install /workspace

Dockerfile.ppc64le (+2, -2)

@@ -6,7 +6,7 @@ ENV PATH="/usr/local/cargo/bin:$PATH:/opt/conda/bin/"

 RUN apt-get update -y && apt-get install -y git wget kmod curl vim libnuma-dev libsndfile-dev libprotobuf-dev build-essential ffmpeg libsm6 libxext6 libgl1 libssl-dev

- # Some packages in requirements-cpu are installed here
+ # Some packages in requirements/cpu are installed here
 # IBM provides optimized packages for ppc64le processors in the open-ce project for mamba
 # Currently these may not be available for venv or pip directly
 RUN micromamba install -y -n base -c https://ftp.osuosl.org/pub/open-ce/1.11.0-p10/ -c defaults python=3.10 rust && micromamba clean --all --yes

@@ -21,7 +21,7 @@ RUN --mount=type=bind,source=.git,target=.git \
 RUN --mount=type=cache,target=/root/.cache/pip \
 RUSTFLAGS='-L /opt/conda/lib' pip install -v --prefer-binary --extra-index-url https://repo.fury.io/mgiessing \
 'cmake>=3.26' ninja packaging 'setuptools-scm>=8' wheel jinja2 \
- -r requirements-cpu.txt \
+ -r requirements/cpu.txt \
 xformers uvloop==0.20.0

 RUN --mount=type=bind,source=.git,target=.git \

Dockerfile.rocm (+3, -3)

@@ -38,7 +38,7 @@ FROM fetch_vllm AS build_vllm
 ARG USE_CYTHON
 # Build vLLM
 RUN cd vllm \
- && python3 -m pip install -r requirements-rocm.txt \
+ && python3 -m pip install -r requirements/rocm.txt \
 && python3 setup.py clean --all \
 && if [ ${USE_CYTHON} -eq "1" ]; then python3 setup_cython.py build_ext --inplace; fi \
 && python3 setup.py bdist_wheel --dist-dir=dist

@@ -60,7 +60,7 @@ RUN python3 -m pip install --upgrade pip && rm -rf /var/lib/apt/lists/*
 # Install vLLM
 RUN --mount=type=bind,from=export_vllm,src=/,target=/install \
 cd /install \
- && pip install -U -r requirements-rocm.txt \
+ && pip install -U -r requirements/rocm.txt \
 && pip uninstall -y vllm \
 && pip install *.whl

@@ -99,7 +99,7 @@ RUN if [ ${BUILD_RPD} -eq "1" ]; then \
 # Install vLLM
 RUN --mount=type=bind,from=export_vllm,src=/,target=/install \
 cd /install \
- && pip install -U -r requirements-rocm.txt \
+ && pip install -U -r requirements/rocm.txt \
 && pip uninstall -y vllm \
 && pip install *.whl

Dockerfile.s390x (+4, -4)

@@ -58,7 +58,7 @@ RUN --mount=type=cache,target=/root/.cache/uv \
 cd ../../python && \
 export PYARROW_PARALLEL=4 && \
 export ARROW_BUILD_TYPE=release && \
- uv pip install -r requirements-build.txt && \
+ uv pip install -r requirements/build.txt && \
 python setup.py build_ext --build-type=$ARROW_BUILD_TYPE --bundle-arrow-cpp bdist_wheel

 FROM python-install AS numa-build

@@ -120,16 +120,16 @@ RUN --mount=type=cache,target=/root/.cache/uv \
 --mount=type=bind,from=rust,source=/root/.rustup,target=/root/.rustup,rw \
 --mount=type=bind,from=pyarrow,source=/tmp/arrow/python/dist,target=/tmp/arrow-wheels \
 --mount=type=bind,from=torch-vision,source=/tmp/vision/dist,target=/tmp/vision-wheels/ \
- sed -i '/^torch/d' requirements-build.txt && \
+ sed -i '/^torch/d' requirements/build.txt && \
 ARROW_WHL_FILE=$(ls /tmp/arrow-wheels/pyarrow-*.whl | head -n 1) && \
 VISION_WHL_FILE=$(ls /tmp/vision-wheels/*.whl | head -n 1) && \
 uv pip install -v \
 $ARROW_WHL_FILE \
 $VISION_WHL_FILE \
 --extra-index-url https://download.pytorch.org/whl/nightly/cpu \
 --index-strategy unsafe-best-match \
- -r requirements-build.txt \
- -r requirements-cpu.txt
+ -r requirements/build.txt \
+ -r requirements/cpu.txt

 # Build and install vllm
 RUN --mount=type=cache,target=/root/.cache/uv \

Dockerfile.tpu (+1, -1)

@@ -19,7 +19,7 @@ ENV VLLM_TARGET_DEVICE="tpu"
 RUN --mount=type=cache,target=/root/.cache/pip \
 --mount=type=bind,source=.git,target=.git \
 python3 -m pip install \
- -r requirements-tpu.txt
+ -r requirements/tpu.txt
 RUN python3 setup.py develop

 # install development dependencies (for testing)

Dockerfile.xpu (+3, -3)

@@ -25,12 +25,12 @@ RUN apt-get update -y && \
 wget

 WORKDIR /workspace/vllm
- COPY requirements-xpu.txt /workspace/vllm/requirements-xpu.txt
- COPY requirements-common.txt /workspace/vllm/requirements-common.txt
+ COPY requirements/xpu.txt /workspace/vllm/requirements/xpu.txt
+ COPY requirements/common.txt /workspace/vllm/requirements/common.txt

 RUN --mount=type=cache,target=/root/.cache/pip \
 pip install --no-cache-dir \
- -r requirements-xpu.txt
+ -r requirements/xpu.txt

 RUN git clone https://github.com/intel/pti-gpu && \
 cd pti-gpu/sdk && \

MANIFEST.in (+5, -5)

@@ -1,9 +1,9 @@
 include LICENSE
- include requirements-common.txt
- include requirements-cuda.txt
- include requirements-rocm.txt
- include requirements-neuron.txt
- include requirements-cpu.txt
+ include requirements/common.txt
+ include requirements/cuda.txt
+ include requirements/rocm.txt
+ include requirements/neuron.txt
+ include requirements/cpu.txt
 include CMakeLists.txt

 recursive-include cmake *

docs/README.md (+1, -1)

@@ -4,7 +4,7 @@

 ```bash
 # Install dependencies.
- pip install -r requirements-docs.txt
+ pip install -r ../requirements/docs.txt

 # Build the docs.
 make clean
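For reference, with the relocated file the docs build sketched in this README would look roughly like the following, run from the docs/ directory (the SPHINXOPTS="-W" flag, which turns Sphinx warnings into errors, is taken from the .buildkite/test-pipeline.yaml change above):

```bash
pip install -r ../requirements/docs.txt   # docs dependencies now live one level up
make clean
SPHINXOPTS="-W" make html
```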

docs/source/contributing/overview.md (+1, -1)

@@ -23,7 +23,7 @@ Check out the [building from source](#build-from-source) documentation for detai
 ## Testing

 ```bash
- pip install -r requirements-dev.txt
+ pip install -r requirements/dev.txt

 # Linting, formatting and static type checking
 pre-commit install --hook-type pre-commit --hook-type commit-msg

docs/source/getting_started/installation/ai_accelerator/hpu-gaudi.inc.md (+2, -2)

@@ -63,7 +63,7 @@ To build and install vLLM from source, run:
 ```console
 git clone https://github.com/vllm-project/vllm.git
 cd vllm
- pip install -r requirements-hpu.txt
+ pip install -r requirements/hpu.txt
 python setup.py develop
 ```

@@ -73,7 +73,7 @@ Currently, the latest features and performance optimizations are developed in Ga
 git clone https://github.com/HabanaAI/vllm-fork.git
 cd vllm-fork
 git checkout habana_main
- pip install -r requirements-hpu.txt
+ pip install -r requirements/hpu.txt
 python setup.py develop
 ```

docs/source/getting_started/installation/ai_accelerator/neuron.inc.md (+1, -1)

@@ -116,7 +116,7 @@ Once neuronx-cc and transformers-neuronx packages are installed, we will be able
 ```console
 git clone https://github.com/vllm-project/vllm.git
 cd vllm
- pip install -U -r requirements-neuron.txt
+ pip install -U -r requirements/neuron.txt
 VLLM_TARGET_DEVICE="neuron" pip install .
 ```

docs/source/getting_started/installation/ai_accelerator/openvino.inc.md (+1, -1)

@@ -32,7 +32,7 @@ Second, clone vLLM and install prerequisites for the vLLM OpenVINO backend insta
 ```console
 git clone https://github.com/vllm-project/vllm.git
 cd vllm
- pip install -r requirements-build.txt --extra-index-url https://download.pytorch.org/whl/cpu
+ pip install -r requirements/build.txt --extra-index-url https://download.pytorch.org/whl/cpu
 ```

 Finally, install vLLM with OpenVINO backend:

docs/source/getting_started/installation/ai_accelerator/tpu.inc.md (+1, -1)

@@ -151,7 +151,7 @@ pip uninstall torch torch-xla -y
 Install build dependencies:

 ```bash
- pip install -r requirements-tpu.txt
+ pip install -r requirements/tpu.txt
 sudo apt-get install libopenblas-base libopenmpi-dev libomp-dev
 ```

docs/source/getting_started/installation/cpu/apple.inc.md (+1, -1)

@@ -25,7 +25,7 @@ After installation of XCode and the Command Line Tools, which include Apple Clan
 ```console
 git clone https://github.com/vllm-project/vllm.git
 cd vllm
- pip install -r requirements-cpu.txt
+ pip install -r requirements/cpu.txt
 pip install -e .
 ```

docs/source/getting_started/installation/cpu/build.inc.md (+1, -1)

@@ -18,7 +18,7 @@ Third, install Python packages for vLLM CPU backend building:
 ```console
 pip install --upgrade pip
 pip install "cmake>=3.26" wheel packaging ninja "setuptools-scm>=8" numpy
- pip install -v -r requirements-cpu.txt --extra-index-url https://download.pytorch.org/whl/cpu
+ pip install -v -r requirements/cpu.txt --extra-index-url https://download.pytorch.org/whl/cpu
 ```

 Finally, build and install vLLM CPU backend:

0 commit comments