Commit 07964e2

Authored by Dilip Gowda Bhagavan and Harry Mellor
docs: Add documentation for s390x cpu implementation (vllm-project#14198)
Signed-off-by: Dilip Gowda Bhagavan <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
1 parent 4bf82d4 commit 07964e2

File tree

3 files changed: +97 -0 lines changed

docs/source/getting_started/installation.md

+1
@@ -21,6 +21,7 @@ installation/ai_accelerator
   - Intel/AMD x86
   - ARM AArch64
   - Apple silicon
+  - IBM Z (S390X)
 - <project:installation/ai_accelerator.md>
   - Google TPU
   - Intel Gaudi

docs/source/getting_started/installation/cpu.md

+34
@@ -36,6 +36,16 @@ vLLM is a Python library that supports the following CPU variants. Select your C

 ::::

+::::{tab-item} IBM Z (S390X)
+:sync: s390x
+
+:::{include} cpu/s390x.inc.md
+:start-after: "# Installation"
+:end-before: "## Requirements"
+:::
+
+::::
+
 :::::

 ## Requirements
@@ -75,6 +85,16 @@ vLLM is a Python library that supports the following CPU variants. Select your C

 ::::

+::::{tab-item} IBM Z (S390X)
+:sync: s390x
+
+:::{include} cpu/s390x.inc.md
+:start-after: "## Requirements"
+:end-before: "## Set up using Python"
+:::
+
+::::
+
 :::::

 ## Set up using Python
@@ -123,6 +143,16 @@ Currently, there are no pre-built CPU wheels.

 ::::

+::::{tab-item} IBM Z (s390x)
+:sync: s390x
+
+:::{include} cpu/s390x.inc.md
+:start-after: "### Build wheel from source"
+:end-before: "## Set up using Docker"
+:::
+
+::::
+
 :::::

 ## Set up using Docker
@@ -147,6 +177,10 @@ $ docker run -it \
 For ARM or Apple silicon, use `Dockerfile.arm`
 ::::

+::::{tip}
+For IBM Z (s390x), use `Dockerfile.s390x` and in `docker run` use flag `--dtype float`
+::::
+
 ## Supported features

 vLLM CPU backend supports the following vLLM features:
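The tip names only the Dockerfile and the required dtype flag. The sketch below shows how a build-and-run could look; the image tag and `<your-model>` are placeholders, and it assumes the built image launches the OpenAI-compatible server like the other CPU images:

```console
$ docker build -f Dockerfile.s390x -t vllm-cpu-s390x-env .
$ docker run -it \
             --rm \
             --network=host \
             vllm-cpu-s390x-env \
             --model=<your-model> \
             --dtype float
```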
docs/source/getting_started/installation/cpu/s390x.inc.md (new file)

+62

@@ -0,0 +1,62 @@
# Installation

vLLM has experimental support for the s390x architecture on the IBM Z platform. For now, users must build vLLM from source to run it natively on IBM Z.

Currently, the CPU implementation for the s390x architecture supports only the FP32 datatype.

:::{attention}
There are no pre-built wheels or images for this device, so you must build vLLM from source.
:::
## Requirements

- OS: `Linux`
- SDK: `gcc/g++ >= 12.3.0` with Command Line Tools
- Instruction Set Architecture (ISA): VXE support is required. Works with Z14 and above (see the quick check below).
- Python packages to build and install from source: `pyarrow`, `torch` and `torchvision`
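To quickly verify VXE support on the host: the s390x kernel reports vector facilities in the `features` line of `/proc/cpuinfo`, and flag names such as `vxe` and `vxe2` vary with the machine generation.

```console
# Supported hosts list vector facilities such as vx, vxd and vxe among the CPU features.
grep '^features' /proc/cpuinfo | tr ' ' '\n' | grep '^vx'
```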
## Set up using Python

### Pre-built wheels

### Build wheel from source

Install the following packages from the package manager before building vLLM. For example, on RHEL 9.4:

```console
dnf install -y \
    which procps findutils tar vim git gcc g++ make patch cython zlib-devel \
    libjpeg-turbo-devel libtiff-devel libpng-devel libwebp-devel freetype-devel harfbuzz-devel \
    openssl-devel openblas openblas-devel wget autoconf automake libtool cmake numactl-devel
```
Install Rust >= 1.80, which is needed to build the `outlines-core` and `uvloop` Python packages:

```console
curl https://sh.rustup.rs -sSf | sh -s -- -y && \
    . "$HOME/.cargo/env"
```
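A quick check that the installed toolchain meets the minimum version:

```console
# rustup installs the latest stable toolchain by default; confirm it is >= 1.80.
rustc --version && cargo --version
```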
::::{tip}
Build the `torchvision` and `pyarrow` dependencies from source before building vLLM; a sketch of one way to do this follows.
::::
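There are no s390x wheels for these packages on PyPI. One rough way to build them from source is sketched below; the nightly torch install step, the upstream repository URL, and the pyarrow build flags are assumptions to adjust to your environment, and a pyarrow source build additionally needs the Arrow C++ development libraries.

```console
# torchvision needs torch at build time; pull the nightly CPU build (also used for vLLM below).
pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu

# Build torchvision from source against that torch.
git clone https://github.com/pytorch/vision.git
cd vision
pip install -v .
cd ..

# Build pyarrow from its sdist instead of looking for a wheel
# (assumes the Arrow C++ development libraries are already installed).
pip install -v --no-binary=pyarrow pyarrow
```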
Execute the following commands to build and install vLLM from source:

```console
# Remove torch from requirements-build.txt since we use nightly builds from the PyTorch index.
sed -i '/^torch/d' requirements-build.txt
pip install -v \
    --extra-index-url https://download.pytorch.org/whl/nightly/cpu \
    -r requirements-build.txt \
    -r requirements-cpu.txt && \
VLLM_TARGET_DEVICE=cpu python setup.py bdist_wheel && \
pip install dist/*.whl
```
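A quick import check to confirm the wheel installed correctly (it does not exercise any model execution):

```console
python -c "import vllm; print(vllm.__version__)"
```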
## Set up using Docker

### Pre-built images

### Build image from source

## Extra information
