Commit c6e14a6
[Hardware][Intel GPU] upgrade IPEX dependency to 2.6.10. (vllm-project#14564)
Signed-off-by: Kunshang Ji <[email protected]>
1 parent 07b4b7a

File tree

3 files changed: +35 -22 lines

Dockerfile.xpu (+10 -12)

@@ -1,4 +1,4 @@
-FROM intel/oneapi-basekit:2024.2.1-0-devel-ubuntu22.04 AS vllm-base
+FROM intel/deep-learning-essentials:2025.0.1-0-devel-ubuntu22.04 AS vllm-base
 
 RUN wget -O- https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB | gpg --dearmor | tee /usr/share/keyrings/intel-oneapi-archive-keyring.gpg > /dev/null && \
     echo "deb [signed-by=/usr/share/keyrings/intel-oneapi-archive-keyring.gpg] https://apt.repos.intel.com/oneapi all main " | tee /etc/apt/sources.list.d/oneAPI.list && \
@@ -21,7 +21,8 @@ RUN apt-get update -y && \
     python3 \
     python3-dev \
     python3-pip \
-    # vim \
+    libze-intel-gpu-dev \
+    libze-intel-gpu1 \
     wget
 
 WORKDIR /workspace/vllm
@@ -32,19 +33,10 @@ RUN --mount=type=cache,target=/root/.cache/pip \
     pip install --no-cache-dir \
         -r requirements/xpu.txt
 
-RUN git clone https://github.com/intel/pti-gpu && \
-    cd pti-gpu/sdk && \
-    git checkout 6c491f07a777ed872c2654ca9942f1d0dde0a082 && \
-    mkdir build && \
-    cd build && \
-    cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_TOOLCHAIN_FILE=../cmake/toolchains/icpx_toolchain.cmake -DBUILD_TESTING=OFF .. && \
-    make -j && \
-    cmake --install . --config Release --prefix "/usr/local"
-
 ENV LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/lib/"
 
 COPY . .
-ARG GIT_REPO_CHECK
+ARG GIT_REPO_CHECK=0
 RUN --mount=type=bind,source=.git,target=.git \
     if [ "$GIT_REPO_CHECK" != 0 ]; then bash tools/check_repo.sh; fi
 
@@ -54,6 +46,12 @@ RUN --mount=type=cache,target=/root/.cache/pip \
     --mount=type=bind,source=.git,target=.git \
     python3 setup.py install
 
+# See the XPU docs: intel-extension-for-pytorch 2.6.10+xpu must be installed
+# manually because some of its dependencies conflict with torch 2.6.0+xpu.
+# FIXME: this will be fixed in IPEX 2.7; this step is kept here for awareness.
+RUN --mount=type=cache,target=/root/.cache/pip \
+    pip install intel-extension-for-pytorch==2.6.10+xpu \
+    --extra-index-url=https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
+
 CMD ["/bin/bash"]
 
 FROM vllm-base AS vllm-openai
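The new Dockerfile comment pins intel-extension-for-pytorch 2.6.10+xpu alongside torch 2.6.0+xpu. As a minimal sketch of the pairing rule this implies (IPEX XPU releases track torch's major.minor release and carry the same `+xpu` local tag), using the `packaging` library; this check is illustrative only, not part of the commit:

```python
# Illustration (not from the commit): IPEX XPU releases track torch's
# major.minor version, so 2.6.10+xpu pairs with torch 2.6.0+xpu.
from packaging.version import Version

torch_ver = Version("2.6.0+xpu")
ipex_ver = Version("2.6.10+xpu")

# Major.minor release segments must match, and both carry the "+xpu" tag.
assert ipex_ver.release[:2] == torch_ver.release[:2] == (2, 6)
assert ipex_ver.local == torch_ver.local == "xpu"
print("torch", torch_ver, "pairs with ipex", ipex_ver)
```

The micro version (`.10` vs `.0`) is free to differ, which is why the pin targets the IPEX release line rather than an exact torch version.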

docs/source/getting_started/installation/gpu/xpu.inc.md (+13 -5)

@@ -9,7 +9,7 @@ There are no pre-built wheels or images for this device, so you must build vLLM
 ## Requirements
 
 - Supported Hardware: Intel Data Center GPU, Intel ARC GPU
-- OneAPI requirements: oneAPI 2024.2
+- OneAPI requirements: oneAPI 2025.0
 
 ## Set up using Python
 
@@ -19,21 +19,27 @@ Currently, there are no pre-built XPU wheels.
 
 ### Build wheel from source
 
-- First, install required driver and intel OneAPI 2024.2 or later.
+- First, install the required driver and Intel oneAPI 2025.0 or later.
 - Second, install Python packages for vLLM XPU backend building:
 
 ```console
-source /opt/intel/oneapi/setvars.sh
 pip install --upgrade pip
 pip install -v -r requirements/xpu.txt
 ```
 
-- Finally, build and install vLLM XPU backend:
+- Then, build and install the vLLM XPU backend:
 
 ```console
 VLLM_TARGET_DEVICE=xpu python setup.py install
 ```
 
+- Finally, install IPEX manually: torch-xpu 2.6 and ipex-xpu 2.6 have a known oneAPI-related dependency conflict. This will be fixed in ipex-xpu 2.7.
+
+```console
+pip install intel-extension-for-pytorch==2.6.10+xpu \
+  --extra-index-url=https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
+```
+
 :::{note}
 - FP16 is the default data type in the current XPU backend. The BF16 data
   type is supported on Intel Data Center GPU, not supported on Intel Arc GPU yet.
@@ -59,7 +65,7 @@ $ docker run -it \
 
 ## Supported features
 
-XPU platform supports tensor-parallel inference/serving and also supports pipeline parallel as a beta feature for online serving. We requires Ray as the distributed runtime backend. For example, a reference execution likes following:
+The XPU platform supports **tensor parallel** inference/serving, and also supports **pipeline parallel** as a beta feature for online serving. Ray is required as the distributed runtime backend. A reference invocation looks like the following:
 
 ```console
 python -m vllm.entrypoints.openai.api_server \
@@ -73,3 +79,5 @@ python -m vllm.entrypoints.openai.api_server \
 ```
 
 By default, a ray instance will be launched automatically if no existing one is detected in system, with `num-gpus` equals to `parallel_config.world_size`. We recommend properly starting a ray cluster before execution, referring to the <gh-file:examples/online_serving/run_cluster.sh> helper script.
+
+Some new features arrive with ipex-xpu 2.6, e.g. **chunked prefill**, **V1 engine support**, **LoRA**, and **MoE**.
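The docs note that Ray's `num-gpus` equals `parallel_config.world_size`. As a minimal sketch of that relation (assuming the usual vLLM convention that world size is the product of the tensor-parallel and pipeline-parallel degrees; the sizes below are hypothetical example values, not from the commit):

```python
# Illustration only: how many Ray workers/GPUs a parallel config implies.
tensor_parallel_size = 2    # e.g. a hypothetical --tensor-parallel-size 2
pipeline_parallel_size = 2  # e.g. a hypothetical --pipeline-parallel-size 2

# One worker per GPU: world size is the product of the parallel degrees.
world_size = tensor_parallel_size * pipeline_parallel_size
print(world_size)  # 4
```

This is why pre-starting a Ray cluster with at least `world_size` GPUs, as the docs recommend, avoids the auto-launched instance coming up undersized.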

requirements/xpu.txt (+12 -5)

@@ -1,17 +1,24 @@
 # Common dependencies
 -r common.txt
 
-ray >= 2.9
+ray>=2.9
 cmake>=3.26
 ninja
 packaging
 setuptools-scm>=8
 setuptools>=75.8.0
 wheel
 jinja2
+datasets # for benchmark scripts
 
-torch @ https://intel-optimized-pytorch.s3.cn-north-1.amazonaws.com.cn/ipex_dev/xpu/torch-2.5.0a0%2Bgite84e33f-cp310-cp310-linux_x86_64.whl
-intel-extension-for-pytorch @ https://intel-optimized-pytorch.s3.cn-north-1.amazonaws.com.cn/ipex_dev/xpu/intel_extension_for_pytorch-2.5.10%2Bgit9d489a8-cp310-cp310-linux_x86_64.whl
-oneccl_bind_pt @ https://intel-optimized-pytorch.s3.cn-north-1.amazonaws.com.cn/ipex_dev/xpu/oneccl_bind_pt-2.5.0%2Bxpu-cp310-cp310-linux_x86_64.whl
+torch==2.6.0+xpu
+torchaudio
+torchvision
+pytorch-triton-xpu
+--extra-index-url=https://download.pytorch.org/whl/xpu
 
-triton-xpu == 3.0.0b1
+# See the XPU docs: intel-extension-for-pytorch 2.6.10+xpu must be installed
+# manually because some of its dependencies conflict with torch 2.6.0+xpu.
+# FIXME: this will be fixed in IPEX 2.7; the pin is kept here for awareness.
+# intel-extension-for-pytorch==2.6.10+xpu
+oneccl_bind_pt==2.6.0+xpu
+--extra-index-url=https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
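The pins above use PEP 440 local version identifiers (`+xpu`), which is why the extra index URL is needed: the default PyPI index only serves builds without the local tag. A small sketch of how such a pin resolves, using the `packaging` library (illustration only, not part of the commit):

```python
# Illustration (not from the commit): PEP 440 behavior of "+xpu" pins,
# e.g. oneccl_bind_pt==2.6.0+xpu from the requirements file above.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

pin = SpecifierSet("==2.6.0+xpu")

# Only the exact XPU-local build satisfies the pin; the plain 2.6.0
# release (no local tag) does not, hence the extra index URL.
assert Version("2.6.0+xpu") in pin
assert Version("2.6.0") not in pin
assert Version("2.6.10+xpu") not in pin
print("only 2.6.0+xpu matches the pin")
```

Note the asymmetry in PEP 440: a pin that itself has a local tag matches only that exact local version, so the resolver can never silently substitute a non-XPU wheel.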
