`Dockerfile.xpu` (+10 −12)
````diff
@@ -1,4 +1,4 @@
-FROM intel/oneapi-basekit:2024.2.1-0-devel-ubuntu22.04 AS vllm-base
+FROM intel/deep-learning-essentials:2025.0.1-0-devel-ubuntu22.04 AS vllm-base

 RUN wget -O- https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB | gpg --dearmor | tee /usr/share/keyrings/intel-oneapi-archive-keyring.gpg > /dev/null && \
     echo "deb [signed-by=/usr/share/keyrings/intel-oneapi-archive-keyring.gpg] https://apt.repos.intel.com/oneapi all main " | tee /etc/apt/sources.list.d/oneAPI.list && \
````
````diff
@@ -21,7 +21,8 @@ RUN apt-get update -y && \
     python3 \
     python3-dev \
     python3-pip \
-    # vim \
+    libze-intel-gpu-dev \
+    libze-intel-gpu1 \
     wget

 WORKDIR /workspace/vllm
````
````diff
@@ -32,19 +33,10 @@ RUN --mount=type=cache,target=/root/.cache/pip \
     pip install --no-cache-dir \
         -r requirements/xpu.txt

-RUN git clone https://github.com/intel/pti-gpu && \
     if [ "$GIT_REPO_CHECK" != 0 ]; then bash tools/check_repo.sh; fi

@@ -54,6 +46,12 @@ RUN --mount=type=cache,target=/root/.cache/pip \
     --mount=type=bind,source=.git,target=.git \
     python3 setup.py install

+# Per the XPU docs, intel-extension-for-pytorch 2.6.10+xpu must be installed
+# manually, because some of its dependencies conflict with torch 2.6.0+xpu.
+# FIXME: This will be fixed in ipex 2.7; kept here for awareness.
````
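The manual ipex install that the comments above refer to might look roughly like the sketch below. The wheel index URL is an assumption based on Intel's ipex installation guidance, not something this diff pins down; verify it against the current ipex docs before use.

```dockerfile
# Hypothetical sketch, not part of this diff: install ipex manually so its
# oneAPI-pinned dependencies do not conflict with torch 2.6.0+xpu.
# The --extra-index-url below is an assumption; check Intel's ipex install docs.
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install intel-extension-for-pytorch==2.6.10+xpu \
        --extra-index-url=https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
```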
`docs/source/getting_started/installation/gpu/xpu.inc.md` (+13 −5)
````diff
@@ -9,7 +9,7 @@ There are no pre-built wheels or images for this device, so you must build vLLM
 ## Requirements

 - Supported Hardware: Intel Data Center GPU, Intel ARC GPU
-- OneAPI requirements: oneAPI 2024.2
+- OneAPI requirements: oneAPI 2025.0

 ## Set up using Python
````
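A quick way to confirm the requirements above are met is to load the oneAPI environment and list the SYCL devices with `sycl-ls` (shipped with the oneAPI toolchain). The exact device strings printed vary by hardware and driver, so treat this as a sketch:

```console
# Sketch: confirm the driver and oneAPI 2025.0 toolchain can see the GPU.
source /opt/intel/oneapi/setvars.sh
sycl-ls   # should list your Intel GPU(s) as level_zero/opencl devices
```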
````diff
@@ -19,21 +19,27 @@ Currently, there are no pre-built XPU wheels.

 ### Build wheel from source

-- First, install required driver and intel OneAPI 2024.2 or later.
+- First, install the required driver and Intel oneAPI 2025.0 or later.
 - Second, install Python packages for vLLM XPU backend building:

 ```console
-source /opt/intel/oneapi/setvars.sh
 pip install --upgrade pip
 pip install -v -r requirements/xpu.txt
 ```

-- Finally, build and install vLLM XPU backend:
+- Then, build and install the vLLM XPU backend:

 ```console
 VLLM_TARGET_DEVICE=xpu python setup.py install
 ```

+- Finally, because of a known (oneAPI-related) dependency conflict between torch-xpu 2.6 and ipex-xpu 2.6, ipex is installed manually here. This will be fixed in ipex-xpu 2.7.
 - FP16 is the default data type in the current XPU backend. The BF16 data
   type is supported on Intel Data Center GPU, not supported on Intel Arc GPU yet.
````
````diff
@@ -59,7 +65,7 @@ $ docker run -it \

 ## Supported features

-XPU platform supports tensor-parallel inference/serving and also supports pipeline parallel as a beta feature for online serving. We requires Ray as the distributed runtime backend. For example, a reference execution likes following:
+The XPU platform supports **tensor parallel** inference/serving, and also supports **pipeline parallel** as a beta feature for online serving. Ray is required as the distributed runtime backend. A reference execution looks like the following:
 By default, a ray instance will be launched automatically if no existing one is detected in system, with `num-gpus` equals to `parallel_config.world_size`. We recommend properly starting a ray cluster before execution, referring to the <gh-file:examples/online_serving/run_cluster.sh> helper script.

+There are some new features coming with ipex-xpu 2.6, e.g. **chunked prefill**, **V1 engine support**, **LoRA**, and **MoE**.
````
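The tensor/pipeline parallel serving described above could be exercised roughly as follows. The model name and parallel sizes are placeholders, not taken from this diff; the flags (`--tensor-parallel-size`, `--pipeline-parallel-size`, `--distributed-executor-backend`) are standard vLLM engine arguments.

```console
# Sketch: start (or join) a Ray cluster first, as the docs recommend,
# then launch the OpenAI-compatible server with Ray as the executor backend.
ray start --head
python -m vllm.entrypoints.openai.api_server \
    --model=facebook/opt-13b \
    --dtype=float16 \
    --distributed-executor-backend=ray \
    --tensor-parallel-size=2 \
    --pipeline-parallel-size=2
```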