
Commit c909bb9

Merge pull request #3 from guilherme-marcello/docker-compose-doc
GPUs with Docker Compose
2 parents 4bd4ab5 + ca6eda3

2 files changed (+65 -1 lines)


README.md (+48 -1)

@@ -55,7 +55,8 @@ My hope is furthermore that dedicated Docker containers will make ML models a lo
 You will need the following hard- and software setup to be able to run Docker with GPU support:
 
 - An Ubuntu computer/server with a Nvidia CUDA GPU
-- Docker with version >= 1.4
+- Docker Engine with version >= 19.03.0
+- Docker Compose with version >= 1.28.0
 - Nvidia drivers with version >= 361
 
 
@@ -74,6 +75,18 @@ docker version
 
 The output should be a long list with infos like "API version: 1.4" etc.
 
+### Install Docker Compose
+
+Follow the official documentation: [https://docs.docker.com/compose/install/linux/](https://docs.docker.com/compose/install/linux/)
+
+Verify Docker Compose version:
+
+```bash
+docker compose version
+```
+
+The output should be similar to "Docker Compose version v2.9.0".
+
 ### Install nVidia CUDA driver
 
 Install CUDA along with latest nVidia driver for your graphics card.
@@ -238,6 +251,40 @@ docker run -it --runtime=nvidia --volume $(pwd)/:/shared --workdir /shared deepf
 
 - **deepfill:v0** run a docker container named deepfill with version v0
 
+### Run the DeepFill container (with Docker Compose)
+In a more complex scenario where you want to leverage GPUs to run your services, it can be useful to set up a Docker Compose configuration file. For example, you can create a `docker-compose.yml` file with the following content:
+
+```yaml
+version: "3"
+services:
+  deepfill:
+    image: deepfill:v0
+    runtime: nvidia
+    volumes:
+      - $(pwd)/:/shared
+    working_dir: /shared
+    tty: true
+    stdin_open: true
+    command: bash
+    deploy:
+      resources:
+        reservations:
+          devices:
+            - driver: nvidia
+              count: 1
+```
+
+This configuration file defines a service named `deepfill` that uses the `deepfill:v0` Docker image and the nvidia runtime to access GPUs. The `devices` section reserves one of the host's NVIDIA GPUs via the nvidia driver. The same mechanism can also be used to assign specific GPUs to individual services in the same Compose file.
+
+After running `docker compose up -d`, you can enter the running container and start a bash session with the `docker compose exec` command:
+
+```bash
+docker compose exec deepfill bash
+```
+
+The container will continue to run in the background, and you can start another bash session with the same command if needed.
+
+
 ### Download pretrained DeepFill models
 
 Download the [pretrained models](https://github.com/JiahuiYu/generative_inpainting#pretrained-models) e.g. Places2 (places background) or CelebA-HQ (faces) and copy it to folder `model_logs`. The demo relies on it.
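
The README text added above mentions that the same reservation block can assign specific GPUs to specific services. The following is only an illustrative sketch of that idea, not part of this commit: the service names and GPU indices are hypothetical, and `capabilities: [gpu]` is included because the Compose specification expects it on device reservations.

```yaml
# Hypothetical sketch: two services, each pinned to a specific host GPU.
# Service names and GPU indices are illustrative only.
services:
  deepfill-gpu0:
    image: deepfill:v0
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ["0"]   # reserve host GPU 0 for this service
              capabilities: [gpu]
  deepfill-gpu1:
    image: deepfill:v0
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ["1"]   # reserve host GPU 1 for this service
              capabilities: [gpu]
```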

docker-compose.yml (+17)

@@ -0,0 +1,17 @@
+version: "3"
+services:
+  deepfill:
+    image: deepfill:v0
+    runtime: nvidia
+    volumes:
+      - $(pwd)/:/shared
+    working_dir: /shared
+    tty: true
+    stdin_open: true
+    command: bash
+    deploy:
+      resources:
+        reservations:
+          devices:
+            - driver: nvidia
+              count: 1
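
One possible workflow with this compose file, assuming Docker Compose v2 and the NVIDIA Container Toolkit are installed. Note that Compose does not perform shell command substitution, so the `$(pwd)` part of the volume mapping may need to be written as `./` or `${PWD}` instead.

```bash
# Possible workflow (assumes Docker Compose v2 and the NVIDIA Container Toolkit).
docker compose up -d                # start the deepfill service in the background
docker compose exec deepfill bash   # open an interactive shell in the container

# Inside the container, nvidia-smi should list the reserved GPU:
# nvidia-smi

docker compose down                 # stop and remove the container when finished
```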
