README.md (9 additions, 296 deletions)
@@ -17,7 +17,7 @@ Requirements:
 > ℹ️ This project is in Beta. `torch::deploy` is ready for use in production environments but may have some rough edges that we're continuously working on improving. We're always interested in hearing feedback and use cases that you might have. Feel free to reach out!

-## Installation
+## The Easy Path to Installation

 ### Building via Docker
@@ -40,92 +40,12 @@ docker run --rm multipy multipy/runtime/build/test_deploy
 ### Installing via `pip install`

-We support installing both the python modules and the runtime libs using `pip install`, with the caveat of having to manually install the C++ dependencies first. This serves as a single-command source build, essentially a wrapper around `python setup.py develop` once all the dependencies have been installed.
+The second easiest way of using `torch::deploy` is through our single-command `pip install`. However, the C++ dependencies have to be installed manually beforehand, specifically an `-fPIC`-enabled build of python. Full instructions for getting the C++ dependencies up and running, and a more detailed guide to `torch::deploy` installation, can be found [here](https://pytorch.org/multipy/latest/setup.html#installing-via-pip-install).
-To start with, the multipy repo should be cloned first:
-
-We support both `conda` and `pyenv`+`virtualenv` to create isolated environments to build and run in. Since `multipy` requires a position-independent version of python to launch interpreters with, for `conda` environments we use the prebuilt `libpython-static=3.x` libraries from `conda-forge` to link with at build time, and for `virtualenv`/`pyenv` we compile python with `-fPIC` to create the linkable library.
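For illustration, a minimal sketch of the `virtualenv`/`pyenv` flow; the python version and paths here are assumptions, so adjust them to your setup:

```shell
# Build a position-independent CPython via pyenv, then wrap it in a virtualenv
export CFLAGS="-fPIC -g"
pyenv install --force 3.8.6
virtualenv -p ~/.pyenv/versions/3.8.6/bin/python3 ~/venvs/multipy
source ~/venvs/multipy/bin/activate
```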
-
-> **NOTE** We support Python versions 3.7 through 3.10 for `multipy`; note that for `conda` environments the `libpython-static` libraries are available from `3.8` onwards. With `virtualenv`/`pyenv` any version from 3.7 through 3.10 can be used, as the PIC library is built explicitly.
-#### Installing python, pytorch and related dependencies
-
-Multipy requires the latest version of pytorch to run models successfully, and we recommend fetching the latest _nightlies_ of pytorch, and of cuda if required.
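With `pip` (the `virtualenv`/`pyenv` flow), fetching a pytorch nightly might look like the following sketch; the CPU wheel index is shown, and swapping in a CUDA index is an exercise for your particular driver setup:

```shell
# Install the latest pytorch nightly from the official nightly wheel index
pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu
```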
-
-##### In a `conda` environment, we would do the following:
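A sketch of what such a `conda` setup can look like; the python/cudatoolkit versions and environment name are assumptions:

```shell
# Create an environment with a static (PIC) libpython from conda-forge
conda create -n multipy-example python=3.8
conda activate multipy-example
conda install -c conda-forge libpython-static=3.8
# Latest pytorch nightly (pick the cudatoolkit matching your driver, or omit it for CPU-only)
conda install -c pytorch-nightly pytorch torchvision torchaudio cudatoolkit=11.3
```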
-Once all the dependencies are successfully installed, most importantly including a PIC library of python and the latest nightly of pytorch, we can run the following, in either `conda` or `virtualenv`, to install both the python modules and the runtime/interpreter libraries:
+Once all the dependencies are successfully installed, you can run the following, in either `conda` or `virtualenv`, to install both the python modules and the runtime/interpreter libraries:
 ```shell
 # from base multipy directory
 pip install -e .
 ```
@@ -137,216 +57,9 @@ Alternatively, one can install only the python modules without invoking `cmake`
 ```shell
 pip install -e . --install-option="--cmakeoff"
 ```
-> **NOTE** As of 10/11/2022, linking the prebuilt static fPIC versions of python downloaded from `conda-forge` can be problematic on certain systems (for example CentOS 8), with linker errors like `libpython_multipy.a: error adding symbols: File format not recognized`. This appears to be an issue with `binutils`; the steps in https://wiki.gentoo.org/wiki/Project:Toolchain/Binutils_2.32_upgrade_notes/elfutils_0.175:_unable_to_initialize_decompress_status_for_section_.debug_info can help. Alternatively, you can go with the `virtualenv`/`pyenv` flow above.
-
-## Development
-
-### Manually building `multipy::runtime` from source
-
-Both the `docker` and `pip install` options above are wrappers around the `cmake` build of multipy's runtime. For development purposes it's often helpful to invoke `cmake` separately.
-
-See the install section for how to correctly set up the Python environment.
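A sketch of a standalone `cmake` invocation, assuming the Python environment from the install section is active; the exact configure flags are assumptions and the defaults are shown:

```shell
# Configure and build multipy::runtime out of tree
cd multipy/multipy/runtime
mkdir -p build && cd build
cmake ..
cmake --build . --config Release
```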
-We first need to generate the necessary examples. Make sure your python environment has [torch](https://pytorch.org) installed. Then, once `multipy::runtime` is built, run the following (this is executed automatically for the `docker` and `pip` flows above):
-
-```
-cd multipy/multipy/runtime
-python example/generate_examples.py
-cd build
-./test_deploy
-```
-
-## Examples
-
-See the [examples directory](./examples) for complete examples.
-
-### Packaging a model for `multipy::runtime`
-
-`multipy::runtime` can load and run Python models that are packaged with `torch.package`. You can learn more about `torch.package` in the [`torch.package` documentation](https://pytorch.org/docs/stable/package.html#tutorials).
-
-For now, let's create a simple model that we can load and run in `multipy::runtime`.
-
197
-
```python
198
-
from torch.package import PackageExporter
199
-
import torchvision
200
-
201
-
# Instantiate some model
202
-
model = torchvision.models.resnet.resnet18()
203
-
204
-
# Package and export it.
205
-
with PackageExporter("my_package.pt") as e:
206
-
e.intern("torchvision.**")
207
-
e.extern("numpy.**")
208
-
e.extern("sys")
209
-
e.extern("PIL.*")
210
-
e.extern("typing_extensions")
211
-
e.save_pickle("model", "model.pkl", model)
212
-
```
-
-Note that since "numpy", "sys", and "PIL" were marked as "extern", `torch.package` will look for these dependencies on the system that loads this package; they will not be packaged with the model.
-
-Now, there should be a file named `my_package.pt` in your working directory.
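As a quick, hedged sanity check (pure `torch.package`, no `multipy::runtime` needed), you can confirm that the archive round-trips from the shell:

```shell
# Re-import the packaged model and print its type; this should report a ResNet class
python -c "from torch.package import PackageImporter; \
model = PackageImporter('my_package.pt').load_pickle('model', 'model.pkl'); \
print(type(model))"
```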
-```
-std::getenv("PATH_TO_EXTERN_PYTHON_PACKAGES") // Ensure this environment variable is set (e.g. /home/user/anaconda3/envs/multipy-example/lib/python3.8/site-packages)
-target_link_libraries(example-app PUBLIC "-Wl,--no-as-needed -rdynamic" dl pthread util multipy c10 torch_cpu)
-```
-
-Currently, it is necessary to build `multipy::runtime` as a static library. In order to correctly link to a static library, the utility `caffe2_interface_library` is used to appropriately set and unset the `--whole-archive` flag.
-
-Furthermore, the `-rdynamic` flag is needed when linking the executable to ensure that symbols are exported to the dynamic symbol table, making them accessible to the deploy interpreters (which are dynamically loaded).
-
-**Updating LIBRARY_PATH and LD_LIBRARY_PATH**
-
-In order to locate dependencies provided by PyTorch (e.g. `libshm`), we need to update the `LIBRARY_PATH` and `LD_LIBRARY_PATH` environment variables to include the path to PyTorch's C++ libraries. If you installed PyTorch using `pip` or `conda`, this path is usually in the site-packages directory. An example of this is provided below.
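A minimal sketch, assuming PyTorch's C++ libraries live under the `torch/lib` directory of the installed package:

```shell
# Locate torch's bundled C++ libraries and expose them to the linker and loader
TORCH_LIB="$(python -c 'import torch, pathlib; print(pathlib.Path(torch.__file__).parent / "lib")')"
export LIBRARY_PATH="$LIBRARY_PATH:$TORCH_LIB"
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$TORCH_LIB"
```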
docs/source/index.rst (2 additions, 3 deletions)
@@ -3,9 +3,8 @@
 ``torch::deploy`` [Beta]
 ========================

-``torch::deploy`` is a system that allows you to load multiple python interpreters which execute PyTorch models, and run them in a single C++ process. Effectively, it allows people to multithread their pytorch models.
-For more information on how ``torch::deploy`` works, please see the related `arXiv paper <https://arxiv.org/pdf/2104.00254.pdf>`_. We plan to further generalize ``torch::deploy`` into a more generic system, ``multipy::runtime``, which is more suitable for arbitrary python programs rather than just pytorch applications.
+``torch::deploy`` (MultiPy for non-PyTorch use cases) is a C++ library that enables you to run eager-mode PyTorch models in production without any modifications to your model to support tracing. ``torch::deploy`` provides a way to run using multiple independent Python interpreters in a single process without a shared global interpreter lock (GIL).
+For more information on how ``torch::deploy`` works, please see the related `arXiv paper <https://arxiv.org/pdf/2104.00254.pdf>`_.