diff --git a/README.md b/README.md
index 3851ad8f..5e87d837 100644
--- a/README.md
+++ b/README.md
@@ -17,6 +17,8 @@ Requirements:
 > ℹ️ `torch::deploy` is ready for use in production environments, but is in Beta and may have some rough edges that we're continuously working on improving. We're always interested in hearing feedback and use cases that you might have. Feel free to reach out!
 
+## The Easy Path to Installation
+
 ## Installation
 
 ### Building via Docker
@@ -183,170 +185,9 @@ cd build
 ./test_deploy
 ```
 
-## Examples
-
-See the [examples directory](./examples) for complete examples.
-
-### Packaging a model for `multipy::runtime`
-
-``multipy::runtime`` can load and run Python models that are packaged with
-``torch.package``. You can learn more about ``torch.package`` in the ``torch.package`` [documentation](https://pytorch.org/docs/stable/package.html#tutorials).
-
-For now, let's create a simple model that we can load and run in ``multipy::runtime``.
-
-```python
-from torch.package import PackageExporter
-import torchvision
-
-# Instantiate some model
-model = torchvision.models.resnet.resnet18()
-
-# Package and export it.
-with PackageExporter("my_package.pt") as e:
-    e.intern("torchvision.**")
-    e.extern("numpy.**")
-    e.extern("sys")
-    e.extern("PIL.*")
-    e.extern("typing_extensions")
-    e.save_pickle("model", "model.pkl", model)
-```
-
-Note that since "numpy", "sys", "PIL", and "typing_extensions" were marked as "extern", `torch.package` will
-look for these dependencies on the system that loads this package. They will not be packaged
-with the model.
-
-Now, there should be a file named ``my_package.pt`` in your working directory.
-
-<br>
-
-### Load the model in C++
-```cpp
-#include <multipy/runtime/deploy.h>
-#include <multipy/runtime/path_environment.h>
-#include <torch/script.h>
-#include <torch/torch.h>
-
-#include <iostream>
-#include <memory>
-
-int main(int argc, const char* argv[]) {
-  if (argc != 2) {
-    std::cerr << "usage: example-app <path-to-package>\n";
-    return -1;
-  }
-
-  // Start an interpreter manager governing 4 embedded interpreters.
-  std::shared_ptr<multipy::runtime::Environment> env =
-      std::make_shared<multipy::runtime::PathEnvironment>(
-          std::getenv("PATH_TO_EXTERN_PYTHON_PACKAGES") // Make sure to set this environment variable (e.g. /home/user/anaconda3/envs/multipy-example/lib/python3.8/site-packages)
-      );
-  multipy::runtime::InterpreterManager manager(4, env);
-
-  try {
-    // Load the model from the multipy.package.
-    multipy::runtime::Package package = manager.loadPackage(argv[1]);
-    multipy::runtime::ReplicatedObj model = package.loadPickle("model", "model.pkl");
-  } catch (const c10::Error& e) {
-    std::cerr << "error loading the model\n";
-    std::cerr << e.msg();
-    return -1;
-  }
-
-  std::cout << "ok\n";
-}
-```
-
-This small program introduces many of the core concepts of ``multipy::runtime``.
-
-An ``InterpreterManager`` abstracts over a collection of independent Python
-interpreters, allowing you to load balance across them when running your code.
-
-``PathEnvironment`` enables you to specify the location of Python
-packages on your system which are external, but necessary, for your model.
-
-Using the ``InterpreterManager::loadPackage`` method, you can load a
-``multipy.package`` from disk and make it available to all interpreters.
-
-``Package::loadPickle`` allows you to retrieve specific Python objects
-from the package, like the ResNet model we saved earlier.
-
-Finally, the model itself is a ``ReplicatedObj``. This is an abstract handle to
-an object that is replicated across multiple interpreters. When you interact
-with a ``ReplicatedObj`` (for example, by calling ``forward``), it will select
-a free interpreter to execute that interaction.
-
-<br>
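To make the round trip concrete, here is a minimal sketch of actually invoking the loaded model; it is not part of the original example and assumes that ``ReplicatedObj``'s call operator accepts ``at::IValue`` arguments, as in the multipy tutorials. It would go inside the ``try`` block above, after ``loadPickle``:

```cpp
// Build a fake batch of one 3x224x224 image for the ResNet we packaged.
std::vector<torch::jit::IValue> inputs;
inputs.push_back(torch::ones({1, 3, 224, 224}));

// Calling the ReplicatedObj selects a free interpreter and runs forward there.
at::Tensor output = model(inputs).toTensor();
std::cout << output.slice(/*dim=*/1, /*start=*/0, /*end=*/5) << '\n';
```

Because each embedded interpreter owns its own GIL, several such calls can execute Python code in parallel across the four interpreters created above.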
-
-### Build and execute the C++ example
-
-Assuming the above C++ program was stored in a file called `example-app.cpp`, a
-minimal `CMakeLists.txt` file would look like:
-
-```cmake
-cmake_minimum_required(VERSION 3.12 FATAL_ERROR)
-project(multipy_tutorial)
-
-set(MULTIPY_PATH ".." CACHE PATH "The repo where multipy is built or the PYTHONPATH")
-
-# include the multipy utils to help link against
-include(${MULTIPY_PATH}/multipy/runtime/utils.cmake)
-
-# add headers from multipy
-include_directories(${MULTIPY_PATH})
-
-# link the multipy prebuilt binary
-add_library(multipy_internal STATIC IMPORTED)
-set_target_properties(multipy_internal
-    PROPERTIES
-    IMPORTED_LOCATION
-    ${MULTIPY_PATH}/multipy/runtime/build/libtorch_deploy.a)
-caffe2_interface_library(multipy_internal multipy)
-
-add_executable(example-app example-app.cpp)
-target_link_libraries(example-app PUBLIC "-Wl,--no-as-needed -rdynamic" dl pthread util multipy c10 torch_cpu)
-```
-
-Currently, it is necessary to build ``multipy::runtime`` as a static library.
-To link correctly against a static library, the utility ``caffe2_interface_library``
-is used to appropriately set and unset the ``--whole-archive`` flag.
-
-Furthermore, the ``-rdynamic`` flag is needed when linking the executable
-to ensure that symbols are exported to the dynamic symbol table, making them accessible
-to the deploy interpreters (which are dynamically loaded).
-
-**Updating LIBRARY_PATH and LD_LIBRARY_PATH**
-
-In order to locate dependencies provided by PyTorch (e.g. `libshm`), we need to update the `LIBRARY_PATH` and `LD_LIBRARY_PATH` environment variables to include the path to PyTorch's C++ libraries. If you installed PyTorch using pip or conda, this path is usually in the site-packages. An example of this is provided below.
-
-```bash
-export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/user/anaconda3/envs/multipy-example/lib/python3.8/site-packages/torch/lib"
-export LIBRARY_PATH="$LIBRARY_PATH:/home/user/anaconda3/envs/multipy-example/lib/python3.8/site-packages/torch/lib"
-```
-
-The last step is configuring and building the project. Assuming that our code
-directory is laid out like this:
-
-```
-example-app/
-    CMakeLists.txt
-    example-app.cpp
-```
-
-We can now run the following commands to build the application from within the
-``example-app/`` folder:
-
-```bash
-cmake -S . -B build -DMULTIPY_PATH="/home/user/repos/multipy" # the parent directory of multipy (i.e. the git repo)
-cmake --build build --config Release -j
-```
-
-Now we can run our app:
-
-```bash
-./example-app /path/to/my_package.pt
-```
+## Getting Started with `torch::deploy`
+Once you have `torch::deploy` built, check out our [tutorials](https://pytorch.org/multipy/latest/tutorials/tutorial_root.html) and
+[API documentation](https://pytorch.org/multipy/latest/api/library_root.html).
 
 ## Contributing
diff --git a/docs/source/index.rst b/docs/source/index.rst
index 045b4263..d74d4bab 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -3,10 +3,10 @@
 ``torch::deploy`` [Beta]
 =====================
 
-``torch::deploy`` is a system that allows you to load multiple python interpreters which execute PyTorch models, and run them in a single C++ process. Effectively, it allows people to multithread their pytorch models.
-For more information on how torch::deploy works please see the related `arXiv paper <https://arxiv.org/abs/2104.00254>`_.
-We plan to further generalize ``torch::deploy`` into a more generic system, ``multipy::runtime``,
-which is more suitable for arbitrary python programs rather than just pytorch applications.
+``torch::deploy`` (MultiPy for non-PyTorch use cases) is a C++ library that enables you to run eager mode PyTorch models in production without modifying your model to support tracing. ``torch::deploy`` provides a way to run multiple independent Python interpreters in a single process without a shared global interpreter lock (GIL).
+For more information on how ``torch::deploy`` works please see the related `arXiv paper <https://arxiv.org/abs/2104.00254>`_.
+The most up-to-date installation instructions for ``torch::deploy`` can be found in our `README <https://github.com/pytorch/multipy#installation>`__.
 
 Documentation
 ---------------
@@ -15,7 +15,6 @@ Documentation
    :maxdepth: 2
    :caption: Usage
 
-   setup.md
    tutorials/tutorial_root
    api/library_root
 
diff --git a/docs/source/setup.rst b/docs/source/setup.rst
deleted file mode 100644
index 54022294..00000000
--- a/docs/source/setup.rst
+++ /dev/null
@@ -1,186 +0,0 @@
-Installation
-============
-
-Building ``torch::deploy`` via Docker
--------------------------------------
-
-The easiest way to build ``torch::deploy``, along with fetching all interpreter
-dependencies, is to do so via docker.
-
-.. code:: shell
-
-   git clone https://github.com/pytorch/multipy.git
-   cd multipy
-   export DOCKER_BUILDKIT=1
-   docker build -t multipy .
-
-The built artifacts are located in ``multipy/runtime/build``.
-
-To run the tests:
-
-.. code:: shell
-
-   docker run --rm multipy multipy/runtime/build/test_deploy
-
-Installing via ``pip install``
-------------------------------
-
-We support installing both the python modules and the C++ bits (through ``CMake``)
-using a single ``pip install -e .`` command, with the caveat of having to manually
-install the dependencies first.
-
-First clone multipy and update the submodules:
-
-.. code:: shell
-
-   git clone https://github.com/pytorch/multipy.git
-   cd multipy
-   git submodule sync && git submodule update --init --recursive
-
-Installing system dependencies
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The runtime system dependencies are specified in
-``build-requirements.txt``. To install them on Debian-based systems, one
-could run:
-
-.. code:: shell
-
-   sudo apt update
-   xargs sudo apt install -y -qq --no-install-recommends < build-requirements.txt
-
-Installing environment encapsulators
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-We recommend using the isolated python environments of either `conda
-<https://docs.conda.io/en/latest/>`__
-or `pyenv + virtualenv <https://github.com/pyenv/pyenv-virtualenv>`__
-because ``torch::deploy`` requires a
-position-independent version of python to launch interpreters with. For
-``conda`` environments we use the prebuilt ``libpython-static=3.x``
-libraries from ``conda-forge`` to link with at build time. For
-``virtualenv``/``pyenv``, we compile python with the ``-fPIC`` flag to create the
-linkable library.
-
-.. warning::
-   While ``torch::deploy`` supports Python versions 3.7 through 3.10,
-   the ``libpython-static`` libraries used with ``conda`` environments
-   are only available for ``3.8`` onwards. With ``virtualenv``/``pyenv``
-   any version from 3.7 through 3.10 can be
-   used, as python can be built with the ``-fPIC`` flag explicitly.
-
-Installing pytorch and related dependencies
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-``torch::deploy`` requires the latest version of pytorch to run models
-successfully, and we recommend fetching the latest *nightlies* for
-pytorch and also cuda.
-
-Installing the python dependencies in a ``conda`` environment:
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-.. code:: shell
-
-   conda create -n newenv
-   conda activate newenv
-
-   conda install python=3.8 # or 3.9/3.10
-   conda install -c conda-forge libpython-static=3.8 # or 3.9/3.10
-
-   # install your desired flavor of pytorch from https://pytorch.org/get-started/locally/
-   conda install pytorch torchvision torchaudio cpuonly -c pytorch-nightly
-
-Installing the python dependencies in a ``pyenv`` / ``virtualenv`` setup
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-.. code:: shell
-
-   # feel free to replace 3.8.6 with any python version > 3.7.0
-   export CFLAGS="-fPIC -g"
-   ~/.pyenv/bin/pyenv install --force 3.8.6
-   virtualenv -p ~/.pyenv/versions/3.8.6/bin/python3 ~/venvs/multipy
-   source ~/venvs/multipy/bin/activate
-   pip install -r dev-requirements.txt
-
-   # install your desired flavor of pytorch from https://pytorch.org/get-started/locally/
-   pip3 install --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cpu
-
-Running ``pip install``
-~~~~~~~~~~~~~~~~~~~~~~~
-
-Once all the dependencies are successfully installed,
-including a ``-fPIC`` enabled build of python and the latest nightly of pytorch, we
-can run the following, in either ``conda`` or ``virtualenv``, to install
-both the python modules and the runtime/interpreter libraries:
-
-.. code:: shell
-
-   # from the base torch::deploy directory
-   pip install -e .
-   # alternatively, one could run
-   python setup.py develop
-
-The C++ binaries should be available in ``/opt/dist``.
-
-Alternatively, one can install only the python modules without invoking
-``cmake`` as follows:
-
-.. code:: shell
-
-   # from the base multipy directory
-   pip install -e . --install-option="--cmakeoff"
-
-.. warning::
-   As of 10/11/2022, the linking of prebuilt static ``-fPIC``
-   versions of python downloaded from ``conda-forge`` can be problematic
-   on certain systems (for example Centos 8), with linker errors like
-   ``libpython_multipy.a: error adding symbols: File format not recognized``.
-   This seems to be an issue with ``binutils``, and `these steps
-   <https://wiki.gentoo.org/wiki/Project:Toolchain/Binutils_2.32_upgrade_notes/elfutils_0.175:_unable_to_initialize_decompress_status_for_section_.debug_info>`__
-   can help. Alternatively, the user can go with the
-   ``virtualenv``/``pyenv`` flow above.
-
-Running ``torch::deploy`` build steps from source
--------------------------------------------------
-
-Both the ``docker`` and ``pip install`` options above are wrappers around
-the cmake build of ``torch::deploy``. If the user wishes to run the
-build steps manually instead, as before, the dependencies would have to
-be installed in the user’s (isolated) environment of choice first. After
-that, the following steps can be executed:
-
-Building
-~~~~~~~~
-
-.. code:: bash
-
-   # clone the repo and fetch its submodules
-   git clone https://github.com/pytorch/multipy.git
-   cd multipy
-   git submodule sync && git submodule update --init --recursive
-
-   # install python parts of `torch::deploy` in multipy/multipy/utils
-   pip install -e . --install-option="--cmakeoff"
-
-   cd multipy/runtime
-
-   # build runtime
-   mkdir build
-   cd build
-   # use cmake -DABI_EQUALS_1=ON .. instead if you want ABI=1
-   cmake ..
-   cmake --build . --config Release
-
-Running unit tests for ``torch::deploy``
------------------------------------------
-
-First make sure your python environment has `torch <https://pytorch.org/get-started/locally/>`__ installed,
-then generate the necessary examples. Once ``torch::deploy`` is built, run the following
-(executed automatically for ``docker`` and ``pip`` above):
-
-.. code:: bash
-
-   cd multipy/multipy/runtime
-   python example/generate_examples.py
-   cd build
-   ./test_deploy
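Beyond ``test_deploy``, a minimal "hello world" built against the resulting library makes a good final smoke test. The following is a sketch based on the ``InterpreterManager`` API used in the README example above (``acquireOne`` and ``global``); exact signatures may differ between versions, so treat it as illustrative rather than definitive:

```cpp
#include <multipy/runtime/deploy.h>

int main() {
  // Start an interpreter manager governing 4 embedded interpreters.
  multipy::runtime::InterpreterManager manager(4);

  // Acquire a session on one of the interpreters.
  auto I = manager.acquireOne();

  // Roughly equivalent to running `print("Hello from multipy!")` in Python.
  auto helloWorld = I.global("builtins", "print");
  helloWorld({"Hello from multipy!"});
  return 0;
}
```

If this prints the greeting, the embedded interpreters, the static ``multipy`` library, and the link flags from the CMake example are all working together.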