ExecuTorch is an end-to-end solution for on-device inference and training. It powers much of Meta's on-device AI experiences across Facebook, Instagram, Meta Quest, Ray-Ban Meta Smart Glasses, WhatsApp, and more.
It supports a wide range of models including LLMs (Large Language Models), CV (Computer Vision), ASR (Automatic Speech Recognition), and TTS (Text-to-Speech).
Platform Support:
- Operating Systems:
  - iOS
  - macOS
  - Android
  - Linux
  - Microcontrollers
- Hardware Acceleration:
  - Apple
  - Arm
  - Cadence
  - MediaTek
  - OpenVINO
  - Qualcomm
  - Vulkan
  - XNNPACK
Key value propositions of ExecuTorch are:
- Portability: Compatibility with a wide variety of computing platforms, from high-end mobile phones to highly constrained embedded systems and microcontrollers.
- Productivity: Enabling developers to use the same toolchains and developer tools from PyTorch model authoring and conversion through debugging and deployment to a wide variety of platforms.
- Performance: Providing end users with a seamless and high-performance experience through a lightweight runtime that takes full advantage of hardware capabilities such as CPUs, NPUs, and DSPs.
To get started you can:
- Visit the Step by Step Tutorial to get things running locally and deploy a model to a device
- Use this Colab Notebook to start playing around right away
- Jump straight into LLM use cases by following the specific instructions for Llama and Llava
We welcome any feedback, suggestions, and bug reports from the community to help us improve our technology. Check out the Discussion Board or chat with us in real time on Discord.
We welcome contributions. To get started, review the guidelines and chat with us on Discord.
Please refer to the Codebase structure section of the Contributing Guidelines for more details.
ExecuTorch is BSD licensed, as found in the LICENSE file.