diff --git a/.github/workflows/apple.yml b/.github/workflows/apple.yml index 214d4f13fc8..0436a914dd0 100644 --- a/.github/workflows/apple.yml +++ b/.github/workflows/apple.yml @@ -37,7 +37,7 @@ jobs: id: set_version shell: bash run: | - VERSION="0.5.0.$(TZ='PST8PDT' date +%Y%m%d)" + VERSION="0.7.0.$(TZ='PST8PDT' date +%Y%m%d)" echo "version=$VERSION" >> "$GITHUB_OUTPUT" build-demo-ios: diff --git a/docs/README.md b/docs/README.md index dd1fded5aa9..e6dc66d335e 100644 --- a/docs/README.md +++ b/docs/README.md @@ -39,17 +39,20 @@ To build the documentation locally: 1. Clone the ExecuTorch repo to your machine. -1. If you don't have it already, start a conda environment: + ```bash + git clone -b viable/strict https://github.com/pytorch/executorch.git && cd executorch + ``` - ```{note} - The below command generates a completely new environment and resets - any existing dependencies. If you have an environment already, skip - the `conda create` command. +1. If you don't have it already, create either a Python virtual environment: + + ```bash + python3 -m venv .venv && source .venv/bin/activate && pip install --upgrade pip ``` + Or a Conda environment: + ```bash - conda create -yn executorch python=3.10.0 - conda activate executorch + conda create -yn executorch python=3.10.0 && conda activate executorch ``` 1. Install dependencies: @@ -57,15 +60,11 @@ To build the documentation locally: ```bash pip3 install -r ./.ci/docker/requirements-ci.txt ``` -1. Update submodules - ```bash - git submodule sync && git submodule update --init - ``` 1. Run: ```bash - bash install_executorch.sh + ./install_executorch.sh ``` 1. Go to the `docs/` directory.
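The version bump in `apple.yml` above only changes the release prefix; the date stamp is still derived at build time in Pacific time. As a quick sketch of that derivation (the `release_version` helper name is ours for illustration, not part of the workflow):

```shell
# Sketch of the workflow's version derivation: a fixed release prefix
# plus a Pacific-time (PST8PDT) YYYYMMDD date stamp, as in apple.yml.
release_version() {
  local prefix="$1"
  echo "${prefix}.$(TZ='PST8PDT' date +%Y%m%d)"
}

release_version 0.7.0   # prints 0.7.0.<YYYYMMDD> for the current PST date
```

Pinning the timezone keeps the stamp stable regardless of which runner region builds the job.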
diff --git a/docs/source/getting-started.md b/docs/source/getting-started.md index 741454fed27..fbca80cf23b 100644 --- a/docs/source/getting-started.md +++ b/docs/source/getting-started.md @@ -137,7 +137,7 @@ For a full example of running a model on Android, see the [DeepLabV3AndroidDemo] #### Installation ExecuTorch supports both iOS and MacOS via C++, as well as hardware backends for CoreML, MPS, and CPU. The iOS runtime library is provided as a collection of .xcframework targets and are made available as a Swift PM package. -To get started with Xcode, go to File > Add Package Dependencies. Paste the URL of the ExecuTorch repo into the search bar and select it. Make sure to change the branch name to the desired ExecuTorch version in format “swiftpm-”, (e.g. “swiftpm-0.5.0”). The ExecuTorch dependency can also be added to the package file manually. See [Using ExecuTorch on iOS](using-executorch-ios.md) for more information. +To get started with Xcode, go to File > Add Package Dependencies. Paste the URL of the ExecuTorch repo into the search bar and select it. Make sure to change the branch name to the desired ExecuTorch version in format “swiftpm-<version>” (e.g. “swiftpm-0.6.0”). The ExecuTorch dependency can also be added to the package file manually. See [Using ExecuTorch on iOS](using-executorch-ios.md) for more information. #### Runtime APIs Models can be loaded and run from Objective-C using the C++ APIs. @@ -151,7 +151,7 @@ ExecuTorch provides C++ APIs, which can be used to target embedded or mobile dev CMake is the preferred build system for the ExecuTorch C++ runtime. To use with CMake, clone the ExecuTorch repository as a subdirectory of your project, and use CMake's `add_subdirectory("executorch")` to include the dependency. The `executorch` target, as well as kernel and backend targets will be made available to link against. The runtime can also be built standalone to support diverse toolchains.
See [Using ExecuTorch with C++](using-executorch-cpp.md) for a detailed description of build integration, targets, and cross compilation. ``` -git clone -b release/0.5 https://github.com/pytorch/executorch.git +git clone -b viable/strict https://github.com/pytorch/executorch.git ``` ```python # CMakeLists.txt diff --git a/docs/source/llm/getting-started.md b/docs/source/llm/getting-started.md index 066bb3f3d1c..035da31f119 100644 --- a/docs/source/llm/getting-started.md +++ b/docs/source/llm/getting-started.md @@ -43,15 +43,17 @@ Instructions on installing miniconda can be [found here](https://docs.anaconda.c mkdir et-nanogpt cd et-nanogpt -# Clone the ExecuTorch repository and submodules. +# Clone the ExecuTorch repository. mkdir third-party -git clone -b release/0.4 https://github.com/pytorch/executorch.git third-party/executorch -cd third-party/executorch -git submodule update --init +git clone -b viable/strict https://github.com/pytorch/executorch.git third-party/executorch && cd third-party/executorch -# Create a conda environment and install requirements. -conda create -yn executorch python=3.10.0 -conda activate executorch +# Create either a Python virtual environment: +python3 -m venv .venv && source .venv/bin/activate && pip install --upgrade pip + +# Or a Conda environment: +conda create -yn executorch python=3.10.0 && conda activate executorch + +# Install requirements ./install_executorch.sh cd ../.. @@ -76,11 +78,8 @@ pyenv install -s 3.10 pyenv virtualenv 3.10 executorch pyenv activate executorch -# Clone the ExecuTorch repository and submodules. -mkdir third-party -git clone -b release/0.4 https://github.com/pytorch/executorch.git third-party/executorch -cd third-party/executorch -git submodule update --init +# Clone the ExecuTorch repository. +git clone -b viable/strict https://github.com/pytorch/executorch.git third-party/executorch && cd third-party/executorch # Install requirements. 
PYTHON_EXECUTABLE=python ./install_executorch.sh diff --git a/docs/source/using-executorch-building-from-source.md b/docs/source/using-executorch-building-from-source.md index a146556b4fc..668f696f040 100644 --- a/docs/source/using-executorch-building-from-source.md +++ b/docs/source/using-executorch-building-from-source.md @@ -36,27 +36,23 @@ portability details. ## Environment Setup -### Create a Virtual Environment +### Clone ExecuTorch -[Install conda on your machine](https://conda.io/projects/conda/en/latest/user-guide/install/index.html). Then, create a virtual environment to manage our dependencies. ```bash - # Create and activate a conda environment named "executorch" - conda create -yn executorch python=3.10.0 - conda activate executorch + # Clone the ExecuTorch repo from GitHub + git clone -b viable/strict https://github.com/pytorch/executorch.git && cd executorch ``` -### Clone ExecuTorch +### Create a Virtual Environment +Create and activate a Python virtual environment: ```bash - # Clone the ExecuTorch repo from GitHub - # 'main' branch is the primary development branch where you see the latest changes. - # 'viable/strict' contains all of the commits on main that pass all of the necessary CI checks. - git clone --branch viable/strict https://github.com/pytorch/executorch.git - cd executorch - - # Update and pull submodules - git submodule sync - git submodule update --init + python3 -m venv .venv && source .venv/bin/activate && pip install --upgrade pip + ``` + +Or alternatively, [install conda on your machine](https://conda.io/projects/conda/en/latest/user-guide/install/index.html). Then, create a Conda environment named "executorch". 
+ ```bash + conda create -yn executorch python=3.10.0 && conda activate executorch ``` ## Install ExecuTorch pip package from Source diff --git a/docs/source/using-executorch-ios.md b/docs/source/using-executorch-ios.md index 70c2b366fa8..1d03284ec2c 100644 --- a/docs/source/using-executorch-ios.md +++ b/docs/source/using-executorch-ios.md @@ -25,7 +25,7 @@ The prebuilt ExecuTorch runtime, backend, and kernels are available as a [Swift #### Xcode -In Xcode, go to `File > Add Package Dependencies`. Paste the URL of the [ExecuTorch repo](https://github.com/pytorch/executorch) into the search bar and select it. Make sure to change the branch name to the desired ExecuTorch version in format "swiftpm-", (e.g. "swiftpm-0.5.0"), or a branch name in format "swiftpm-." (e.g. "swiftpm-0.5.0-20250228") for a nightly build on a specific date. +In Xcode, go to `File > Add Package Dependencies`. Paste the URL of the [ExecuTorch repo](https://github.com/pytorch/executorch) into the search bar and select it. Make sure to change the branch name to the desired ExecuTorch version in format "swiftpm-<version>" (e.g. "swiftpm-0.6.0"), or a branch name in format "swiftpm-<version>.<YYYYMMDD>" (e.g. "swiftpm-0.7.0.20250401") for a nightly build on a specific date. ![](_static/img/swiftpm_xcode1.png) @@ -58,7 +58,7 @@ let package = Package( ], dependencies: [ // Use "swiftpm-." branch name for a nightly build. - .package(url: "https://github.com/pytorch/executorch.git", branch: "swiftpm-0.5.0") + .package(url: "https://github.com/pytorch/executorch.git", branch: "swiftpm-0.6.0") ], targets: [ .target( @@ -97,7 +97,7 @@ xcode-select --install 2. Clone ExecuTorch: ```bash -git clone https://github.com/pytorch/executorch.git --depth 1 --recurse-submodules --shallow-submodules && cd executorch +git clone -b viable/strict https://github.com/pytorch/executorch.git && cd executorch ``` 3. 
Set up [Python](https://www.python.org/downloads/macos/) 3.10+ and activate a virtual environment: @@ -106,15 +106,16 @@ git clone https://github.com/pytorch/executorch.git --depth 1 --recurse-submodul python3 -m venv .venv && source .venv/bin/activate && ./install_requirements.sh ``` -4. Install the required dependencies, including those needed for the backends like [Core ML](backends-coreml.md) or [MPS](backends-mps.md). Choose one: +4. Install the required dependencies, including those needed for the backends like [Core ML](backends-coreml.md) or [MPS](backends-mps.md). Choose one, or both: ```bash # ExecuTorch with xnnpack and CoreML backend -./install_executorch.sh --pybind xnnpack +./backends/apple/coreml/scripts/install_requirements.sh +./install_executorch.sh --pybind coreml xnnpack -# Optional: ExecuTorch with xnnpack, CoreML, and MPS backend +# ExecuTorch with xnnpack and MPS backend ./backends/apple/mps/install_requirements.sh -./install_executorch.sh --pybind xnnpack mps +./install_executorch.sh --pybind mps xnnpack ``` 5. Install [CMake](https://cmake.org): diff --git a/examples/demo-apps/android/LlamaDemo/docs/delegates/mediatek_README.md b/examples/demo-apps/android/LlamaDemo/docs/delegates/mediatek_README.md index 4d1346963c7..dcfd07918cd 100644 --- a/examples/demo-apps/android/LlamaDemo/docs/delegates/mediatek_README.md +++ b/examples/demo-apps/android/LlamaDemo/docs/delegates/mediatek_README.md @@ -21,23 +21,29 @@ Phone verified: MediaTek Dimensity 9300 (D9300) chip. ## Setup ExecuTorch In this section, we will need to set up the ExecuTorch repo first with Conda environment management. Make sure you have Conda available in your system (or follow the instructions to install it [here](https://anaconda.org/anaconda/conda)). The commands below are running on Linux (CentOS). 
-Create a Conda environment +Checkout ExecuTorch repo and sync submodules + ``` -conda create -yn et_mtk python=3.10.0 -conda activate et_mtk +git clone -b viable/strict https://github.com/pytorch/executorch.git && cd executorch ``` -Checkout ExecuTorch repo and sync submodules +Create either a Python virtual environment: + +``` +python3 -m venv .venv && source .venv/bin/activate && pip install --upgrade pip ``` -git clone https://github.com/pytorch/executorch.git -cd executorch -git submodule sync -git submodule update --init + +Or a Conda environment: + +``` +conda create -n et_mtk python=3.10.0 && conda activate et_mtk ``` + Install dependencies ``` ./install_executorch.sh ``` + ## Setup Environment Variables ### Download Buck2 and make executable * Download Buck2 from the official [Release Page](https://github.com/facebook/buck2/releases/tag/2024-02-01) diff --git a/examples/demo-apps/android/LlamaDemo/docs/delegates/qualcomm_README.md b/examples/demo-apps/android/LlamaDemo/docs/delegates/qualcomm_README.md index 92afe613f7b..f6952df97ad 100644 --- a/examples/demo-apps/android/LlamaDemo/docs/delegates/qualcomm_README.md +++ b/examples/demo-apps/android/LlamaDemo/docs/delegates/qualcomm_README.md @@ -19,19 +19,24 @@ Phone verified: OnePlus 12, Samsung 24+, Samsung 23 ## Setup ExecuTorch In this section, we will need to set up the ExecuTorch repo first with Conda environment management. Make sure you have Conda available in your system (or follow the instructions to install it [here](https://anaconda.org/anaconda/conda)). The commands below are running on Linux (CentOS). 
-Create a Conda environment +Checkout ExecuTorch repo and sync submodules + ``` -conda create -n et_qnn python=3.10.0 -conda activate et_qnn +git clone -b viable/strict https://github.com/pytorch/executorch.git && cd executorch ``` -Checkout ExecuTorch repo and sync submodules +Create either a Python virtual environment: + ``` -git clone https://github.com/pytorch/executorch.git -cd executorch -git submodule sync -git submodule update --init +python3 -m venv .venv && source .venv/bin/activate && pip install --upgrade pip ``` + +Or a Conda environment: + +``` +conda create -n et_qnn python=3.10.0 && conda activate et_qnn +``` + Install dependencies ``` ./install_executorch.sh @@ -74,7 +79,7 @@ cmake --build cmake-out -j16 --target install --config Release ### Setup Llama Runner Next we need to build and compile the Llama runner. This is similar to the requirements for running Llama with XNNPACK. ``` -sh examples/models/llama/install_requirements.sh +./examples/models/llama/install_requirements.sh cmake -DPYTHON_EXECUTABLE=python \ -DCMAKE_INSTALL_PREFIX=cmake-out \ diff --git a/examples/demo-apps/android/LlamaDemo/docs/delegates/xnnpack_README.md b/examples/demo-apps/android/LlamaDemo/docs/delegates/xnnpack_README.md index 2b9bad21b7a..59b74a3c1ac 100644 --- a/examples/demo-apps/android/LlamaDemo/docs/delegates/xnnpack_README.md +++ b/examples/demo-apps/android/LlamaDemo/docs/delegates/xnnpack_README.md @@ -21,35 +21,34 @@ Phone verified: OnePlus 12, OnePlus 9 Pro. Samsung S23 (Llama only), Samsung S24 ## Setup ExecuTorch In this section, we will need to set up the ExecuTorch repo first with Conda environment management. Make sure you have Conda available in your system (or follow the instructions to install it [here](https://anaconda.org/anaconda/conda)). The commands below are running on Linux (CentOS). 
-Create a Conda environment +Checkout ExecuTorch repo and sync submodules + ``` -conda create -yn executorch python=3.10.0 -conda activate executorch +git clone -b viable/strict https://github.com/pytorch/executorch.git && cd executorch ``` -Checkout ExecuTorch repo and sync submodules +Create either a Python virtual environment: + ``` -git clone https://github.com/pytorch/executorch.git -cd executorch -git submodule sync -git submodule update --init +python3 -m venv .venv && source .venv/bin/activate && pip install --upgrade pip ``` -Install dependencies + +Or a Conda environment: + ``` -./install_executorch.sh +conda create -n et_xnnpack python=3.10.0 && conda activate et_xnnpack ``` -Optional: Use the --pybind flag to install with pybindings. +Install dependencies ``` -./install_executorch.sh --pybind xnnpack +./install_executorch.sh ``` - ## Prepare Models In this demo app, we support text-only inference with up-to-date Llama models and image reasoning inference with LLaVA 1.5. * You can request and download model weights for Llama through Meta official [website](https://llama.meta.com/). * For chat use-cases, download the instruct models instead of pretrained. -* Run `examples/models/llama/install_requirements.sh` to install dependencies. +* Run `./examples/models/llama/install_requirements.sh` to install dependencies. * Rename tokenizer for Llama3.x with command: `mv tokenizer.model tokenizer.bin`. We are updating the demo app to support tokenizer in original format directly. 
### For Llama 3.2 1B and 3B SpinQuant models diff --git a/examples/demo-apps/apple_ios/ExecuTorchDemo/ExecuTorchDemo.xcodeproj/project.pbxproj b/examples/demo-apps/apple_ios/ExecuTorchDemo/ExecuTorchDemo.xcodeproj/project.pbxproj index 2ee4db5361d..7c88eff27a5 100644 --- a/examples/demo-apps/apple_ios/ExecuTorchDemo/ExecuTorchDemo.xcodeproj/project.pbxproj +++ b/examples/demo-apps/apple_ios/ExecuTorchDemo/ExecuTorchDemo.xcodeproj/project.pbxproj @@ -806,7 +806,7 @@ isa = XCRemoteSwiftPackageReference; repositoryURL = "https://github.com/pytorch/executorch"; requirement = { - branch = "swiftpm-0.5.0.20250317"; + branch = "swiftpm-0.6.0"; kind = branch; }; }; diff --git a/examples/demo-apps/apple_ios/ExecuTorchDemo/README.md b/examples/demo-apps/apple_ios/ExecuTorchDemo/README.md index 844c83d2200..a66a1f75954 100644 --- a/examples/demo-apps/apple_ios/ExecuTorchDemo/README.md +++ b/examples/demo-apps/apple_ios/ExecuTorchDemo/README.md @@ -44,8 +44,7 @@ Follow the [Setting Up ExecuTorch](https://pytorch.org/executorch/stable/getting tutorial to configure the basic environment: ```bash -git clone https://github.com/pytorch/executorch.git --depth 1 --recurse-submodules --shallow-submodules -cd executorch +git clone -b viable/strict https://github.com/pytorch/executorch.git && cd executorch python3 -m venv .venv && source .venv/bin/activate diff --git a/examples/demo-apps/apple_ios/LLaMA/LLaMA.xcodeproj/project.pbxproj b/examples/demo-apps/apple_ios/LLaMA/LLaMA.xcodeproj/project.pbxproj index a067873a0b9..0cfc4ddaa74 100644 --- a/examples/demo-apps/apple_ios/LLaMA/LLaMA.xcodeproj/project.pbxproj +++ b/examples/demo-apps/apple_ios/LLaMA/LLaMA.xcodeproj/project.pbxproj @@ -852,7 +852,7 @@ isa = XCRemoteSwiftPackageReference; repositoryURL = "https://github.com/pytorch/executorch"; requirement = { - branch = "swiftpm-0.5.0.20250228"; + branch = "swiftpm-0.6.0"; kind = branch; }; }; diff --git a/examples/demo-apps/apple_ios/LLaMA/docs/delegates/mps_README.md 
b/examples/demo-apps/apple_ios/LLaMA/docs/delegates/mps_README.md index f5292fe5c05..bffe4465eee 100644 --- a/examples/demo-apps/apple_ios/LLaMA/docs/delegates/mps_README.md +++ b/examples/demo-apps/apple_ios/LLaMA/docs/delegates/mps_README.md @@ -14,26 +14,29 @@ More specifically, it covers: ## Setup ExecuTorch In this section, we will need to set up the ExecuTorch repo first with Conda environment management. Make sure you have Conda available in your system (or follow the instructions to install it [here](https://conda.io/projects/conda/en/latest/user-guide/install/index.html)). The commands below are running on Linux (CentOS). -Create a Conda environment +Checkout ExecuTorch repo and sync submodules ``` -conda create -n et_mps python=3.10.0 -conda activate et_mps +git clone -b viable/strict https://github.com/pytorch/executorch.git && cd executorch ``` -Checkout ExecuTorch repo and sync submodules +Create either a Python virtual environment: + +``` +python3 -m venv .venv && source .venv/bin/activate && pip install --upgrade pip +``` + +Or a Conda environment: ``` -git clone https://github.com/pytorch/executorch.git -cd executorch -git submodule sync -git submodule update --init +conda create -n et_mps python=3.10.0 && conda activate et_mps ``` Install dependencies ``` ./install_executorch.sh +./backends/apple/mps/install_requirements.sh ``` ## Prepare Models @@ -42,7 +45,7 @@ In this demo app, we support text-only inference with Llama 3.1, Llama 3, and Ll Install the required packages to export the model ``` -sh examples/models/llama/install_requirements.sh +./examples/models/llama/install_requirements.sh ``` Export the model @@ -76,17 +79,7 @@ sudo /Applications/CMake.app/Contents/bin/cmake-gui --install The prebuilt ExecuTorch runtime, backend, and kernels are available as a Swift PM package. ### Xcode -Open the project in Xcode.In Xcode, go to `File > Add Package Dependencies`. Paste the URL of the ExecuTorch repo into the search bar and select it. 
Make sure to change the branch name to the desired ExecuTorch version, e.g., “swiftpm-0.5.0”, or a branch name in format "swiftpm-." (e.g. "swiftpm-0.5.0-20250228") for a nightly build on a specific date. - -Note: If you're running into any issues related to package dependencies, quit Xcode entirely, delete the whole executorch repo, clean the caches by running the command below in terminal and clone the repo again. - -``` -rm -rf \ - ~/Library/org.swift.swiftpm \ - ~/Library/Caches/org.swift.swiftpm \ - ~/Library/Caches/com.apple.dt.Xcode \ - ~/Library/Developer/Xcode/DerivedData -``` +Open the project in Xcode. Go to `File > Add Package Dependencies`. Paste the URL of the ExecuTorch repo into the search bar and select it. Make sure to change the branch name to the desired ExecuTorch version, e.g., “swiftpm-0.6.0”, or a branch name in format "swiftpm-<version>.<YYYYMMDD>" (e.g. "swiftpm-0.7.0.20250401") for a nightly build on a specific date. Link your binary with the ExecuTorch runtime and any backends or kernels used by the exported ML model. It is recommended to link the core runtime to the components that use ExecuTorch directly, and link kernels and backends against the main app target. diff --git a/examples/demo-apps/apple_ios/LLaMA/docs/delegates/xnnpack_README.md b/examples/demo-apps/apple_ios/LLaMA/docs/delegates/xnnpack_README.md index c45871a1fe5..f9cc3da1641 100644 --- a/examples/demo-apps/apple_ios/LLaMA/docs/delegates/xnnpack_README.md +++ b/examples/demo-apps/apple_ios/LLaMA/docs/delegates/xnnpack_README.md @@ -13,31 +13,30 @@ More specifically, it covers: ## Setup ExecuTorch In this section, we will need to set up the ExecuTorch repo first with Conda environment management. Make sure you have Conda available in your system (or follow the instructions to install it [here](https://conda.io/projects/conda/en/latest/user-guide/install/index.html)). The commands below are running on Linux (CentOS). 
-Create a Conda environment +Checkout ExecuTorch repo and sync submodules ``` -conda create -n et_xnnpack python=3.10.0 -conda activate et_xnnpack +git clone -b viable/strict https://github.com/pytorch/executorch.git && cd executorch ``` -Checkout ExecuTorch repo and sync submodules +Create either a Python virtual environment: ``` -git clone https://github.com/pytorch/executorch.git -cd executorch -git submodule sync -git submodule update --init +python3 -m venv .venv && source .venv/bin/activate && pip install --upgrade pip ``` -Install dependencies +Or a Conda environment: ``` -./install_executorch.sh +conda create -n et_xnnpack python=3.10.0 && conda activate et_xnnpack ``` -Optional: Use the --pybind flag to install with pybindings. + +Install dependencies + ``` -./install_executorch.sh --pybind xnnpack +./install_executorch.sh ``` + ## Prepare Models In this demo app, we support text-only inference with up-to-date Llama models and image reasoning inference with LLaVA 1.5. * You can request and download model weights for Llama through Meta official [website](https://llama.meta.com/). @@ -45,8 +44,9 @@ In this demo app, we support text-only inference with up-to-date Llama models an * Install the required packages to export the model: ``` -sh examples/models/llama/install_requirements.sh +./examples/models/llama/install_requirements.sh ``` + ### For Llama 3.2 1B and 3B SpinQuant models Meta has released prequantized INT4 SpinQuant Llama 3.2 models that ExecuTorch supports on the XNNPACK backend. * Export Llama model and generate .pte file as below: @@ -112,27 +112,13 @@ There are two options to add ExecuTorch runtime package into your XCode project: The current XCode project is pre-configured to automatically download and link the latest prebuilt package via Swift Package Manager. 
-If you have an old ExecuTorch package cached before in XCode, or are running into any package dependencies issues (incorrect checksum hash, missing package, outdated package), close XCode and run the following command in terminal inside your ExecuTorch directory - -``` -rm -rf \ - ~/Library/org.swift.swiftpm \ - ~/Library/Caches/org.swift.swiftpm \ - ~/Library/Caches/com.apple.dt.Xcode \ - ~/Library/Developer/Xcode/DerivedData \ - examples/demo-apps/apple_ios/LLaMA/LLaMA.xcodeproj/project.xcworkspace/xcshareddata/swiftpm -``` - -The command above will clear all the package cache, and when you re-open the XCode project, it should re-download the latest package and link them correctly. - #### (Optional) Changing the prebuilt package version While we recommended using the latest prebuilt package pre-configured with the XCode project, you can also change the package version manually to your desired version. Go to Project Navigator, click on LLaMA. `Project --> LLaMA --> Package Dependencies`, and update the package dependencies to any of the available options below: -- Branch --> swiftpm-0.5.0.20250228 (amend to match the latest nightly build) -- Branch --> swiftpm-0.5.0 -- Branch --> swiftpm-0.4.0 +- Branch --> swiftpm-0.7.0.20250401 (amend to match the latest nightly build) +- Branch --> swiftpm-0.6.0 ### 2.2 Manually build the package locally and link them diff --git a/examples/demo-apps/react-native/rnllama/ios/rnllama.xcodeproj/project.pbxproj b/examples/demo-apps/react-native/rnllama/ios/rnllama.xcodeproj/project.pbxproj index 612dd410a1a..ea08f8cf772 100644 --- a/examples/demo-apps/react-native/rnllama/ios/rnllama.xcodeproj/project.pbxproj +++ b/examples/demo-apps/react-native/rnllama/ios/rnllama.xcodeproj/project.pbxproj @@ -947,7 +947,7 @@ isa = XCRemoteSwiftPackageReference; repositoryURL = "https://github.com/pytorch/executorch.git"; requirement = { - branch = "swiftpm-0.5.0.20250228"; + branch = "swiftpm-0.6.0"; kind = branch; }; }; diff --git 
a/examples/llm_pte_finetuning/README.md b/examples/llm_pte_finetuning/README.md index bdd317109e5..8aeea31608c 100644 --- a/examples/llm_pte_finetuning/README.md +++ b/examples/llm_pte_finetuning/README.md @@ -7,7 +7,7 @@ In this tutorial, we show how to fine-tune an LLM using executorch. You will need to have a model's checkpoint, in the Hugging Face format. For example: ```console -git clone git clone https://huggingface.co/Qwen/Qwen2-0.5B-Instruct +git clone https://huggingface.co/Qwen/Qwen2-0.5B-Instruct ``` You will need to install [torchtune](https://github.com/pytorch/torchtune) following [its installation instructions](https://github.com/pytorch/torchtune?tab=readme-ov-file#installation). diff --git a/extension/benchmark/apple/Benchmark/README.md b/extension/benchmark/apple/Benchmark/README.md index e993ae4f970..a68a9bf8abb 100644 --- a/extension/benchmark/apple/Benchmark/README.md +++ b/extension/benchmark/apple/Benchmark/README.md @@ -24,8 +24,6 @@ It provides a flexible framework for dynamically generating and running performa To get started, clone the ExecuTorch repository and cd into the source code directory: ```bash -git clone https://github.com/pytorch/executorch.git --depth 1 --recurse-submodules --shallow-submodules -cd executorch +git clone -b viable/strict https://github.com/pytorch/executorch.git && cd executorch ``` -This command performs a shallow clone to speed up the process. diff --git a/scripts/test_ios.sh b/scripts/test_ios.sh index 09461e0953e..385c85f3dfe 100755 --- a/scripts/test_ios.sh +++ b/scripts/test_ios.sh @@ -47,7 +47,7 @@ say() { say "Cloning the Code" pushd . > /dev/null -git clone https://github.com/pytorch/executorch.git "$OUTPUT" +git clone -b viable/strict https://github.com/pytorch/executorch.git "$OUTPUT" cd "$OUTPUT" say "Updating the Submodules"
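The hunks above repeatedly retarget pinned Swift PM branches ("swiftpm-0.5.0.20250228" and friends become "swiftpm-0.6.0"). The naming scheme those branches follow can be sketched as a small helper; the `swiftpm_branch` function is purely illustrative, and the "swiftpm-<version>.<YYYYMMDD>" nightly format is inferred from the branch names appearing in this patch rather than from any official specification:

```shell
# Illustrative helper for the SwiftPM branch naming used in this patch:
# "swiftpm-<version>" for a release pin, "swiftpm-<version>.<YYYYMMDD>"
# for a nightly build on a specific date.
swiftpm_branch() {
  local version="$1" date="${2:-}"
  if [ -n "$date" ]; then
    echo "swiftpm-${version}.${date}"
  else
    echo "swiftpm-${version}"
  fi
}

swiftpm_branch 0.6.0            # prints swiftpm-0.6.0
swiftpm_branch 0.7.0 20250401   # prints swiftpm-0.7.0.20250401
```

Keeping the date suffix optional mirrors how the docs offer either a stable release branch or a dated nightly branch in Xcode's package dependency dialog.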