Commit b700d30

Switch docs to 0.6 branch (#10212)
1 parent e42c504

31 files changed: +65 -65 lines

Package.swift (+1 -1)

@@ -15,7 +15,7 @@
 //
 // For details on building frameworks locally or using prebuilt binaries,
 // see the documentation:
-// https://pytorch.org/executorch/main/using-executorch-ios.html
+// https://pytorch.org/executorch/0.6/using-executorch-ios.html

 import PackageDescription

README-wheel.md (+4 -4)

@@ -14,10 +14,10 @@ to run ExecuTorch `.pte` files, with some restrictions:
   operators](https://pytorch.org/executorch/stable/ir-ops-set-definition.html)
   are linked into the prebuilt module
 * Only the [XNNPACK backend
-  delegate](https://pytorch.org/executorch/main/native-delegates-executorch-xnnpack-delegate.html)
+  delegate](https://pytorch.org/executorch/0.6/backends-xnnpack)
   is linked into the prebuilt module.
-* \[macOS only] [Core ML](https://pytorch.org/executorch/main/build-run-coreml.html)
-  and [MPS](https://pytorch.org/executorch/main/build-run-mps.html) backend
+* \[macOS only] [Core ML](https://pytorch.org/executorch/0.6/backends-coreml)
+  and [MPS](https://pytorch.org/executorch/0.6/backends-mps) backend
   delegates are also linked into the prebuilt module.

 Please visit the [ExecuTorch website](https://pytorch.org/executorch/) for
@@ -30,7 +30,7 @@ tutorials and documentation. Here are some starting points:
 * Learn how to use ExecuTorch to export and accelerate a large-language model
   from scratch.
 * [Exporting to
-  ExecuTorch](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial.html)
+  ExecuTorch](https://pytorch.org/executorch/0.6/tutorials/export-to-executorch-tutorial.html)
 * Learn the fundamentals of exporting a PyTorch `nn.Module` to ExecuTorch, and
   optimizing its performance using quantization and hardware delegation.
 * Running LLaMA on
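
For context, the export flow these links describe looks roughly like the following. This is a minimal sketch against the 0.6 Python APIs, not the tutorial's exact code; the module and output filename are illustrative:

```python
import torch
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
from executorch.exir import to_edge_transform_and_lower

class AddModule(torch.nn.Module):  # hypothetical example module
    def forward(self, x, y):
        return x + y

example_inputs = (torch.randn(4), torch.randn(4))

# Capture the module with torch.export, then lower to the XNNPACK
# delegate, the only backend delegate linked into the prebuilt wheel.
exported = torch.export.export(AddModule(), example_inputs)
program = to_edge_transform_and_lower(
    exported, partitioner=[XnnpackPartitioner()]
).to_executorch()

# Serialize the resulting program to a .pte file the wheel's runtime can load.
with open("add.pte", "wb") as f:
    f.write(program.buffer)
```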

backends/cadence/README.md (+1 -1)

@@ -6,7 +6,7 @@

 ## Tutorial

-Please follow the [tutorial](https://pytorch.org/executorch/main/backends-cadence) for more information on how to run models on Cadence/Xtensa DSPs.
+Please follow the [tutorial](https://pytorch.org/executorch/0.6/backends-cadence) for more information on how to run models on Cadence/Xtensa DSPs.

 ## Directory Structure

backends/qualcomm/README.md (+1 -1)

@@ -8,7 +8,7 @@ This backend is implemented on the top of
 [Qualcomm AI Engine Direct SDK](https://developer.qualcomm.com/software/qualcomm-ai-engine-direct-sdk).
 Please follow [tutorial](../../docs/source/backends-qualcomm.md) to setup environment, build, and run executorch models by this backend (Qualcomm AI Engine Direct is also referred to as QNN in the source and documentation).

-A website version of the tutorial is [here](https://pytorch.org/executorch/main/backends-qualcomm).
+A website version of the tutorial is [here](https://pytorch.org/executorch/0.6/backends-qualcomm).

 ## Delegate Options

backends/xnnpack/README.md (+2 -2)

@@ -132,5 +132,5 @@ create an issue on [github](https://www.github.com/pytorch/executorch/issues).

 ## See Also
 For more information about the XNNPACK Backend, please check out the following resources:
-- [XNNPACK Backend](https://pytorch.org/executorch/main/backends-xnnpack)
-- [XNNPACK Backend Internals](https://pytorch.org/executorch/main/backend-delegates-xnnpack-reference)
+- [XNNPACK Backend](https://pytorch.org/executorch/0.6/backends-xnnpack)
+- [XNNPACK Backend Internals](https://pytorch.org/executorch/0.6/backend-delegates-xnnpack-reference)

docs/source/index.md (+2 -2)

@@ -79,7 +79,7 @@ ExecuTorch provides support for:
 - [Executorch Runtime API Reference](executorch-runtime-api-reference)
 - [Runtime Python API Reference](runtime-python-api-reference)
 - [API Life Cycle](api-life-cycle)
-- [Javadoc](https://pytorch.org/executorch/main/javadoc/)
+- [Javadoc](https://pytorch.org/executorch/0.6/javadoc/)
 #### Quantization
 - [Overview](quantization-overview)
 #### Kernel Library
@@ -208,7 +208,7 @@ export-to-executorch-api-reference
 executorch-runtime-api-reference
 runtime-python-api-reference
 api-life-cycle
-Javadoc <https://pytorch.org/executorch/main/javadoc/>
+Javadoc <https://pytorch.org/executorch/0.6/javadoc/>
 ```

 ```{toctree}

docs/source/llm/getting-started.md (+2 -2)

@@ -159,7 +159,7 @@ example_inputs = (torch.randint(0, 100, (1, model.config.block_size), dtype=torc
 # long as they adhere to the rules specified in the dynamic shape configuration.
 # Here we set the range of 0th model input's 1st dimension as
 # [0, model.config.block_size].
-# See https://pytorch.org/executorch/main/concepts#dynamic-shapes
+# See https://pytorch.org/executorch/0.6/concepts#dynamic-shapes
 # for details about creating dynamic shapes.
 dynamic_shape = (
     {1: torch.export.Dim("token_dim", max=model.config.block_size)},
@@ -478,7 +478,7 @@ example_inputs = (
 # long as they adhere to the rules specified in the dynamic shape configuration.
 # Here we set the range of 0th model input's 1st dimension as
 # [0, model.config.block_size].
-# See https://pytorch.org/executorch/main/concepts.html#dynamic-shapes
+# See https://pytorch.org/executorch/0.6/concepts.html#dynamic-shapes
 # for details about creating dynamic shapes.
 dynamic_shape = (
     {1: torch.export.Dim("token_dim", max=model.config.block_size - 1)},
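
The dynamic-shape configuration quoted in this diff can be exercised on its own. A minimal sketch, in which the toy module and `block_size` value are illustrative stand-ins for the tutorial's GPT-style model:

```python
import torch
from torch.export import Dim, export

class TokenModel(torch.nn.Module):  # toy stand-in for the tutorial's model
    def forward(self, tokens):
        return tokens * 2

block_size = 128  # stands in for model.config.block_size

example_inputs = (torch.randint(0, 100, (1, block_size), dtype=torch.long),)

# Mark dimension 1 of the 0th input as dynamic, bounded by block_size,
# mirroring the dynamic_shape tuple shown in the diff above.
dynamic_shapes = ({1: Dim("token_dim", max=block_size)},)

exported = export(TokenModel(), example_inputs, dynamic_shapes=dynamic_shapes)

# The exported program now accepts any sequence length up to block_size.
print(exported.module()(torch.randint(0, 100, (1, 5), dtype=torch.long)).shape)
```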

docs/source/memory-planning-inspection.md (+4 -4)

@@ -1,9 +1,9 @@
 # Memory Planning Inspection in ExecuTorch

-After the [Memory Planning](https://pytorch.org/executorch/main/concepts.html#memory-planning) pass of ExecuTorch, memory allocation information is stored on the nodes of the [`ExportedProgram`](https://pytorch.org/executorch/main/concepts.html#exportedprogram). Here, we present a tool designed to inspect memory allocation and visualize all active tensor objects.
+After the [Memory Planning](https://pytorch.org/executorch/0.6/concepts.html#memory-planning) pass of ExecuTorch, memory allocation information is stored on the nodes of the [`ExportedProgram`](https://pytorch.org/executorch/0.6/concepts.html#exportedprogram). Here, we present a tool designed to inspect memory allocation and visualize all active tensor objects.

 ## Usage
-User should add this code after they call [to_executorch()](https://pytorch.org/executorch/main/export-to-executorch-api-reference.html#executorch.exir.EdgeProgramManager.to_executorch), and it will write memory allocation information stored on the nodes to the file path "memory_profile.json". The file is compatible with the Chrome trace viewer; see below for more information about interpreting the results.
+User should add this code after they call [to_executorch()](https://pytorch.org/executorch/0.6/export-to-executorch-api-reference.html#executorch.exir.EdgeProgramManager.to_executorch), and it will write memory allocation information stored on the nodes to the file path "memory_profile.json". The file is compatible with the Chrome trace viewer; see below for more information about interpreting the results.

 ```python
 from executorch.util.activation_memory_profiler import generate_memory_trace
@@ -13,7 +13,7 @@ generate_memory_trace(
     enable_memory_offsets=True,
 )
 ```
-* `prog` is an instance of [`ExecuTorchProgramManager`](https://pytorch.org/executorch/main/export-to-executorch-api-reference.html#executorch.exir.ExecutorchProgramManager), returned by [to_executorch()](https://pytorch.org/executorch/main/export-to-executorch-api-reference.html#executorch.exir.EdgeProgramManager.to_executorch).
+* `prog` is an instance of [`ExecuTorchProgramManager`](https://pytorch.org/executorch/0.6/export-to-executorch-api-reference.html#executorch.exir.ExecutorchProgramManager), returned by [to_executorch()](https://pytorch.org/executorch/0.6/export-to-executorch-api-reference.html#executorch.exir.EdgeProgramManager.to_executorch).
 * Set `enable_memory_offsets` to `True` to show the location of each tensor on the memory space.

 ## Chrome Trace
@@ -27,4 +27,4 @@ Note that, since we are repurposing the Chrome trace tool, the axes in this cont
 * The vertical axis has a 2-level hierarchy. The first level, "pid", represents memory space. For CPU, everything is allocated on one "space"; other backends may have multiple. In the second level, each row represents one time step. Since nodes will be executed sequentially, each node represents one time step, thus you will have as many nodes as there are rows.

 ## Further Reading
-* [Memory Planning](https://pytorch.org/executorch/main/compiler-memory-planning.html)
+* [Memory Planning](https://pytorch.org/executorch/0.6/compiler-memory-planning.html)
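
The code block in this diff is shown only in part. A plausible complete call, assuming the keyword names shown in the 0.6 docs and a `prog` produced by `to_executorch()`:

```python
from executorch.util.activation_memory_profiler import generate_memory_trace

# `prog` is the ExecutorchProgramManager returned by to_executorch();
# keyword names below follow the 0.6 documentation page being edited here.
generate_memory_trace(
    executorch_program_manager=prog,
    chrome_trace_filename="memory_profile.json",
    enable_memory_offsets=True,  # include each tensor's offset in its memory space
)
# Load memory_profile.json in the Chrome trace viewer (chrome://tracing)
# to visualize active tensors over the program's execution steps.
```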

docs/source/new-contributor-guide.md (+1 -1)

@@ -129,7 +129,7 @@ Before you can start writing any code, you need to get a copy of ExecuTorch code
    git push # push updated local main to your GitHub fork
    ```

-6. [Build the project](https://pytorch.org/executorch/main/using-executorch-building-from-source.html) and [run the tests](https://github.com/pytorch/executorch/blob/main/CONTRIBUTING.md#testing).
+6. [Build the project](https://pytorch.org/executorch/0.6/using-executorch-building-from-source.html) and [run the tests](https://github.com/pytorch/executorch/blob/main/CONTRIBUTING.md#testing).

    Unfortunately, this step is too long to detail here. If you get stuck at any point, please feel free to ask for help on our [Discord server](https://discord.com/invite/Dh43CKSAdc) — we're always eager to help newcomers get onboarded.

docs/source/using-executorch-android.md (+7 -7)

@@ -2,7 +2,7 @@

 To use from Android, ExecuTorch provides Java/Kotlin API bindings and Android platform integration, available as an AAR file.

-Note: This page covers Android app integration through the AAR library. The ExecuTorch C++ APIs can also be used from Android native, and the documentation can be found on [this page about cross compilation](https://pytorch.org/executorch/main/using-executorch-building-from-source.html#cross-compilation).
+Note: This page covers Android app integration through the AAR library. The ExecuTorch C++ APIs can also be used from Android native, and the documentation can be found on [this page about cross compilation](https://pytorch.org/executorch/0.6/using-executorch-building-from-source.html#cross-compilation).

 ## Installation

@@ -41,8 +41,8 @@ dependencies {
 Note: If you want to use release v0.5.0, please use dependency `org.pytorch:executorch-android:0.5.1`.

 Click the screenshot below to watch the *demo video* on how to add the package and run a simple ExecuTorch model with Android Studio.
-<a href="https://pytorch.org/executorch/main/_static/img/android_studio.mp4">
-<img src="https://pytorch.org/executorch/main/_static/img/android_studio.jpeg" width="800" alt="Integrating and Running ExecuTorch on Android">
+<a href="https://pytorch.org/executorch/0.6/_static/img/android_studio.mp4">
+<img src="https://pytorch.org/executorch/0.6/_static/img/android_studio.jpeg" width="800" alt="Integrating and Running ExecuTorch on Android">
 </a>

 ## Using AAR file directly
@@ -130,17 +130,17 @@ Set environment variable `EXECUTORCH_CMAKE_BUILD_TYPE` to `Release` or `Debug` b

 #### Using MediaTek backend

-To use [MediaTek backend](https://pytorch.org/executorch/main/backends-mediatek.html),
+To use [MediaTek backend](https://pytorch.org/executorch/0.6/backends-mediatek.html),
 after installing and setting up the SDK, set `NEURON_BUFFER_ALLOCATOR_LIB` and `NEURON_USDK_ADAPTER_LIB` to the corresponding path.

 #### Using Qualcomm AI Engine Backend

-To use [Qualcomm AI Engine Backend](https://pytorch.org/executorch/main/backends-qualcomm.html#qualcomm-ai-engine-backend),
+To use [Qualcomm AI Engine Backend](https://pytorch.org/executorch/0.6/backends-qualcomm.html#qualcomm-ai-engine-backend),
 after installing and setting up the SDK, set `QNN_SDK_ROOT` to the corresponding path.

 #### Using Vulkan Backend

-To use [Vulkan Backend](https://pytorch.org/executorch/main/backends-vulkan.html#vulkan-backend),
+To use [Vulkan Backend](https://pytorch.org/executorch/0.6/backends-vulkan.html#vulkan-backend),
 set `EXECUTORCH_BUILD_VULKAN` to `ON`.

 ## Android Backends
@@ -195,4 +195,4 @@ using ExecuTorch AAR package.

 ## Java API reference

-Please see [Java API reference](https://pytorch.org/executorch/main/javadoc/).
+Please see [Java API reference](https://pytorch.org/executorch/0.6/javadoc/).

docs/source/using-executorch-ios.md (+3 -3)

@@ -35,8 +35,8 @@ Then select which ExecuTorch framework should link against which target.

 Click the screenshot below to watch the *demo video* on how to add the package and run a simple ExecuTorch model on iOS.

-<a href="https://pytorch.org/executorch/main/_static/img/swiftpm_xcode.mp4">
-<img src="https://pytorch.org/executorch/main/_static/img/swiftpm_xcode.png" width="800" alt="Integrating and Running ExecuTorch on Apple Platforms">
+<a href="https://pytorch.org/executorch/0.6/_static/img/swiftpm_xcode.mp4">
+<img src="https://pytorch.org/executorch/0.6/_static/img/swiftpm_xcode.png" width="800" alt="Integrating and Running ExecuTorch on Apple Platforms">
 </a>

 #### CLI
@@ -293,7 +293,7 @@ From existing memory buffers:

 From `NSData` / `Data`:
 - `init(data:shape:dataType:...)`: Creates a tensor using an `NSData` object, referencing its bytes without copying.
-
+
 From scalar arrays:
 - `init(_:shape:dataType:...)`: Creates a tensor from an array of `NSNumber` scalars. Convenience initializers exist to infer shape or data type.

examples/README.md (+1 -1)

@@ -9,7 +9,7 @@ ExecuTorch's extensive support spans from simple modules like "Add" to comprehen
 ## Directory structure
 ```
 examples
-├── llm_manual # A storage place for the files that [LLM Manual](https://pytorch.org/executorch/main/llm/getting-started.html) needs
+├── llm_manual # A storage place for the files that [LLM Manual](https://pytorch.org/executorch/0.6/llm/getting-started.html) needs
 ├── models # Contains a set of popular and representative PyTorch models
 ├── portable # Contains end-to-end demos for ExecuTorch in portable mode
 ├── selective_build # Contains demos of selective build for optimizing the binary size of the ExecuTorch runtime

examples/arm/README.md (+2 -2)

@@ -24,7 +24,7 @@ To run these scripts. On a Linux system, in a terminal, with a working internet
 $ cd <EXECUTORCH-ROOT-FOLDER>
 $ executorch/examples/arm/setup.sh --i-agree-to-the-contained-eula [optional-scratch-dir]

-# Step [2] - Setup path to tools. The `setup.sh` script has generated a script that you need to source every time you restart your shell.
+# Step [2] - Setup path to tools. The `setup.sh` script has generated a script that you need to source every time you restart your shell.
 $ source executorch/examples/arm/ethos-u-scratch/setup_path.sh

 # Step [3] - build + run ExecuTorch and executor_runner baremetal application
@@ -34,6 +34,6 @@ $ executorch/examples/arm/run.sh --model_name=mv2 --target=ethos-u85-128 [--scra

 ### Online Tutorial

-We also have a [tutorial](https://pytorch.org/executorch/main/backends-arm-ethos-u) explaining the steps performed in these
+We also have a [tutorial](https://pytorch.org/executorch/0.6/backends-arm-ethos-u) explaining the steps performed in these
 scripts, expected results, possible problems and more. It is a step-by-step guide
 you can follow to better understand this delegate.

examples/demo-apps/apple_ios/LLaMA/README.md (+1 -1)

@@ -56,7 +56,7 @@ Link your binary with the ExecuTorch runtime and any backends or kernels used by

 Note: To access logs, link against the Debug build of the ExecuTorch runtime, i.e., the executorch_debug framework. For optimal performance, always link against the Release version of the deliverables (those without the _debug suffix), which have all logging overhead removed.

-For more details on integrating and running ExecuTorch on Apple Platforms, check out this [link](https://pytorch.org/executorch/main/using-executorch-ios).
+For more details on integrating and running ExecuTorch on Apple Platforms, check out this [link](https://pytorch.org/executorch/0.6/using-executorch-ios).

 ### XCode
 * Open XCode and select "Open an existing project" to open `examples/demo-apps/apple_ios/LLama`.

examples/demo-apps/apple_ios/LLaMA/docs/delegates/mps_README.md (+1 -1)

@@ -85,7 +85,7 @@ Link your binary with the ExecuTorch runtime and any backends or kernels used by

 Note: To access logs, link against the Debug build of the ExecuTorch runtime, i.e., the executorch_debug framework. For optimal performance, always link against the Release version of the deliverables (those without the _debug suffix), which have all logging overhead removed.

-For more details on integrating and running ExecuTorch on Apple Platforms, check out this [link](https://pytorch.org/executorch/main/using-executorch-ios.html).
+For more details on integrating and running ExecuTorch on Apple Platforms, check out this [link](https://pytorch.org/executorch/0.6/using-executorch-ios.html).

 <p align="center">
 <img src="https://raw.githubusercontent.com/pytorch/executorch/refs/heads/main/docs/source/_static/img/ios_demo_app_swift_pm.png" alt="iOS LLaMA App Swift PM" style="width:600px">

examples/demo-apps/apple_ios/LLaMA/docs/delegates/xnnpack_README.md (+2 -2)

@@ -164,7 +164,7 @@ If you cannot add the package into your app target (it's greyed out), it might h



-For more details on integrating and running ExecuTorch on Apple Platforms, check out the detailed guide [here](https://pytorch.org/executorch/main/using-executorch-ios#local-build).
+For more details on integrating and running ExecuTorch on Apple Platforms, check out the detailed guide [here](https://pytorch.org/executorch/0.6/using-executorch-ios#local-build).

 ### 3. Configure Build Schemes

@@ -176,7 +176,7 @@ Navigate to `Product --> Scheme --> Edit Scheme --> Info --> Build Configuration

 We recommend that you only use the Debug build scheme during development, where you might need to access additional logs. Debug build has logging overhead and will impact inferencing performance, while release build has compiler optimizations enabled and all logging overhead removed.

-For more details on integrating and running ExecuTorch on Apple Platforms or building the package locally, check out this [link](https://pytorch.org/executorch/main/using-executorch-ios).
+For more details on integrating and running ExecuTorch on Apple Platforms or building the package locally, check out this [link](https://pytorch.org/executorch/0.6/using-executorch-ios).

 ### 4. Build and Run the project
examples/llm_manual/README.md (+1 -1)

@@ -1,3 +1,3 @@
 # LLM Manual

-This repository is a storage place for the files that [LLM Manual](https://pytorch.org/executorch/main/llm/getting-started) needs. Please refer to the documentation website for more information.
+This repository is a storage place for the files that [LLM Manual](https://pytorch.org/executorch/0.6/llm/getting-started) needs. Please refer to the documentation website for more information.

examples/models/deepseek-r1-distill-llama-8B/README.md (+1 -1)

@@ -3,7 +3,7 @@ This example demonstrates how to run [Deepseek R1 Distill Llama 8B](https://hugg

 # Instructions
 ## Step 1: Setup
-1. Follow the [tutorial](https://pytorch.org/executorch/main/getting-started-setup) to set up ExecuTorch. For installation run `./install_executorch.sh`
+1. Follow the [tutorial](https://pytorch.org/executorch/0.6/getting-started-setup) to set up ExecuTorch. For installation run `./install_executorch.sh`

 2. Run the installation step for Llama specific requirements
 ```
