
Commit e518c00

pytorchbot and Gasoonjia committed
use reference link for html doc (#15029)
This PR was created by the merge bot to help merge the original PR into the main branch.
ghstack PR number: #15004 by @Gasoonjia
^ Please use this as the source of truth for the PR details, comments, and reviews
ghstack PR base: https://github.com/pytorch/executorch/tree/gh/gasoonjia/54/base
ghstack PR head: https://github.com/pytorch/executorch/tree/gh/gasoonjia/54/head
Merge bot PR base: https://github.com/pytorch/executorch/tree/main
Merge bot PR head: https://github.com/pytorch/executorch/tree/gh/gasoonjia/54/orig
Differential Revision: [D84367515](https://our.internmc.facebook.com/intern/diff/D84367515/)
@diff-train-skip-merge

Co-authored-by: gasoonjia <[email protected]>
(cherry picked from commit 7533df6)
1 parent b4d2ba9 commit e518c00
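
All of the affected docs receive the same mechanical edit: an absolute pytorch.org URL is replaced with a source-relative reference followed by a lint-suppression comment. The pattern, taking the devtools link as a representative case:

```diff
-[Developer Tools tutorial](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial)
+[Developer Tools tutorial](tutorials/devtools-integration-tutorial) <!-- @lint-ignore -->
```

Pages nested one level deeper (for example `docs/source/llm/export-custom-llm.md`) use `../tutorials/...` instead, so the relative path still resolves from the file's own directory.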

11 files changed: +12 −12 lines changed

docs/source/backend-delegates-xnnpack-reference.md
Lines changed: 1 addition & 1 deletion

@@ -70,7 +70,7 @@ Since weight packing creates an extra copy of the weights inside XNNPACK, We fre
 When executing the XNNPACK subgraphs, we prepare the tensor inputs and outputs and feed them to the XNNPACK runtime graph. After executing the runtime graph, the output pointers are filled with the computed tensors.
 
 #### **Profiling**
-We have enabled basic profiling for the XNNPACK delegate that can be enabled with the compiler flag `-DEXECUTORCH_ENABLE_EVENT_TRACER` (add `-DENABLE_XNNPACK_PROFILING` for additional details). With ExecuTorch's Developer Tools integration, you can also now use the Developer Tools to profile the model. You can follow the steps in [Using the ExecuTorch Developer Tools to Profile a Model](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial) on how to profile ExecuTorch models and use Developer Tools' Inspector API to view XNNPACK's internal profiling information. An example implementation is available in the `executor_runner` (see [tutorial here](tutorial-xnnpack-delegate-lowering.md#profiling)).
+We have enabled basic profiling for the XNNPACK delegate that can be enabled with the compiler flag `-DEXECUTORCH_ENABLE_EVENT_TRACER` (add `-DENABLE_XNNPACK_PROFILING` for additional details). With ExecuTorch's Developer Tools integration, you can also now use the Developer Tools to profile the model. You can follow the steps in [Using the ExecuTorch Developer Tools to Profile a Model](tutorials/devtools-integration-tutorial) <!-- @lint-ignore --> on how to profile ExecuTorch models and use Developer Tools' Inspector API to view XNNPACK's internal profiling information. An example implementation is available in the `executor_runner` (see [tutorial here](tutorial-xnnpack-delegate-lowering.md#profiling)).
 
 
 [comment]: <> (TODO: Refactor quantizer to a more official quantization doc)

docs/source/bundled-io.md
Lines changed: 1 addition & 1 deletion

@@ -17,7 +17,7 @@ This stage mainly focuses on the creation of a `BundledProgram` and dumping it o
 
 ### Step 1: Create a Model and Emit its ExecuTorch Program.
 
-ExecuTorch Program can be emitted from user's model by using ExecuTorch APIs. Follow the [Generate and emit sample ExecuTorch program](getting-started.md#exporting) or [Exporting to ExecuTorch tutorial](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial).
+ExecuTorch Program can be emitted from user's model by using ExecuTorch APIs. Follow the [Generate and emit sample ExecuTorch program](getting-started.md#exporting) or [Exporting to ExecuTorch tutorial](tutorials/export-to-executorch-tutorial) <!-- @lint-ignore -->.
 
 ### Step 2: Construct `List[MethodTestSuite]` to hold test info
 

docs/source/devtools-tutorial.md
Lines changed: 1 addition & 1 deletion

@@ -1,3 +1,3 @@
 ## Developer Tools Usage Tutorial
 
-Please refer to the [Developer Tools tutorial](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial) for a walkthrough on how to profile a model in ExecuTorch using the Developer Tools.
+Please refer to the [Developer Tools tutorial](tutorials/devtools-integration-tutorial) <!-- @lint-ignore --> for a walkthrough on how to profile a model in ExecuTorch using the Developer Tools.

docs/source/export-overview.md
Lines changed: 1 addition & 1 deletion

@@ -11,5 +11,5 @@ program, making it easier for you to understand and implement the process.
 
 To learn more about exporting your model:
 
-* Complete the [Exporting to ExecuTorch tutorial](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial).
+* Complete the [Exporting to ExecuTorch tutorial](tutorials/export-to-executorch-tutorial) <!-- @lint-ignore -->.
 * Read the [torch.export documentation](https://pytorch.org/docs/2.1/export.html).

docs/source/extension-module.md
Lines changed: 1 addition & 1 deletion

@@ -6,7 +6,7 @@ In the [Detailed C++ Runtime APIs Tutorial](running-a-model-cpp-tutorial.md), we
 
 ## Example
 
-Let's see how we can run the `SimpleConv` model generated from the [Exporting to ExecuTorch tutorial](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial) using the `Module` and [`TensorPtr`](extension-tensor.md) APIs:
+Let's see how we can run the `SimpleConv` model generated from the [Exporting to ExecuTorch tutorial](tutorials/export-to-executorch-tutorial) <!-- @lint-ignore --> using the `Module` and [`TensorPtr`](extension-tensor.md) APIs:
 
 ```cpp
 #include <executorch/extension/module/module.h>

docs/source/llm/export-custom-llm.md
Lines changed: 2 additions & 2 deletions

@@ -81,7 +81,7 @@ with open("nanogpt.pte", "wb") as file:
 
 To export, run the script with `python export_nanogpt.py` (or python3, as appropriate for your environment). It will generate a `nanogpt.pte` file in the current directory.
 
-For more information, see [Exporting to ExecuTorch](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial) and
+For more information, see [Exporting to ExecuTorch](../tutorials/export-to-executorch-tutorial) <!-- @lint-ignore --> and
 [torch.export](https://pytorch.org/docs/stable/export.html).
 
 ## Backend delegation
@@ -143,7 +143,7 @@ example_inputs = (
 # long as they adhere to the rules specified in the dynamic shape configuration.
 # Here we set the range of 0th model input's 1st dimension as
 # [0, model.config.block_size].
-# See https://pytorch.org/executorch/main/concepts.html#dynamic-shapes
+# See ../concepts.html#dynamic-shapes
 # for details about creating dynamic shapes.
 dynamic_shape = (
     {1: torch.export.Dim("token_dim", max=model.config.block_size - 1)},

docs/source/running-a-model-cpp-tutorial.md
Lines changed: 1 addition & 1 deletion

@@ -12,7 +12,7 @@ each API please see the [Runtime API Reference](executorch-runtime-api-reference
 ## Prerequisites
 
 You will need an ExecuTorch model to follow along. We will be using
-the model `SimpleConv` generated from the [Exporting to ExecuTorch tutorial](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial).
+the model `SimpleConv` generated from the [Exporting to ExecuTorch tutorial](tutorials/export-to-executorch-tutorial) <!-- @lint-ignore -->.
 
 ## Model Loading
 

docs/source/runtime-overview.md
Lines changed: 1 addition & 1 deletion

@@ -11,7 +11,7 @@ Works](intro-how-it-works.md).
 At the highest level, the ExecuTorch runtime is responsible for:
 
 * Loading binary `.pte` program files that were generated by the
-  [`to_executorch()`](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial) step of the
+  [`to_executorch()`](tutorials/export-to-executorch-tutorial) <!-- @lint-ignore --> step of the
   model-lowering process.
 * Executing the series of instructions that implement a lowered model.
 

docs/source/runtime-profiling.md
Lines changed: 1 addition & 1 deletion

@@ -20,4 +20,4 @@ We provide access to all the profiling data via the Python [Inspector API](model
 - Through the Inspector API, users can do a wide range of analysis varying from printing out performance details to doing more finer granular calculation on module level.
 
 
-Please refer to the [Developer Tools tutorial](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial) for a step-by-step walkthrough of the above process on a sample model.
+Please refer to the [Developer Tools tutorial](tutorials/devtools-integration-tutorial) <!-- @lint-ignore --> for a step-by-step walkthrough of the above process on a sample model.

docs/source/tutorial-xnnpack-delegate-lowering.md
Lines changed: 1 addition & 1 deletion

@@ -11,7 +11,7 @@ In this tutorial, you will learn how to export an XNNPACK lowered Model and run
 :::{grid-item-card} Before you begin it is recommended you go through the following:
 :class-card: card-prerequisites
 * [Setting up ExecuTorch](getting-started-setup.rst)
-* [Model Lowering Tutorial](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial)
+* [Model Lowering Tutorial](tutorials/export-to-executorch-tutorial) <!-- @lint-ignore -->
 * [ExecuTorch XNNPACK Delegate](backends-xnnpack.md)
 :::
 ::::

0 commit comments
