diff --git a/.ci/scripts/test_ios_ci.sh b/.ci/scripts/test_ios_ci.sh
index 6908d61483c..a89c2cc5809 100755
--- a/.ci/scripts/test_ios_ci.sh
+++ b/.ci/scripts/test_ios_ci.sh
@@ -36,7 +36,7 @@ say() {
say "Cloning the Demo App"
-git clone --depth 1 https://github.com/pytorch-labs/executorch-examples.git
+git clone --depth 1 https://github.com/meta-pytorch/executorch-examples.git
say "Installing CoreML Backend Requirements"
diff --git a/backends/apple/mps/setup.md b/backends/apple/mps/setup.md
index 0ecb4151e61..f4819c104a5 100644
--- a/backends/apple/mps/setup.md
+++ b/backends/apple/mps/setup.md
@@ -15,7 +15,7 @@ The MPS backend device maps machine learning computational graphs and primitives
* [Introduction to ExecuTorch](../../../docs/source/intro-how-it-works.md)
* [Setting up ExecuTorch](../../../docs/source/getting-started-setup.rst)
* [Building ExecuTorch with CMake](../../../docs/source/using-executorch-cpp.md#building-with-cmake)
-* [ExecuTorch iOS Demo App](https://github.com/pytorch-labs/executorch-examples/tree/main/mv3/apple/ExecuTorchDemo)
+* [ExecuTorch iOS Demo App](https://github.com/meta-pytorch/executorch-examples/tree/main/mv3/apple/ExecuTorchDemo)
* [ExecuTorch iOS LLaMA Demo App](../../../docs/source/llm/llama-demo-ios.md)
:::
::::
diff --git a/backends/test/facto/test_facto.py b/backends/test/facto/test_facto.py
index dc2979a733c..405381f9643 100644
--- a/backends/test/facto/test_facto.py
+++ b/backends/test/facto/test_facto.py
@@ -8,7 +8,7 @@
#
# This file contains logic to run generated operator tests using the FACTO
-# library (https://github.com/pytorch-labs/FACTO). To run the tests, first
+# library (https://github.com/meta-pytorch/FACTO). To run the tests, first
# clone and install FACTO by running pip install . from the FACTO source
# directory. Then, from the executorch root directory, run the following:
#
diff --git a/docs/source/backends-mps.md b/docs/source/backends-mps.md
index 0dcf8b13c13..c1d8d8eaf1d 100644
--- a/docs/source/backends-mps.md
+++ b/docs/source/backends-mps.md
@@ -15,7 +15,7 @@ The MPS backend device maps machine learning computational graphs and primitives
* [Introduction to ExecuTorch](intro-how-it-works.md)
* [Getting Started](getting-started.md)
* [Building ExecuTorch with CMake](using-executorch-building-from-source.md)
-* [ExecuTorch iOS Demo App](https://github.com/pytorch-labs/executorch-examples/tree/main/mv3/apple/ExecuTorchDemo)
+* [ExecuTorch iOS Demo App](https://github.com/meta-pytorch/executorch-examples/tree/main/mv3/apple/ExecuTorchDemo)
* [ExecuTorch iOS LLaMA Demo App](llm/llama-demo-ios.md)
:::
::::
diff --git a/docs/source/getting-started.md b/docs/source/getting-started.md
index dc0cade3fbb..d3d9662f5c3 100644
--- a/docs/source/getting-started.md
+++ b/docs/source/getting-started.md
@@ -101,7 +101,7 @@ print("Comparing against original PyTorch module")
print(torch.allclose(output[0], eager_reference_output, rtol=1e-3, atol=1e-5))
```
-For complete examples of exporting and running the model, please refer to our [examples GitHub repository](https://github.com/pytorch-labs/executorch-examples/tree/main/mv2/python).
+For complete examples of exporting and running the model, please refer to our [examples GitHub repository](https://github.com/meta-pytorch/executorch-examples/tree/main/mv2/python).
Additionally, if you work with Hugging Face models, the [*huggingface/optimum-executorch*](https://github.com/huggingface/optimum-executorch) library simplifies running these models end-to-end with ExecuTorch, using familiar Hugging Face APIs. Visit the repository for specific examples and supported models.
@@ -147,7 +147,7 @@ EValue[] output = model.forward(input_evalue);
float[] scores = output[0].toTensor().getDataAsFloatArray();
```
-For a full example of running a model on Android, see the [DeepLabV3AndroidDemo](https://github.com/pytorch-labs/executorch-examples/tree/main/dl3/android/DeepLabV3Demo). For more information on Android development, including building from source, a full description of the Java APIs, and information on using ExecuTorch from Android native code, see [Using ExecuTorch on Android](using-executorch-android.md).
+For a full example of running a model on Android, see the [DeepLabV3AndroidDemo](https://github.com/meta-pytorch/executorch-examples/tree/main/dl3/android/DeepLabV3Demo). For more information on Android development, including building from source, a full description of the Java APIs, and information on using ExecuTorch from Android native code, see [Using ExecuTorch on Android](using-executorch-android.md).
### iOS
@@ -214,7 +214,7 @@ if (result.ok()) {
For more information on the C++ APIs, see [Running an ExecuTorch Model Using the Module Extension in C++](extension-module.md) and [Managing Tensor Memory in C++](extension-tensor.md).
-For complete examples of building and running C++ application, please refer to our [examples GitHub repository](https://github.com/pytorch-labs/executorch-examples/tree/main/mv2/cpp).
+For complete examples of building and running a C++ application, please refer to our [examples GitHub repository](https://github.com/meta-pytorch/executorch-examples/tree/main/mv2/cpp).
diff --git a/docs/source/index.md b/docs/source/index.md
index 7fc4181c511..ff3eefec7f5 100644
--- a/docs/source/index.md
+++ b/docs/source/index.md
@@ -41,8 +41,8 @@ ExecuTorch provides support for:
- [Quantization](quantization-overview)
- [FAQs](using-executorch-faqs)
#### Examples
-- [Android Demo Apps](https://github.com/pytorch-labs/executorch-examples/tree/main/dl3/android/DeepLabV3Demo#executorch-android-demo-app)
-- [iOS Demo Apps](https://github.com/pytorch-labs/executorch-examples/tree/main/mv3/apple/ExecuTorchDemo)
+- [Android Demo Apps](https://github.com/meta-pytorch/executorch-examples/tree/main/dl3/android/DeepLabV3Demo#executorch-android-demo-app)
+- [iOS Demo Apps](https://github.com/meta-pytorch/executorch-examples/tree/main/mv3/apple/ExecuTorchDemo)
- [Hugging Face Models](https://github.com/huggingface/optimum-executorch/blob/main/README.md)
#### Backends
- [Overview](backends-overview)
@@ -147,7 +147,7 @@ using-executorch-faqs
:hidden:
Building an ExecuTorch Android Demo App
-Building an ExecuTorch iOS Demo App
+Building an ExecuTorch iOS Demo App
tutorial-arm.md
```
diff --git a/docs/source/llm/run-with-c-plus-plus.md b/docs/source/llm/run-with-c-plus-plus.md
index 77c8990b42d..f987fcab2a5 100644
--- a/docs/source/llm/run-with-c-plus-plus.md
+++ b/docs/source/llm/run-with-c-plus-plus.md
@@ -251,7 +251,7 @@ Supported tokenizer formats include:
3. **TikToken**: BPE tokenizers
4. **Llama2c**: BPE tokenizers in the Llama2.c format
-For custom tokenizers, you can find implementations in the [pytorch-labs/tokenizers](https://github.com/pytorch-labs/tokenizers) repository.
+For custom tokenizers, you can find implementations in the [meta-pytorch/tokenizers](https://github.com/meta-pytorch/tokenizers) repository.
## Other APIs
diff --git a/docs/source/using-executorch-android.md b/docs/source/using-executorch-android.md
index ade9a8d665c..23513302063 100644
--- a/docs/source/using-executorch-android.md
+++ b/docs/source/using-executorch-android.md
@@ -201,7 +201,7 @@ adb push extension/module/test/resources/add.pte /data/local/tmp/
This example loads an ExecuTorch module, prepares input data, runs inference, and processes the output data.
-Please use [DeepLabV3AndroidDemo](https://github.com/pytorch-labs/executorch-examples/tree/main/dl3/android/DeepLabV3Demo)
+Please use [DeepLabV3AndroidDemo](https://github.com/meta-pytorch/executorch-examples/tree/main/dl3/android/DeepLabV3Demo)
and [LlamaDemo](https://github.com/pytorch/executorch/tree/main/examples/demo-apps/android/LlamaDemo) for the code examples
using ExecuTorch AAR package.
diff --git a/docs/source/using-executorch-building-from-source.md b/docs/source/using-executorch-building-from-source.md
index 59f3365f661..d48f9d26db7 100644
--- a/docs/source/using-executorch-building-from-source.md
+++ b/docs/source/using-executorch-building-from-source.md
@@ -392,7 +392,7 @@ See backend-specific documentation for more details.
2. Copy over the generated `.xcframework` bundles to your Xcode project, link them against
your targets and don't forget to add an extra linker flag `-all_load`.
-Check out the [iOS Demo App](https://github.com/pytorch-labs/executorch-examples/tree/main/mv3/apple/ExecuTorchDemo) tutorial for more info.
+Check out the [iOS Demo App](https://github.com/meta-pytorch/executorch-examples/tree/main/mv3/apple/ExecuTorchDemo) tutorial for more info.
@@ -499,5 +499,5 @@ Output 0: tensor(sizes=[1, 1000], [
## Next Steps
* [Selective Build](kernel-library-selective-build.md) to link only kernels used by the program. This can provide significant binary size savings.
-* Tutorials on building [Android](https://github.com/pytorch-labs/executorch-examples/tree/main/dl3/android/DeepLabV3Demo#executorch-android-demo-app) and [iOS](https://github.com/pytorch-labs/executorch-examples/tree/main/mv3/apple/ExecuTorchDemo) demo apps.
+* Tutorials on building [Android](https://github.com/meta-pytorch/executorch-examples/tree/main/dl3/android/DeepLabV3Demo#executorch-android-demo-app) and [iOS](https://github.com/meta-pytorch/executorch-examples/tree/main/mv3/apple/ExecuTorchDemo) demo apps.
* Tutorials on deploying applications to embedded devices such as [ARM Cortex-M/Ethos-U](backends-arm-ethos-u.md) and [XTensa HiFi DSP](backends-cadence.md).
diff --git a/docs/source/using-executorch-cpp.md b/docs/source/using-executorch-cpp.md
index b1227aec7b3..f68f412943c 100644
--- a/docs/source/using-executorch-cpp.md
+++ b/docs/source/using-executorch-cpp.md
@@ -32,7 +32,7 @@ if (result.ok()) {
For more information on the Module class, see [Running an ExecuTorch Model Using the Module Extension in C++](extension-module.md). For information on high-level tensor APIs, see [Managing Tensor Memory in C++](extension-tensor.md).
-For complete examples of building and running a C++ application using the Module API, refer to our [examples GitHub repository](https://github.com/pytorch-labs/executorch-examples/tree/main/mv2/cpp).
+For complete examples of building and running a C++ application using the Module API, refer to our [examples GitHub repository](https://github.com/meta-pytorch/executorch-examples/tree/main/mv2/cpp).
## Low-Level APIs
diff --git a/docs/source/using-executorch-export.md b/docs/source/using-executorch-export.md
index 51347e3a3dc..2a887bb346d 100644
--- a/docs/source/using-executorch-export.md
+++ b/docs/source/using-executorch-export.md
@@ -194,7 +194,7 @@ method = program.load_method("forward")
outputs = method.execute([input_tensor])
```
-Pybindings currently does not support loading program and data. To run a model with PTE and PTD components, please use the [Extension Module](extension-module.md). There is also an E2E demo in [executorch-examples](https://github.com/pytorch-labs/executorch-examples/tree/main/program-data-separation).
+Pybindings currently does not support loading program and data. To run a model with PTE and PTD components, please use the [Extension Module](extension-module.md). There is also an E2E demo in [executorch-examples](https://github.com/meta-pytorch/executorch-examples/tree/main/program-data-separation).
For more information, see [Runtime API Reference](executorch-runtime-api-reference.md).
diff --git a/examples/models/llama/experimental/generate.py b/examples/models/llama/experimental/generate.py
index 01b5d6668c3..f97b4c543b2 100644
--- a/examples/models/llama/experimental/generate.py
+++ b/examples/models/llama/experimental/generate.py
@@ -4,7 +4,7 @@
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.
-# Adapted from gpt-fast: https://github.com/pytorch-labs/gpt-fast/blob/main/generate.py
+# Adapted from gpt-fast: https://github.com/meta-pytorch/gpt-fast/blob/main/generate.py
import argparse
from typing import Optional, Tuple
diff --git a/scripts/test_ios.sh b/scripts/test_ios.sh
index b2b3ce94e35..8cb86f8f43c 100755
--- a/scripts/test_ios.sh
+++ b/scripts/test_ios.sh
@@ -54,7 +54,7 @@ say "Installing Requirements"
say "Cloning the Demo App"
-git clone --depth 1 https://github.com/pytorch-labs/executorch-examples.git
+git clone --depth 1 https://github.com/meta-pytorch/executorch-examples.git
say "Installing CoreML Backend Requirements"
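
All of the hunks above apply the same one-to-one URL rewrite: the GitHub org `pytorch-labs` becomes `meta-pytorch` while the rest of each URL is untouched. A minimal sketch of that substitution, plus the repo-wide form a change like this could be generated with (the `git grep | xargs sed` pipeline is an assumption about tooling, not part of this diff, and presumes GNU sed):

```shell
# Demonstrate the substitution this diff applies: rewrite the old GitHub org
# (pytorch-labs) to the new one (meta-pytorch), leaving the rest of the URL intact.
old='https://github.com/pytorch-labs/executorch-examples.git'
new=$(printf '%s' "$old" | sed 's#github.com/pytorch-labs/#github.com/meta-pytorch/#g')
echo "$new"   # -> https://github.com/meta-pytorch/executorch-examples.git

# Across a whole checkout, the same rename can be applied in one pass
# (illustrative only; assumes GNU sed's -i and a git working tree):
# git grep -l 'github.com/pytorch-labs/' \
#   | xargs sed -i 's#github.com/pytorch-labs/#github.com/meta-pytorch/#g'
```

Using `#` as the `sed` delimiter avoids escaping the slashes inside the URL path.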