Conversation

@psiddh psiddh commented Oct 10, 2025

Test Plan:
examples/raspberry_pi/rpi_setup.sh pi5
...
[100%] Linking CXX executable llama_main
[100%] Built target llama_main
[SUCCESS] LLaMA runner built successfully
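
A quick way to confirm the build actually targeted the Pi before copying anything over (this uses the stock binutils `file` utility, not anything from rpi_setup.sh):

   file cmake-out/examples/models/llama/llama_main   # should report an ARM aarch64 ELF executable, not x86-64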

==== Extracting Bundled Libraries ====
[INFO] Extracting GLIBC libraries from toolchain...
[WARNING] Use bundled GLIBC script on RPI device ONLY if you encounter a GLIBC mismatch error when running llama_main.
[SUCCESS] Bundled libraries prepared in: /home/sidart/working/executorch/cmake-out/bundled-libs
[INFO] On Raspberry Pi, run: sudo ./install_libs.sh
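
For context, a GLIBC mismatch surfaces on the device as a loader error like "version `GLIBC_X.YY' not found". A rough way to compare what the binary needs against what the Pi provides, using only stock glibc/binutils tools (an assumption on my part, not a step the script performs):

   # Highest GLIBC symbol version the binary requires (run on either machine)
   objdump -T llama_main | grep -o 'GLIBC_[0-9.]*' | sort -Vu | tail -1

   # GLIBC version the Pi actually ships (run on the Pi)
   ldd --version | head -1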

==== Verifying Build Outputs ====
[INFO] Checking required binaries...
[SUCCESS] ✓ llama_main (6.1M)
[SUCCESS] ✓ libllama_runner.so (4.0M)
[SUCCESS] ✓ libextension_module.a (89K) - static library
[SUCCESS] All required binaries built successfully!
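
Because the extension module is linked statically, llama_main should not list it as a runtime dependency; libllama_runner.so is the only project library the loader needs to find. One way to double-check (again stock binutils, not part of the script):

   readelf -d llama_main | grep NEEDED   # expect libllama_runner.so here, but no libextension_module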

==== Setup Complete! ====

✓ ExecuTorch cross-compilation setup completed successfully!

📦 Built binaries:
• llama_main: /home/sidart/working/executorch/cmake-out/examples/models/llama/llama_main
• libllama_runner.so: /home/sidart/working/executorch/cmake-out/examples/models/llama/runner/libllama_runner.so
• libextension_module.a: Statically linked into llama_main ✅
• Bundled libraries: /home/sidart/working/executorch/cmake-out/bundled-libs/

📋 Next steps:

1. Copy binaries to your Raspberry Pi pi5 (steps 1-3 are consolidated into a sketch after this list):
   scp /home/sidart/working/executorch/cmake-out/examples/models/llama/llama_main pi@<rpi-ip>:~/
   scp /home/sidart/working/executorch/cmake-out/examples/models/llama/runner/libllama_runner.so pi@<rpi-ip>:~/
   scp -r /home/sidart/working/executorch/cmake-out/bundled-libs/ pi@<rpi-ip>:~/

2. Copy shared libraries to system location:
   sudo cp libllama_runner.so /lib/  # Only this one needed!
   sudo ldconfig

3. Dry run to check for GLIBC or other issues:
   ./llama_main --help
   # Ensure there are no GLIBC or other errors before proceeding.

4. If you see GLIBC errors, install bundled libraries:
   cd ~/bundled-libs && sudo ./install_libs.sh
   source setup_env.sh
   # Only do this if you encounter a GLIBC version mismatch or similar error.

5. Download your model and tokenizer:
   # Refer to the official documentation for exact details.

6. Run ExecuTorch with your model:
   ./llama_main --model_path ./model.pte --tokenizer_path ./tokenizer.model --seq_len 128 --prompt "What is the meaning of life ?"
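
Steps 1-3 lend themselves to a single helper run from the build host. A minimal sketch, assuming the paths printed above, SSH access to the Pi, and a placeholder address you substitute yourself (this script is hypothetical, not something rpi_setup.sh generates):

   #!/usr/bin/env bash
   # deploy_to_pi.sh - hypothetical helper; adjust paths and host to your setup
   set -euo pipefail

   PI="pi@<rpi-ip>"                                  # substitute your Pi's address
   OUT=/home/sidart/working/executorch/cmake-out

   # Step 1: copy binaries and bundled libraries to the Pi's home directory
   scp "$OUT/examples/models/llama/llama_main" "$PI:~/"
   scp "$OUT/examples/models/llama/runner/libllama_runner.so" "$PI:~/"
   scp -r "$OUT/bundled-libs/" "$PI:~/"

   # Step 2: install the shared runner library system-wide on the Pi
   ssh "$PI" 'sudo cp ~/libllama_runner.so /lib/ && sudo ldconfig'

   # Step 3: dry run; GLIBC errors here mean step 4 (bundled libs) is needed
   ssh "$PI" '~/llama_main --help'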

🎯 Deployment Summary:
📁 Files to copy: 2 (llama_main + libllama_runner.so)
🏗️ Extension module: Built-in (no separate .so needed)

🔧 Toolchain saved at: /home/sidart/working/executorch/arm-toolchain/arm-gnu-toolchain-14.3.rel1-x86_64-aarch64-none-linux-gnu
🔧 CMake toolchain file: /home/sidart/working/executorch/arm-toolchain-pi5.cmake
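
For the curious, a cross-compilation toolchain file like arm-toolchain-pi5.cmake usually reduces to a handful of variables. An illustrative sketch of what it plausibly contains (the generated file may add Pi-5-specific flags such as Cortex-A76 tuning, so treat this as a reading aid, not the actual file):

   # Illustrative sketch - not the generated arm-toolchain-pi5.cmake
   set(CMAKE_SYSTEM_NAME Linux)
   set(CMAKE_SYSTEM_PROCESSOR aarch64)

   # Path from the setup output above
   set(TOOLCHAIN_ROOT /home/sidart/working/executorch/arm-toolchain/arm-gnu-toolchain-14.3.rel1-x86_64-aarch64-none-linux-gnu)
   set(CMAKE_C_COMPILER   ${TOOLCHAIN_ROOT}/bin/aarch64-none-linux-gnu-gcc)
   set(CMAKE_CXX_COMPILER ${TOOLCHAIN_ROOT}/bin/aarch64-none-linux-gnu-g++)

   # Look for headers and libraries in the target sysroot only, never on the host
   set(CMAKE_FIND_ROOT_PATH ${TOOLCHAIN_ROOT}/aarch64-none-linux-gnu)
   set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
   set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
   set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)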

Happy inferencing! 🚀

pytorch-bot bot commented Oct 10, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/15014

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

❌ 5 New Failures, 3 Cancelled Jobs

As of commit 7120fad with merge base fca0f38:

NEW FAILURES - The following jobs have failed:

CANCELLED JOBS - The following jobs were cancelled. Please retry:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed label Oct 10, 2025
@psiddh psiddh requested a review from digantdesai October 10, 2025 20:06
This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

@psiddh psiddh changed the title from "Summary: RaspberryPi Tutorial to deploy & infer llama models (Rpi4 & Rpi5)" to "Summary: Add the Cross compilation Script for RPi (4 & 5) for Linux host machine" Oct 10, 2025
