- ⚡ Microsecond inference on ARM Cortex-M, RISC-V, x86
- 📦 Zero dependencies - single static library deployment
- 🎯 ONNX native - direct model deployment from ONNX
- 🔧 30+ operators - comprehensive neural network support
- 📷 Built-in image processing - JPEG decode + preprocessing
- 🧠 Smart optimization - quantization, pruning, memory efficiency
- 🏭 Edge AI: Real-time anomaly detection, predictive maintenance
- 🤖 IoT & Autonomous Systems: Lightweight AI models for drones, robots, vehicles, IoT devices
- 📱 Mobile Applications: On-device inference for privacy-preserving AI
- 🏥 Medical Devices: Real-time health monitoring and diagnostics
- 🎮 Gaming: AI-powered gameplay enhancement on embedded systems
Prerequisites
```bash
# Clone and verify installation
git clone https://github.com/ZantFoundation/Z-Ant.git
cd Z-Ant

# Put your ONNX model inside /datasets/models, in a folder with the same name
# as the model, so you have: /datasets/models/my_model/my_model.onnx

# Simplify and prepare the model for the Z-Ant inference engine
./zant input_setter --path /datasets/models/my_model/my_model.onnx --shape "1,3,224,224"  # replace with your model's input shape

# Generate test data
./zant user_tests_gen --model my_model

# --- GENERATING THE SINGLE-NODE LIBS AND TESTING THEM ---
# For an N-node model this creates N ONNX models, one per node, each with its own tests.
./zant onnx_extract --path /datasets/models/my_model/my_model.onnx

# Generate libs for the extracted nodes
zig build extractor-gen -Dmodel="my_model"

# Test the extracted nodes
zig build extractor-test -Dmodel="my_model"

# --- GENERATING THE LIBRARY AND TESTS ---
# Generate code for a specific model
zig build lib-gen -Dmodel="my_model" -Denable_user_tests [-Ddynamic -Ddo_export -Dlog -Dcomm ... ]

# Test the generated code
zig build lib-test -Dmodel="my_model" -Denable_user_tests [-Ddynamic -Ddo_export -Dlog -Dcomm ... ]

# Build the static library
zig build lib -Dmodel="my_model" [-Dtarget=... -Dcpu=...]
```
IMPORTANT: see the ZANT CLI documentation for more details!
| Command | What it does |
|---|---|
| `zig build test` | Verify everything works |
| `zig build codegen -Dmodel=<name>` | Generate code from ONNX model |
| `zig build lib -Dmodel=<name>` | Build deployable static library |
| `zig build test-generated-lib -Dmodel=<name>` | Test your generated code |
| Platform | Target Flag | CPU Examples |
|---|---|---|
| ARM Cortex-M | `-Dtarget=thumb-freestanding` | `-Dcpu=cortex_m33`, `-Dcpu=cortex_m4` |
| RISC-V | `-Dtarget=riscv32-freestanding` | `-Dcpu=generic_rv32` |
| x86/Native | `-Dtarget=native` | (auto-detected) |
| Option | Description | Example |
|---|---|---|
| `-Dmodel=<name>` | Your model name | `-Dmodel=my_classifier` |
| `-Dmodel_path=<path>` | Custom ONNX file | `-Dmodel_path=models/custom.onnx` |
| `-Dlog=true` | Enable detailed logging | `-Dlog=true` |
| `-Dcomm=true` | Add comments to generated code | `-Dcomm=true` |
Z-Ant includes Python scripts for ONNX model preparation:
```bash
# Prepare your model: set input shapes and infer all tensor shapes
./zant input_setter --path model.onnx --shape 1,3,224,224

# Generate test data for validation
./zant user_tests_gen --model model.onnx --iterations 10

# Create operator test models
./zant onnx_gen --op Conv --iterations 5
```
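The `--shape` flag takes a comma-separated list of dimensions. As a minimal sketch (the variable names here are illustrative, not part of Z-Ant), this is how such an argument maps to a tensor rank and the flat input buffer size your firmware must allocate:

```bash
# Parse a comma-separated shape string and compute the element count.
SHAPE="1,3,224,224"
IFS=',' read -r -a dims <<< "$SHAPE"
total=1
for d in "${dims[@]}"; do total=$((total * d)); done
echo "rank: ${#dims[@]}"   # rank: 4
echo "elements: $total"    # elements: 150528
```

For a `1,3,224,224` image classifier input, the generated library therefore expects a buffer of 150,528 values.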
```cmake
target_link_libraries(your_project PUBLIC path/to/libzant.a)
```
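In context, a minimal CMake setup might look like the following sketch (project name, source file, and paths are placeholders, not part of Z-Ant):

```cmake
cmake_minimum_required(VERSION 3.16)
project(my_edge_app C)

add_executable(my_edge_app src/main.c)

# Expose the generated header directory and link the static library
# produced by `zig build lib` (paths are examples).
target_include_directories(my_edge_app PUBLIC ${CMAKE_SOURCE_DIR}/generated/my_model)
target_link_libraries(my_edge_app PUBLIC ${CMAKE_SOURCE_DIR}/generated/my_model/libzant.a)
```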
```c
#include "lib_my_model.h"

// Optional: Set custom logging
extern void setLogFunction(void (*log_function)(uint8_t *string));

// Your inference code here
```
```bash
# Generate optimized library for image classifier
zig build codegen -Dmodel=mobilenet_v2 -Dmodel_path=models/mobilenet.onnx
zig build lib -Dmodel=mobilenet_v2 -Dtarget=thumb-freestanding -Dcpu=cortex_m33 -Doutput_path=deployment/

# Test on different architectures
zig build test-generated-lib -Dmodel=my_model -Dtarget=native
zig build test-generated-lib -Dmodel=my_model -Dtarget=thumb-freestanding -Dcpu=cortex_m4
```
```bash
# Run full test suite
zig build test --summary all

# Test heavy computational operations
zig build test -Dheavy=true

# Test specific operator implementations
zig build op-codegen-test -Dop=Conv

# Generate and test single operations
zig build op-codegen-gen -Dop=Add
```
```
Z-Ant/
├── src/              # Core source code
│   ├── Core/         # Neural network core functionality
│   ├── CodeGen/      # Code generation engine
│   ├── ImageToTensor/# Image preprocessing pipeline
│   ├── onnx/         # ONNX model parsing
│   └── Utils/        # Utilities and helpers
├── tests/            # Comprehensive test suite
├── datasets/         # Sample models and test data
├── generated/        # Generated code output
├── examples/         # Arduino and microcontroller examples
└── docs/             # Documentation and guides
```
We welcome contributions from developers of all skill levels! Here's how to get involved:
- Fork the repository on GitHub
- Clone your fork locally
- Create a feature branch for your work
- Make your changes following our coding standards
- Run tests to ensure everything works
- Submit a pull request for review
- 🐛 Bug Reports: Found an issue? Let us know!
- ✨ Feature Requests: Have an idea? Share it with us!
- 💻 Code Contributions: Improve the codebase or add new features
- 📚 Documentation: Help make the project easier to understand
- 🧪 Testing: Write tests or improve test coverage
- Follow our Code of Conduct
- Check out the Contributing Guide for detailed guidelines
- Join discussions on GitHub Issues and Discussions
All contributors are recognized in our Contributors list. Thank you for helping shape the future of tinyML!
This project is licensed under the terms of the LICENSE file in the repository.
Join us in revolutionizing AI on edge devices! 🚀
GitHub • Documentation • Examples • Community