A production-ready AI agent platform that scales. Built with Go, Rust, and Python.
PolyAgent lets you build AI agents that actually work in production:
- Secure execution - Python code runs in an isolated WASI sandbox
- Budget control - Set token limits to prevent runaway costs
- Multi-agent workflows - Coordinate multiple agents automatically
- Provider agnostic - Works with OpenAI, Anthropic, Google, and others
- Debugging - Replay any failed workflow step-by-step
- Observability - Prometheus metrics and distributed tracing
# Clone and setup
git clone https://github.com/Kocoro-lab/PolyAgent.git
cd PolyAgent
make setup-env
# Add your API key
echo "OPENAI_API_KEY=your-key-here" >> .env
# Start everything
make dev
# Test it works
make smoke
# Using REST API
curl -X POST http://localhost:8080/api/v1/tasks \
-H "Content-Type: application/json" \
-d '{"query": "Analyze the sentiment of this text: PolyAgent is great!"}'
# Using script
./scripts/submit_task.sh "What is 2+2?"
- Python code runs in a WASI sandbox
- No access to the host system
- Memory and time limits enforced
- Automatic task decomposition
- Parallel execution where possible
- Built-in error handling and retries
- Set hard token limits per user/session (see the client-side sketch after this list)
- Real-time usage tracking
- Cost alerts and cutoffs
- Horizontal scaling with Temporal workflows
- PostgreSQL for state, Redis for sessions
- Comprehensive monitoring and alerting
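To make budget control concrete, here is a minimal client-side sketch that complements the server-side limits by refusing to submit new tasks once a local per-session cap is reached. Only the POST /api/v1/tasks endpoint and its payload come from this README; the BudgetGuard helper and its cap are illustrative assumptions, not part of the PolyAgent API.

```python
# Hypothetical client-side budget guard; PolyAgent enforces the real limits server-side.
# Only the POST /api/v1/tasks endpoint and its payload are taken from this README.
import requests


class BudgetGuard:
    """Refuse to submit more tasks once a local per-session cap is hit (illustrative)."""

    def __init__(self, base_url: str, max_tasks_per_session: int = 20):
        self.base_url = base_url
        self.max_tasks = max_tasks_per_session
        self.submitted: dict[str, int] = {}  # session_id -> tasks submitted so far

    def submit(self, session_id: str, query: str) -> dict:
        used = self.submitted.get(session_id, 0)
        if used >= self.max_tasks:
            raise RuntimeError(f"local budget exhausted for session {session_id}")
        resp = requests.post(
            f"{self.base_url}/api/v1/tasks",
            json={"query": query, "session_id": session_id},
            timeout=30,
        )
        resp.raise_for_status()
        self.submitted[session_id] = used + 1
        return resp.json()


guard = BudgetGuard("http://localhost:8080")
print(guard.submit("session-123", "What is 2+2?"))
```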
# Submit task
curl -X POST http://localhost:8080/api/v1/tasks \
-H "Content-Type: application/json" \
-d '{"query": "Your task here", "session_id": "session-123"}'
# Check status
curl http://localhost:8080/api/v1/tasks/task-id-123
# Stream events
curl -N "http://localhost:8081/stream/sse?workflow_id=task-id-123"
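The same REST endpoints can be driven from code. Below is a minimal Python sketch covering submit, status polling, and SSE streaming, assuming only the routes shown above; the response field names (task_id, status) and the status values are guesses, so adjust them to whatever the API actually returns.

```python
# Minimal REST client sketch for the endpoints shown above.
# Response field names ("task_id", "status") and status values are assumptions.
import time

import requests

BASE = "http://localhost:8080/api/v1"
STREAM = "http://localhost:8081/stream/sse"


def submit_task(query: str, session_id: str) -> dict:
    """POST a task and return the parsed JSON response."""
    resp = requests.post(
        f"{BASE}/tasks",
        json={"query": query, "session_id": session_id},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


def wait_for_task(task_id: str, poll_seconds: float = 2.0) -> dict:
    """Poll GET /api/v1/tasks/<id> until the task leaves a running state."""
    while True:
        resp = requests.get(f"{BASE}/tasks/{task_id}", timeout=30)
        resp.raise_for_status()
        body = resp.json()
        if body.get("status") not in ("PENDING", "RUNNING"):  # assumed status values
            return body
        time.sleep(poll_seconds)


def stream_events(workflow_id: str) -> None:
    """Print raw SSE lines from the streaming endpoint."""
    with requests.get(STREAM, params={"workflow_id": workflow_id}, stream=True, timeout=None) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines(decode_unicode=True):
            if line:
                print(line)


if __name__ == "__main__":
    task = submit_task("Analyze the sentiment of this text: PolyAgent is great!", "session-123")
    print(wait_for_task(task["task_id"]))  # "task_id" field name is an assumption
```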
# Submit via gRPC
grpcurl -plaintext \
-d '{"query":"Your task","sessionId":"session-123"}' \
localhost:50052 polyagent.orchestrator.OrchestratorService/SubmitTask

┌─────────────┐     ┌──────────────┐     ┌─────────────┐
│   Client    │────▶│ Orchestrator │────▶│ Agent Core  │
│ (HTTP/gRPC) │     │     (Go)     │     │   (Rust)    │
└─────────────┘     └──────────────┘     └─────────────┘
                           │                    │
                           ▼                    ▼
                    ┌──────────────┐     ┌─────────────┐
                    │   Temporal   │     │ LLM Service │
                    │  Workflows   │     │  (Python)   │
                    └──────────────┘     └─────────────┘
Key configuration files:
- config/polyagent.yaml - Main platform config
- .env - API keys and secrets
- config/models.yaml - LLM provider settings
# Run tests
make test
# Format code
make fmt
# Run linters
make lint
# View logs
make logs
# Check service health
make ps
When something goes wrong:
# Find the workflow ID from logs
grep ERROR logs/orchestrator.log
# Replay the workflow
./scripts/replay_workflow.sh workflow-id-123
# Check Temporal UI
open http://localhost:8088
- Set proper API keys in .env
- Configure resource limits in config/polyagent.yaml
- Set up monitoring (Prometheus + Grafana included)
- Review security policies in config/policies/
- Python Code Execution
- Multi-Agent Workflows
- Authentication & Multi-tenancy
- Streaming APIs
- Platform Architecture
- Docker and Docker Compose
- Go 1.21+ (for development)
- Rust 1.70+ (for development)
- Python 3.11+ (for development)
MIT - see LICENSE file.
- GitHub Issues: Report bugs
- Discussions: Ask questions
- Discord: Join community
Built for production. Ready to scale.
This project is based on Shannon and has been iterated on to suit my own needs.