This starter template helps you quickly get started with the BeeAI framework for Python
- 🔒 Safely execute arbitrary Python code via the BeeAI Code Interpreter.
- ⚡ Fully fledged Python project setup with linting and formatting.
- Python Version 3.11+
- uv (fast Python package manager) - See installation: https://docs.astral.sh/uv/getting-started/installation/
- Container system with Compose support (e.g., Docker)
- LLM provider - external (WatsonX, OpenAI, Groq, ...) or local Ollama
- IDE/Code Editor (e.g., WebStorm, VSCode) - Optional but recommended for smooth configuration handling
Step 1: Clone this repository or use it as a template
git clone https://github.com/i-am-bee/beeai-framework-py-starter.git
cd beeai-framework-py-starter
Step 2: Install dependencies
uv sync
Step 3: Create a .env file with the contents of .env.template
Step 4: Install and start Ollama, then pull the granite3.3:8b model:
ollama pull granite3.3:8b
Step 5: Start all services related to beeai-code-interpreter
uv run poe infra --type start
Note
beeai-code-interpreter runs on http://127.0.0.1:50081
Get complete visibility of the agent's inner workings via OpenInference Instrumentation for BeeAI.
Before updating the framework version in this repository, make sure OpenInference Instrumentation for BeeAI supports it.
- To see spans in Phoenix, start a Phoenix server. This can be done with a single Docker command:
docker run -p 6006:6006 -i -t arizephoenix/phoenix:latest
- Run the agent
uv run python beeai_framework_starter/agent_observe.py
- You should see your spans exported in the console. If you have a Phoenix server running locally, head to http://localhost:6006 to view them.
Now that you’ve set up your project, let’s run the agent example. To exit the conversation, type "q" and press enter.
You have two options:
Option 1: Interactive mode
uv run python beeai_framework_starter/agent.py
Option 2: Define your prompt up front
uv run python beeai_framework_starter/agent.py <<< "I am going out tomorrow morning to walk around Boston. What should I plan to wear?"
Note
Notice that this prompt triggers the agent to call a tool.
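The two invocation modes above boil down to where the prompt comes from. As an illustrative sketch (not the agent's actual code), a script can support both by reading from stdin when input is piped and prompting interactively otherwise:

```python
# Illustrative sketch: accept a prompt interactively or piped via stdin,
# mirroring the two options above. Names here are hypothetical.
import sys

def read_prompt(stream=None) -> str:
    """Return the user's prompt from an interactive terminal or a pipe."""
    stream = stream or sys.stdin
    if stream.isatty():
        return input("User: ")  # interactive mode
    return stream.read().strip()  # piped mode, e.g. via <<< "..."
```

For example, `read_prompt(io.StringIO("hello"))` returns `"hello"`, while running in a terminal falls back to the interactive prompt.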
Now let's run the code interpreter agent example located at beeai_framework_starter/agent_code_interpreter.py.
Try the PythonTool and ask the agent to perform a complex calculation:
uv run python beeai_framework_starter/agent_code_interpreter.py <<< "Calculate 534*342?"
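As a quick sanity check on the agent's answer, the product can be computed directly:

```python
# The exact product the agent is asked for in the prompt above.
print(534 * 342)  # → 182628
```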
Try the SandboxTool and run a custom Python function, get_riddle():
uv run python beeai_framework_starter/agent_code_interpreter.py <<< "Generate a riddle"
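The template's own get_riddle() implementation isn't shown here, but a custom function of the kind a SandboxTool can expose might look like this hypothetical sketch (names and return shape assumed):

```python
# Hypothetical sketch of a custom sandbox function like get_riddle();
# not the template's actual code.
import random

def get_riddle() -> dict:
    """Return a randomly chosen riddle as a question/answer pair."""
    riddles = [
        {"question": "What has keys but can't open locks?", "answer": "A piano"},
        {"question": "What gets wetter as it dries?", "answer": "A towel"},
    ]
    return random.choice(riddles)

print(get_riddle()["question"])
```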
This example demonstrates a multi-agent workflow where different agents work together to provide a comprehensive understanding of a location.
The workflow includes three agents:
- Researcher: Gathers information about the location using the Wikipedia tool.
- WeatherForecaster: Retrieves and reports weather details using the OpenMeteo API.
- DataSynthesizer: Combines the historical and weather data into a final summary.
To run the workflow:
uv run python beeai_framework_starter/agent_workflow.py
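To illustrate the data flow (this is not the BeeAI workflow API, just plain functions with hypothetical stand-in outputs), the three roles chain together like this:

```python
# Illustrative sketch of the three-agent workflow as plain functions.
# All names and return values here are stand-ins, not BeeAI APIs.
def researcher(location: str) -> dict:
    # The real agent gathers information via the Wikipedia tool.
    return {"location": location, "history": f"Background notes on {location}"}

def weather_forecaster(location: str) -> dict:
    # The real agent retrieves weather details via the OpenMeteo API.
    return {"location": location, "forecast": f"Forecast for {location}"}

def data_synthesizer(research: dict, weather: dict) -> str:
    # The real agent combines both inputs into a final summary.
    return f"{research['history']}; {weather['forecast']}"

def run_workflow(location: str) -> str:
    return data_synthesizer(researcher(location), weather_forecaster(location))

print(run_workflow("Boston"))
```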
For additional examples to try, check out the examples directory of the BeeAI framework for Python repository.
If you are developing with this repository as a base, or updating this template, see ./DEVELOP.md for additional information.