
Add a sample agent for deploying and interacting with Model Garden models. #294

Open · wants to merge 3 commits into main
1 change: 1 addition & 0 deletions python/agents/model_garden_agent/.hgignore
@@ -0,0 +1 @@
.env
126 changes: 126 additions & 0 deletions python/agents/model_garden_agent/README.md
@@ -0,0 +1,126 @@
# Model Garden Deploy Agent
## Overview of the Agent

This project implements an ADK-based agent that provides a seamless, conversational interface for Google's Vertex AI Model Garden, a Google Cloud service for discovering, customizing, and deploying a variety of models from Google and Google Cloud partners.
Current methods of interacting with Model Garden, while powerful, are not very user-friendly: they often require users to write code or navigate a complex web console. This agent bridges that gap by letting users discover models, deploy them to endpoints, and run inference on them through natural-language prompts.

## Agent Details
### Features
The key features of the Model Garden Agent include:

| Feature | Description |
| :--- | :--- |
| Interaction Type | Conversational |
| Complexity | Advanced |
| Agent Type | Multi Agent |
| Components | Tools, AgentTools |
| Vertical | LLMOps |

## Setup and Installation
### Prerequisites

* Python 3.11+
* Poetry
  * For dependency management and packaging. Please follow the instructions on the official Poetry website for installation.
    ```
    pip install poetry
    ```
* A project on Google Cloud Platform
* Google Cloud CLI
  * For installation, please follow the instructions on the official [Google Cloud website](https://cloud.google.com/sdk/docs/install).

## Installation
```
# Clone this repository.
git clone https://github.com/google/adk-samples.git
cd adk-samples/python/agents/model_garden_agent
# Install the package and dependencies.
poetry install
```
## Configuration
Set up Google Cloud credentials.
* You may set the following environment variables in your shell, or in a `.env` file instead (a sample `.env` file is shown below).

```
export GOOGLE_GENAI_USE_VERTEXAI=true
export GOOGLE_CLOUD_PROJECT=<your-project-id>
export GOOGLE_CLOUD_LOCATION=<your-project-location>
# Only required for deployment on Agent Engine
export GOOGLE_CLOUD_STORAGE_BUCKET=<your-storage-bucket>
```
* Authenticate your Google Cloud account:

```
gcloud auth application-default login
gcloud auth application-default set-quota-project $GOOGLE_CLOUD_PROJECT
```
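
For reference, an equivalent `.env` file holds the same values without the `export` keyword:
```
GOOGLE_GENAI_USE_VERTEXAI=true
GOOGLE_CLOUD_PROJECT=<your-project-id>
GOOGLE_CLOUD_LOCATION=<your-project-location>
# Only required for deployment on Agent Engine
GOOGLE_CLOUD_STORAGE_BUCKET=<your-storage-bucket>
```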
## Running the Agent
### Using `adk`
ADK provides convenient ways to bring up agents locally and interact with them. You may talk to the agent using the CLI:
```
adk run model_garden_agent
```
Or on a web interface:
```
adk web
```

The `adk web` command starts a web server on your machine and prints its URL. Open the URL, select "model_garden_agent" in the top-left drop-down menu, and a chatbot interface will appear on the right. The conversation is initially blank. Here is an example request you can send to verify the agent is working, followed by a typical response:
```
Who are you?
```
```
I am a helpful agent that assists users in deploying and managing AI models using Vertex AI Model Garden. I can help you with tasks like discovering models, getting setup recommendations, deploying models to endpoints, and running inference.
```
## Example Interaction

## Running Tests and Evaluation
For running tests and evaluation, install the extra dependencies:
```
poetry install --with dev
```
Then the tests and the evaluation can be run from the `model_garden_agent` directory using the `pytest` module:
```
python3 -m pytest tests
python3 -m pytest eval
```
`tests` runs the agent on a sample request and makes sure that every component is functional. `eval` demonstrates how to evaluate the agent using the `AgentEvaluator` in ADK: it sends a couple of requests to the agent and expects the responses to match a pre-defined reference reasonably well.
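
As an illustration, an eval test in this style typically looks like the minimal sketch below. The data path `data/simple.test.json`, the `AgentEvaluator.evaluate` parameters, and the presence of `pytest-asyncio` in the dev dependencies are assumptions; the actual files under `eval/` may differ.
```
# Minimal sketch of an eval test; the data path and parameters are assumptions.
import pathlib

import pytest
from google.adk.evaluation.agent_evaluator import AgentEvaluator


@pytest.mark.asyncio
async def test_eval_simple():
    """Evaluates the agent against a small, pre-defined eval set."""
    await AgentEvaluator.evaluate(
        agent_module="model_garden_agent",
        eval_dataset_file_path_or_dir=str(
            pathlib.Path(__file__).parent / "data" / "simple.test.json"
        ),
        num_runs=1,
    )
```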

## Deployment
The Model Garden Agent can be deployed to Vertex AI Agent Engine using the following commands:
```
poetry install --with deployment
python3 deployment/deploy.py --create
```
When the deployment finishes, it will print a line like this:
```
Created remote agent: projects/<PROJECT_NUMBER>/locations/<PROJECT_LOCATION>/reasoningEngines/<AGENT_ENGINE_ID>
```
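
Conceptually, the `--create` path wraps the root agent in an Agent Engine application and registers it with Vertex AI. The following is a rough sketch of that flow, not the actual `deploy.py`; the requirements list and options are assumptions.
```
# Rough sketch of what a --create deployment does; not the actual deploy.py.
import os

import vertexai
from vertexai import agent_engines
from vertexai.preview.reasoning_engines import AdkApp

from model_garden_agent.agent import root_agent

vertexai.init(
    project=os.environ["GOOGLE_CLOUD_PROJECT"],
    location=os.environ["GOOGLE_CLOUD_LOCATION"],
    staging_bucket=f"gs://{os.environ['GOOGLE_CLOUD_STORAGE_BUCKET']}",
)

# Wrap the ADK root agent so Agent Engine can host it.
app = AdkApp(agent=root_agent, enable_tracing=True)

remote_agent = agent_engines.create(
    agent_engine=app,
    requirements=["google-adk", "google-cloud-aiplatform[adk,agent_engines]"],
)
print(f"Created remote agent: {remote_agent.resource_name}")
```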
If you forget the `AGENT_ENGINE_ID`, you can list the existing agents using:
```
python3 deployment/deploy.py --list
```
The output will look similar to:
```
All remote agents:

123456789 ("model_garden_agent")
- Create time: 2025-05-10 09:33:46.188760+00:00
- Update time: 2025-05-10 09:34:32.763434+00:00
```
You may interact with the deployed agent using the `test_deployment.py` script:
```
$ export USER_ID=<any string>
$ python3 deployment/test_deployment.py --resource_id=${AGENT_ENGINE_ID} --user_id=${USER_ID}
Found agent with resource ID: ...
Created session for user ID: ...
Type 'quit' to exit.
Input: Hello. What can you do for me?
Response: Hello! I'm a helpful agent that assists with deploying and managing AI models using Vertex AI Model Garden. I can help you discover models, get setup recommendations, deploy models to endpoints, and run inference on them.

To get started, tell me what kind of model you are looking for or which model you would like to deploy.
```
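
Programmatically, querying the deployed agent follows roughly this pattern. This is a sketch, not the contents of `test_deployment.py`; session handling in the actual script may differ.
```
# Sketch of querying a deployed agent; the real test_deployment.py may differ.
from vertexai import agent_engines

# Use the full reasoningEngines resource name printed at deployment time.
remote_agent = agent_engines.get(
    "projects/<PROJECT_NUMBER>/locations/<PROJECT_LOCATION>/reasoningEngines/<AGENT_ENGINE_ID>"
)

session = remote_agent.create_session(user_id="test-user")

# stream_query yields events as the agent produces them.
for event in remote_agent.stream_query(
    user_id="test-user",
    session_id=session["id"],
    message="Hello. What can you do for me?",
):
    print(event)
```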
To delete the deployed agent, you may run the following command:
```
python3 deployment/deploy.py --delete --resource_id=${AGENT_ENGINE_ID}
```
3 changes: 3 additions & 0 deletions python/agents/model_garden_agent/__init__.py
@@ -0,0 +1,3 @@
"""Model Garden Agent."""

from . import agent
131 changes: 131 additions & 0 deletions python/agents/model_garden_agent/agent.py
@@ -0,0 +1,131 @@
"""Agent."""

import os

from google.adk.agents import Agent
from google.adk.agents import SequentialAgent
from google.adk.tools import agent_tool
from google.adk.tools import google_search
from google.api_core import exceptions
import vertexai

from . import deploy_model_agent
from . import model_discov_agent
from . import model_inference_agent
from . import setup_rec_agent

NotFound = exceptions.NotFound
InvalidArgument = exceptions.InvalidArgument
GoogleAPIError = exceptions.GoogleAPIError
ServiceUnavailable = exceptions.ServiceUnavailable

vertexai.init(
project=os.environ.get("GOOGLE_CLOUD_PROJECT", None),
location=os.environ.get("GOOGLE_CLOUD_LOCATION", None),
)

search_agent = Agent(
model="gemini-2.5-flash",
name="search_agent",
description="""
Searches the web and preferably Vertex AI Model Garden Platform to help users find AI models for specific tasks using public information.
""",
instruction="""
You're a specialist in Google Search.
Your purpose is to help users discover and compare AI models from Vertex AI Model Garden.
ALWAYS cite sources when providing information, like the model name and the source of the information directly.
Don't return any information that is not directly available in the sources.

When a user asks about models to use for a specific task (e.g., image generation), your job is to:
- Search the Vertex AI Model Garden for relevant models
- Return a clean, bulleted list of multiple model options
- Include a short 1-sentence description of each model
- Only include what’s necessary: the model name and what it's good at
- Avoid making up any model names or capabilities not found in documentation

Preferred sources:
- Vertex AI Model Garden documentation
- Google Cloud blog/model comparison posts (only if relevant to Vertex AI)
- GitHub repos linked from Vertex AI Model Garden

Output example:
- **Imagen 2**: High-quality text-to-image generation, fast to deploy via Vertex AI with notebooks.
- **SDXL Lite**: Lightweight version of Stable Diffusion, optimized for cost-effective and fast deployment.
- **DreamBooth (Vertex Fine-Tuned)**: Customizable image generation, fine-tuned on your own data.

Stick to concise summaries and avoid general platform details or features unrelated to the models themselves.

""",
tools=[google_search],
)

search_agent_tool = agent_tool.AgentTool(agent=search_agent)
discovery_agent_tool = agent_tool.AgentTool(
agent=model_discov_agent.model_discovery_agent
)
deploy_model_agent_tool = agent_tool.AgentTool(
agent=deploy_model_agent.deploy_model_agent
)
setup_rec_agent_tool = agent_tool.AgentTool(
agent=setup_rec_agent.setup_rec_agent
)
model_inference_agent_tool = agent_tool.AgentTool(
agent=model_inference_agent.model_inference_agent
)

root_agent = Agent(
model="gemini-2.5-flash",
name="model_garden_deploy_agent",
tools=[
search_agent_tool,
deploy_model_agent_tool,
model_inference_agent_tool,
discovery_agent_tool,
setup_rec_agent_tool,
],
description=("""
A helpful agent that helps users deploy and manage AI models using Vertex AI Model Garden.
This agent coordinates between multiple domain-specific agents to complete tasks such as model
discovery, retrieving setup recommendations, deploying models to endpoints, running inference on deployed models,
listing endpoints, and deleting endpoints.
"""),
instruction=(""""
You are the primary interface for users interacting with the Vertex AI Model Garden Assistant.

Your goal is to help users:
- Discover, compare, and understand available models
- Get recommendations for deployment setups
- Deploy models to endpoints
- Generate inference code samples

You should act as a unified assistant — do not reveal sub-agents, tools, or system internals. The user should always feel like they are speaking to a single smart assistant.

Depending on the user’s request, route the task to the appropriate tool or full workflow.

Use the following guidance:
- If the user asks for a full deployment journey (e.g., "Help me deploy a model that can generate images"), use the Guided Workflow Agent (a SequentialAgent).
- If the user makes a targeted request (e.g., "List deployable models," "Give me setup recommendations for Gemma"), call the specific tool that handles that task.
- Use natural conversation. Ask clarifying questions if the request is ambiguous.
- Never say you’re using another agent. Just respond with helpful, friendly answers as if you're doing it all.

You have access to tools that allow you to:
- Search and discover models
- Get configuration recommendations
- Deploy models and list endpoints
- Generate inference examples
- Run full workflows (search, setup, deploy, inference)

Always maintain context and guide users smoothly through the model lifecycle.
"""),
)

guided_agent = SequentialAgent(
name="guided_agent",
sub_agents=[
search_agent,
model_discov_agent.model_discovery_agent,
setup_rec_agent.setup_rec_agent,
deploy_model_agent.deploy_model_agent,
model_inference_agent.model_inference_agent,
],
)
131 changes: 131 additions & 0 deletions python/agents/model_garden_agent/agent.py.orig