Commit ee0495c

Introduce Workflows (#15067)
1 parent 6cba89a commit ee0495c

54 files changed · +3082 −51 lines


docs/docs/community/llama_packs/index.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -8,7 +8,7 @@ This directly tackles a big pain point in building LLM apps; every use case requ
 
 They can be used in two ways:
 
-- On one hand, they are **prepackaged modules** that can be initialized with parameters and run out of the box to achieve a given use case (whether that’s a full RAG pipeline, application template, or more). You can also import submodules (e.g. LLMs, query engines) to use directly.
+- On one hand, they are **prepackaged modules** that can be initialized with parameters and run out of the box to achieve a given use case (whether that’s a full RAG flow, application template, or more). You can also import submodules (e.g. LLMs, query engines) to use directly.
 - On the other hand, LlamaPacks are **templates** that you can inspect, modify, and use.
 
 **All packs are found on [LlamaHub](https://llamahub.ai/).** Go to the dropdown menu and select "LlamaPacks" to filter by packs.
```
Lines changed: 377 additions & 0 deletions
@@ -0,0 +1,377 @@
# Workflow for a Function Calling Agent

This notebook walks through setting up a `Workflow` to construct a function calling agent from scratch.

Function calling agents work by using an LLM that supports tools/functions in its API (OpenAI, Ollama, Anthropic, etc.) to call functions and use tools.

Our workflow will be stateful with memory, and will be able to call the LLM to select tools and process incoming user messages.
```python
!pip install -U llama-index
```
```python
import os

os.environ["OPENAI_API_KEY"] = "sk-proj-..."
```
Since workflows are async-first, this all runs fine in a notebook. If you were running in your own code, you would want to use `asyncio.run()` to start an async event loop if one isn't already running.

```python
async def main():
    <async code>

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
```
## Designing the Workflow

An agent consists of several steps:
1. Handling the latest incoming user message, including adding it to memory and getting the latest chat history
2. Calling the LLM with tools + chat history
3. Parsing out tool calls (if any)
4. If there are tool calls, calling them, and looping until there are none
5. When there are no tool calls, returning the LLM response

### The Workflow Events

To handle these steps, we need to define a few events:
1. An event to handle new messages and prepare the chat history
2. An event to trigger tool calls
3. An event to handle the results of tool calls

The other steps will use the built-in `StartEvent` and `StopEvent` events.
```python
from llama_index.core.llms import ChatMessage
from llama_index.core.tools import ToolSelection, ToolOutput
from llama_index.core.workflow import Event


class InputEvent(Event):
    input: list[ChatMessage]


class ToolCallEvent(Event):
    tool_calls: list[ToolSelection]


class FunctionOutputEvent(Event):
    output: ToolOutput
```
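These events behave like typed models: you construct them with keyword arguments and read fields back as attributes, exactly as the workflow steps below do. A quick sketch, not part of the original notebook:

```python
from llama_index.core.llms import ChatMessage

# Events carry typed payloads between steps; fields are read as attributes,
# matching the `InputEvent(input=...)` / `ev.input` usage in the workflow below.
ev = InputEvent(input=[ChatMessage(role="user", content="Hello!")])
print(ev.input[0].content)  # -> Hello!
```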
### The Workflow Itself

With our events defined, we can construct our workflow and steps.

Note that the workflow automatically validates itself using type annotations, so the type annotations on our steps are very helpful!
```python
from typing import Any, List

from llama_index.core.llms import ChatMessage
from llama_index.core.llms.function_calling import FunctionCallingLLM
from llama_index.core.memory import ChatMemoryBuffer
from llama_index.core.tools.types import BaseTool
from llama_index.core.workflow import Workflow, StartEvent, StopEvent, step
from llama_index.llms.openai import OpenAI


class FunctionCallingAgent(Workflow):
    def __init__(
        self,
        *args: Any,
        llm: FunctionCallingLLM | None = None,
        tools: List[BaseTool] | None = None,
        **kwargs: Any,
    ) -> None:
        super().__init__(*args, **kwargs)
        self.tools = tools or []

        self.llm = llm or OpenAI()
        assert self.llm.metadata.is_function_calling_model

        self.memory = ChatMemoryBuffer.from_defaults(llm=self.llm)
        self.sources = []

    @step()
    async def prepare_chat_history(self, ev: StartEvent) -> InputEvent:
        # clear sources from any previous run
        self.sources = []

        # add the user input to memory
        user_input = ev.get("input")
        user_msg = ChatMessage(role="user", content=user_input)
        self.memory.put(user_msg)

        # get chat history
        chat_history = self.memory.get()
        return InputEvent(input=chat_history)

    @step()
    async def handle_llm_input(self, ev: InputEvent) -> ToolCallEvent | StopEvent:
        chat_history = ev.input

        response = await self.llm.achat_with_tools(
            self.tools, chat_history=chat_history
        )
        self.memory.put(response.message)

        tool_calls = self.llm.get_tool_calls_from_response(
            response, error_on_no_tool_call=False
        )

        if not tool_calls:
            return StopEvent(
                result={"response": response, "sources": [*self.sources]}
            )
        else:
            return ToolCallEvent(tool_calls=tool_calls)

    @step()
    async def handle_tool_calls(self, ev: ToolCallEvent) -> InputEvent:
        tool_calls = ev.tool_calls
        tools_by_name = {tool.metadata.get_name(): tool for tool in self.tools}

        tool_msgs = []

        # call tools -- safely!
        for tool_call in tool_calls:
            tool = tools_by_name.get(tool_call.tool_name)
            additional_kwargs = {
                "tool_call_id": tool_call.tool_id,
                # use the requested name so this is safe even if the tool
                # doesn't exist
                "name": tool_call.tool_name,
            }
            if not tool:
                tool_msgs.append(
                    ChatMessage(
                        role="tool",
                        content=f"Tool {tool_call.tool_name} does not exist",
                        additional_kwargs=additional_kwargs,
                    )
                )
                continue

            try:
                tool_output = tool(**tool_call.tool_kwargs)
                self.sources.append(tool_output)
                tool_msgs.append(
                    ChatMessage(
                        role="tool",
                        content=tool_output.content,
                        additional_kwargs=additional_kwargs,
                    )
                )
            except Exception as e:
                tool_msgs.append(
                    ChatMessage(
                        role="tool",
                        content=f"Encountered error in tool call: {e}",
                        additional_kwargs=additional_kwargs,
                    )
                )

        for msg in tool_msgs:
            self.memory.put(msg)

        chat_history = self.memory.get()
        return InputEvent(input=chat_history)
```
And that's it! Let's explore the workflow we wrote a bit.

`prepare_chat_history()`:
This is our main entry point. It handles adding the user message to memory, and uses the memory to get the latest chat history. It returns an `InputEvent`.

`handle_llm_input()`:
Triggered by an `InputEvent`, it uses the chat history and tools to prompt the LLM. If tool calls are found, a `ToolCallEvent` is emitted. Otherwise, the workflow is done and a `StopEvent` is emitted.

`handle_tool_calls()`:
Triggered by a `ToolCallEvent`, it calls the tools with error handling and returns their outputs. This step creates a **loop**: since it emits an `InputEvent`, we go back to `handle_llm_input()`, as sketched below.
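To see this event-driven loop pattern in isolation, here is a minimal sketch that is not part of the original notebook; `LoopEvent` and `CountdownWorkflow` are made-up names for illustration, and it assumes the same `Workflow` API used above:

```python
from llama_index.core.workflow import (
    Event,
    StartEvent,
    StopEvent,
    Workflow,
    step,
)


class LoopEvent(Event):
    # hypothetical event used only for this illustration
    remaining: int


class CountdownWorkflow(Workflow):
    @step()
    async def start(self, ev: StartEvent) -> LoopEvent:
        return LoopEvent(remaining=ev.get("n"))

    @step()
    async def countdown(self, ev: LoopEvent) -> LoopEvent | StopEvent:
        # emitting a LoopEvent routes back to this same step -- a loop,
        # just like handle_tool_calls() -> handle_llm_input() above
        if ev.remaining <= 0:
            return StopEvent(result="done")
        return LoopEvent(remaining=ev.remaining - 1)


# result = await CountdownWorkflow().run(n=3)
```

The return annotation `LoopEvent | StopEvent` is what lets the workflow route the emitted event back to `countdown`, the same way `handle_tool_calls()` routes back to `handle_llm_input()` via `InputEvent`.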
## Run the Workflow!

**NOTE:** With loops, we need to be mindful of runtime. Here, we set a timeout of 120s.
```python
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI


def add(x: int, y: int) -> int:
    """Useful function to add two numbers."""
    return x + y


def multiply(x: int, y: int) -> int:
    """Useful function to multiply two numbers."""
    return x * y


tools = [
    FunctionTool.from_defaults(add),
    FunctionTool.from_defaults(multiply),
]

agent = FunctionCallingAgent(
    llm=OpenAI(model="gpt-4o-mini"), tools=tools, timeout=120, verbose=True
)

ret = await agent.run(input="Hello!")
```

Output:

```
Running step prepare_chat_history
Step prepare_chat_history produced event InputEvent
Running step handle_llm_input
Step handle_llm_input produced event StopEvent
```
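One extra guard worth considering (not in the original notebook): if the loop never terminates, `run()` will fail once the 120s timeout elapses. The exact exception class raised on timeout varies across llama-index versions, so this sketch catches broadly:

```python
# Hedged sketch: guard against the workflow timing out.
# The specific timeout exception class depends on the llama-index version,
# so we catch Exception broadly here.
try:
    ret = await agent.run(input="Hello!")
except Exception as e:
    print(f"Workflow did not finish: {e}")
```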
```python
print(ret["response"])
```

Output:

```
assistant: Hello! How can I assist you today?
```
```python
ret = await agent.run(input="What is (2123 + 2321) * 312?")
```

Output:

```
Running step prepare_chat_history
Step prepare_chat_history produced event InputEvent
Running step handle_llm_input
Step handle_llm_input produced event ToolCallEvent
Running step handle_tool_calls
Step handle_tool_calls produced event InputEvent
Running step handle_llm_input
Step handle_llm_input produced event ToolCallEvent
Running step handle_tool_calls
Step handle_tool_calls produced event InputEvent
Running step handle_llm_input
Step handle_llm_input produced event StopEvent
```
```python
print(ret["response"])
```

Output:

```
assistant: The result of \((2123 + 2321) \times 312\) is \(1,386,528\).
```
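Besides the response, the result dict carries the `ToolOutput` objects that `handle_tool_calls()` appended to `self.sources`, which makes the run auditable. A short sketch, assuming `ToolOutput` exposes `tool_name` alongside the `content` field used above:

```python
# Audit which tools actually ran during the last query.
# Each entry is a ToolOutput appended in handle_tool_calls().
for source in ret["sources"]:
    print(source.tool_name, "->", source.content)
```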
