Need help preventing base LLM response after tool call while still allowing subsequent tool calls #8762
Hi team,
I’m working on a scenario using the Vercel AI SDK v5 where tool calls send results directly into a message queue.
In this setup, I don’t want the base LLM to always generate an additional response after a tool call; I want fine-grained control over whether the LLM processes the tool’s result or not.
Current behavior
I experimented with aborting the stream once a tool call finishes. While this prevents the base LLM from adding extra messages to the queue, it also stops subsequent tool calls from executing, which breaks cases where multiple tool calls are needed.
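A minimal sketch of what I tried (the model, `pushToQueue` wiring, and surrounding setup are placeholders; the step and field names reflect my reading of the v5 API):

```ts
import { streamText, type ModelMessage, type ToolSet } from 'ai';
import { openai } from '@ai-sdk/openai';

// Sketch of the abort approach: cut the stream as soon as a step produces
// tool results, so the base LLM never generates a follow-up message.
// Downside: this also cancels any later tool calls.
async function runWithAbort(messages: ModelMessage[], tools: ToolSet) {
  const controller = new AbortController();

  const result = streamText({
    model: openai('gpt-4o'), // placeholder model
    messages,
    tools,
    abortSignal: controller.signal,
    onStepFinish: (step) => {
      // The tool has already pushed its output to the queue at this point,
      // so abort before the LLM can respond to the tool result.
      if (step.toolResults.length > 0) {
        controller.abort();
      }
    },
  });

  try {
    for await (const part of result.textStream) {
      // forward parts to the client as usual
    }
  } catch {
    // the abort may surface as an error while consuming the stream
  }
}
```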
Goal
I’d like to support a flow where each tool call result can explicitly decide whether the base LLM should respond. For this, I’m returning a result object that contains both the `message` and a flag `ignoreMessage` (concrete shape sketched after this list).
- If `ignoreMessage: true`: the base LLM should not generate a follow-up response to the tool result.
- If `ignoreMessage: false`: the base LLM should process the tool result and respond to the user.
- In both cases: any subsequent tool calls should still be executed.
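Concretely, the envelope every tool returns looks like this (the type name is just for illustration):

```ts
// Result object returned by every tool (type name is illustrative).
type ToolResultEnvelope = {
  message: string;        // text produced by the tool, or a status line
  ignoreMessage: boolean; // true => the base LLM should stay silent
};
```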
Example use case
Suppose I have a tool that generates a joke.
Case 1: Tool handles everything itself
The tool directly adds the joke into the message queue.
It then returns:

```json
{ "message": "Request Processed", "ignoreMessage": true }
```

In this case, I don’t want the LLM to generate a new joke on its own, since the tool has already fulfilled the request.
Case 2: Tool relies on LLM for final response
The tool returns:

```json
{ "message": "Here is a joke for you: Why did the scarecrow win the award? Because he was outstanding.", "ignoreMessage": false }
```

In this case, the LLM should process the result and provide a response to the user.
Question
Is there a recommended approach or SDK feature in Vercel AI SDK v5 that would allow me to:
- skip the base LLM response when `ignoreMessage` is `true`,
- let the LLM process the result and respond when `ignoreMessage` is `false`,
- and still execute any subsequent tool calls in both cases?

Thanks in advance for your guidance!
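P.S. To make the ask concrete: the closest mechanism I’ve found is a custom `stopWhen` condition (v5 does accept custom stop conditions alongside `stepCountIs`), but as far as I can tell, stopping the loop this way blocks later tool calls just like the abort did. Reading the flag out of `toolResults[i].output` is my assumption about the step shape:

```ts
import { streamText, stepCountIs, type StepResult, type ToolSet } from 'ai';
import { openai } from '@ai-sdk/openai';

// Stop after any step whose tool results carry ignoreMessage: true.
// Assumes the tool's return value is exposed as `output` on each tool result.
const stopOnIgnoredMessage = ({ steps }: { steps: StepResult<ToolSet>[] }) => {
  const last = steps.at(-1);
  return (last?.toolResults ?? []).some(
    (r) => (r.output as { ignoreMessage?: boolean })?.ignoreMessage === true,
  );
};

const result = streamText({
  model: openai('gpt-4o'), // placeholder model
  messages: [{ role: 'user', content: 'Tell me a joke' }],
  tools: { joke: jokeTool }, // jokeTool from the sketch above
  stopWhen: [stepCountIs(10), stopOnIgnoredMessage],
});
```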