Replies: 6 comments 3 replies
-
I'm also running into this behavior. Another side effect of this is that `toolCalls` and `toolResults` arrive empty in the `onFinish` callback of a route handler like this:

```ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4-turbo'),
    messages,
    async onFinish({ text, toolCalls, toolResults, usage, finishReason, responseMessages }) {
      // implement your own storage logic:
      // Note: toolCalls and toolResults are always empty even when tools were called
      await saveChat({ text, toolCalls, toolResults });
    },
  });

  return result.toDataStreamResponse();
}
```

The response messages come back structured like this:

```ts
[
  {
    role: 'assistant',
    content: [
      { type: 'text', text: '' },
      {
        type: 'tool-call',
        toolCallId: 'call_9bchLFghtVPs1ub4JXgBFjHu',
        toolName: 'getWeatherInformation',
        args: { city: 'Kapstadt' }
      }
    ]
  },
  {
    role: 'tool',
    content: [
      {
        type: 'tool-result',
        toolCallId: 'call_9bchLFghtVPs1ub4JXgBFjHu',
        toolName: 'getWeatherInformation',
        result: 'windy'
      }
    ]
  },
  {
    role: 'assistant',
    content: [
      {
        type: 'text',
        text: 'Das Wetter in Kapstadt ist derzeit windig.'
      }
    ]
  }
]
```

The vercel/ai-chatbot template currently uses this same `onFinish` pattern for persistence. Would be great to know whether this is a bug in the way messages are structured, or expected behavior.
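Until this is resolved, the tool activity can be recovered from the response messages themselves. A minimal sketch, assuming the message shapes match the array above (the `ResponseMessage` type and `extractToolActivity` helper are my own names, not SDK API):

```typescript
// Sketch: pull tool calls and tool results out of the response messages,
// as a workaround while onFinish's toolCalls/toolResults arrive empty.
// The shapes below mirror the array printed in this comment.
type ContentPart =
  | { type: 'text'; text: string }
  | { type: 'tool-call'; toolCallId: string; toolName: string; args: Record<string, unknown> }
  | { type: 'tool-result'; toolCallId: string; toolName: string; result: unknown };

interface ResponseMessage {
  role: 'assistant' | 'tool';
  content: ContentPart[];
}

function extractToolActivity(messages: ResponseMessage[]) {
  // Flatten all content parts, then split them by part type.
  const parts = messages.flatMap((m) => m.content);
  return {
    toolCalls: parts.filter((p) => p.type === 'tool-call'),
    toolResults: parts.filter((p) => p.type === 'tool-result'),
  };
}
```

With the messages shown above, this yields one tool call (`getWeatherInformation`) and its matching `'windy'` result, which can then be handed to `saveChat` in place of the empty arrays.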
-
Is there any update on this? Tool calls still return no message to show to the user.
-
Hi guys, any update on this? My tool call results still come back empty.
-
Same issue. Pretty much a blocker for me. About 50% of the time, when tool calls are used, it returns empty results.
-
Same issue with v4.3.15. For anyone needing this, here's a quick workaround:

```ts
onFinish: (data) => {
  // HACK: data.toolCalls returns an empty array, so we extract the
  // tool calls manually from the response messages
  const toolsCalled = data.response.messages
    .filter((m) => m.role === 'assistant')
    .flatMap((m) => {
      if (Array.isArray(m.content)) {
        return m.content
          .filter((c: any) => c.type === 'tool-call')
          .map((c: any) => ({
            name: c.toolName,
            action: c.args.action,
          }));
      }
      return [];
    });

  dataStream.writeData({ toolsCalled });
},
```
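Note that the snippet above only collects tool calls from assistant messages; the corresponding results arrive in `role: 'tool'` messages. A sketch of the complementary extraction, under the same assumptions about the message shape (the `collectToolResults` helper is my own name):

```typescript
// Collect tool results from the response messages; the part shape is
// assumed to match the messages array shown earlier in this thread.
interface ToolResultPart {
  type: 'tool-result';
  toolCallId: string;
  toolName: string;
  result: unknown;
}

function collectToolResults(
  messages: { role: string; content: unknown }[],
): ToolResultPart[] {
  return messages
    .filter((m) => m.role === 'tool')
    .flatMap((m) =>
      Array.isArray(m.content)
        ? m.content.filter((c: any): c is ToolResultPart => c.type === 'tool-result')
        : [],
    );
}
```

Pairing the two lists by `toolCallId` then reconstructs the call/result pairs that `onFinish` should have delivered.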
-
I created another issue for v5 that addresses this behavior: #8578. Also, there is another workaround that avoids processing the messages in the response object: you can wrap your …
-
Hello everyone!
I am building a RAG application which basically works like any other RAG.
I want the LLM to choose between 3 tools:
I am using:
"ai": "3.4.33"
"next": "14.1.3"
For some reason, whenever the LLM calls a tool, it appends an empty message to the `messages` array, which leads to my UI rendering one or more empty chat bubbles.

This is my Chat component:
This is my route:
Is this expected behaviour and I need to adjust my UI and filter out those empty messages, or am I doing something wrong?
Thank you!
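If filtering on the client turns out to be the answer, dropping the blank bubbles is straightforward. A minimal sketch, assuming `useChat`-style messages where `content` is a plain string (the `ChatMessage` type and `visibleMessages` helper are my own names):

```typescript
// Sketch: filter out the empty assistant messages that appear when a
// tool is called, before rendering chat bubbles.
interface ChatMessage {
  role: string;
  content: string;
}

function visibleMessages(messages: ChatMessage[]): ChatMessage[] {
  // Drop messages whose text is empty after trimming — these are the
  // tool-call placeholders that would render as blank bubbles.
  return messages.filter((m) => m.content.trim().length > 0);
}
```

The Chat component would then map over `visibleMessages(messages)` instead of `messages` when rendering.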