feat: Support function call requests/responses #56
Conversation
A functional test is missing to verify that the new feature works as intended.
Adding one would help ensure the feature behaves correctly and prevent regressions in future changes.
langchain_openai_api_bridge/chat_completion/chat_completion_chunk_choice_adapter.py
Force-pushed from d115c6e to 1cfb88b
I added a simple unit test and a functional test for the function call functionality. I also found a place where the server's behavior differs from the actual OpenAI server, so I changed it (only the first chunk should contain the role).
Great job, one last minor comment and it will be ready to be merged.
assert result.delta.role == "assistant"

- def test_message_have_assistant_role_by_default(self):
+ def test_delta_message_have_none_role_by_default(self):
This test was there for a reason. Changing it might break something; I think the behaviour should remain as before unless you have a good reason for the change.
assert result.choices[0].delta.role == "assistant"

- def test_message_have_assistant_role_by_default(self):
+ def test_delta_message_have_none_role_by_default(self):
This test was there for a reason. Changing it might break something; I think the behaviour should remain as before unless you have a good reason for the change.
If so, is it okay to leave "assistant" as the default, but in the actual streaming behavior only send "assistant" for the first chunk and None for the rest?
This change is due to:
I also found where the behavior of the server differs from the actual openai server, so I changed it (only the first chunk should contain role)
When returning chunks, the OpenAI server sets role to None by default; only the very first chunk contains the role.
It's ok if this is the default OpenAI API behaviour.
Please add a unit test for it in the langchain_openai_api_bridge/chat_completion/langchain_stream_adapter.py unit test file.
Give the test function a name that explains that behaviour, e.g. "Only the first chunk has role assistant, afterward role is None". That will document the use case.
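As a rough sketch of what such a test could look like (how the chunks are obtained from LangchainStreamAdapter is an assumption here; the real test should go through the adapter as requested above), the assertion pattern would be:

```python
# Sketch only: how `chunks` is produced by LangchainStreamAdapter is an
# assumption; the point is the assertion pattern documenting the behaviour
# "only the first chunk has role assistant, afterward role is None".
def test_only_first_chunk_has_role_assistant_afterward_role_is_none(chunks):
    first, *rest = chunks

    assert first.choices[0].delta.role == "assistant"
    for chunk in rest:
        assert chunk.choices[0].delta.role is None
```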
result = to_openai_chat_completion_chunk_object(event)

assert result.choices[0].delta.function_call is not None
assert result.choices[0].delta.function_call.name == "my_func"
The test does not validate the args property. I think there should be one test for the name and another for the args.
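For instance, a sketch of the split, written alongside the existing test (same imports; the `event` fixture is assumed to be the one the existing name test already builds, and the expected arguments string is illustrative only):

```python
# Sketch: `event` is assumed to be the same fixture the existing
# function-name test uses; the expected arguments string is illustrative.
def test_function_call_chunk_contains_function_name(event):
    result = to_openai_chat_completion_chunk_object(event)

    assert result.choices[0].delta.function_call.name == "my_func"


def test_function_call_chunk_contains_function_arguments(event):
    result = to_openai_chat_completion_chunk_object(event)

    assert result.choices[0].delta.function_call.arguments == '{"location": "Paris"}'
```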
def test_chat_completion_function_call_weather_stream(openai_client: OpenAI):
    tools = [{
Since the tools section is the same in each test, it could be re-used instead of copy/pasted.
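One way to do this (a sketch assuming pytest, with an illustrative weather tool schema) is a shared constant or fixture in the functional test module:

```python
import pytest

# Shared tool definition so every function-call test reuses the same dict
# instead of copy/pasting it. The schema below is illustrative.
WEATHER_TOOL = {
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name"},
            },
            "required": ["location"],
        },
    },
}


@pytest.fixture
def tools():
    return [WEATHER_TOOL]
```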
Force-pushed from 34bf370 to 219e233
I've added it.
Merged commit e0b6a06 into samuelint:feat-Support-function-call-requests/responses-(from-fork)


I have implemented the issue (#50).
I also changed the OpenAI-related messages internally from BaseModel to the actual internal data format of the openai Python SDK. This shouldn't be an issue for the data format actually used by users, since we only added tools and tool_choice to CreateAgentDto.
In the chat completion unit test, message.role = "ai" was incorrect for the OpenAI message format, so I changed it to "assistant".
Example:
Server
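The original server example is not reproduced here. As a minimal server-side sketch (the `create_agent` factory name and how it plugs into the bridge are assumptions; follow this repository's README for the actual wiring), the relevant part is binding tools to the LangChain chat model so function-call requests and responses can flow through:

```python
# Sketch only: `create_agent` is a hypothetical factory name; wire it into
# the bridge as described in the repository README. The tool binding itself
# is standard LangChain.
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def get_current_weather(location: str) -> str:
    """Get the current weather in a given location."""
    return f"It is sunny in {location}."


def create_agent(dto):
    # dto carries the fields added in this PR (tools, tool_choice); the exact
    # attribute names on CreateAgentDto are assumptions here.
    llm = ChatOpenAI(model="gpt-4o-mini")
    return llm.bind_tools([get_current_weather])
```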
Client
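The original client example is likewise not reproduced; a minimal sketch using the official openai Python SDK (base_url, api_key and model are placeholders for whatever the bridge server exposes):

```python
# Sketch: base_url, api_key and model are placeholders for the local bridge.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/openai/v1", api_key="not-needed")

stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is the weather like in Paris?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {"location": {"type": "string"}},
                "required": ["location"],
            },
        },
    }],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta
    # Only the first chunk carries role="assistant"; later chunks stream the
    # function-call fragments with role=None.
    print(delta)
```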