Conversation

@samuelint (Owner)
update multiple dependencies

openai = "^1.35.1" -> "^1.93.0"

langchain = { version = "^0.2.6", optional = true } -> { version = "^0.3.26", optional = true }
langchain-openai = { version = "^0.1.8", optional = true } -> { version = "^0.3.27", optional = true }
fastapi = { version = "^0.111.0", optional = true } -> { version = "^0.115.14", extras = ["standard"], optional = true }
langgraph = { version = "^0.2.16", optional = true } -> { version = "^0.5.0", optional = true }
langchain-anthropic = { version = "^0.1.19", optional = true } -> { version = "^0.3.16", optional = true }
langchain-groq = { version = "^0.1.6", optional = true } -> { version = "^0.3.4", optional = true }

langchain-llamacpp-chat-model = "^0.2.2" -> drop
llama-cpp-python = "^0.3.9" -> new
langchain-community = "^0.2.9" -> "^0.3.26"
langchain-experimental = "^0.0.62" -> "^0.3.4"
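Taken together, the bumped constraints would look roughly like this in Poetry's `pyproject.toml` (a sketch of the `[tool.poetry.dependencies]` section only, assuming the same optional/extras layout as before):

```toml
[tool.poetry.dependencies]
openai = "^1.93.0"
langchain = { version = "^0.3.26", optional = true }
langchain-openai = { version = "^0.3.27", optional = true }
fastapi = { version = "^0.115.14", extras = ["standard"], optional = true }
langgraph = { version = "^0.5.0", optional = true }
langchain-anthropic = { version = "^0.3.16", optional = true }
langchain-groq = { version = "^0.3.4", optional = true }
llama-cpp-python = "^0.3.9"
langchain-community = "^0.3.26"
langchain-experimental = "^0.3.4"
```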


The langgraph upgrade requires the code changes below.

        # messages_modifier -> prompt
        return create_react_agent(
            llm,
            [],
-           messages_modifier="""You are a helpful assistant.""",
+           prompt="""You are a helpful assistant.""",
        )
# langgraph.graph.graph -> CompiledStateGraph
-from langgraph.graph.graph import CompiledGraph
+from langgraph.graph.state import CompiledStateGraph

The openai upgrade requires the change below, though I'm not sure this is the right way to do it.

+metadata = {key: str(value) for key, value in metadata.items()}

https://github.com/caffeinism/langchain-openai-api-bridge/blob/patch-1/langchain_openai_api_bridge/assistant/adapter/openai_event_factory.py#L71
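The linked change coerces every metadata value to a string, since the OpenAI API expects metadata to be a string-to-string map. A minimal, standalone sketch of that normalization (`normalize_metadata` is a made-up name for illustration):

```python
def normalize_metadata(metadata: dict) -> dict[str, str]:
    """Coerce every metadata value to str, since the OpenAI API
    expects metadata as a string-to-string map."""
    return {key: str(value) for key, value in metadata.items()}

# Example: mixed-type values become strings.
print(normalize_metadata({"run_id": 42, "temperature": 0.7, "tag": "demo"}))
# → {'run_id': '42', 'temperature': '0.7', 'tag': 'demo'}
```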

agent wrapped with async context manager

Similar idea to #48, but this implementation wraps all functions in an async context manager.

Existing usage continues to work:

    def create_agent(self, dto: CreateAgentDto) -> Runnable:
        ...
        return create_react_agent(llm, tools, prompt=prompt)

As in the previous PR, an async factory is also allowed:

    async def create_agent(self, dto: CreateAgentDto) -> Runnable:
        ...
        return create_react_agent(llm, tools, prompt=prompt)

Additionally, the following generator and context-manager forms are allowed:

    @asynccontextmanager
    async def create_agent(self, dto: CreateAgentDto) -> AsyncIterator[Runnable]:
        ...
        yield create_react_agent(llm, tools, prompt=prompt)

    async def create_agent(self, dto: CreateAgentDto) -> AsyncIterator[Runnable]:
        ...
        yield create_react_agent(llm, tools, prompt=prompt)

    @contextmanager
    def create_agent(self, dto: CreateAgentDto) -> Iterator[Runnable]:
        ...
        yield create_react_agent(llm, tools, prompt=prompt)

    def create_agent(self, dto: CreateAgentDto) -> Iterator[Runnable]:
        ...
        yield create_react_agent(llm, tools, prompt=prompt)

This enables connection management by running setup and teardown code around the agent's lifetime through the context manager.
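As a hypothetical sketch of how all of these forms can be normalized behind a single async context manager (a standard-library-only illustration of the wrapping idea, not the PR's actual implementation; `ensure_agent` and the toy factories are made-up names):

```python
import asyncio
import inspect
from contextlib import (
    AbstractAsyncContextManager,
    AbstractContextManager,
    asynccontextmanager,
    contextmanager,
)

@asynccontextmanager
async def ensure_agent(result):
    """Wrap whatever a create_agent factory produced so callers can
    always write `async with ensure_agent(factory(dto)) as agent:`."""
    if inspect.isawaitable(result):
        # async def factory: resolve the coroutine first
        result = await result
    if inspect.isasyncgen(result):
        # bare async generator: pull the agent, close on exit
        agent = await result.__anext__()
        try:
            yield agent
        finally:
            await result.aclose()
    elif inspect.isgenerator(result):
        # bare sync generator: same idea, synchronously
        agent = next(result)
        try:
            yield agent
        finally:
            result.close()
    elif isinstance(result, AbstractAsyncContextManager):
        # @asynccontextmanager-decorated factory
        async with result as agent:
            yield agent
    elif isinstance(result, AbstractContextManager):
        # @contextmanager-decorated factory
        with result as agent:
            yield agent
    else:
        # plain return value: nothing to manage
        yield result

# Toy factories standing in for the create_agent variants above.
def plain():
    return "agent"

async def coro():
    return "agent"

def sync_gen():
    yield "agent"

async def async_gen():
    yield "agent"

@contextmanager
def sync_cm():
    yield "agent"

@asynccontextmanager
async def async_cm():
    yield "agent"

async def _demo():
    out = []
    for factory in (plain, coro, sync_gen, async_gen, sync_cm, async_cm):
        async with ensure_agent(factory()) as agent:
            out.append(agent)
    return out

results = asyncio.run(_demo())
print(results)  # → ['agent', 'agent', 'agent', 'agent', 'agent', 'agent']
```

The generator and context-manager branches are what make connection cleanup possible: whatever code follows the `yield` in the factory runs when the `async with` block exits.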

I've verified that the anthropic and openai suites pass pytest (I couldn't run the groq tests, as I don't have an API key), and the additional behavior works in my use case, the bind_openai_chat_completion method.

@samuelint samuelint merged commit 38f1bc0 into main Jul 5, 2025
1 check passed
@samuelint samuelint deleted the feat/async-invoke branch July 5, 2025 13:58