Conversation

jrobertboos
Contributor

@jrobertboos jrobertboos commented Aug 5, 2025

Description

This PR makes "tool_call" responses return structured JSON with tool_name, arguments, summary, or response as fields; every non-inference "tool_call" response always includes tool_name along with one of the other fields. Please let me know if anything should be changed.
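For illustration, the structured tokens could look like the following; the field names come from the description above, but the concrete values and tool names are hypothetical:

```python
import json

# Hypothetical examples of the structured "tool_call" tokens described above.
# Every non-inference tool event carries "tool_name" plus one of
# "arguments", "summary", or "response".
tool_call_token = {
    "tool_name": "knowledge_search",
    "arguments": {"query": "streaming endpoints"},
}
tool_result_token = {
    "tool_name": "knowledge_search",
    "summary": "Found 3 relevant documents",
}

# Clients can now round-trip the token as JSON instead of splitting strings.
parsed = json.loads(json.dumps(tool_call_token))
assert parsed["tool_name"] == "knowledge_search"
```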

Type of change

  • Refactor
  • New feature
  • Bug fix
  • CVE fix
  • Optimization
  • Documentation Update
  • Configuration Update
  • Bump-up service version
  • Bump-up dependent library
  • Bump-up library or tool used for development (does not change the final image)
  • CI configuration change
  • Konflux configuration change
  • Unit tests improvement
  • Integration tests improvement
  • End to end tests improvement

Related Tickets & Documents

  • Related Issue LCORE-385
  • Closes LCORE-385

Checklist before requesting a review

  • I have performed a self-review of my code.
  • PR has passed all pre-merge test jobs.
  • If it is a core feature, I have added thorough tests.

Testing

  • Please provide detailed steps to perform tests related to this code change.
  • How were the fix/results from this change verified? Please provide relevant screenshots or results.

Summary by CodeRabbit

  • New Features

    • Streamed token data is now provided as structured JSON objects, offering clearer and more explicit information in streaming responses.
  • Bug Fixes

    • Improved handling of the final response in streaming, ensuring it is accurately set when a conversation turn is complete.
  • Tests

    • Updated tests to reflect the new structured format of streamed tokens and more granular streaming events.

…ool execution events and improve handling of turn completion. Update tests to reflect changes in response format and ensure proper assertions.
@coderabbitai
Contributor

coderabbitai bot commented Aug 5, 2025

Walkthrough

The changes update the streaming query endpoint to emit structured JSON tokens for tool events instead of concatenated strings, and adjust how the final response is captured from the stream. Corresponding unit tests are updated to expect the new token structure, more granular chunking, and a turn completion event.

Changes

Cohort / File(s) Change Summary
Streaming Query Endpoint Logic
src/app/endpoints/streaming_query.py
Refactored streaming event handling to output structured JSON tokens for tool execution events, updated token formatting for turn completion, and changed final response aggregation to capture only the turn completion output.
Unit Tests for Streaming Query
tests/unit/app/endpoints/test_streaming_query.py
Modified test data to use more granular streaming chunks, added a turn completion event, updated assertions for the new structured token format, and adjusted indices and expected values to match the revised streaming logic.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant API
    participant Model/Tools

    Client->>API: Send streaming query request
    loop For each event in stream
        API->>Model/Tools: Process next event
        alt Tool execution event
            Model/Tools-->>API: Tool event (structured JSON token)
            API-->>Client: Stream JSON token (tool_name, arguments, etc.)
        else Turn complete event
            Model/Tools-->>API: Turn complete event (interleaved content)
            API-->>Client: Stream JSON token (final response)
        end
    end
    API-->>Client: Stream ends
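The loop above can be sketched as a minimal SSE generator; the event names and framing are assumptions for illustration, not the actual endpoint code:

```python
import json
from typing import Any, Iterator


def format_sse(event: str, token: Any) -> str:
    """Wrap a token in the SSE framing used by the streaming endpoint."""
    return f"data: {json.dumps({'event': event, 'data': {'token': token}})}\n\n"


def stream_events(events: list[tuple[str, Any]]) -> Iterator[str]:
    # Each (event_type, token) pair becomes one SSE chunk; tool events carry
    # structured dict tokens rather than concatenated strings.
    for event_type, token in events:
        yield format_sse(event_type, token)


chunks = list(stream_events([
    ("tool_call", {"tool_name": "knowledge_search", "arguments": {"query": "x"}}),
    ("turn_complete", "final answer"),
]))
```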

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~15 minutes

Possibly related PRs

  • Streaming improvements #254: Further refines the streaming event token formatting and final response aggregation in streaming_query.py, building on structured event handling.

Suggested reviewers

  • umago
  • tisnik

Poem

In the meadow where JSON flows,
Tokens now bloom in structured rows.
Tool calls and answers, neat and clear,
Stream to the client—no string soup here!
With every hop, the test bunnies cheer,
For code that streams with less to fear.
🐇✨

@jrobertboos jrobertboos marked this pull request as ready for review August 5, 2025 16:58
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
src/app/endpoints/streaming_query.py (1)

470-476: Consider more robust JSON parsing for turn_complete events.

While the logic correctly captures the turn_complete token, the current implementation manually parses the SSE format. Consider a more robust approach:

-                    if (
-                        json.loads(event.replace("data: ", ""))["event"]
-                        == "turn_complete"
-                    ):
-                        complete_response = json.loads(event.replace("data: ", ""))[
-                            "data"
-                        ]["token"]
+                    # Parse SSE data more safely
+                    if event.startswith("data: "):
+                        try:
+                            data = json.loads(event[6:])  # Skip "data: " prefix
+                            if data.get("event") == "turn_complete":
+                                complete_response = data.get("data", {}).get("token", complete_response)
+                        except json.JSONDecodeError:
+                            logger.warning("Failed to parse event data: %s", event)

This approach:

  • Safely checks for the "data: " prefix
  • Handles JSON parsing errors gracefully
  • Uses .get() to avoid KeyError exceptions
  • Maintains the default response if parsing fails
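The suggested approach can be exercised standalone as a helper; the function name and the default string are assumptions chosen to match the review comment, not code from the PR:

```python
import json
import logging

logger = logging.getLogger(__name__)


def extract_turn_complete(event: str, default: str) -> str:
    """Return the turn_complete token from one SSE line, or the default."""
    if not event.startswith("data: "):
        return default
    try:
        data = json.loads(event[6:])  # skip the "data: " prefix
    except json.JSONDecodeError:
        logger.warning("Failed to parse event data: %s", event)
        return default
    if data.get("event") == "turn_complete":
        return data.get("data", {}).get("token", default)
    return default


line = 'data: {"event": "turn_complete", "data": {"token": "done"}}'
assert extract_turn_complete(line, "No response from the model") == "done"
```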
tests/unit/app/endpoints/test_streaming_query.py (1)

164-330: Consider adding edge case tests for robustness.

While the existing tests cover the happy path, consider adding tests for:

  1. Missing turn_complete event (should return default "No response from the model")
  2. Tool responses with missing or malformed structure
  3. Multiple turn_complete events in a stream

Example test case for missing turn_complete:

@pytest.mark.asyncio
async def test_streaming_query_missing_turn_complete(mocker):
    """Test streaming query when turn_complete event is missing."""
    # ... setup code ...
    
    # Create response without turn_complete event
    mock_streaming_response.__aiter__.return_value = [
        AgentTurnResponseStreamChunk(
            event=TurnResponseEvent(
                payload=AgentTurnResponseStepProgressPayload(
                    event_type="step_progress",
                    step_type="inference",
                    delta=TextDelta(text="Incomplete response", type="text"),
                    step_id="s1",
                )
            )
        ),
    ]
    
    # ... rest of test ...
    # Assert that stored response is "No response from the model"
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b5afee1 and 566e5a7.

📒 Files selected for processing (2)
  • src/app/endpoints/streaming_query.py (6 hunks)
  • tests/unit/app/endpoints/test_streaming_query.py (6 hunks)
🔇 Additional comments (9)
src/app/endpoints/streaming_query.py (6)

216-218: LGTM! Proper content formatting for turn completion.

The use of interleaved_content_as_str ensures consistent string formatting of the turn's output message content.


340-343: Correctly implements structured tool call tokens.

The token now returns a structured dictionary with tool_name and arguments fields, aligning perfectly with the PR objective of standardizing tool_call responses.


357-360: Proper structured response for memory query tool.

The implementation correctly returns a dictionary with tool_name and response fields, providing clear information about the memory fetch operation.


391-394: Knowledge search response properly structured.

The token correctly returns a dictionary with tool_name and summary fields, maintaining the existing summary extraction logic while conforming to the new structured format.


406-409: Generic tool responses correctly structured.

The catch-all handler properly returns a dictionary with tool_name and response fields for all other tool types.


463-463: Good default response initialization.

Setting a meaningful default message helps identify cases where no turn_complete event is received, improving debuggability.

tests/unit/app/endpoints/test_streaming_query.py (3)

184-248: Test correctly models the new streaming behavior.

The test properly reflects:

  1. Text being streamed in chunks during inference
  2. Turn complete event containing the full assembled response
  3. Proper event structure with all required fields

297-297: Assertions correctly validate new response handling.

The test properly validates that:

  • The complete response comes from the turn_complete event
  • The chunk count includes the new turn_complete event

Also applies to: 311-311


973-978: Test assertions correctly validate structured tool tokens.

The assertions properly verify that tool events return structured JSON objects with the required fields (tool_name with arguments or summary), aligning with the PR objectives.

Contributor

@manstis manstis left a comment


LGTM 👍

@TamiTakamiya FYI (thanks for your PR in preparation!)

@eranco74
Copy link
Contributor

eranco74 commented Aug 6, 2025

/cc @rawagner

Contributor

@tisnik tisnik left a comment


LGTM
