Conversation

@ascend-direct-dev
Contributor

Description

Type of Change

  • Types
    • Bug fix
    • New feature
      • Transfer Engine
      • Mooncake Store
      • Mooncake EP
      • Integration
      • P2P Store
      • Python Wheel
    • Breaking change
    • CI/CD
    • Documentation update
    • Other

How Has This Been Tested?

Checklist

  • I have performed a self-review of my own code.
  • I have updated the documentation.
  • I have added tests to prove my changes are effective.

@ascend-direct-dev ascend-direct-dev marked this pull request as draft November 24, 2025 01:43
@gemini-code-assist
Contributor

Summary of Changes

Hello @ascend-direct-dev, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the AscendDirectTransport by implementing a robust asynchronous transfer mechanism. Instead of blocking on synchronous transfer operations, the system now dispatches transfers asynchronously and utilizes a dedicated queryThread to continuously monitor their status, manage timeouts, and ultimately mark their completion or failure. This architectural shift aims to improve the overall efficiency and responsiveness of the transfer engine by decoupling the initiation of transfers from their status resolution.

Highlights

  • Asynchronous Transfer Mechanism: Introduced a new queryThread and associated infrastructure (queue, mutex, condition variable) to handle asynchronous transfer status checks, moving away from synchronous blocking calls; see the sketch after this list.
  • Slice Structure Enhancement: The Slice structure now includes handle (void*) and start_time (int64_t) fields to track individual asynchronous transfer requests and their initiation times, enabling proper monitoring.
  • Timeout Handling Refinement: The transfer_timeout_ variable type was updated to int64_t and is now applied in the queryThread for asynchronous transfer monitoring, with the value being converted to microseconds for precision.
  • Buffer Pool Configuration: The default ADXL buffer pool setting has been changed from '4:8' to '0:0' during engine initialization.
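
To make the highlights above concrete, here is a minimal C++ sketch of how the pieces could fit together. Only the Slice fields (handle, start_time), transfer_timeout_, getCurrentTimeInNano(), and the per-batch timeout check on the first slice come from this pull request; everything else (query_queue_, query_mutex_, query_cv_, running_, the simplified failed flag, and the stand-in clock helper) is an illustrative assumption, not the actual AscendDirectTransport implementation.

    #include <atomic>
    #include <chrono>
    #include <condition_variable>
    #include <cstdint>
    #include <mutex>
    #include <queue>
    #include <utility>
    #include <vector>

    // Stand-in for the engine's helper of the same name.
    static int64_t getCurrentTimeInNano() {
        using namespace std::chrono;
        return duration_cast<nanoseconds>(
                   steady_clock::now().time_since_epoch()).count();
    }

    struct Slice {
        struct {
            void *handle;        // handle returned when the transfer was dispatched
            int64_t start_time;  // submission timestamp, in nanoseconds
        } ascend_direct;
        bool failed = false;     // simplified stand-in for the real status fields
    };

    // Query-thread state (names are assumptions, mirroring the highlight above).
    static std::mutex query_mutex_;
    static std::condition_variable query_cv_;
    static std::queue<std::vector<Slice *>> query_queue_;
    static std::atomic<bool> running_{true};
    static int64_t transfer_timeout_ = 0;  // already converted by the init code

    static void queryThread() {
        while (running_) {
            std::unique_lock<std::mutex> lock(query_mutex_);
            query_cv_.wait(lock, [] { return !query_queue_.empty() || !running_; });
            if (!running_) break;
            auto slice_list = std::move(query_queue_.front());
            query_queue_.pop();
            lock.unlock();

            // Time out the whole batch based on the first slice's start time,
            // matching the behaviour discussed in the review comments below.
            if (getCurrentTimeInNano() - slice_list[0]->ascend_direct.start_time >
                transfer_timeout_) {
                for (auto *slice : slice_list) slice->failed = true;
                continue;
            }
            // Otherwise query the ADXL handle for each slice and mark it
            // completed or failed according to the returned status.
        }
    }

The decoupling shown here is the point of the change: submitting a batch only enqueues it for the query thread, so the caller is never blocked waiting for the transfer to finish.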

@gemini-code-assist bot left a comment

Code Review

This pull request successfully adapts the AscendDirectTransport to use asynchronous transfers by introducing a queryThread for polling completion status. The overall approach is solid, replacing synchronous calls with asynchronous ones and correctly managing the new thread's lifecycle. I have a couple of suggestions to enhance clarity and consistency in timeout handling and timestamping.

LOG(INFO) << "Set transfer timeout to:" << transfer_timeout_;
}
}
transfer_timeout_ = transfer_timeout_ * 1000000;

Severity: medium

This conversion to nanoseconds is correct for the timeout logic. However, the corresponding log message for transfer_timeout_ at line 200 is emitted before this conversion, showing the value in what seems to be milliseconds. This could be misleading for users debugging timeout issues. It would be clearer to log the timeout value after this conversion, and to include the units (e.g., 'ns') in the log message to avoid confusion.
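
Concretely, the suggestion amounts to something like the following sketch (the 'ns' label assumes the converted value is in nanoseconds, as the comment above states):

    transfer_timeout_ = transfer_timeout_ * 1000000;
    // Log after the conversion so the printed value is the one the timeout
    // check actually uses, and state the unit explicitly.
    LOG(INFO) << "Set transfer timeout to: " << transfer_timeout_ << " ns";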

Comment on lines +696 to +699
for (auto &slice : slice_list) {
slice->ascend_direct.start_time = getCurrentTimeInNano();
}

Severity: medium

For consistency, it's better to capture the start time once before the loop and assign the same value to all slices in the batch. The current implementation gives each slice a slightly different start time, while the timeout logic in queryThread only considers the start time of the first slice (slice_list[0]). Using a single timestamp for the whole batch will make the timeout calculation more accurate.

You could change it to something like this:

    auto start_time = getCurrentTimeInNano();
    for (auto &slice : slice_list) {
        slice->ascend_direct.start_time = start_time;
    }
