Address Gemini code review comments on PR #494 #496
Conversation
High Priority:
- Separate setup logic from measured operations in connection benchmarks
- Create shared connection once, reuse across all benchmark iterations
- Dramatically improved benchmark accuracy (2.4M ops/sec for ping vs 33K before)

Medium Priority:
- Lazily initialize suite results in track-performance.mjs to prevent empty entries
- Remove redundant test.include from vitest.bench.config.mjs
- Add bench:compare script to package.json to match README documentation

Updated baseline.json with new accurate performance measurements:
- Connection operations now measure actual send performance (not setup)
- Frame serialization remains consistent at 4M+ ops/sec
- Ping/pong operations: 2.4M / 1.9M ops/sec (previously 33K)
- Message sending: 900K / 220K / 108K ops/sec (previously 25-28K)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
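The lazy suite initialization mentioned in the commit message can be sketched as below. This is an illustrative pattern only; the names (`results`, `getSuite`, `recordBenchmark`) are hypothetical and not the actual `track-performance.mjs` API:

```javascript
// Suites are created on first use, so suites that never record a
// benchmark never produce empty entries in the results object.
const results = {};

function getSuite(name) {
  if (!results[name]) {
    results[name] = { benchmarks: [] };
  }
  return results[name];
}

function recordBenchmark(suiteName, benchName, hz) {
  getSuite(suiteName).benchmarks.push({ name: benchName, hz });
}

// Only suites that actually record a benchmark appear in results.
recordBenchmark('connection-operations', 'ping', 2_400_000);
console.log(Object.keys(results)); // [ 'connection-operations' ]
```

Eagerly pre-creating an entry for every known suite is what produces the empty entries the fix removes.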
Summary of Changes

Hello @theturtle32, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request addresses several code review comments from a previous performance benchmarking PR. The primary focus is on enhancing the accuracy of WebSocket connection operation benchmarks by isolating setup overhead from the actual measured operations. Additionally, it includes fixes for benchmark output parsing, simplification of the Vitest configuration, and an update to the package.json scripts to match the README documentation.
Code Review
This pull request effectively addresses the feedback from the previous review by refactoring the connection benchmarks to separate setup logic from the measured operations. This results in much more accurate performance metrics. The changes to the benchmark output parser and the Vitest configuration are also good improvements for robustness and simplicity. The code is well-structured and the changes are clearly explained. I have one suggestion to make the benchmark setup code more idiomatic.
// Pre-create connection for send operations (created once, reused across all iterations)
const sharedSocket = new MockSocket();
const sharedConnection = new WebSocketConnection(sharedSocket, [], 'echo-protocol', false, {});
sharedConnection._addSocketEventListeners();
sharedConnection.state = 'open';
While the current setup works, using a beforeAll hook is more idiomatic for setting up state that is shared across all benchmarks in a suite. This makes the setup logic more explicit and aligns with common testing practices in frameworks like Vitest.
You could refactor this part like so:
import { bench, describe, beforeAll } from 'vitest';
// ...
describe('WebSocketConnection Performance', () => {
// ... (message and buffer allocations)
let sharedConnection;
beforeAll(() => {
// Pre-create connection for send operations (created once, reused across all iterations)
const sharedSocket = new MockSocket();
sharedConnection = new WebSocketConnection(sharedSocket, [], 'echo-protocol', false, {});
sharedConnection._addSocketEventListeners();
sharedConnection.state = 'open';
});
// ... (benchmarks using sharedConnection)
});

This change would require importing beforeAll from vitest.
Vitest's benchmark runner doesn't execute hooks (beforeAll/beforeEach) before benchmarks the same way test() does. Direct initialization at module scope ensures the shared connection is available when the benchmarks run. I attempted beforeAll(), but it resulted in 0 hz for all benchmarks using sharedConnection, indicating the hook wasn't executed before the benchmark iterations.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
Summary
Addresses all code review comments from Gemini on PR #494 (performance benchmarking).
Changes
High Priority ✅
Separate setup from measured operations in connection benchmarks
Medium Priority ✅
Fix benchmark output parsing
Simplify vitest benchmark config
Removed redundant `test.include` property, keeping `test.benchmark.include` as recommended

Fix README documentation

Added `bench:compare` script to `package.json` to match the documented command

Performance Results
Benchmarks now show accurate performance (without setup overhead):

Before (with setup included):
- Ping/pong: ~33K ops/sec
- Message sending: 25-28K ops/sec

After (setup separated):
- Ping/pong: 2.4M / 1.9M ops/sec
- Message sending: 900K / 220K / 108K ops/sec
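A baseline comparison along the lines of what `bench:compare` is documented to do might look like the sketch below. The baseline shape, `tolerance` threshold, and function name are assumptions, not the real `baseline.json` format:

```javascript
// Flag any benchmark whose current throughput falls more than
// (1 - tolerance) below its recorded baseline.
function compareToBaseline(current, baseline, tolerance = 0.9) {
  const regressions = [];
  for (const [name, baseHz] of Object.entries(baseline)) {
    const curHz = current[name];
    if (curHz !== undefined && curHz < baseHz * tolerance) {
      regressions.push({ name, baseHz, curHz });
    }
  }
  return regressions;
}

const baseline = { ping: 2_400_000, pong: 1_900_000 };
const current  = { ping: 2_500_000, pong: 1_500_000 };
const regressions = compareToBaseline(current, baseline);
console.log(regressions); // pong dropped more than 10% below baseline
```

Comparing against a committed baseline like this only stays meaningful once setup overhead is excluded, since otherwise run-to-run setup noise would trip the threshold.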
Files Changed
- `test/benchmark/connection-operations.bench.mjs` - Refactored to separate setup
- `test/benchmark/track-performance.mjs` - Lazy suite initialization
- `vitest.bench.config.mjs` - Removed redundant include
- `package.json` - Added `bench:compare` script
- `test/benchmark/baseline.json` - Updated with accurate measurements

Test Plan
- Ran `pnpm run bench` successfully
- Verified `pnpm run bench:compare` works
- Ran `pnpm lint:fix` with no errors

🤖 Generated with Claude Code