
Conversation

@theturtle32
Owner

Summary

Addresses all code review comments from Gemini on PR #494 (performance benchmarking).

Changes

High Priority ✅

Separate setup from measured operations in connection benchmarks

  • Created shared connection instance once, reused across all benchmark iterations
  • Benchmark now measures actual operation performance, not setup overhead
  • Results: 70-80x higher measured ops/sec now that only the operation itself is timed (see the sketch below)
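
A minimal sketch of the refactor, reusing the MockSocket and WebSocketConnection calls shown in the review snippet further down; import paths and the 'ping' benchmark name are illustrative:

import { bench, describe } from 'vitest';
import WebSocketConnection from '../../lib/WebSocketConnection.js'; // illustrative path
import { MockSocket } from './mock-socket.mjs'; // illustrative path

// Created once at module scope and reused across iterations, so each
// bench callback measures only the operation itself, not construction.
const sharedConnection = new WebSocketConnection(new MockSocket(), [], 'echo-protocol', false, {});
sharedConnection._addSocketEventListeners();
sharedConnection.state = 'open';

describe('WebSocketConnection Performance', () => {
  // Previously the connection was constructed inside this callback, so the
  // measured time was dominated by setup rather than the ping itself.
  bench('ping', () => {
    sharedConnection.ping();
  });
});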

Medium Priority ✅

Fix benchmark output parsing

  • Lazily initialize suite results to prevent empty entries in baseline.json
  • Only create the suite object when the first benchmark result is found (sketched below)
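
A hypothetical sketch of the lazy-initialization pattern; the regexes and output format here are assumptions, and the real parser in test/benchmark/track-performance.mjs may differ:

// Suite entries are created only when the first benchmark result for that
// suite is seen, so suites with no results never produce empty objects
// in baseline.json.
function parseBenchOutput(lines) {
  const results = {};
  let suiteName = null;
  let suite = null; // deferred: not created on the suite header line

  for (const line of lines) {
    const suiteMatch = line.match(/^✓\s+(\S+\.bench\.mjs)/); // assumed format
    if (suiteMatch) {
      suiteName = suiteMatch[1];
      suite = null; // reset; created lazily below
      continue;
    }
    const benchMatch = line.match(/·\s+(.+?)\s+([\d,.]+)\s+ops\/sec/); // assumed format
    if (benchMatch && suiteName) {
      if (!suite) {
        suite = results[suiteName] = {}; // first result: create the entry now
      }
      suite[benchMatch[1]] = Number(benchMatch[2].replace(/,/g, ''));
    }
  }
  return results;
}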

Simplify vitest benchmark config

  • Removed redundant test.include property
  • Kept only test.benchmark.include, as recommended (see the config sketch below)
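
A sketch of the simplified config; the glob pattern is an assumption based on the file names listed under "Files Changed" below:

// vitest.bench.config.mjs — only the benchmark include remains.
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    benchmark: {
      // `vitest bench` consults this glob; the top-level test.include
      // that duplicated it was removed.
      include: ['test/benchmark/**/*.bench.mjs'],
    },
  },
});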

Fix README documentation

  • Added a bench:compare script to package.json to match the command documented in the README (illustrated below)
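
Roughly, in package.json — both command strings are illustrative guesses, not the repository's actual scripts:

{
  "scripts": {
    "bench": "vitest bench --run",
    "bench:compare": "node test/benchmark/track-performance.mjs --compare"
  }
}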

Performance Results

Benchmarks now show accurate performance (without setup overhead):

Before (with setup included):

  • Ping/Pong: 33-34K ops/sec
  • UTF-8 messages: 25-28K ops/sec
  • Connection creation: 30K ops/sec

After (setup separated):

  • Ping: 2.4M ops/sec ⚡
  • Pong: 1.9M ops/sec ⚡
  • Small UTF-8: 914K ops/sec ⚡
  • Binary (1KB): 223K ops/sec ⚡
  • Medium UTF-8 (1KB): 108K ops/sec ⚡
  • Connection creation: 29K ops/sec (unchanged, correctly measures creation)

Files Changed

  • test/benchmark/connection-operations.bench.mjs - Refactored to separate setup
  • test/benchmark/track-performance.mjs - Lazy suite initialization
  • vitest.bench.config.mjs - Removed redundant include
  • package.json - Added bench:compare script
  • test/benchmark/baseline.json - Updated with accurate measurements

Test Plan

  • Ran pnpm run bench successfully
  • Verified baseline.json has no empty suite entries
  • Confirmed pnpm run bench:compare works
  • Ran pnpm lint:fix with no errors

🤖 Generated with Claude Code

High Priority:
- Separate setup logic from measured operations in connection benchmarks
- Create shared connection once, reuse across all benchmark iterations
- Dramatically improved benchmark accuracy (2.4M ops/sec for ping vs 33K before)

Medium Priority:
- Lazily initialize suite results in track-performance.mjs to prevent empty entries
- Remove redundant test.include from vitest.bench.config.mjs
- Add bench:compare script to package.json to match README documentation

Updated baseline.json with new accurate performance measurements:
- Connection operations now measure actual send performance (not setup)
- Frame serialization remains consistent at 4M+ ops/sec
- Ping/pong operations: 2.4M / 1.9M ops/sec (previously 33K)
- Message sending: 900K / 220K / 108K ops/sec (previously 25-28K)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
@gemini-code-assist

Summary of Changes

Hello @theturtle32, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses several code review comments from a previous performance benchmarking PR. The primary focus is on enhancing the accuracy of WebSocket connection operation benchmarks by isolating setup overhead from the actual measured operations. Additionally, it includes fixes for benchmark output parsing, simplification of the Vitest configuration, and an update to the package.json scripts to improve consistency with documentation. These changes ensure that performance metrics are more reliable and reflective of true operational speed.

Highlights

  • Accurate Benchmarking: Refactored connection operation benchmarks to separate setup from measured operations, leading to significantly more accurate performance metrics (70-80x improvement in ops/sec for some operations).
  • Benchmark Output Fix: Implemented lazy initialization for benchmark suite results to prevent empty entries in baseline.json.
  • Simplified Vitest Config: Removed redundant test.include property from the Vitest benchmark configuration.
  • Documentation Update: Added the bench:compare script to package.json to align with documented commands.

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request effectively addresses the feedback from the previous review by refactoring the connection benchmarks to separate setup logic from the measured operations. This results in much more accurate performance metrics. The changes to the benchmark output parser and the Vitest configuration are also good improvements for robustness and simplicity. The code is well-structured and the changes are clearly explained. I have one suggestion to make the benchmark setup code more idiomatic.

Comment on lines 11 to 15
// Pre-create connection for send operations (created once, reused across all iterations)
const sharedSocket = new MockSocket();
const sharedConnection = new WebSocketConnection(sharedSocket, [], 'echo-protocol', false, {});
sharedConnection._addSocketEventListeners();
sharedConnection.state = 'open';


medium

While the current setup works, using a beforeAll hook is more idiomatic for setting up state that is shared across all benchmarks in a suite. This makes the setup logic more explicit and aligns with common testing practices in frameworks like Vitest.

You could refactor this part like so:

import { bench, describe, beforeAll } from 'vitest';
// ...

describe('WebSocketConnection Performance', () => {
  // ... (message and buffer allocations)

  let sharedConnection;

  beforeAll(() => {
    // Pre-create connection for send operations (created once, reused across all iterations)
    const sharedSocket = new MockSocket();
    sharedConnection = new WebSocketConnection(sharedSocket, [], 'echo-protocol', false, {});
    sharedConnection._addSocketEventListeners();
    sharedConnection.state = 'open';
  });

  // ... (benchmarks using sharedConnection)
});

This change would require importing beforeAll from vitest.

Vitest's benchmark runner doesn't execute hooks (beforeAll/beforeEach) before benchmarks in the same way as test(), so direct initialization at module scope ensures the shared connection is available when the benchmarks run.

Attempted using beforeAll(), but it resulted in 0 hz for all benchmarks using sharedConnection, indicating the hook wasn't executed before the benchmark iterations.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
@theturtle32 theturtle32 merged commit f19a878 into v2 Oct 6, 2025
4 checks passed
@theturtle32 theturtle32 deleted the fix-pr494-review-comments branch October 6, 2025 17:33