refactor(mcp-annotations): migrate client/server samples to annotation-based auto-registration #69
Merged
Conversation
refactor(mcp-annotations): migrate client/server samples to annotation-based auto-registration

- Removed the explicit dependency on from the client and server modules; they now rely on auto-registration via Spring AI MCP core.
- Deleted the manual customizer/configuration classes (, ) from the client.
- Refactored the handler/provider classes: renamed and moved them to reflect annotation-based usage.
- Simplified the application classes to remove explicit bean registration for MCP handlers/providers.
- Updated the documentation to describe the new annotation-driven approach and project structure, and removed boilerplate.
- Adjusted properties and configuration for clarity and to match the new structure.
- Ensured the proper clients parameter is set on all client MCP annotations.
- Updated the READMEs.

This refactor streamlines the MCP annotation samples, leveraging automatic handler and tool registration for reduced boilerplate and improved maintainability.

Signed-off-by: Christian Tzolov <[email protected]>
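The real Spring AI MCP annotations and their auto-registration live in the framework itself; the self-contained sketch below only illustrates the mechanism this PR adopts: bean methods marked with a tool annotation are discovered by scanning, instead of being hand-wired as specification beans in the application class. The annotation, class, and method names here (`McpTool`, `WeatherToolProvider`, `scanTools`) are hypothetical stand-ins, not the framework's actual API.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical stand-in for an MCP tool annotation; the real annotations
// are provided by Spring AI MCP and may have a different shape.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface McpTool {
    String name();
    String description() default "";
}

public class AutoRegistrationSketch {

    // Sample provider: its annotated methods become tools via scanning,
    // with no explicit specification/bean registration code.
    static class WeatherToolProvider {
        @McpTool(name = "get-temperature", description = "Temperature for a city")
        public String getTemperature(String city) {
            return "22C in " + city; // stub payload for the sketch
        }
    }

    // What "auto-registration" does conceptually: scan a bean for annotated
    // methods and build a name -> handler registry from them.
    static Map<String, Method> scanTools(Object bean) {
        Map<String, Method> tools = new LinkedHashMap<>();
        for (Method m : bean.getClass().getDeclaredMethods()) {
            McpTool tool = m.getAnnotation(McpTool.class);
            if (tool != null) {
                tools.put(tool.name(), m);
            }
        }
        return tools;
    }

    public static void main(String[] args) throws Exception {
        WeatherToolProvider provider = new WeatherToolProvider();
        Map<String, Method> registry = scanTools(provider);
        System.out.println(registry.keySet());
        System.out.println(registry.get("get-temperature").invoke(provider, "Paris"));
    }
}
```

In the Spring version of this idea, the scan happens in the framework's auto-configuration, which is why the PR can delete the manual customizer and bean-registration classes.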
Force-pushed from 4a458c1 to 49e5b90.
qcloop added a commit to qcloop/spring-ai-examples that referenced this pull request on Sep 3, 2025:
commit 7266b7a
Author: Christian Tzolov <[email protected]>
Date: Fri Aug 29 12:56:18 2025 +0200

refactor(mcp-annotations): migrate client/server samples to annotation-based auto-registration (spring-projects#69)

- Removed explicit dependency on from client and server modules; now rely on auto-registration via Spring AI MCP core.
- Deleted manual customizer/configuration classes (, ) from the client.
- Refactored handler/provider classes: renamed and moved to reflect annotation-based usage.
- Simplified application classes to remove explicit bean registration for MCP handlers/providers.
- Updated documentation to describe the new annotation-driven approach and project structure, and removed boilerplate.
- Adjusted properties and configuration for clarity and to match the new structure.
- Ensured the proper clients parameter is set on all client MCP annotations.
- Updated READMEs.

This refactor streamlines the MCP annotation samples, leveraging automatic handler and tool registration for reduced boilerplate and improved maintainability.

Signed-off-by: Christian Tzolov <[email protected]>

commit 07c7f8a
Author: Christian Tzolov <[email protected]>
Date: Mon Aug 25 13:36:53 2025 +0200

refactor: rename SyncMcpAnnotationProvider to SyncMcpAnnotationProviders and simplify method names

- Rename SyncMcpAnnotationProvider to SyncMcpAnnotationProviders (plural form)
- Simplify method names by removing the 'createSync' prefix and 'Specifications' suffix:
  - createSyncLoggingSpecifications() → loggingSpecifications()
  - createSyncSamplingSpecifications() → samplingSpecifications()
  - createSyncElicitationSpecifications() → elicitationSpecifications()
  - createSyncProgressSpecifications() → progressSpecifications()
  - createSyncResourceSpecifications() → resourceSpecifications()
  - createSyncPromptSpecifications() → promptSpecifications()
  - createSyncCompleteSpecifications() → completeSpecifications()
  - createSyncToolSpecifications() → toolSpecifications()
- Minor import reordering in McpServerApplication.java

This improves API clarity: the plural class name better reflects the class's role as a provider of multiple specifications, and the method names are more concise.
Signed-off-by: Christian Tzolov <[email protected]>

commit 54aa77c
Author: Christian Tzolov <[email protected]>
Date: Sat Aug 23 21:01:33 2025 +0200

improve mcp-annotations and sampling examples

Signed-off-by: Christian Tzolov <[email protected]>

commit 934b25d
Author: Christian Tzolov <[email protected]>
Date: Sat Aug 23 10:16:17 2025 +0200

minor

Signed-off-by: Christian Tzolov <[email protected]>

commit 82e0092
Author: Christian Tzolov <[email protected]>
Date: Fri Aug 22 22:15:43 2025 +0200

feat: Add MCP annotations client and refactor server structure

- Add new mcp-annotations-client project with comprehensive MCP client handlers
- Reorganize mcp-annotations-server into structured provider packages
- Add McpToolProvider2 with sampling, elicitation, and progress capabilities
- Upgrade Spring AI version from 1.0.1 to 1.1.0-SNAPSHOT across projects
- Add support for STREAMABLE and STATELESS MCP server protocols
- Include new StreamableHttp transport client examples
- Restructure project layout under mcp-annotations parent directory

Signed-off-by: Christian Tzolov <[email protected]>

commit f1450cf
Author: Mark Pollack <[email protected]>
Date: Thu Aug 7 10:14:30 2025 -0400

fix: replace third-party JBang action with manual curl installation

- Remove jbangdev/setup-jbang action due to enterprise policy restrictions
- Install JBang manually using curl after Java setup
- Add JBang to PATH and verify installation
- Maintain same functionality with enterprise-compliant approach

commit a27d36e
Author: Mark Pollack <[email protected]>
Date: Thu Aug 7 10:01:25 2025 -0400

feat: expand CI to test all 17 passing examples

- Update GitHub Actions workflow to run all 17 known passing tests
- Increase timeout from 30 to 60 minutes for full test suite
- Organize tests into logical groups (simple, agents, patterns, MCP)
- Add progress indicators showing test count [n/17]
- Update summary to list all 17 tests being executed
- Complete Phase 3a documentation in CI plan

commit 4276bff
Author: Mark Pollack <[email protected]>
Date: Thu Aug 7 08:35:49 2025 -0400

docs: update CI plan and TODO with test findings

- Added AI validation false negative issue to TODO
- Added MCP client/server investigation items to TODO
- Updated GitHub Actions to only run known passing tests
- Documented 7 excluded tests with reasons for exclusion
- Updated plan to reflect test selection strategy

commit 125ae0a
Author: Mark Pollack <[email protected]>
Date: Wed Aug 6 22:09:38 2025 -0400

test: complete Spring AI 1.0.1 testing and comparison

- Tested all 24 modules with Spring AI 1.0.1
- Results: 14 passed (58.3%), 7 failed (29.2%), 3 timeout (12.5%)
- Found 1 regression: kotlin/kotlin-hello-world fails in 1.0.1
- All other test results consistent between 1.0.0 and 1.0.1
- Created comprehensive test documentation and comparison

commit bc6e8a2
Author: Mark Pollack <[email protected]>
Date: Wed Aug 6 19:58:38 2025 -0400

test: establish Spring AI 1.0.0 baseline for integration tests

- Tested all 24 integration test modules with Spring AI 1.0.0
- Results: 15 passed (62.5%), 6 failed (25%), 3 timeout (12.5%)
- Fixed kotlin/rag-with-kotlin Docker Compose path issue
- Documented failing tests and known issues
- Added test result documentation for baseline comparison

commit 3a5242f
Author: Mark Pollack <[email protected]>
Date: Wed Aug 6 16:13:27 2025 -0400

feat: improve version management to handle all patterns

- Update script to handle both the spring-ai.version property and direct BOM versions
- Skip backup directories to avoid updating old backups
- Now updates all 32 modules (was only 17 before)
- Add TODO documenting version management gaps
- Insert Phase 3a for local testing with Spring AI 1.0.1
- All modules now updated to Spring AI 1.0.1 for testing

commit 313a124
Author: Mark Pollack <[email protected]>
Date: Wed Aug 6 15:52:57 2025 -0400

docs: complete Phase 3 documentation and review Phase 4

- Create comprehensive GitHub Actions setup documentation
- Document CI-specific issues and limitations
- Document version management integration success
- Update plan to mark Phase 3 documentation complete
- Review and update Phase 4 to include all Phase 3b improvements
- Ready to proceed with Phase 4 implementation

commit 607287a
Author: Mark Pollack <[email protected]>
Date: Wed Aug 6 13:18:55 2025 -0400

fix: use npm to install Claude Code CLI and update plan

- Install Claude Code CLI via npm (working solution)
- Add TODO for future optimization with anthropics/claude-code-action
- Update plan checkboxes to reflect completed phases
- Phases 1, 2, 3, and 3b are now complete
- Ready for final testing with ANTHROPIC_API_KEY

commit edfb0c5
Author: Mark Pollack <[email protected]>
Date: Wed Aug 6 13:14:51 2025 -0400

feat: add Claude Code CLI for AI validation support

- Install Claude Code CLI in GitHub Actions workflow
- Configure ANTHROPIC_API_KEY environment variable
- Re-enable AI validation in test configurations
- Update documentation to reflect that AI validation is supported
- AI validation provides intelligent test validation beyond regex

commit 7b5fce7
Author: Mark Pollack <[email protected]>
Date: Wed Aug 6 13:11:47 2025 -0400

feat: optimize workflow with official actions and caching

- Replace manual JBang install with official jbangdev/setup-jbang action
- Add Python setup for future Python tooling needs
- Add JBang dependency caching to improve performance
- Document that AI validation is local-only (requires Claude CLI)
- Add Phase 3b to implementation plan
- Create learnings document for workflow optimizations

commit 91f4a3b
Author: Mark Pollack <[email protected]>
Date: Wed Aug 6 13:05:37 2025 -0400

fix: run all tests regardless of failures and disable AI validation

- Update workflow to continue running all 3 tests even if one fails
- Track test results and fail at the end if any test failed
- Disable AI validation temporarily (requires Python setup)
- Ensure all tests are attempted before the workflow fails

commit c0c7494
Author: Mark Pollack <[email protected]>
Date: Wed Aug 6 12:57:18 2025 -0400

fix: add JBang installation and proper exit codes

- Add JBang installation step to GitHub Actions workflow
- Fix run-integration-tests.sh to exit with code 1 on failures
- Ensure GitHub Actions properly reports test failures

commit a5dc486
Author: Mark Pollack <[email protected]>
Date: Wed Aug 6 12:47:36 2025 -0400

docs: add TODO for existing workflow migration

- Document the existing disabled workflow at .github/workflows/integration-tests.yml
- Note migration considerations and decision points
- Plan to revisit after Phase 5 (Automated Triggers)

commit 4c95a95
Author: Mark Pollack <[email protected]>
Date: Wed Aug 6 12:41:35 2025 -0400

feat: add manual GitHub Actions workflow for integration tests

- Create integration-tests-manual.yml with manual trigger
- Support configurable Spring AI version (default 1.0.1)
- Test 3 representative examples across different categories
- Add optional test filter for specific test execution
- Include JDK 17 setup with Maven caching
- Configure OPENAI_API_KEY environment variables
- Add test log artifacts and GitHub job summary
- Create Phase 3 learnings document

commit 5979a76
Author: Mark Pollack <[email protected]>
Date: Wed Aug 6 12:34:50 2025 -0400

feat: add Spring AI version management system

- Create update-spring-ai-version.sh to manage versions across 17 pom files
- Add restore-spring-ai-version.sh for rollback capability
- Add check-spring-ai-version.sh to verify version consistency
- Include automatic backup on update with timestamped directories
- Validate version format (X.Y.Z or X.Y.Z-SNAPSHOT)
- Test both 1.0.1 and 1.1.0-SNAPSHOT versions successfully
- Update Phase 2 checkboxes and create learnings document

commit d1c3c2a
Author: Mark Pollack <[email protected]>
Date: Wed Aug 6 12:28:07 2025 -0400

refactor: rename integration test runner to run-integration-tests.sh

- Rename rit-direct.sh to run-integration-tests.sh for better clarity
- Update all documentation references (7 files)
- Update log file naming pattern to match new script name
- Add commit points to each phase in GitHub Actions plan
- Create Phase 1 learnings document

commit 1d54f94
Author: Mark Pollack <[email protected]>
Date: Wed Aug 6 12:21:23 2025 -0400

docs: add GitHub Actions CI/CD implementation plan with multi-version support

Create comprehensive 9-phase plan for GitHub Actions integration testing:
- Prioritize Spring AI 1.0.1 release testing for announcement
- Support multi-version testing (1.0.x, 1.1.x branches)
- Include reference workflow from Spring AI repository
- Structure phases for systematic implementation with checkboxes

commit 34b21dc
Author: Mark Pollack <[email protected]>
Date: Wed Aug 6 11:45:09 2025 -0400

refactor: streamline integration testing framework structure

Remove obsolete development artifacts and consolidate documentation to present integration testing as a focused, low-profile addition.
- Remove 11 obsolete scripts (run_integration_tests.py, rit.sh, etc.)
- Remove 5 historical phase learning documents
- Remove 3 legacy run scripts and backup files
- Consolidate critical learnings into implementation-summary.md
- Update documentation to highlight 2 primary tools
- Add .claude/ to .gitignore following best practices
- Move TODO.txt to integration-testing/ directory

Result: the 97%-coverage framework now appears clean and production-ready with only essential components while preserving all operational knowledge.

commit a0aa09b
Author: Mark Pollack <[email protected]>
Date: Sat Aug 2 18:40:25 2025 -0400

docs: Remove obsolete integration-test-results.md from root directory

This file contained outdated test results from July 30 (only 3 tests) and is no longer relevant with the current integration testing framework, which has 97% coverage (32/33 examples) and AI validation capabilities. The current framework generates reports via the --report flag to user-specified locations, not hardcoded filenames in the root directory.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>

commit 12fccda
Author: Mark Pollack <[email protected]>
Date: Sat Aug 2 18:24:31 2025 -0400

refactor: Reorganize documentation structure by moving plans and learnings under integration-testing

Move the plans/ and learnings/ directories under integration-testing/ to isolate framework documentation from example source code directories.

Changes:
- Move plans/ → integration-testing/plans/
- Move learnings/ → integration-testing/learnings/
- Update documentation cross-references to new paths
- Update troubleshooting guide template references
- Update phase insights internal references

This consolidates all integration testing framework documentation under a single directory while maintaining clean separation from Spring AI example source code.

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>

commit 409d0f8
Author: Mark Pollack <[email protected]>
Date: Sat Aug 2 18:13:41 2025 -0400

feat: Complete integration testing framework with AI validation system

Achieve 97% coverage (32/33 examples) with AI-powered validation capabilities.

Key achievements:
- AI validation system for interactive applications using Claude Code CLI
- Interactive application testing breakthrough (Scanner-based apps now testable)
- 84% code reduction through centralized JBang utilities
- Comprehensive documentation with templates and troubleshooting guide
- Production-ready framework with ~92% test reliability

Technical innovations:
- Multi-mode validation: Primary/Hybrid/Fallback for different application types
- Cost-efficient AI validation at ~$0.002 per test
- Automatic exit code handling for interactive applications
- Template-driven configuration for 5 application categories
- Complete Phase 4 knowledge synthesis and documentation

Coverage evolution: 0% → 97% through systematic 4-phase implementation
Framework status: production-ready for Spring AI quality assurance

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>

commit 99266e1
Author: Mark Pollack <[email protected]>
Date: Sat Aug 2 12:28:16 2025 -0400

feat: Complete Phase 6 - AI Validation System Production Ready

Key achievements:

1. MCP Client Testing Success
- Successfully tested complex MCP client-server interactions
- AI validation achieved 0.85 confidence with detailed analysis
- Validated distributed system components (client, MCP servers, tool discovery)
- Demonstrated AI validation superiority over regex for protocol validation

2. Enhanced Scaffolding Tool
- Added comprehensive AI validation configuration templates
- Implemented --ai-mode and --no-ai-validation options
- Smart template selection based on complexity (simple/complex/mcp)
- Enhanced help and guidance for different validation approaches

3. Complete Documentation Suite
- README.md: added comprehensive AI validation section with practical examples
- AI_VALIDATION.md: created detailed 200+ line guide covering all aspects
- CLAUDE.md: updated with AI validation overview and integration examples

4. Production-Ready System
- Proven reliability with 85-100% confidence scores
- Cost efficiency: ~400 tokens per validation with high cache utilization
- High accuracy with detailed reasoning for pass/fail decisions
- Complete tooling with scaffolding, documentation, and integration support

AI validation capabilities:
- Context understanding: uses README documentation for intelligent validation
- Multi-component systems: validates distributed examples holistically
- Unpredictable content: handles AI-generated jokes, conversations, creative outputs
- Complex workflows: analyzes multi-step processes and agentic patterns
- Cost transparency: complete token-level cost tracking and reporting

Integration features:
- Three validation modes: Primary (AI-only), Hybrid (AI+regex), Fallback (regex-first)
- Four prompt templates specialized for different example types
- Smart configuration: auto-generates appropriate settings based on complexity
- Backward compatibility: existing regex-only tests continue to work

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>

commit 7461617
Author: Mark Pollack <[email protected]>
Date: Fri Aug 1 15:33:25 2025 -0400

feat: Complete Phase 6 complex workflow validation testing

Successfully validates sophisticated AI workflows that are impossible to assess with regex patterns, demonstrating AI validation's advantage for multi-step reasoning and iterative refinement processes.

Phase 6 achievements:

Chain-Workflow validation (0.95 confidence):
- 4-step sequential data transformation pipeline validated
- Verified data flow integrity between processing steps
- Confirmed proper markdown table output format
- Assessed logical consistency of transformations
- Cost: 431 tokens, 13.30 seconds

Evaluator-Optimizer validation (0.95 confidence):
- Dual-LLM iterative refinement process validated
- Generator/evaluator interaction verified working
- Quality improvement feedback loop confirmed functional
- Final solution quality assessment validated
- Cost: 484 tokens, 13.63 seconds

Key insights proven:

Complex AI workflows require AI validation; regex cannot assess:
- Sequential step execution and data flow
- Iterative refinement processes
- Quality improvement verification
- Multi-component AI system interactions

Cost efficiency maintained for complex workflows:
- Predictable token usage (~430-480 tokens)
- High cache utilization continues
- Performance within acceptable range

High-accuracy validation:
- 95% confidence on complex multi-step processes
- Detailed reasoning for validation decisions
- Clear identification of workflow success/failure

Next phase: client-server examples, failure scenarios, documentation

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>

commit 5e068d3
Author: Mark Pollack <[email protected]>
Date: Fri Aug 1 15:26:49 2025 -0400

feat: Complete Phase 5 - Add comprehensive cost reporting and analysis

Implements full cost tracking and analysis for the AI validation system with strong efficiency metrics and comprehensive test coverage.

Phase 5 achievements:

Cost reporting implementation:
- Enhanced claude_code_wrapper.py to extract token usage, cache metrics, and duration
- Added detailed cost information display in the Java integration
- JSON output includes comprehensive cost_info with all metrics
- Real-time cost tracking during validation execution

Kotlin test migration:
- Updated kotlin-hello-world with AI validation configuration
- Specialized validation for structured output (setup/punchline validation)
- Successfully tested with 0.90 confidence and detailed reasoning
- Validates joke quality and structure beyond regex patterns

rit-direct.sh integration:
- Full integration test suite runs with AI validation enabled
- Cost information properly logged in individual and summary logs
- Comprehensive test coverage across Java and Kotlin examples
- All tests passing with high confidence scores

Cost analysis results:
- Input tokens: ~11 (template efficiency)
- Output tokens: 380-430 (comprehensive analysis)
- Total tokens: 390-440 (very reasonable per validation)
- Duration: 12-15 seconds (better than the 30s target)
- Cache efficiency: 10-25x read ratio, substantial cost savings
- Confidence: 85-100% with detailed reasoning

Performance metrics achieved:
- Zero false positives in all tested examples
- High-accuracy validation with detailed explanations
- Excellent cost efficiency with predictable token usage
- Template-based system proven extensible and maintainable

System status: production-ready with comprehensive cost monitoring
Next phase: broader rollout and complex workflow testing

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>

commit a99be01
Author: Mark Pollack <[email protected]>
Date: Fri Aug 1 15:10:44 2025 -0400

feat: Add AI-powered validation system for Spring AI examples

Implements intelligent validation using Claude Code to assess example applications beyond simple pattern matching. The system can handle unpredictable AI responses, complex workflows, and multi-component examples.

Key features:
- AI validation with confidence scoring and detailed reasoning
- Three validation modes: primary (AI-only), hybrid (both), fallback (AI if regex fails)
- Quiet mode for clean JSON output without logging interference
- Path resolution works from any module depth
- Template-based prompts for different example types

Infrastructure:
- Python validator script with Claude Code integration
- Extended Java ExampleInfo record with AIValidationConfig
- Template system for chat, workflow, and client-server examples
- Cost tracking integration ready

Testing:
- Successfully validates the helloworld chat example
- AI correctly identifies successful conversation flow and joke quality
- End-to-end integration tested and working

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>

commit 3c8e3f8
Author: Mark Pollack <[email protected]>
Date: Fri Aug 1 14:24:57 2025 -0400

feat: Complete Phase 3a.6 - JBang Utility Centralization

Eliminated code duplication across all 18 JBang integration test scripts by creating a centralized utility pattern. This major refactoring improves maintainability and consistency across the test suite.

Key changes:
- Created IntegrationTestUtils.java with all common test functionality
- Refactored all 18 JBang scripts to use centralized utilities
- Reduced each script from ~110-130 lines to ~18 lines (84% reduction)
- Achieved 0% code duplication across test scripts
- Fixed path resolution issues for modules at different directory depths

Infrastructure improvements:
- Updated scaffold_integration_test.py to generate the new pattern
- Created comprehensive pattern documentation (JBANG_PATTERN.md)
- Added learning review document for Phase 3a.6
- Updated all README files to reflect the new architecture

This completes Phase 3a (Critical Infrastructure Improvements) with all 6 sub-phases successfully implemented.

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>

commit 2394178
Author: Mark Pollack <[email protected]>
Date: Fri Aug 1 13:47:46 2025 -0400

docs: Update integration testing plan to reflect completed logging improvements

- Added Phase 3a.5 documenting enhanced integration test logging work
- All 18 JBang scripts now show full raw output (no filtering)
- Log paths normalized to remove ../../../ components
- Enhanced rit-direct.sh with --clean-logs and test filtering options
- Updated progress metrics: 18/33 examples have tests (55% complete)
- Added success metrics for log visibility (100%) and false positives (0%)
- Updated developer commands to highlight rit-direct.sh as the recommended runner
- Marked logging infrastructure enhancement as completed in future work

The plan now accurately reflects the significant improvements made to test output visibility and debugging capabilities.

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>

commit b21b6f4
Author: Mark Pollack <[email protected]>
Date: Fri Aug 1 13:43:22 2025 -0400

feat: Improve integration test logging - show full raw output

- Updated all JBang integration test scripts to display full raw application output
- Removed filtered/truncated output in favor of complete logs for better debugging
- Added .normalize() to all log file paths to remove ../../../ relative components
- Enhanced rit-direct.sh with:
  - --clean-logs option to clear all logs before testing
  - test filtering capability to run specific tests (e.g., ./rit-direct.sh helloworld)
- Fixed log cleanup to use correct directory paths
- Consistent logging behavior across all 18 integration test scripts

This ensures complete visibility into test execution and simplifies debugging by preserving all application output without any filtering or truncation.

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>
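The version-management commits above describe scripts (update-spring-ai-version.sh and friends) that bump the Spring AI version across many pom files, with timestamped backups and version-format validation. A minimal sketch of that approach, assuming the modules declare a `<spring-ai.version>` Maven property; the script name and details here are illustrative, not the repository's actual implementation:

```shell
#!/usr/bin/env bash
# Sketch of a Spring AI version bump across all pom.xml files.
# Assumptions: modules use a <spring-ai.version> property; backups go to a
# timestamped directory. The real scripts in the repo may differ.
set -euo pipefail

NEW_VERSION="${1:-1.0.1}"

# Validate X.Y.Z or X.Y.Z-SNAPSHOT, as the commit log describes.
if ! [[ "$NEW_VERSION" =~ ^[0-9]+\.[0-9]+\.[0-9]+(-SNAPSHOT)?$ ]]; then
  echo "invalid version: $NEW_VERSION" >&2
  exit 1
fi

backup="backup-$(date +%Y%m%d-%H%M%S)"
mkdir -p "$backup"

# Skip backup directories so old copies are not rewritten.
find . -name pom.xml -not -path "*/backup-*/*" | while read -r pom; do
  cp "$pom" "$backup/$(echo "$pom" | tr '/' '_')"
  sed -i.bak -E \
    "s|(<spring-ai.version>)[^<]+(</spring-ai.version>)|\1${NEW_VERSION}\2|" \
    "$pom"
  rm -f "$pom.bak"
done
echo "Updated pom.xml files to Spring AI ${NEW_VERSION}"
```

A companion restore script would simply copy the backed-up poms back into place, which is the rollback capability the commit log mentions.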