
Agent Integrations Guide

This guide covers all workflow automation agents available for MCP Memory Service development. These agents leverage external LLMs (Gemini CLI, Groq API) and tools (Amp CLI, GitHub CLI) to automate repetitive tasks, improve code quality, and accelerate development.

Quick Stats: 5 integrated agents | 10-30 min saved per PR | 10x faster LLM inference | Zero-credit research workflows


Table of Contents

  1. Overview - Agent comparison and selection
  2. GitHub Release Manager - Complete release workflow automation
  3. Code Quality Guard - Fast complexity and security analysis
  4. Gemini PR Automator - Automated PR review cycles
  5. Amp CLI Bridge - Credit-free research workflows
  6. Context Provider Integration - Intelligent memory management
  7. Troubleshooting - Common issues and solutions

Overview

Agent Comparison Matrix

| Agent | Tool | Primary Use | Time Savings | Priority |
|-------|------|-------------|--------------|----------|
| GitHub Release Manager | GitHub CLI | Complete release workflow | 15-20 min/release | Production |
| Code Quality Guard | Groq/Gemini | Pre-commit quality checks | 5-10 min/commit | Active |
| Gemini PR Automator | Gemini CLI | Automated PR reviews | 10-30 min/PR | Active |
| Amp CLI Bridge | Amp CLI | Research without credits | N/A (credit-saving) | Production |
| Context Provider | MCP Memory | Intelligent memory triggers | Continuous | Production |

When to Use Each Agent

GitHub Release Manager:

  • ✅ After completing a feature or fix
  • ✅ When multiple commits are ready for release
  • ✅ For hotfix releases (critical bugs)
  • ✅ Automated version bumps and CHANGELOG updates

Code Quality Guard:

  • ✅ Before committing code (pre-commit hook)
  • ✅ During PR creation
  • ✅ When refactoring complex code
  • ✅ Security audits and complexity analysis

Gemini PR Automator:

  • ✅ After creating a PR
  • ✅ During review iterations
  • ✅ For automated test generation
  • ✅ Breaking change detection

Amp CLI Bridge:

  • ✅ Web research (documentation, Stack Overflow)
  • ✅ Codebase pattern analysis
  • ✅ Best practices research
  • ✅ Documentation generation

Context Provider:

  • ✅ Automatic (session initialization)
  • ✅ Project-specific memory management
  • ✅ Release workflow automation
  • ✅ Issue tracking and closure

GitHub Release Manager

Purpose: Automates the complete release workflow including version management, CHANGELOG updates, GitHub release creation, and issue tracking.

Features

Version Management:

  • Four-file procedure: __init__.py → pyproject.toml → README.md → uv lock (see the sketch below)
  • Semantic versioning (major.minor.patch)
  • Automatic version bump detection
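
A minimal sketch of the four-file bump, assuming GNU sed and the version-string formats currently used in this repo (the agent's actual implementation may differ):

# Illustrative four-file version bump (example versions; GNU sed syntax,
# on macOS use `sed -i ''`)
OLD="8.20.0"; NEW="8.20.1"
sed -i "s/__version__ = \"$OLD\"/__version__ = \"$NEW\"/" src/mcp_memory_service/__init__.py
sed -i "s/^version = \"$OLD\"/version = \"$NEW\"/" pyproject.toml
sed -i "s/v$OLD/v$NEW/g" README.md   # Latest Release section
uv lock                              # refresh dependency lock file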

CHANGELOG Management:

  • Format guidelines enforcement
  • Conflict resolution (combines PR entries)
  • Cross-references to GitHub releases

Issue Tracking:

  • Auto-detects "fixes #", "closes #", "resolves #" patterns (sketch below)
  • Suggests closures after release
  • Generates smart closing comments with PR links
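
The keyword detection can be approximated with GitHub CLI (a sketch, not the agent's code; PR number 123 is a placeholder):

# List issue references with closing keywords in a PR's body
gh pr view 123 --json body --jq .body \
  | grep -oiE '(fixes|closes|resolves) #[0-9]+'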

Release Procedure:

  • Merge → Tag → Push → Verify workflows (verification commands below)
  • Docker Publish validation
  • PyPI publish verification
  • HTTP-MCP Bridge health check
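
The verification step can also be run by hand with GitHub CLI (example tag; the agent automates these checks):

gh release view v8.20.1    # release exists with CHANGELOG excerpt
gh run list --limit 10     # check Docker Publish / PyPI publish runs succeeded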

Setup

Prerequisites:

# Install GitHub CLI
brew install gh  # macOS
# or: sudo apt install gh  # Linux

# Authenticate
gh auth login

# Verify
gh auth status

Agent Location: .claude/agents/github-release-manager.md

Usage

Proactive Mode (Recommended): The agent is configured to trigger automatically when you complete a feature. Just finish your work, and Claude Code will suggest creating a release.

Manual Invocation:

# Check if release is needed
@agent github-release-manager "Check if we need a release"

# Create specific version
@agent github-release-manager "Create release for v8.21.0"

# Hotfix release
@agent github-release-manager "Create hotfix release for database lock bug"

Workflow Example

Scenario: You've just merged a PR that fixes issue #123.

  1. Auto-Detection: Agent detects "fixes #123" in PR description
  2. Version Analysis: Determines patch bump needed (v8.20.0 → v8.20.1)
  3. File Updates:
    • src/mcp_memory_service/__init__.py → __version__ = "8.20.1"
    • pyproject.toml → version = "8.20.1"
    • README.md → Latest Release section updated
    • uv lock → Dependency lock file refreshed
  4. CHANGELOG Update:
    ## [8.20.1] - 2025-01-09
    ### Fixed
    - Database lock errors with concurrent HTTP + MCP access (#123)
  5. Git Operations:
    • Commit: chore: bump version to v8.20.1
    • Tag: git tag -a v8.20.1 -m "Fix database lock errors"
    • Push: git push && git push --tags
  6. GitHub Release:
    • Creates release on GitHub with CHANGELOG excerpt
    • Triggers workflows: Docker Publish, PyPI Publish, HTTP-MCP Bridge
  7. Issue Closure:
    • Posts comment: "Fixed in v8.20.1. See CHANGELOG for details."
    • Closes issue #123 automatically

Time: 8-10 minutes (vs 20-30 minutes manual)

Recent Success Stories

v8.20.1 Hotfix (8 minutes total):

  • Bug reported by user
  • Fixed in code
  • Agent executed full release workflow
  • User notified and issue closed
  • All workflows validated

Best Practices

Always Use for Releases:

  • ❌ Manual releases tend to miss README.md updates and CHANGELOG formatting
  • ✅ The agent updates all four files and creates a proper GitHub Release

Even for Simple Hotfixes:

  • Agent handles documentation you might forget
  • Consistent release notes format
  • Proper issue tracking and closure

Trust the Agent:

  • Manual v8.20.1 attempt: forgot README.md, incomplete GitHub Release
  • Agent-run v8.20.1: all files updated, comprehensive release notes

Troubleshooting

Issue: "No release needed" when you expect one

  • Check if commits since last tag exist: git log $(git describe --tags --abbrev=0)..HEAD
  • Verify CHANGELOG has unreleased entries

Issue: Version bump conflicts

  • Agent will detect if versions are out of sync
  • Follow four-file procedure manually if needed

Issue: GitHub CLI authentication expired

  • Re-authenticate: gh auth login
  • Verify: gh auth status

Code Quality Guard

Purpose: Fast automated code quality analysis using Groq API (primary) or Gemini CLI (fallback) for complexity scoring, security scanning, and refactoring suggestions.

Features

Complexity Analysis:

  • Function-level complexity scoring (1-10 scale)
  • Blocks commits with complexity >8
  • Warns on complexity >7
  • Suggests refactoring strategies

Security Scanning:

  • SQL injection detection
  • XSS vulnerability patterns
  • Command injection risks
  • OWASP Top 10 coverage

TODO Prioritization:

  • Critical/High/Medium/Low categorization
  • Automatic tracking and reporting
  • Integration with project management

Pre-commit Hooks:

  • Automatic quality gates before commits
  • Non-blocking mode for urgent fixes
  • Configurable thresholds

Setup

Prerequisites:

Option 1: Groq API (Recommended):

# Get API key from https://console.groq.com/keys
export GROQ_API_KEY="your-groq-api-key"

# Add to .env (persistent)
echo 'GROQ_API_KEY="your-groq-api-key"' >> .env

# Test
./scripts/utils/groq "What is 2+2?"

Option 2: Gemini CLI (Fallback):

# Install Gemini CLI
npm install -g @google/generative-ai-cli

# Authenticate (OAuth browser flow)
gemini auth login

# Test
gemini "What is 2+2?"

Pre-commit Hook Installation:

# Create symlink
ln -s ../../scripts/hooks/pre-commit .git/hooks/pre-commit

# Make executable
chmod +x .git/hooks/pre-commit

# Test
git add <file>
git commit -m "test"
# Should run quality checks automatically

LLM Priority (v8.20.0+)

The pre-commit hook uses an intelligent fallback chain (sketched after the list):

  1. Groq API (Primary) - 200-300ms, simple API key, no browser interruption
  2. Gemini CLI (Fallback) - 2-3s, OAuth browser flow
  3. Skip checks (Graceful) - If neither available, commit proceeds
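
A shell sketch of that priority chain (illustrative, not the hook's verbatim code):

# LLM selection inside the pre-commit hook (illustrative)
if [ -n "${GROQ_API_KEY:-}" ]; then
  LLM="./scripts/utils/groq"          # primary: API key, ~300ms
elif command -v gemini >/dev/null 2>&1; then
  LLM="gemini"                        # fallback: OAuth, 2-3s
else
  echo "[Quality Guard] No LLM available, skipping checks"
  exit 0                              # graceful: commit proceeds
fi
"$LLM" "Complexity 1-10 per function: $(git diff --cached)"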

Why Groq is Primary:

  • ✅ 10x faster inference (~300ms vs 2-3s)
  • ✅ Simple API key authentication
  • ✅ No OAuth browser flow during commits
  • ✅ Kimi K2 model: 256K context, excellent for code analysis

Usage

Complexity Check:

# Groq (fast, recommended)
./scripts/utils/groq "Complexity 1-10 per function, list high (>7) first: $(cat src/server.py)"

# Gemini (slower, fallback)
gemini "Complexity 1-10 per function, list high (>7) first: $(cat src/server.py)"

# With Kimi K2 (best for complex code)
./scripts/utils/groq "Complexity analysis: $(cat src/server.py)" --model moonshotai/kimi-k2-instruct

Security Scan:

./scripts/utils/groq "Security check (SQL injection, XSS, command injection): $(cat src/api.py)"

TODO Prioritization:

# Scan all TODOs in project
bash scripts/maintenance/scan_todos.sh

# Output example:
# CRITICAL: TODO: Fix SQL injection in line 42
# HIGH: TODO: Add rate limiting to API
# MEDIUM: TODO: Refactor database connection pooling

Pre-commit Hook (Automatic):

# Just commit normally
git add src/server.py
git commit -m "refactor: improve error handling"

# Hook runs automatically:
# [Quality Guard] Analyzing complexity...
# [Quality Guard] ✓ All functions below complexity threshold (max: 6)
# [Quality Guard] ✓ No security issues detected
# [Commit allowed]

Performance Comparison

| LLM | Inference Time | Auth Method | Use Case |
|-----|----------------|-------------|----------|
| Groq (Kimi K2) | ~200ms | API key | Complex code analysis |
| Groq (Llama 3.3) | ~300ms | API key | General checks (default) |
| Groq (Llama 3.1) | ~100ms | API key | Fast queries |
| Gemini CLI | 2-3s | OAuth (browser) | Fallback only |

Workflow Example

Scenario: Pre-commit quality check with Groq

$ git add src/hybrid_backend.py
$ git commit -m "feat: add retry logic to sync"

[Quality Guard] Using Groq API (primary)...
[Quality Guard] Analyzing complexity...
  ├─ sync_to_cloudflare(): 4/10 ✓
  ├─ retry_with_backoff(): 6/10 ✓
  └─ handle_sync_failure(): 9/10 ⚠️  HIGH COMPLEXITY

[Quality Guard] Security scan...
  ✓ No SQL injection patterns
  ✓ No XSS vulnerabilities
  ✓ No command injection risks

[Quality Guard] ⚠️  Warning: handle_sync_failure() has complexity 9
[Quality Guard] Consider refactoring before commit? [y/N]
> n

[Quality Guard] Commit allowed (with warnings)
[main abc1234] feat: add retry logic to sync

Total time: ~500ms (vs 4-5s with Gemini)

Best Practices

Use Groq for Pre-commit:

  • Faster feedback loop
  • No browser interruptions
  • Better developer experience

Reserve Gemini for Deep Analysis:

  • Complex refactoring reviews
  • Architectural decisions
  • When Groq quota exhausted

Configure Thresholds:

# In .git/hooks/pre-commit
COMPLEXITY_BLOCK=8   # Block commits above this
COMPLEXITY_WARN=7    # Warn but allow above this
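
How those thresholds might gate a commit (a sketch; it assumes the LLM is prompted to print a single number, which simplifies the hook's real parsing):

# Illustrative threshold gate inside the hook
SCORE=$(./scripts/utils/groq "Print only the max function complexity (1-10): $(git diff --cached)" | grep -oE '[0-9]+' | head -1)
if [ "$SCORE" -gt "$COMPLEXITY_BLOCK" ]; then
  echo "[Quality Guard] Complexity $SCORE exceeds $COMPLEXITY_BLOCK, blocking commit"
  exit 1
elif [ "$SCORE" -gt "$COMPLEXITY_WARN" ]; then
  echo "[Quality Guard] Warning: complexity $SCORE"
fi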

Troubleshooting

Issue: "Groq API key not found"

# Check environment
echo $GROQ_API_KEY

# If empty, export it
export GROQ_API_KEY="your-key"

# Add to .env for persistence
echo 'GROQ_API_KEY="your-key"' >> .env

Issue: "Gemini authentication failed"

# Re-authenticate
gemini auth login

# Follow browser OAuth flow

# Verify
gemini "test"

Issue: Pre-commit hook too slow

# Switch to Groq (10x faster)
export GROQ_API_KEY="your-key"

# Or disable hook temporarily
git commit --no-verify -m "urgent fix"

Gemini PR Automator

Purpose: Eliminates manual "Fix → Comment → /gemini review → Wait 1min → Repeat" cycles with fully automated review iteration.

Features

Automated Review Loops:

  • Iterates up to 5 times automatically
  • Waits for Gemini Code Assist review
  • Applies safe fixes automatically
  • Commits and pushes changes
  • Repeats until all threads resolved

Quality Gate Checks:

  • Complexity analysis before review
  • Security pattern detection
  • Test coverage validation
  • Breaking change detection

Test Generation:

  • Auto-generates tests for new code
  • Validates test coverage
  • Suggests edge cases

GraphQL Integration:

  • Fetches PR review threads (query sketch below)
  • Auto-resolves threads when addressed
  • Updates PR status in real-time
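
The thread fetch can be reproduced with GitHub CLI's GraphQL endpoint (owner, repo, and PR number below are examples):

# List unresolved review threads for a PR
gh api graphql -f query='
  query {
    repository(owner: "doobidoo", name: "mcp-memory-service") {
      pullRequest(number: 215) {
        reviewThreads(first: 50) {
          nodes { isResolved comments(first: 1) { nodes { body } } }
        }
      }
    }
  }' --jq '.data.repository.pullRequest.reviewThreads.nodes[] | select(.isResolved | not)'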

Setup

Prerequisites:

# Install Gemini CLI
npm install -g @google/generative-ai-cli

# Authenticate
gemini auth login

# Install GitHub CLI
gh auth login

# Verify both
gemini "test" && gh auth status

Scripts Location: scripts/pr/

Usage

Full Automated Review:

# 5 iterations, safe fixes enabled
bash scripts/pr/auto_review.sh 215  # PR number

# Watch progress in terminal
# Script will:
# 1. Wait 1 minute for initial Gemini review
# 2. Apply safe fixes
# 3. Commit and push
# 4. Wait 1 minute for next review
# 5. Repeat up to 5 iterations

Quality Gate (Pre-Review):

# Run before requesting review
bash scripts/pr/quality_gate.sh 215

# Checks:
# - Complexity scores (<8 required)
# - Security patterns (no critical issues)
# - Test coverage (>80% recommended)
# - Breaking changes (documented?)

Test Generation:

# Generate tests for new code in PR
bash scripts/pr/generate_tests.sh 215

# Output: tests/test_new_feature.py

Breaking Change Detection:

# Compare feature branch to main
bash scripts/pr/detect_breaking_changes.sh main feature/oauth-integration

# Detects:
# - Removed public functions
# - Changed function signatures
# - Removed environment variables
# - Schema changes
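
A rough shell approximation of the removed-function check (a heuristic sketch; the script's real analysis is more thorough):

# Python function definitions removed relative to main
git diff main...feature/oauth-integration -- '*.py' \
  | grep -E '^-[[:space:]]*(async )?def ' | sed 's/^-//'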

Workflow Example

Scenario: PR #215 with initial review comments

$ bash scripts/pr/auto_review.sh 215

[Auto Review] Starting iteration 1/5...
[Auto Review] Waiting 60s for Gemini Code Assist review...
[Auto Review] Fetching review threads via GraphQL...
[Auto Review] Found 3 unresolved threads:
  1. "Add error handling in sync_to_cloudflare()"
  2. "Fix typo in docstring"
  3. "Reduce complexity in retry_with_backoff()"

[Auto Review] Applying safe fixes...
  ✓ Fixed typo in docstring
  ✓ Added try/except in sync_to_cloudflare()
  ⚠️  Manual review needed for complexity reduction

[Auto Review] Committing changes...
[Auto Review] Pushing to remote...

[Auto Review] Starting iteration 2/5...
[Auto Review] Waiting 60s for Gemini Code Assist review...
[Auto Review] Fetching review threads via GraphQL...
[Auto Review] Found 1 unresolved thread:
  1. "Reduce complexity in retry_with_backoff()"

[Auto Review] No safe automatic fixes available
[Auto Review] Manual intervention required

[Auto Review] Summary:
  ✓ 2/3 threads auto-resolved
  ⚠️  1 thread requires manual fix
  ⏱️  Time saved: ~18 minutes (vs manual iteration)

Time Savings Breakdown

Manual PR Iteration (typical):

  1. Read review comment - 2 min
  2. Make fix - 5 min
  3. Commit and push - 1 min
  4. Comment "/gemini review" - 30s
  5. Wait for review - 1 min
  6. Total per iteration: ~9 min
  7. 5 iterations: ~45 min

Automated PR Iteration (with script):

  1. Run script - 10s
  2. Script handles all iterations - 15 min (background)
  3. Manual intervention if needed - 5 min
  4. Total: ~20 min
  5. Savings: 25 min (55%)

Best Practices

Run Quality Gate First:

# Before requesting review
bash scripts/pr/quality_gate.sh 215

# Fix issues before automation

Use Safe Fixes Mode:

  • Auto-applies only low-risk fixes (typos, formatting, docstrings)
  • Flags complex changes for manual review
  • Prevents breaking changes

Monitor First Iteration:

  • Watch iteration 1 to ensure script works correctly
  • Intervene if unexpected behavior
  • Let iterations 2-5 run unattended

GraphQL for Thread Resolution:

  • Script auto-resolves threads when commits address feedback (mutation example below)
  • Saves 2+ minutes per thread vs manual clicking
  • Keeps PR clean and organized
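
Thread resolution uses GitHub's standard resolveReviewThread GraphQL mutation (the thread ID below is a placeholder):

gh api graphql -f query='
  mutation {
    resolveReviewThread(input: {threadId: "PRRT_placeholder"}) {
      thread { isResolved }
    }
  }'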

Troubleshooting

Issue: "Gemini review timeout"

# Increase wait time in script
export GEMINI_REVIEW_WAIT=120  # 2 minutes instead of 1

# Re-run
bash scripts/pr/auto_review.sh 215

Issue: "No review threads found"

# Manually request review first
gh pr comment 215 --body "/gemini review"

# Wait 1 minute, then run script
sleep 60
bash scripts/pr/auto_review.sh 215

Issue: "GraphQL authentication failed"

# Re-authenticate GitHub CLI
gh auth login

# Verify with GraphQL test
gh api graphql -f query='{ viewer { login } }'

Amp CLI Bridge

Purpose: Leverage Amp CLI for external research without consuming Claude Code credits.

Features

File-Based Workflow:

  • Claude creates prompt file
  • User runs Amp CLI command
  • Amp writes response file
  • Claude reads and continues

Use Cases:

  • Web research (fetch docs, Stack Overflow)
  • Codebase pattern analysis
  • Best practices research
  • Documentation generation

Credit Conservation:

  • External LLM calls via Amp (not Claude)
  • Useful for large research tasks
  • Preserves Claude Code quota

Setup

Prerequisites:

# Install Amp CLI (if available)
# Installation instructions depend on Amp CLI distribution
# Follow Amp CLI official documentation

# Verify installation
amp --version

Directory Structure:

.claude/amp/
├── prompts/
│   ├── pending/     # Claude creates prompts here
│   └── completed/   # Processed prompts
└── responses/       # Amp writes responses here
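
Create this layout once per project:

mkdir -p .claude/amp/prompts/{pending,completed} .claude/amp/responses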

Usage

Automated Workflow (Claude-initiated):

User: "Research OAuth 2.1 best practices for team collaboration"

Claude: "I'll create a prompt for Amp to research this topic. Please run:
  amp @.claude/amp/prompts/pending/abc123.json

Once Amp completes, I'll analyze the response."

[Claude creates: .claude/amp/prompts/pending/abc123.json]

User runs:
$ amp @.claude/amp/prompts/pending/abc123.json

[Amp researches and writes: .claude/amp/responses/abc123.json]

Claude: "Thanks! I've read the research. Here's what I found..."
[Claude continues with researched context]

Manual Workflow (User-initiated):

# 1. Create prompt manually
cat > .claude/amp/prompts/pending/my-research.json <<EOF
{
  "task": "Research SQLite WAL mode best practices",
  "context": "MCP Memory Service database configuration",
  "focus": "Concurrent access, performance, reliability"
}
EOF

# 2. Run Amp
amp @.claude/amp/prompts/pending/my-research.json

# 3. Amp writes response
# .claude/amp/responses/my-research.json created

# 4. Tell Claude to read it
User: "Please read .claude/amp/responses/my-research.json and summarize"

Workflow Example

Scenario: Research optimal chunking strategies for document ingestion

User: "What are the best practices for chunking documents for semantic search?"

Claude: "I'll create an Amp research prompt for this. Please run:
  amp @.claude/amp/prompts/pending/chunking-research.json"

[Claude creates prompt file with research questions]

User runs:
$ amp @.claude/amp/prompts/pending/chunking-research.json
[Amp] Researching document chunking strategies...
[Amp] Fetching from LlamaIndex docs...
[Amp] Fetching from Pinecone blog...
[Amp] Analyzing 15 sources...
[Amp] Writing response to .claude/amp/responses/chunking-research.json
[Amp] Done! (45 seconds)

Claude: "Great! I've analyzed the research. Here are the key findings:

1. **Chunk Size**: 512-1024 tokens optimal for semantic search
2. **Overlap**: 10-20% overlap prevents context loss
3. **Boundaries**: Respect sentence/paragraph boundaries
4. **Metadata**: Store source, page number, section headers

I recommend:
- Use 1000 char chunks with 200 char overlap (current config)
- Add paragraph boundary detection (TODO)
- Include source metadata in each chunk (implemented)

Should I implement paragraph boundary detection?"

Credits Used:

  • Claude Code: ~100 tokens (coordination)
  • Amp CLI: 5,000 tokens (research)
  • Savings: 98% credit reduction vs Claude doing research directly

Best Practices

Use Amp for Large Research:

  • ✅ Fetching documentation from 10+ sources
  • ✅ Analyzing best practices across multiple frameworks
  • ✅ Comparing implementation patterns
  • ❌ Simple questions (faster to ask Claude directly)

Clear Prompts:

  • Specify exactly what you need
  • Provide context (project, technology stack)
  • Define expected output format

File Organization:

  • Move completed prompts to completed/ folder
  • Archive old responses periodically
  • Clean up .claude/amp/ monthly (housekeeping sketch below)
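
A housekeeping sketch (the 30-day retention window is an assumption, adjust to taste):

# Archive processed prompts and prune old responses
mv .claude/amp/prompts/pending/*.json .claude/amp/prompts/completed/ 2>/dev/null
find .claude/amp/responses -name '*.json' -mtime +30 -delete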

Troubleshooting

Issue: "Amp command not found"

# Verify installation
which amp

# If not installed, follow Amp CLI documentation
# (Installation varies by distribution)

Issue: "Response file not created"

# Check Amp execution logs
amp @.claude/amp/prompts/pending/file.json --verbose

# Verify file permissions
ls -la .claude/amp/responses/

Issue: "Claude doesn't see response"

# Verify file exists
cat .claude/amp/responses/abc123.json

# Explicitly tell Claude
User: "Please read .claude/amp/responses/abc123.json"

Context Provider Integration

Purpose: Rule-based context management with automatic memory storage and retrieval triggers.

Features

Project-Specific Contexts:

  • Python MCP Memory Service context
  • Release Workflow context
  • Custom user-defined contexts

Auto-Store Triggers:

  • Technical patterns (MCP protocol, storage backends)
  • Configuration changes
  • Release events (merges, tags, issues)
  • Documentation updates

Auto-Retrieve Triggers:

  • Troubleshooting queries
  • Setup questions
  • Implementation examples
  • Issue management queries

Session Initialization:

  • Automatic context loading on session start
  • Git repository analysis
  • Recent activity summary

Setup

Automatic (v8.0.0+): Context Provider is integrated with Natural Memory Triggers v7.1.3. No additional setup required if hooks are installed.

Verify:

# Check session initialization status
node ~/.claude/hooks/memory-mode-controller.js status

# Should show:
# Context Provider: ✓ Active
# Available Contexts: 2
#   - python_mcp_memory
#   - mcp_memory_release_workflow

MCP Tools

List Contexts:

mcp context list

# Output:
# Available Contexts:
# 1. python_mcp_memory (Priority: high)
#    - Tools: FastAPI, MCP protocol, storage backends
# 2. mcp_memory_release_workflow (Priority: high)
#    - Tools: Version management, CHANGELOG, issue tracking

Session Status:

mcp context status

# Output:
# Session Initialized: ✓
# Contexts Loaded: 2
# Memories Injected: 8
# Last Update: 2s ago

Optimization Suggestions:

mcp context optimize

# Output:
# Suggestions:
# 1. Add auto-store trigger for "pytest" → testing context
# 2. Consolidate 3 similar retrieve patterns
# 3. Update memory_type from "note" → "implementation"

Auto-Store Examples

Triggered Automatically:

User: "I've switched the backend to hybrid mode"
→ Stores: "Backend configuration change: hybrid mode enabled"
  Tags: hybrid, configuration, storage-backend

User: "Merged PR #215 fixing database locks"
→ Stores: "Release event: PR #215 merged (database lock fix)"
  Tags: release, pr, database-lock, fix

User: "Added OAuth 2.1 support to HTTP server"
→ Stores: "MCP protocol enhancement: OAuth 2.1 Dynamic Client Registration"
  Tags: mcp-protocol, oauth, authentication, feature

Auto-Retrieve Examples

Triggered Automatically:

User: "Why is the Cloudflare backend failing?"
→ Retrieves: Previous troubleshooting notes, configuration examples
→ Injects: Into Claude's context automatically

User: "How do we handle version bumps?"
→ Retrieves: Release workflow documentation, previous releases
→ Injects: Four-file procedure, CHANGELOG format

User: "What issues were fixed in the last release?"
→ Retrieves: Recent release notes, closed issues
→ Injects: Issue #123 (database locks), #124 (OAuth setup)

Creating Custom Contexts

Via MCP Tool:

mcp context create \
  --name "my_feature_context" \
  --category "development" \
  --patterns "auto-store: feature implementation, testing" \
  --patterns "auto-retrieve: how to implement, best practices"

Manual Creation (Advanced): Create .claude/contexts/my_context.json:

{
  "name": "My Custom Context",
  "tool_category": "development",
  "applies_to_tools": ["code_execution", "file_operations"],
  "priority": "medium",
  "auto_store_triggers": {
    "feature_impl": {
      "patterns": ["implementing", "feature complete"],
      "action": "store",
      "memory_type": "implementation",
      "tags": ["feature", "implementation"]
    }
  },
  "auto_retrieve_triggers": {
    "best_practices": {
      "patterns": ["how to implement", "best practice"],
      "action": "retrieve",
      "max_results": 5
    }
  }
}

Best Practices

Let Context Provider Work:

  • Don't manually store/retrieve if auto-triggers cover it
  • Trust the pattern matching (85%+ accuracy)
  • Review optimization suggestions monthly

Customize for Your Workflow:

  • Add project-specific patterns
  • Adjust memory types for your taxonomy
  • Set appropriate priorities

Monitor Performance:

# Check effectiveness
mcp context analyze python_mcp_memory

# Output:
# Effectiveness Analysis:
#   Auto-stores triggered: 42 times
#   Auto-retrieves triggered: 28 times
#   False positives: 2 (4.7%)
#   False negatives: 3 (7.1%)
#   Overall accuracy: 88.1%

Troubleshooting

Issue: Context not auto-storing

# Check pattern matching
node ~/.claude/hooks/memory-mode-controller.js test "implementing new feature"

# If no match, update patterns
mcp context update python_mcp_memory \
  --add-pattern "implementing.*feature"

Issue: Too many auto-retrieves

# Adjust sensitivity
node ~/.claude/hooks/memory-mode-controller.js sensitivity 0.7

# Or reduce max results
mcp context update python_mcp_memory \
  --max-results 3

Issue: Wrong memory types

# Review and consolidate
python scripts/maintenance/consolidate_memory_types.py --dry-run

# Update context patterns
mcp context update python_mcp_memory \
  --memory-type "implementation"

Troubleshooting

Common Issues Across All Agents

Issue: "Command not found"

# Verify all tools installed
which gh        # GitHub CLI
which gemini    # Gemini CLI
which amp       # Amp CLI
node ~/.claude/hooks/memory-mode-controller.js status  # Context Provider

# Install missing tools (see Setup sections above)

Issue: "Authentication failed"

# Re-authenticate each tool
gh auth login           # GitHub
gemini auth login       # Gemini
export GROQ_API_KEY="your-key"  # Groq

# Verify
gh auth status
gemini "test"
./scripts/utils/groq "test"

Issue: "Slow performance"

# Switch from Gemini to Groq (10x faster)
export GROQ_API_KEY="your-key"

# Verify speed improvement
time ./scripts/utils/groq "test"  # ~200-300ms
time gemini "test"                 # ~2-3s

Agent-Specific Issues

Each agent has a dedicated Troubleshooting subsection earlier in this guide:

  • GitHub Release Manager: release detection, version conflicts, GitHub CLI auth
  • Code Quality Guard: missing API keys, Gemini auth, slow pre-commit hooks
  • Gemini PR Automator: review timeouts, missing threads, GraphQL auth
  • Amp CLI Bridge: missing command, response files not created
  • Context Provider: auto-store patterns, retrieve sensitivity, memory types
Getting Help

Agent Files:

  • .claude/agents/github-release-manager.md
  • .claude/agents/code-quality-guard.md
  • .claude/agents/gemini-pr-automator.md
  • docs/amp-cli-bridge.md
  • docs/context-provider-workflow-automation.md

Using Agents in Other Repositories

These agents were designed for mcp-memory-service but are portable to other projects with appropriate configuration.

Portability Overview

| Component | Portability | Best For |
|-----------|-------------|----------|
| Amp CLI Bridge | 95% | All languages - works as-is |
| GraphQL Helpers | 100% | All languages - pure GitHub API |
| GitHub Release Manager | 70% | ✅ Python, Node.js, Rust (version file mapping) |
| Code Quality Guard | 60% | ⚠️ Needs file extension and complexity config |
| Gemini PR Automator | 50% | ⚠️ Requires script rewrites per language |

Quick Start (Python Projects)

# 1. Copy agent definitions
mkdir -p .claude/agents
curl -o .claude/agents/github-release-manager.md \
  https://raw.githubusercontent.com/doobidoo/mcp-memory-service/main/.claude/agents/github-release-manager.md

# 2. Copy portable scripts (work with any language)
mkdir -p scripts/pr/lib
curl -o scripts/pr/watch_reviews.sh \
  https://raw.githubusercontent.com/doobidoo/mcp-memory-service/main/scripts/pr/watch_reviews.sh
chmod +x scripts/pr/*.sh

# 3. Update version file paths in agent definitions
# 4. Configure GROQ_API_KEY for fast LLM calls
export GROQ_API_KEY="your-key"

Language-Specific Guides

For detailed migration instructions, see the Cross-Repo Setup wiki page, which covers:

  • ✅ Python projects (30 min setup)
  • ✅ Node.js/TypeScript projects (2-3 hours)
  • ✅ Rust projects (4-6 hours)
  • ✅ Go projects (6-8 hours)
  • ✅ Configuration templates
  • ✅ Migration checklists
  • ✅ Troubleshooting
