# Agent Integrations Guide
This guide covers all workflow automation agents available for MCP Memory Service development. These agents leverage external LLMs (Gemini CLI, Groq API) and tools (Amp CLI, GitHub CLI) to automate repetitive tasks, improve code quality, and accelerate development.
Quick Stats: 5 integrated agents | 10-30 min saved per PR | 10x faster LLM inference | Zero-credit research workflows
- Overview - Agent comparison and selection
- GitHub Release Manager - Complete release workflow automation
- Code Quality Guard - Fast complexity and security analysis
- Gemini PR Automator - Automated PR review cycles
- Amp CLI Bridge - Credit-free research workflows
- Context Provider Integration - Intelligent memory management
- Troubleshooting - Common issues and solutions
## Overview

| Agent | Tool | Primary Use | Time Savings | Priority |
|---|---|---|---|---|
| GitHub Release Manager | GitHub CLI | Complete release workflow | 15-20 min/release | Production |
| Code Quality Guard | Groq/Gemini | Pre-commit quality checks | 5-10 min/commit | Active |
| Gemini PR Automator | Gemini CLI | Automated PR reviews | 10-30 min/PR | Active |
| Amp CLI Bridge | Amp CLI | Research without credits | N/A (credit-saving) | Production |
| Context Provider | MCP Memory | Intelligent memory triggers | Continuous | Production |
When to use each agent:

GitHub Release Manager:
- ✅ After completing a feature or fix
- ✅ When multiple commits are ready for release
- ✅ For hotfix releases (critical bugs)
- ✅ Automated version bumps and CHANGELOG updates
Code Quality Guard:
- ✅ Before committing code (pre-commit hook)
- ✅ During PR creation
- ✅ When refactoring complex code
- ✅ Security audits and complexity analysis
Gemini PR Automator:
- ✅ After creating a PR
- ✅ During review iterations
- ✅ For automated test generation
- ✅ Breaking change detection
Amp CLI Bridge:
- ✅ Web research (documentation, Stack Overflow)
- ✅ Codebase pattern analysis
- ✅ Best practices research
- ✅ Documentation generation
Context Provider:
- ✅ Automatic (session initialization)
- ✅ Project-specific memory management
- ✅ Release workflow automation
- ✅ Issue tracking and closure
## GitHub Release Manager

Purpose: Automates the complete release workflow including version management, CHANGELOG updates, GitHub release creation, and issue tracking.
Version Management:
- Four-file procedure: `__init__.py` → `pyproject.toml` → `README.md` → `uv lock` (sketched below)
- Semantic versioning (major.minor.patch)
- Automatic version bump detection
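For reference, the four-file procedure looks roughly like this when done by hand (a minimal sketch; the `sed` patterns are illustrative, and the agent automates all of it):

```bash
NEW_VERSION="8.20.1"

# 1. Package version string
sed -i "s/^__version__ = .*/__version__ = \"$NEW_VERSION\"/" src/mcp_memory_service/__init__.py

# 2. Project metadata
sed -i "s/^version = .*/version = \"$NEW_VERSION\"/" pyproject.toml

# 3. README.md Latest Release section (edit by hand or with a similar sed)

# 4. Refresh the dependency lock file
uv lock
```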
CHANGELOG Management:
- Format guidelines enforcement
- Conflict resolution (combines PR entries)
- Cross-references to GitHub releases
Issue Tracking:
- Auto-detects "fixes #", "closes #", "resolves #" patterns
- Suggests closures after release
- Generates smart closing comments with PR links
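As a rough illustration of that detection (not the agent's actual code), the closing keywords can be pulled from recent commit messages with a one-liner:

```bash
# Scan recent commit/PR messages for issue-closing keywords (illustrative)
git log -10 --pretty=%B | grep -oiE "(fixes|closes|resolves) #[0-9]+" | sort -u
```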
Release Procedure:
- Merge → Tag → Push → Verify workflows
- Docker Publish validation
- PyPI publish verification
- HTTP-MCP Bridge health check
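After the tag is pushed, verification can be scripted along these lines (a sketch; the health-check URL is an assumption, not confirmed from this repository):

```bash
# Confirm the tag-triggered workflows started (Docker Publish, PyPI Publish, etc.)
gh run list --limit 5

# Confirm the new version reached PyPI (requires pip >= 21.2)
pip index versions mcp-memory-service

# HTTP-MCP Bridge health check (endpoint assumed)
curl -fsS http://localhost:8000/api/health
```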
Prerequisites:

```bash
# Install GitHub CLI
brew install gh           # macOS
# or: sudo apt install gh # Linux

# Authenticate
gh auth login

# Verify
gh auth status
```

Agent Location: `.claude/agents/github-release-manager.md`
Proactive Mode (Recommended): The agent is configured to trigger automatically when you complete a feature. Just finish your work, and Claude Code will suggest creating a release.
Manual Invocation:

```
# Check if release is needed
@agent github-release-manager "Check if we need a release"

# Create specific version
@agent github-release-manager "Create release for v8.21.0"

# Hotfix release
@agent github-release-manager "Create hotfix release for database lock bug"
```

Scenario: You've just merged a PR that fixes issue #123.
1. **Auto-Detection**: Agent detects "fixes #123" in PR description
2. **Version Analysis**: Determines patch bump needed (v8.20.0 → v8.20.1)
3. **File Updates**:
   - `src/mcp_memory_service/__init__.py` → `__version__ = "8.20.1"`
   - `pyproject.toml` → `version = "8.20.1"`
   - `README.md` → Latest Release section updated
   - `uv lock` → Dependency lock file refreshed
4. **CHANGELOG Update**:

   ```markdown
   ## [8.20.1] - 2025-01-09

   ### Fixed
   - Database lock errors with concurrent HTTP + MCP access (#123)
   ```

5. **Git Operations**:
   - Commit: `chore: bump version to v8.20.1`
   - Tag: `git tag -a v8.20.1 -m "Fix database lock errors"`
   - Push: `git push && git push --tags`
6. **GitHub Release**:
   - Creates release on GitHub with CHANGELOG excerpt
   - Triggers workflows: Docker Publish, PyPI Publish, HTTP-MCP Bridge
7. **Issue Closure**:
   - Posts comment: "Fixed in v8.20.1. See CHANGELOG for details."
   - Closes issue #123 automatically
Time: 8-10 minutes (vs 20-30 minutes manual)
v8.20.1 Hotfix (8 minutes total):
- Bug reported by user
- Fixed in code
- Agent executed full release workflow
- User notified and issue closed
- All workflows validated
Always Use for Releases:
- ❌ Manual workflows miss README.md, CHANGELOG formatting
- ✅ Agent ensures all 4 files updated, proper GitHub Release created
Even for Simple Hotfixes:
- Agent handles documentation you might forget
- Consistent release notes format
- Proper issue tracking and closure
Trust the Agent:
- Manual v8.20.1: Forgot README.md, incomplete GitHub Release
- With agent v8.20.1: All files updated, comprehensive release
Issue: "No release needed" when you expect one
- Check if commits since the last tag exist: `git log $(git describe --tags --abbrev=0)..HEAD`
- Verify CHANGELOG has unreleased entries
Issue: Version bump conflicts
- Agent will detect if versions are out of sync
- Follow four-file procedure manually if needed
Issue: GitHub CLI authentication expired
- Re-authenticate: `gh auth login`
- Verify: `gh auth status`
## Code Quality Guard

Purpose: Fast automated code quality analysis using Groq API (primary) or Gemini CLI (fallback) for complexity scoring, security scanning, and refactoring suggestions.
Complexity Analysis:
- Function-level complexity scoring (1-10 scale)
- Blocks commits with complexity >8
- Warns on complexity >7
- Suggests refactoring strategies
Security Scanning:
- SQL injection detection
- XSS vulnerability patterns
- Command injection risks
- OWASP Top 10 coverage
TODO Prioritization:
- Critical/High/Medium/Low categorization
- Automatic tracking and reporting
- Integration with project management
Pre-commit Hooks:
- Automatic quality gates before commits
- Non-blocking mode for urgent fixes
- Configurable thresholds (see the sketch below)
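Conceptually, the gate is a threshold comparison on the LLM's complexity score. A minimal sketch, assuming the model is prompted to return a single number (the real hook in `scripts/hooks/pre-commit` handles parsing more robustly):

```bash
#!/bin/sh
# Sketch of a complexity gate (illustrative, not the shipped hook)
COMPLEXITY_BLOCK=${COMPLEXITY_BLOCK:-8}
COMPLEXITY_WARN=${COMPLEXITY_WARN:-7}

# Ask for the single highest per-function score in the staged diff
SCORE=$(./scripts/utils/groq "Reply with only the highest per-function complexity (1-10) in: $(git diff --cached)")

if [ "$SCORE" -gt "$COMPLEXITY_BLOCK" ]; then
    echo "[Quality Guard] Blocked: complexity $SCORE > $COMPLEXITY_BLOCK"
    exit 1
elif [ "$SCORE" -gt "$COMPLEXITY_WARN" ]; then
    echo "[Quality Guard] Warning: complexity $SCORE > $COMPLEXITY_WARN (commit allowed)"
fi
```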
Prerequisites:
Option 1: Groq API (Recommended):
```bash
# Get API key from https://console.groq.com/keys
export GROQ_API_KEY="your-groq-api-key"

# Add to .env (persistent)
echo 'GROQ_API_KEY="your-groq-api-key"' >> .env

# Test
./scripts/utils/groq "What is 2+2?"
```

Option 2: Gemini CLI (Fallback):

```bash
# Install Gemini CLI
npm install -g @google/generative-ai-cli

# Authenticate (OAuth browser flow)
gemini auth login

# Test
gemini "What is 2+2?"
```

Pre-commit Hook Installation:
```bash
# Create symlink
ln -s ../../scripts/hooks/pre-commit .git/hooks/pre-commit

# Make executable
chmod +x .git/hooks/pre-commit

# Test
git add <file>
git commit -m "test"
# Should run quality checks automatically
```

The pre-commit hook uses intelligent fallback:
- Groq API (Primary) - 200-300ms, simple API key, no browser interruption
- Gemini CLI (Fallback) - 2-3s, OAuth browser flow
- Skip checks (Graceful) - If neither available, commit proceeds
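The selection logic amounts to a simple chain; roughly (a sketch of the idea, not the hook's exact code):

```bash
# Pick the fastest available LLM runner, or skip gracefully
if [ -n "$GROQ_API_KEY" ]; then
    RUNNER="./scripts/utils/groq"   # primary: ~200-300ms per check
elif command -v gemini >/dev/null 2>&1; then
    RUNNER="gemini"                 # fallback: ~2-3s, OAuth browser flow
else
    echo "[Quality Guard] No LLM available; skipping checks" >&2
    exit 0                          # graceful: commit proceeds unchecked
fi
```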
Why Groq is Primary:
- ✅ 10x faster inference (~300ms vs 2-3s)
- ✅ Simple API key authentication
- ✅ No OAuth browser flow during commits
- ✅ Kimi K2 model: 256K context, excellent for code analysis
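Groq exposes an OpenAI-compatible HTTP API, so a thin wrapper like `scripts/utils/groq` needs little more than one request. A sketch (the model ID is one of Groq's published identifiers):

```bash
curl -s https://api.groq.com/openai/v1/chat/completions \
  -H "Authorization: Bearer $GROQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama-3.3-70b-versatile",
        "messages": [{"role": "user", "content": "What is 2+2?"}]
      }'
```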
Complexity Check:
```bash
# Groq (fast, recommended)
./scripts/utils/groq "Complexity 1-10 per function, list high (>7) first: $(cat src/server.py)"

# Gemini (slower, fallback)
gemini "Complexity 1-10 per function, list high (>7) first: $(cat src/server.py)"

# With Kimi K2 (best for complex code)
./scripts/utils/groq "Complexity analysis: $(cat src/server.py)" --model moonshotai/kimi-k2-instruct
```

Security Scan:

```bash
./scripts/utils/groq "Security check (SQL injection, XSS, command injection): $(cat src/api.py)"
```

TODO Prioritization:

```bash
# Scan all TODOs in project
bash scripts/maintenance/scan_todos.sh

# Output example:
# CRITICAL: TODO: Fix SQL injection in line 42
# HIGH: TODO: Add rate limiting to API
# MEDIUM: TODO: Refactor database connection pooling
```

Pre-commit Hook (Automatic):
```bash
# Just commit normally
git add src/server.py
git commit -m "refactor: improve error handling"

# Hook runs automatically:
# [Quality Guard] Analyzing complexity...
# [Quality Guard] ✓ All functions below complexity threshold (max: 6)
# [Quality Guard] ✓ No security issues detected
# [Commit allowed]
```

| LLM | Inference Time | Auth Method | Use Case |
|---|---|---|---|
| Groq (Kimi K2) | ~200ms | API key | Complex code analysis |
| Groq (Llama 3.3) | ~300ms | API key | General checks (default) |
| Groq (Llama 3.1) | ~100ms | API key | Fast queries |
| Gemini CLI | 2-3s | OAuth (browser) | Fallback only |
Scenario: Pre-commit quality check with Groq
```
$ git add src/hybrid_backend.py
$ git commit -m "feat: add retry logic to sync"

[Quality Guard] Using Groq API (primary)...
[Quality Guard] Analyzing complexity...
├─ sync_to_cloudflare(): 4/10 ✓
├─ retry_with_backoff(): 6/10 ✓
└─ handle_sync_failure(): 9/10 ⚠️ HIGH COMPLEXITY
[Quality Guard] Security scan...
✓ No SQL injection patterns
✓ No XSS vulnerabilities
✓ No command injection risks
[Quality Guard] ⚠️ Warning: handle_sync_failure() has complexity 9
[Quality Guard] Consider refactoring before commit? [y/N]
> n
[Quality Guard] Commit allowed (with warnings)
[main abc1234] feat: add retry logic to sync
```

Total time: ~500ms (vs 4-5s with Gemini)
Use Groq for Pre-commit:
- Faster feedback loop
- No browser interruptions
- Better developer experience
Reserve Gemini for Deep Analysis:
- Complex refactoring reviews
- Architectural decisions
- When Groq quota exhausted
Configure Thresholds:
```bash
# In .git/hooks/pre-commit
COMPLEXITY_BLOCK=8  # Block commits above this
COMPLEXITY_WARN=7   # Warn but allow above this
```

Issue: "Groq API key not found"
```bash
# Check environment
echo $GROQ_API_KEY

# If empty, export it
export GROQ_API_KEY="your-key"

# Add to .env for persistence
echo 'GROQ_API_KEY="your-key"' >> .env
```

Issue: "Gemini authentication failed"
```bash
# Re-authenticate
gemini auth login
# Follow browser OAuth flow

# Verify
gemini "test"
```

Issue: Pre-commit hook too slow
```bash
# Switch to Groq (10x faster)
export GROQ_API_KEY="your-key"

# Or disable hook temporarily
git commit --no-verify -m "urgent fix"
```

## Gemini PR Automator

Purpose: Eliminates manual "Fix → Comment → /gemini review → Wait 1 min → Repeat" cycles with fully automated review iteration.
Automated Review Loops:
- Iterates up to 5 times automatically
- Waits for Gemini Code Assist review
- Applies safe fixes automatically
- Commits and pushes changes
- Repeats until all threads resolved
Quality Gate Checks:
- Complexity analysis before review
- Security pattern detection
- Test coverage validation
- Breaking change detection
Test Generation:
- Auto-generates tests for new code
- Validates test coverage
- Suggests edge cases
GraphQL Integration:
- Fetches PR review threads
- Auto-resolves threads when addressed
- Updates PR status in real-time
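For reference, fetching review threads boils down to a single GraphQL query. A sketch using this repository and the PR number from the examples below:

```bash
# List review threads and their resolution state for a PR (sketch)
gh api graphql -f query='
  query($owner: String!, $name: String!, $number: Int!) {
    repository(owner: $owner, name: $name) {
      pullRequest(number: $number) {
        reviewThreads(first: 50) {
          nodes {
            isResolved
            comments(first: 1) { nodes { body } }
          }
        }
      }
    }
  }' -f owner=doobidoo -f name=mcp-memory-service -F number=215
```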
Prerequisites:
```bash
# Install Gemini CLI
npm install -g @google/generative-ai-cli

# Authenticate
gemini auth login

# Install GitHub CLI
gh auth login

# Verify both
gemini "test" && gh auth status
```

Scripts Location: `scripts/pr/`
Full Automated Review:
```bash
# 5 iterations, safe fixes enabled
bash scripts/pr/auto_review.sh 215  # PR number

# Watch progress in terminal
# Script will:
# 1. Wait 1 minute for initial Gemini review
# 2. Apply safe fixes
# 3. Commit and push
# 4. Wait 1 minute for next review
# 5. Repeat up to 5 iterations
```

Quality Gate (Pre-Review):
```bash
# Run before requesting review
bash scripts/pr/quality_gate.sh 215

# Checks:
# - Complexity scores (<8 required)
# - Security patterns (no critical issues)
# - Test coverage (>80% recommended)
# - Breaking changes (documented?)
```

Test Generation:
```bash
# Generate tests for new code in PR
bash scripts/pr/generate_tests.sh 215

# Output: tests/test_new_feature.py
```

Breaking Change Detection:
```bash
# Compare feature branch to main
bash scripts/pr/detect_breaking_changes.sh main feature/oauth-integration

# Detects:
# - Removed public functions
# - Changed function signatures
# - Removed environment variables
# - Schema changes
```
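One rough way to approximate such a check with plain git (illustrative only; the project script may work differently):

```bash
# List top-level defs/classes present on main but missing on the feature branch
BASE=main
HEAD=feature/oauth-integration
git grep -h -E '^(def|class) ' "$BASE" -- '*.py' | sort -u > /tmp/api_base.txt
git grep -h -E '^(def|class) ' "$HEAD" -- '*.py' | sort -u > /tmp/api_head.txt

# Anything in base but not head is a candidate breaking removal or signature change
comm -23 /tmp/api_base.txt /tmp/api_head.txt
```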
Scenario: PR #215 with initial review comments

```
$ bash scripts/pr/auto_review.sh 215
[Auto Review] Starting iteration 1/5...
[Auto Review] Waiting 60s for Gemini Code Assist review...
[Auto Review] Fetching review threads via GraphQL...
[Auto Review] Found 3 unresolved threads:
1. "Add error handling in sync_to_cloudflare()"
2. "Fix typo in docstring"
3. "Reduce complexity in retry_with_backoff()"
[Auto Review] Applying safe fixes...
✓ Fixed typo in docstring
✓ Added try/except in sync_to_cloudflare()
⚠️ Manual review needed for complexity reduction
[Auto Review] Committing changes...
[Auto Review] Pushing to remote...
[Auto Review] Starting iteration 2/5...
[Auto Review] Waiting 60s for Gemini Code Assist review...
[Auto Review] Fetching review threads via GraphQL...
[Auto Review] Found 1 unresolved thread:
1. "Reduce complexity in retry_with_backoff()"
[Auto Review] No safe automatic fixes available
[Auto Review] Manual intervention required
[Auto Review] Summary:
✓ 2/3 threads auto-resolved
⚠️ 1 thread requires manual fix
⏱️ Time saved: ~18 minutes (vs manual iteration)
```

Manual PR Iteration (typical):
- Read review comment - 2 min
- Make fix - 5 min
- Commit and push - 1 min
- Comment "/gemini review" - 30s
- Wait for review - 1 min
- Total per iteration: ~9 min
- 5 iterations: ~45 min
Automated PR Iteration (with script):
- Run script - 10s
- Script handles all iterations - 15 min (background)
- Manual intervention if needed - 5 min
- Total: ~20 min
- Savings: 25 min (55%)
Run Quality Gate First:
```bash
# Before requesting review
bash scripts/pr/quality_gate.sh 215
# Fix issues before automation
```

Use Safe Fixes Mode:
- Auto-applies only low-risk fixes (typos, formatting, docstrings)
- Flags complex changes for manual review
- Prevents breaking changes
Monitor First Iteration:
- Watch iteration 1 to ensure script works correctly
- Intervene if unexpected behavior
- Let iterations 2-5 run unattended
GraphQL for Thread Resolution:
- Script auto-resolves threads when commits address feedback
- Saves 2+ minutes per thread vs manual clicking
- Keeps PR clean and organized
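Thread resolution itself is a single GraphQL mutation; roughly (a sketch, with the thread node ID coming from the query shown earlier):

```bash
# Resolve one review thread by node ID (sketch)
gh api graphql -f query='
  mutation($threadId: ID!) {
    resolveReviewThread(input: { threadId: $threadId }) {
      thread { isResolved }
    }
  }' -f threadId="<thread-node-id>"
```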
Issue: "Gemini review timeout"
```bash
# Increase wait time in script
export GEMINI_REVIEW_WAIT=120  # 2 minutes instead of 1

# Re-run
bash scripts/pr/auto_review.sh 215
```

Issue: "No review threads found"
```bash
# Manually request review first
gh pr comment 215 --body "/gemini review"

# Wait 1 minute, then run script
sleep 60
bash scripts/pr/auto_review.sh 215
```

Issue: "GraphQL authentication failed"
```bash
# Re-authenticate GitHub CLI
gh auth login

# Verify with GraphQL test
gh api graphql -f query='{ viewer { login } }'
```

## Amp CLI Bridge

Purpose: Leverage Amp CLI for external research without consuming Claude Code credits.
File-Based Workflow:
- Claude creates prompt file
- User runs Amp CLI command
- Amp writes response file
- Claude reads and continues
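Since the hand-off is purely file-based, waiting for a response can be scripted. A minimal sketch, assuming the prompt ID and polling approach (both hypothetical; Amp's actual invocation may already block until completion):

```bash
# Run an Amp prompt and wait until its response file appears (hypothetical helper)
PROMPT_ID="abc123"
RESPONSE=".claude/amp/responses/${PROMPT_ID}.json"

amp @.claude/amp/prompts/pending/${PROMPT_ID}.json

until [ -f "$RESPONSE" ]; do
    sleep 2
done
echo "Response ready: $RESPONSE"
```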
Use Cases:
- Web research (fetch docs, Stack Overflow)
- Codebase pattern analysis
- Best practices research
- Documentation generation
Credit Conservation:
- External LLM calls via Amp (not Claude)
- Useful for large research tasks
- Preserves Claude Code quota
Prerequisites:
```bash
# Install Amp CLI (if available)
# Installation instructions depend on Amp CLI distribution
# Follow Amp CLI official documentation

# Verify installation
amp --version
```

Directory Structure:

```
.claude/amp/
├── prompts/
│   ├── pending/    # Claude creates prompts here
│   └── completed/  # Processed prompts
└── responses/      # Amp writes responses here
```
Automated Workflow (Claude-initiated):
User: "Research OAuth 2.1 best practices for team collaboration"
Claude: "I'll create a prompt for Amp to research this topic. Please run:
amp @.claude/amp/prompts/pending/abc123.json
Once Amp completes, I'll analyze the response."
[Claude creates: .claude/amp/prompts/pending/abc123.json]
User runs:
$ amp @.claude/amp/prompts/pending/abc123.json
[Amp researches and writes: .claude/amp/responses/abc123.json]
Claude: "Thanks! I've read the research. Here's what I found..."
[Claude continues with researched context]
Manual Workflow (User-initiated):
```bash
# 1. Create prompt manually
cat > .claude/amp/prompts/pending/my-research.json <<EOF
{
  "task": "Research SQLite WAL mode best practices",
  "context": "MCP Memory Service database configuration",
  "focus": "Concurrent access, performance, reliability"
}
EOF

# 2. Run Amp
amp @.claude/amp/prompts/pending/my-research.json

# 3. Amp writes response
# .claude/amp/responses/my-research.json created

# 4. Tell Claude to read it
```

User: "Please read .claude/amp/responses/my-research.json and summarize"

Scenario: Research optimal chunking strategies for document ingestion
User: "What are the best practices for chunking documents for semantic search?"
Claude: "I'll create an Amp research prompt for this. Please run:
amp @.claude/amp/prompts/pending/chunking-research.json"
[Claude creates prompt file with research questions]
User runs:
$ amp @.claude/amp/prompts/pending/chunking-research.json
[Amp] Researching document chunking strategies...
[Amp] Fetching from LlamaIndex docs...
[Amp] Fetching from Pinecone blog...
[Amp] Analyzing 15 sources...
[Amp] Writing response to .claude/amp/responses/chunking-research.json
[Amp] Done! (45 seconds)
Claude: "Great! I've analyzed the research. Here are the key findings:
1. **Chunk Size**: 512-1024 tokens optimal for semantic search
2. **Overlap**: 10-20% overlap prevents context loss
3. **Boundaries**: Respect sentence/paragraph boundaries
4. **Metadata**: Store source, page number, section headers
I recommend:
- Use 1000 char chunks with 200 char overlap (current config)
- Add paragraph boundary detection (TODO)
- Include source metadata in each chunk (implemented)
Should I implement paragraph boundary detection?"
Credits Used:
- Claude Code: ~100 tokens (coordination)
- Amp CLI: 5,000 tokens (research)
- Savings: 98% credit reduction vs Claude doing research directly
Use Amp for Large Research:
- ✅ Fetching documentation from 10+ sources
- ✅ Analyzing best practices across multiple frameworks
- ✅ Comparing implementation patterns
- ❌ Simple questions (faster to ask Claude directly)
Clear Prompts:
- Specify exactly what you need
- Provide context (project, technology stack)
- Define expected output format
File Organization:
- Move completed prompts to the `completed/` folder
- Archive old responses periodically
- Clean up `.claude/amp/` monthly
Issue: "Amp command not found"
```bash
# Verify installation
which amp

# If not installed, follow Amp CLI documentation
# (Installation varies by distribution)
```

Issue: "Response file not created"
```bash
# Check Amp execution logs
amp @.claude/amp/prompts/pending/file.json --verbose

# Verify file permissions
ls -la .claude/amp/responses/
```

Issue: "Claude doesn't see response"
```bash
# Verify file exists
cat .claude/amp/responses/abc123.json

# Explicitly tell Claude:
# User: "Please read .claude/amp/responses/abc123.json"
```

## Context Provider Integration

Purpose: Rule-based context management with automatic memory storage and retrieval triggers.
Project-Specific Contexts:
- Python MCP Memory Service context
- Release Workflow context
- Custom user-defined contexts
Auto-Store Triggers:
- Technical patterns (MCP protocol, storage backends)
- Configuration changes
- Release events (merges, tags, issues)
- Documentation updates
Auto-Retrieve Triggers:
- Troubleshooting queries
- Setup questions
- Implementation examples
- Issue management queries
Session Initialization:
- Automatic context loading on session start
- Git repository analysis
- Recent activity summary
Automatic (v8.0.0+): Context Provider is integrated with Natural Memory Triggers v7.1.3. No additional setup required if hooks are installed.
Verify:
```bash
# Check session initialization status
node ~/.claude/hooks/memory-mode-controller.js status

# Should show:
# Context Provider: ✓ Active
# Available Contexts: 2
#   - python_mcp_memory
#   - mcp_memory_release_workflow
```

List Contexts:
```bash
mcp context list

# Output:
# Available Contexts:
# 1. python_mcp_memory (Priority: high)
#    - Tools: FastAPI, MCP protocol, storage backends
# 2. mcp_memory_release_workflow (Priority: high)
#    - Tools: Version management, CHANGELOG, issue tracking
```

Session Status:
```bash
mcp context status

# Output:
# Session Initialized: ✓
# Contexts Loaded: 2
# Memories Injected: 8
# Last Update: 2s ago
```

Optimization Suggestions:
```bash
mcp context optimize

# Output:
# Suggestions:
# 1. Add auto-store trigger for "pytest" → testing context
# 2. Consolidate 3 similar retrieve patterns
# 3. Update memory_type from "note" → "implementation"
```
# 3. Update memory_type from "note" → "implementation"Triggered Automatically:
User: "I've switched the backend to hybrid mode"
→ Stores: "Backend configuration change: hybrid mode enabled"
Tags: hybrid, configuration, storage-backend
User: "Merged PR #215 fixing database locks"
→ Stores: "Release event: PR #215 merged (database lock fix)"
Tags: release, pr, database-lock, fix
User: "Added OAuth 2.1 support to HTTP server"
→ Stores: "MCP protocol enhancement: OAuth 2.1 Dynamic Client Registration"
Tags: mcp-protocol, oauth, authentication, feature
Auto-Retrieve (triggered automatically):
User: "Why is the Cloudflare backend failing?"
→ Retrieves: Previous troubleshooting notes, configuration examples
→ Injects: Into Claude's context automatically
User: "How do we handle version bumps?"
→ Retrieves: Release workflow documentation, previous releases
→ Injects: Four-file procedure, CHANGELOG format
User: "What issues were fixed in the last release?"
→ Retrieves: Recent release notes, closed issues
→ Injects: Issue #123 (database locks), #124 (OAuth setup)
Via MCP Tool:
```bash
mcp context create \
  --name "my_feature_context" \
  --category "development" \
  --patterns "auto-store: feature implementation, testing" \
  --patterns "auto-retrieve: how to implement, best practices"
```

Manual Creation (Advanced):
Create .claude/contexts/my_context.json:
```json
{
  "name": "My Custom Context",
  "tool_category": "development",
  "applies_to_tools": ["code_execution", "file_operations"],
  "priority": "medium",
  "auto_store_triggers": {
    "feature_impl": {
      "patterns": ["implementing", "feature complete"],
      "action": "store",
      "memory_type": "implementation",
      "tags": ["feature", "implementation"]
    }
  },
  "auto_retrieve_triggers": {
    "best_practices": {
      "patterns": ["how to implement", "best practice"],
      "action": "retrieve",
      "max_results": 5
    }
  }
}
```

Let Context Provider Work:
- Don't manually store/retrieve if auto-triggers cover it
- Trust the pattern matching (85%+ accuracy)
- Review optimization suggestions monthly
Customize for Your Workflow:
- Add project-specific patterns
- Adjust memory types for your taxonomy
- Set appropriate priorities
Monitor Performance:
```bash
# Check effectiveness
mcp context analyze python_mcp_memory

# Output:
# Effectiveness Analysis:
# Auto-stores triggered: 42 times
# Auto-retrieves triggered: 28 times
# False positives: 2 (4.7%)
# False negatives: 3 (7.1%)
# Overall accuracy: 88.1%
```

Issue: Context not auto-storing
```bash
# Check pattern matching
node ~/.claude/hooks/memory-mode-controller.js test "implementing new feature"

# If no match, update patterns
mcp context update python_mcp_memory \
  --add-pattern "implementing.*feature"
```

Issue: Too many auto-retrieves
```bash
# Adjust sensitivity
node ~/.claude/hooks/memory-mode-controller.js sensitivity 0.7

# Or reduce max results
mcp context update python_mcp_memory \
  --max-results 3
```

Issue: Wrong memory types
```bash
# Review and consolidate
python scripts/maintenance/consolidate_memory_types.py --dry-run

# Update context patterns
mcp context update python_mcp_memory \
  --memory-type "implementation"
```

## Troubleshooting

Issue: "Command not found"
```bash
# Verify all tools installed
which gh      # GitHub CLI
which gemini  # Gemini CLI
which amp     # Amp CLI
node ~/.claude/hooks/memory-mode-controller.js status  # Context Provider

# Install missing tools (see Setup sections above)
```

Issue: "Authentication failed"
```bash
# Re-authenticate each tool
gh auth login                   # GitHub
gemini auth login               # Gemini
export GROQ_API_KEY="your-key"  # Groq

# Verify
gh auth status
gemini "test"
./scripts/utils/groq "test"
```

Issue: "Slow performance"
```bash
# Switch from Gemini to Groq (10x faster)
export GROQ_API_KEY="your-key"

# Verify speed improvement
time ./scripts/utils/groq "test"  # ~200-300ms
time gemini "test"                # ~2-3s
```
Documentation:
- TROUBLESHOOTING - General troubleshooting
- Memory Hooks Guide - Hook-specific issues
- Integration Guide - Client setup
GitHub:
- Issues - Report bugs
- Discussions - Ask questions
Agent Files:
- GitHub Release Manager: `.claude/agents/github-release-manager.md`
- Code Quality Guard: `.claude/agents/code-quality-guard.md`
- Gemini PR Automator: `.claude/agents/gemini-pr-automator.md`
- Amp CLI Bridge: `docs/amp-cli-bridge.md`
- Context Provider: `docs/context-provider-workflow-automation.md`
## Using Agents in Other Repositories

These agents were designed for mcp-memory-service but are portable to other projects with appropriate configuration.
| Component | Portability | Best For |
|---|---|---|
| Amp CLI Bridge | 95% | ✅ All languages - Works as-is |
| GraphQL Helpers | 100% | ✅ All languages - Pure GitHub API |
| GitHub Release Manager | 70% | ✅ Python, Node.js, Rust (version file mapping) |
| Code Quality Guard | 60% | |
| Gemini PR Automator | 50% | |
```bash
# 1. Copy agent definitions
mkdir -p .claude/agents
curl -o .claude/agents/github-release-manager.md \
  https://raw.githubusercontent.com/doobidoo/mcp-memory-service/main/.claude/agents/github-release-manager.md

# 2. Copy portable scripts (work with any language)
mkdir -p scripts/pr/lib
curl -o scripts/pr/watch_reviews.sh \
  https://raw.githubusercontent.com/doobidoo/mcp-memory-service/main/scripts/pr/watch_reviews.sh
chmod +x scripts/pr/*.sh

# 3. Update version file paths in agent definitions

# 4. Configure GROQ_API_KEY for fast LLM calls
export GROQ_API_KEY="your-key"
```

For detailed migration instructions, see:
- Using Agents in Other Repositories - Complete cross-repository guide
Covers:
- ✅ Python projects (30 min setup)
- ✅ Node.js/TypeScript projects (2-3 hours)
- ✅ Rust projects (4-6 hours)
- ✅ Go projects (6-8 hours)
- ✅ Configuration templates
- ✅ Migration checklists
- ✅ Troubleshooting
- Using Agents in Other Repositories - 🆕 Cross-repository agent setup
- Complete Feature List - All 33 features
- Integration Guide - Client setup
- Memory Hooks Guide - Natural Memory Triggers
- Context Provider Automation - Rule-based triggers
- Advanced Configuration - Production setup
- GitHub Repository
Documentation: Home • Installation • Integration • Troubleshooting
Features: All 33 Features • Memory Hooks • Web Dashboard • OAuth 2.1
Automation: Agent Guide • Cross-Repo Setup • Context Provider
Community: GitHub • Discussions • Issues • Contribute
MCP Memory Service • Zero database locks • 5ms reads • 85% accurate memory triggers • MIT License