@leon90dm

No description provided.
Add detailed documentation to help AI assistants understand and work with
the Memori codebase, including:

- Complete architecture overview and component relationships
- Detailed codebase structure with 73+ Python modules
- Development workflows and code quality standards
- Database schema design and multi-tenant patterns
- LLM integration patterns (OpenAI, Anthropic, LiteLLM)
- Security best practices and common pitfalls
- Testing guidelines and debugging tips
- Pull request checklist and contribution guide

This guide serves as a comprehensive reference for AI assistants to
understand conventions, make safe changes, and maintain code quality.

Add comprehensive MCP server to expose Memori's persistent memory
capabilities to any MCP-compatible AI assistant like Claude Desktop.

Features:
- 6 MCP tools for memory operations (record, search, retrieve, stats, etc.)
- 2 MCP resources for read-only data access
- 2 MCP prompt templates for common workflows
- Multi-tenant isolation with user_id/session_id support
- Support for all Memori database backends (SQLite, PostgreSQL, MySQL)
- Automatic memory processing with OpenAI integration

Files added:
- mcp/memori_mcp_server.py - Main MCP server implementation
- mcp/README.md - Comprehensive documentation
- mcp/QUICKSTART.md - 5-minute setup guide
- mcp/claude_desktop_config.json - Configuration template
- mcp/__init__.py - Package initialization
- examples/mcp/basic_usage_example.py - Basic usage examples
- examples/mcp/advanced_workflow_example.py - Advanced workflows

Tools provided:
1. record_conversation - Store conversations with automatic processing
2. search_memories - Intelligent memory search with filtering
3. get_recent_memories - Retrieve recent context
4. get_memory_statistics - Memory analytics and insights
5. get_conversation_history - Access conversation history
6. clear_session_memories - Reset session context
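As an illustrative sketch only (the tool names above are from the server, but this in-memory store is hypothetical; the real implementation persists to SQL and processes memories with an LLM), the multi-tenant record/search/clear surface could be shaped like this, with `user_id`/`session_id` scoping every call:

```python
from dataclasses import dataclass, field

# Hypothetical in-memory stand-in for the SQL-backed store; the real
# server persists to SQLite/PostgreSQL/MySQL and runs automatic
# memory processing before storage.
@dataclass
class MemoryStore:
    records: list = field(default_factory=list)

    def record_conversation(self, user_id: str, session_id: str, text: str) -> None:
        """Store one conversation turn, scoped to a tenant and session."""
        self.records.append({"user_id": user_id, "session_id": session_id, "text": text})

    def search_memories(self, user_id: str, query: str) -> list[str]:
        """Naive substring search, filtered to the calling user's memories only."""
        return [r["text"] for r in self.records
                if r["user_id"] == user_id and query.lower() in r["text"].lower()]

    def clear_session_memories(self, user_id: str, session_id: str) -> None:
        """Reset one session's context without touching other sessions or users."""
        self.records = [r for r in self.records
                        if not (r["user_id"] == user_id and r["session_id"] == session_id)]
```

The key design point the sketch illustrates is isolation: every read and write is filtered by tenant identifiers, so one user's search can never surface another user's memories.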

Configuration:
- Added mcp optional dependency group to pyproject.toml
- Requires: mcp>=1.0.0, fastmcp>=2.0.0
- Uses uv for dependency management
- Environment-based configuration (MEMORI_DATABASE_URL, OPENAI_API_KEY)

Usage:
1. Install uv package manager
2. Configure Claude Desktop with provided JSON
3. Restart Claude Desktop
4. Use hammer icon to access Memori tools
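Step 2 amounts to pointing Claude Desktop at the server via its JSON config. The snippet below is a hypothetical illustration, not the shipped `mcp/claude_desktop_config.json`: the exact `command`/`args` depend on where the server lives, and the values shown are placeholders.

```json
{
  "mcpServers": {
    "memori": {
      "command": "uv",
      "args": ["run", "memori_mcp_server.py"],
      "env": {
        "MEMORI_DATABASE_URL": "sqlite:///memori.db",
        "OPENAI_API_KEY": "sk-..."
      }
    }
  }
}
```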

This enables Claude and other MCP clients to have persistent, queryable
memory across all conversations with full SQL database backing.

Add comprehensive support for multiple LLM providers in the Memori MCP server,
with OpenRouter as the recommended option for accessing 100+ models.

Features:
- OpenRouter support (100+ models including Claude, GPT-4, Llama, Mistral)
- Azure OpenAI support (enterprise deployments)
- Custom OpenAI-compatible endpoints (Ollama, LM Studio, etc.)
- Automatic provider detection from environment variables
- Priority-based configuration (OpenRouter > Azure > Custom > OpenAI)

Provider detection priority:
1. OpenRouter (OPENROUTER_API_KEY) - access to 100+ models
2. Azure OpenAI (AZURE_OPENAI_API_KEY) - enterprise
3. Custom endpoint (LLM_BASE_URL) - local/self-hosted
4. OpenAI (OPENAI_API_KEY) - default
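The priority order above can be sketched as a small function over environment variables. This is a minimal illustration of the detection logic, not the actual `_detect_llm_provider()` in `mcp/memori_mcp_server.py`, which also wires up the client; the dict keys here are assumptions.

```python
import os

# Illustrative sketch of the detection priority: OpenRouter > Azure > Custom > OpenAI.
def detect_llm_provider() -> dict:
    """Pick a provider config from environment variables, highest priority first."""
    if os.environ.get("OPENROUTER_API_KEY"):
        return {
            "provider": "openrouter",
            "api_key": os.environ["OPENROUTER_API_KEY"],
            "base_url": os.environ.get("OPENROUTER_BASE_URL", "https://openrouter.ai/api/v1"),
            "model": os.environ.get("OPENROUTER_MODEL", "openai/gpt-4o"),
        }
    if os.environ.get("AZURE_OPENAI_API_KEY"):
        return {"provider": "azure", "api_key": os.environ["AZURE_OPENAI_API_KEY"]}
    if os.environ.get("LLM_BASE_URL"):
        return {
            "provider": "custom",
            "base_url": os.environ["LLM_BASE_URL"],
            "api_key": os.environ.get("LLM_API_KEY", ""),
            "model": os.environ.get("LLM_MODEL", ""),
        }
    return {"provider": "openai", "api_key": os.environ.get("OPENAI_API_KEY", "")}
```

Because detection is a simple first-match chain, setting `OPENROUTER_API_KEY` always wins even if `OPENAI_API_KEY` is also present, which is what makes the priority predictable.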

Files modified:
- mcp/memori_mcp_server.py - Added _detect_llm_provider() function
- mcp/README.md - Comprehensive LLM provider configuration guide
- mcp/QUICKSTART.md - Quick provider selection guide
- mcp/claude_desktop_config_openrouter.json - OpenRouter config template
- examples/mcp/openrouter_example.py - Full OpenRouter usage example

Environment variables added:
- OPENROUTER_API_KEY - OpenRouter API key
- OPENROUTER_MODEL - Model to use (default: openai/gpt-4o)
- OPENROUTER_BASE_URL - API base URL (default: https://openrouter.ai/api/v1)
- OPENROUTER_APP_NAME - Optional app name for rankings
- OPENROUTER_SITE_URL - Optional site URL for rankings
- AZURE_OPENAI_* - Azure OpenAI configuration
- LLM_BASE_URL - Custom endpoint URL
- LLM_API_KEY - Custom API key
- LLM_MODEL - Custom model name
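Given the defaults above, a minimal OpenRouter setup needs only the API key; everything else falls back to documented defaults. The key below is a placeholder:

```shell
# Setting OPENROUTER_API_KEY is enough to select OpenRouter (highest priority).
export OPENROUTER_API_KEY="sk-or-..."  # placeholder; use your real key
# Optional: override the default model (openai/gpt-4o).
export OPENROUTER_MODEL="meta-llama/llama-3.1-70b-instruct"
```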

Benefits:
- Access 100+ models through OpenRouter (Claude, GPT-4, Llama, etc.)
- Use free models (Llama 3.1 70B, Mistral, etc.)
- Run local models with Ollama/LM Studio
- Enterprise Azure OpenAI support
- Cost optimization through model selection
- No vendor lock-in

Popular OpenRouter models:
- anthropic/claude-3.5-sonnet - Best for structured tasks
- openai/gpt-4o - OpenAI's fastest GPT-4
- meta-llama/llama-3.1-70b-instruct - FREE, open-source
- google/gemini-pro-1.5 - Google Gemini
- mistralai/mixtral-8x7b-instruct - Cost-effective

This enables users to choose the best model for their needs and budget,
from premium models to free open-source options.