A Model Context Protocol (MCP) server that integrates OpenAI's Codex CLI with Claude Code, enabling you to use Codex's capabilities directly from within Claude.
This server is adapted from the original GPT-5 MCP server, with its cost-tracking features removed and replaced by direct Codex CLI integration.
- Persistent Process Architecture: 80% faster responses with long-lived Codex processes
- Workspace Isolation: Automatic repository-based session separation
- Streaming Support: Real-time progress updates and thinking events
- Session Management: Health monitoring, cancellation, and restart capabilities
- Lightweight Persistence: JSON-based storage for session recovery
- Resource Processing: Handle attached files and content in prompts
- Structured Logging: Context-aware error categorization and tracking
- Enhanced Capability Detection: Dynamic discovery of Codex CLI features
- Improved Claude Code Integration: Optimized for collaborative workflows
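The lightweight JSON persistence mentioned above can be pictured with a small sketch. The file name and record shape here are illustrative assumptions, not the server's actual schema:

```typescript
import { writeFileSync, readFileSync, existsSync } from "node:fs";

// Hypothetical on-disk record for session recovery; the real server's
// schema may differ.
interface PersistedSession {
  sessionId: string;
  workspacePath: string;
  requestCount: number;
}

function saveSessions(path: string, sessions: PersistedSession[]): void {
  // Pretty-printed JSON keeps the file human-inspectable.
  writeFileSync(path, JSON.stringify(sessions, null, 2), "utf8");
}

function loadSessions(path: string): PersistedSession[] {
  // A missing file simply means there are no sessions to recover.
  if (!existsSync(path)) return [];
  return JSON.parse(readFileSync(path, "utf8")) as PersistedSession[];
}
```

A flat JSON file like this trades durability guarantees for zero dependencies, which suits recovery of lightweight session metadata rather than full conversation state.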
- `codex_ask`: Primary tool for Codex assistance with enhanced integration
- `codex_conversation_start`: Begin a new conversation with context
- `codex_conversation_continue`: Continue an existing conversation
- `codex_conversation_options`: Configure conversation settings
- `codex_conversation_metadata`: View conversation details
- `codex_conversation_summarize`: Compress conversation history
- `codex_cancel`: Cancel ongoing operations or force terminate sessions
- `codex_health`: Monitor session status and get diagnostics
- `codex_restart`: Recover from errors with process restart
- Node.js (≥18.0.0)
- pnpm package manager
- Codex CLI installed and configured
npm install -g @openai/codex # or brew install codex
./install.sh

This script will:
- Check dependencies
- Install Codex CLI if needed
- Build the project
- Configure Claude Desktop integration
- Run tests
- Install dependencies:

  pnpm install

- Build the project:

  pnpm run build

- Configure Claude Desktop by adding to `claude_desktop_config.json`:

  {
    "mcpServers": {
      "codex": {
        "command": "node",
        "args": ["/path/to/codex_mcp/dist/index.js"],
        "env": {
          "MAX_CONVERSATIONS": "50",
          "MAX_CONVERSATION_HISTORY": "100",
          "MAX_CONVERSATION_CONTEXT": "10"
        }
      }
    }
  }
./start.sh

# Production mode
node dist/index.js
# Development mode with hot reload
pnpm run dev

Environment variables (set in a `.env` file):
- `MAX_CONVERSATIONS`: Maximum number of concurrent conversations (default: 50)
- `MAX_CONVERSATION_HISTORY`: Maximum messages per conversation (default: 100)
- `MAX_CONVERSATION_CONTEXT`: Maximum context messages sent to Codex (default: 10)
- `LOG_LEVEL`: Logging level (default: info)
- `MAX_SESSIONS`: Maximum concurrent Codex sessions (default: 10)
- `SESSION_IDLE_TIMEOUT`: Session cleanup timeout in ms (default: 1800000, i.e. 30 minutes)
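A minimal sketch of how these variables might be resolved at startup; the variable names match the list above, but the helper and config object are illustrative, not the server's actual code:

```typescript
// Resolve an integer environment variable, falling back to the documented
// default when the variable is unset or not a valid number.
function envInt(name: string, fallback: number): number {
  const raw = process.env[name];
  const parsed = raw === undefined ? NaN : Number.parseInt(raw, 10);
  return Number.isNaN(parsed) ? fallback : parsed;
}

// Defaults mirror the README's documented values.
const config = {
  maxConversations: envInt("MAX_CONVERSATIONS", 50),
  maxConversationHistory: envInt("MAX_CONVERSATION_HISTORY", 100),
  maxConversationContext: envInt("MAX_CONVERSATION_CONTEXT", 10),
  logLevel: process.env.LOG_LEVEL ?? "info",
  maxSessions: envInt("MAX_SESSIONS", 10),
  sessionIdleTimeoutMs: envInt("SESSION_IDLE_TIMEOUT", 30 * 60 * 1000),
};
```

Falling back on parse failure (rather than throwing) keeps the server bootable even with a malformed `.env` entry.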
Run the comprehensive test suite:
# Test all functionality
node test-server.js
# Test MCP protocol only
node test-mcp-protocol.js
# Test tools functionality
node test-tools.js

MCP clients (like Claude Code) automatically discover available tools through the Model Context Protocol:
The server exposes 9 tools with full JSON schemas:
- `consult_codex` - Core Codex interaction with advanced features
- `get_session_health` - Monitor and diagnose sessions
- `cancel_request`, `restart_session` - Session management
- `start_conversation`, `continue_conversation` - Conversation workflows
- `set_conversation_options`, `get_conversation_metadata`, `summarize_conversation` - Advanced conversation control
Users naturally discover capabilities through:
- Tool descriptions in MCP protocol
- Response metadata showing session IDs and workspace paths
- Error messages that guide proper usage
- Rich output from Codex CLI operations
Every response includes helpful context:
Session: `my-session` | Workspace: `/path/to/repo`
@codex What is the best way to implement error handling in Node.js?
@codex Based on this code: [attach file], how can I optimize the performance?
# Automatically uses current repository workspace for isolation
@codex {"session_id": "auth-feature", "prompt": "Help me implement user authentication"}
@codex {"session_id": "auth-feature", "prompt": "Now add password reset functionality"}
# Session maintains workspace context and request history
{
"session_id": "complex-task",
"workspace_path": "/specific/repo",
"streaming": true,
"prompt": "Analyze and refactor this large codebase",
"page": 1,
"max_tokens_per_page": 15000
}

@codex_health # Monitor all sessions
@codex_health {"session_id": "my-task"} # Check specific session
@codex_cancel {"session_id": "stuck-task"} # Cancel operations
@codex_restart {"session_id": "failed"} # Recover from errors
# Work on frontend
@codex_ask {"workspace_path": "/projects/frontend", "session_id": "ui", "prompt": "Update React components"}
# Switch to backend
@codex_ask {"workspace_path": "/projects/backend", "session_id": "api", "prompt": "Add new API endpoints"}
# Sessions remain isolated by workspace
@codex_health # Shows both sessions with different workspace IDs
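The isolation shown above can be sketched as a composite session key derived from the workspace path; the hashing scheme and map layout are assumptions for illustration, not the server's actual code:

```typescript
import { createHash } from "node:crypto";

// Derive a short, stable workspace ID from the repository path.
function workspaceId(workspacePath: string): string {
  return createHash("sha256").update(workspacePath).digest("hex").slice(0, 12);
}

// Sessions are looked up by a composite key, so a session named "ui" in
// /projects/frontend never collides with "ui" in /projects/backend.
const sessions = new Map<string, { workspacePath: string; requests: number }>();

function sessionKey(workspacePath: string, sessionId: string): string {
  return `${workspaceId(workspacePath)}:${sessionId}`;
}

sessions.set(sessionKey("/projects/frontend", "ui"), { workspacePath: "/projects/frontend", requests: 0 });
sessions.set(sessionKey("/projects/backend", "ui"), { workspacePath: "/projects/backend", requests: 0 });
```

Keying by a hash of the path (rather than the raw path) keeps session IDs short and filesystem-safe while remaining deterministic across restarts.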
All errors are automatically categorized and logged with context:
- Codex CLI errors: Command failures, timeouts, authentication issues
- Session management: Creation, timeout, and capacity issues
- MCP protocol: Request validation and response handling
- Resource errors: File access, permissions, disk space
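The four categories above can be sketched as a simple classifier; the category names follow the README, but the matching heuristics below are assumptions for the example:

```typescript
// Illustrative error categorization. The real server's rules may differ;
// this just shows the shape of context-aware classification.
type ErrorCategory = "codex_cli" | "session" | "mcp_protocol" | "resource";

function categorizeError(err: Error): ErrorCategory {
  const msg = err.message.toLowerCase();
  // File access, permissions, disk space.
  if (msg.includes("enoent") || msg.includes("eacces") || msg.includes("enospc")) return "resource";
  // Command failures, timeouts, authentication issues.
  if (msg.includes("timeout") || msg.includes("auth")) return "codex_cli";
  // Session creation, timeout, and capacity issues.
  if (msg.includes("session")) return "session";
  // Everything else is treated as a protocol-level problem.
  return "mcp_protocol";
}
```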
The server automatically detects available Codex CLI features:
- JSON mode support
- Available models (GPT-5, o3, etc.)
- Workspace/directory mode
- File operation capabilities
- Plan API support
- Token limits and constraints
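One way such a probe could work is to scan the CLI's help output for known flags. The specific flag names below are assumptions for the sketch, not confirmed Codex CLI options:

```typescript
// Subset of detectable capabilities, for illustration only.
interface CodexCapabilities {
  jsonMode: boolean;
  workspaceMode: boolean;
}

// Scan help text for hypothetical flags; the real detection logic and the
// actual flag names may differ.
function detectCapabilities(helpText: string): CodexCapabilities {
  return {
    jsonMode: helpText.includes("--json"),
    workspaceMode: helpText.includes("--cd") || helpText.includes("--workdir"),
  };
}

const caps = detectCapabilities(
  "Usage: codex [OPTIONS]\n  --json  Emit JSON events\n  --cd <DIR>  Working directory"
);
```

Detecting features at startup, instead of hard-coding them, lets the server degrade gracefully across Codex CLI versions.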
Complex prompts with special characters are automatically escaped for secure execution:
// Handles prompts like: "What's the best way to implement auth?"
// Safely escapes: 'What'\''s the best way to implement auth?'

- Unified `session_id` system for seamless integration
- Tool names prefixed with `codex_` for easy discovery
- Rich error context with recovery suggestions
- Workspace-aware session isolation
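Returning to the quoting behavior described above, the standard POSIX technique is to wrap the prompt in single quotes and rewrite each embedded quote as `'\''`. A minimal sketch (not the server's exact implementation):

```typescript
// POSIX single-quote escaping: inside single quotes nothing is special,
// so each embedded quote is rewritten as '\'' (close quote, escaped
// quote, reopen quote).
function shellEscape(prompt: string): string {
  return `'${prompt.replace(/'/g, `'\\''`)}'`;
}

shellEscape("What's the best way to implement auth?");
// → 'What'\''s the best way to implement auth?'
```

Note that passing arguments via `child_process.spawn`'s argument array avoids shell interpretation entirely; escaping like this is only needed when a command string goes through a shell.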
Primary Codex interaction tool with enhanced Claude Code integration.
Parameters:
- `prompt` (string, required): The prompt to send to Codex
- `session_id` (string, optional): Session ID for persistent context
- `workspace_path` (string, optional): Workspace path for repository isolation
- `context` (string, optional): Additional context for the prompt
- `streaming` (boolean, optional): Enable streaming responses
- `model` (string, optional): Model to use (e.g., "o3", "gpt-5")
- `page` (number, optional): Page number for pagination
- `max_tokens_per_page` (number, optional): Maximum tokens per page
Response: Rich Codex output with session and workspace metadata
Monitor session status and diagnostics with detailed capability reporting.
Parameters:
- `session_id` (string, optional): Specific session to check
Response: Session status, workspace info, capabilities, and request counts
Enhanced session management and recovery with structured error handling.
Parameters:
- `session_id` (string, required): Target session ID
- `force` (boolean, optional): Force termination vs. graceful restart
Response: Operation status and confirmation
Legacy conversation management (consider using sessions instead):
- `start_conversation`, `continue_conversation`
- `set_conversation_options`, `get_conversation_metadata`
- `summarize_conversation`
# Install via npm
npm install -g @openai/codex
# Install via Homebrew
brew install codex
# Verify installation
codex --help

- Check that dependencies are installed:

  pnpm install

- Ensure the project is built:

  pnpm run build

- Verify Codex CLI is working:

  codex exec "echo test"
Make sure the Codex CLI is authenticated. Run `codex` to check its status, and re-authenticate if needed.
src/
├── index.ts                  # Main MCP server
├── codex-process-simple.ts   # Enhanced Codex CLI wrapper with capability detection
├── session-manager.ts        # Session management with workspace isolation
├── conversation.ts           # Conversation management
├── logger.ts                 # Structured logging system
├── error-types.ts            # Error categorization and definitions
├── error-utils.ts            # Error mapping and recovery utilities
└── types.ts                  # TypeScript types
pnpm run build

pnpm run test   # Run test suite
pnpm run dev    # Development mode

MIT License - see the original GPT-5 MCP server for full license details.