A proxy server that intercepts and stores calls to large language models (LLMs) for building fine-tuning datasets for small, efficient models. Perfect for training compact models like Liquid AI's LFM2 series (350M to 2.6B parameters) using data from larger models.
The steps are as simple as:
- Use an LLM in your app via an OpenAI-compatible API to solve your task
- Run traffic through LLM Intercept (this package) to log the data
- Export the dataset as Parquet, optionally with system prompts removed
- Fine-tune a smaller model on the collected data
- Replace the large model with your fine-tuned small model: much cheaper, faster, or run locally at zero cost
- Profit!
Works with any OpenAI-compatible API including OpenRouter, other API providers, and local LLM servers.
- Ready for fine-tuning - Automatically formats conversations with assistant responses for direct model training
- OpenAI-compatible API - Drop-in replacement for OpenAI API clients
- API agnostic - Works with OpenRouter, llama.cpp (llama-server), and any OpenAI-compatible endpoint
- Request logging - Stores all requests and responses in a SQLite database
- Streaming support - Full support for SSE streaming responses
- Function calls - Supports OpenAI function calling and tools
- Admin dashboard - Web interface for viewing and analyzing stored requests
- Export functionality - Export as JSONL.zstd or Parquet for ML pipelines
- Search & filter - Filter by date or model, and search message content
- Password protected - Admin interface secured with password authentication
⚠️ Legal Disclaimer: Users are responsible for ensuring compliance with the terms of service of their model provider regarding fine-tuning on model outputs. Some proprietary models (e.g., OpenAI, Anthropic) may restrict this usage. We recommend using open-source models with permissive licenses such as:
- DeepSeek-V3.2 (MIT License)
- Qwen3-235B-A22B (Apache 2.0)
- GLM-4.5 (MIT License)
- Other Apache 2.0 / MIT licensed models
Always review your provider's terms before collecting training data.
```bash
pip install llm-intercept
```

Or install from source for development:

```bash
git clone https://github.com/yourusername/llm-intercept.git
cd llm-intercept
pip install -e ".[dev]"
```

```bash
# Using OpenRouter (default)
llm-intercept serve --admin-password YOUR_SECURE_PASSWORD

# Using OpenRouter (manual)
llm-intercept serve \
  --base-url https://openrouter.ai/api/v1/chat/completions \
  --admin-password YOUR_SECURE_PASSWORD

# Using a local Ollama server
llm-intercept serve \
  --base-url http://localhost:11434/v1/chat/completions \
  --admin-password YOUR_SECURE_PASSWORD
```

Or using environment variables:

```bash
export BASE_URL=https://openrouter.ai/api/v1/chat/completions
export ADMIN_PASSWORD=YOUR_SECURE_PASSWORD
llm-intercept serve
```

The server will start on http://localhost:8000 by default.
Simply point your OpenAI-compatible client to the proxy server:
```python
import openai

client = openai.OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="your-api-key"  # e.g., OpenRouter key
)

response = client.chat.completions.create(
    model="deepseek/deepseek-chat-v3.1",  # use the appropriate model for your target API
    messages=[
        {"role": "user", "content": "Hello, how are you?"}
    ]
)
```
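Streaming requests go through the proxy unchanged; a minimal sketch with the same client (`stream=True` is the standard OpenAI parameter and the prompt is illustrative):

```python
# Tokens arrive incrementally as SSE deltas and are still logged by the proxy
stream = client.chat.completions.create(
    model="deepseek/deepseek-chat-v3.1",
    messages=[{"role": "user", "content": "Write a haiku about proxies."}],
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
```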
Access the admin dashboard at:

```
http://localhost:8000/admin?password=YOUR_SECURE_PASSWORD
```
The `llm-intercept serve` command starts the proxy server.
Options:
- `--host` - Host to bind to (default: `0.0.0.0`)
- `--port` - Port to bind to (default: `8000`)
- `--base-url` - Target API base URL (default: `https://openrouter.ai/api/v1/chat/completions`; can use the `BASE_URL` env var)
- `--database-url` - Database URL (default: `sqlite:///./llm_intercept.db`)
- `--admin-password` - Admin interface password (required; can use the `ADMIN_PASSWORD` env var)
- `--reload` - Enable auto-reload for development
Examples:
```bash
# Basic usage (OpenRouter)
llm-intercept serve --admin-password mypassword

# Custom host and port
llm-intercept serve --host 127.0.0.1 --port 5000 --admin-password mypassword

# Development mode with auto-reload
llm-intercept serve --admin-password mypassword --reload
```

`llm-intercept init-database` initializes the database (creates tables).
Options:
- `--database-url` - Database URL (default: `sqlite:///./llm_intercept.db`)
Example:
```bash
llm-intercept init-database --database-url sqlite:///./my_data.db
```

`POST /v1/chat/completions` is the OpenAI-compatible chat completions endpoint. It forwards requests to the configured target API and stores them.
Headers:
- `Authorization: Bearer YOUR_API_KEY` (API key for your target service)
Supported parameters:
- `model` - Model identifier (e.g., `deepseek/deepseek-chat-v3.1`)
- `messages` - Array of message objects
- `temperature` - Sampling temperature
- `max_tokens` - Maximum tokens to generate
- `top_p` - Nucleus sampling parameter
- `frequency_penalty` - Frequency penalty
- `presence_penalty` - Presence penalty
- `stream` - Enable streaming (boolean)
- `functions` - Function definitions (OpenAI format)
- `function_call` - Function call parameter
- `tools` - Tool definitions (OpenAI format)
- `tool_choice` - Tool choice parameter
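If you are not using the OpenAI SDK, you can hit the endpoint directly; a minimal sketch with `requests` (the parameter values are illustrative):

```python
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "model": "deepseek/deepseek-chat-v3.1",
        "messages": [{"role": "user", "content": "Hello"}],
        "temperature": 0.7,
        "max_tokens": 256,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```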
Health check endpoint.
Response:
```json
{
  "status": "healthy",
  "timestamp": "2024-01-01T12:00:00"
}
```

`GET /admin` serves the admin dashboard interface (password protected).
Query parameters:
- `password` - Admin password (required)
The dashboard shows summary statistics:

- Total request count
- Unique models used
- Average response time

Stored requests can be filtered:

- Date range - Filter by start and end datetime
- Model - Filter by specific model
- Text search - Search within message content

The request list provides:

- A paginated list of requests (20 per page)
- Color-coded status indicators (green = OK, red = error)

For each request you can:

- View the full conversation messages (including assistant responses)
- View tool calls separately if present
- View the raw API response data
- See metadata (timestamp, model, response time, streaming status)

Exports from the dashboard offer:

- Format options: JSONL.zstd or Parquet
- Auto-filtering: only successful requests (`status='ok'`) are exported
- The option to include or exclude system prompts
- A download button that generates a timestamped file
- Output ready for ML pipelines and fine-tuning frameworks
JSONL.zstd - One JSON object per line, compressed:

```json
{
  "messages": [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi there!"}
  ],
  "model": "deepseek/deepseek-chat-v3.1",
  "timestamp": "2024-01-01T12:00:00",
  "tool_calls": [...]  // optional, present only if tools were used
}
```

Parquet - Columnar format with snappy compression:
- `messages` - JSON string of the conversation array
- `model` - Model identifier
- `timestamp` - ISO timestamp
- `tool_calls` - JSON string of tool calls (nullable)
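Either format is straightforward to load downstream; a minimal sketch assuming the `zstandard` and `pandas` packages and illustrative file names:

```python
import io
import json

import pandas as pd        # pip install pandas pyarrow
import zstandard as zstd   # pip install zstandard

# JSONL.zstd: stream-decompress and parse one JSON object per line
with open("export.jsonl.zst", "rb") as fh:  # file name is illustrative
    reader = zstd.ZstdDecompressor().stream_reader(fh)
    for line in io.TextIOWrapper(reader, encoding="utf-8"):
        record = json.loads(line)
        print(record["model"], len(record["messages"]))

# Parquet: read the columnar export, then decode the JSON-string columns
df = pd.read_parquet("export.parquet")      # file name is illustrative
df["messages"] = df["messages"].apply(json.loads)
```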
The package uses SQLModel with SQLite by default. The main table `llm_requests` stores:
- Request metadata (timestamp, model, API key hash)
- Sampling parameters (temperature, max_tokens, etc.)
- Messages (JSON)
- Response data (JSON)
- Performance metrics (response_time_ms)
- Error information (if any)
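Because captures land in a plain SQLite file, you can also query them directly. A minimal sketch; the table name `llm_requests` and the `model` and `response_time_ms` columns come from the schema above, and the database path is the configured default:

```python
import sqlite3

con = sqlite3.connect("llm_intercept.db")  # default DATABASE_URL target
rows = con.execute(
    """
    SELECT model, COUNT(*) AS n, AVG(response_time_ms) AS avg_ms
    FROM llm_requests
    GROUP BY model
    ORDER BY n DESC
    """
).fetchall()
for model, n, avg_ms in rows:
    print(f"{model}: {n} requests, {avg_ms:.0f} ms average")
```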
- `BASE_URL` - Target API base URL (default: `https://openrouter.ai/api/v1/chat/completions`)
- `DATABASE_URL` - Database connection URL (default: `sqlite:///./llm_intercept.db`)
- `ADMIN_PASSWORD` - Password for admin interface (required)
A typical distillation workflow:

1. Build an application using a large, expensive model (e.g., DeepSeek, Qwen3)
2. Route all API calls through the LLM Intercept proxy
3. Collect real-world usage data
4. Export the dataset
5. Fine-tune a smaller, cheaper model (e.g., LFM2-1.2B); see the sketch after this list
6. Deploy the fine-tuned model locally or at lower cost
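A minimal sketch of the fine-tuning preparation in step 5: rendering exported conversations with the target model's chat template via Hugging Face `transformers`. The model id and file name are illustrative, and the export is assumed to be decompressed JSONL:

```python
import json

from transformers import AutoTokenizer  # pip install transformers

# Illustrative target model; any chat model with a chat template works
tok = AutoTokenizer.from_pretrained("LiquidAI/LFM2-1.2B")

with open("export.jsonl", encoding="utf-8") as fh:
    records = [json.loads(line) for line in fh]

# Render each stored conversation into a single training string
texts = [
    tok.apply_chat_template(r["messages"], tokenize=False)
    for r in records
]
print(texts[0][:200])
```

These strings (or the raw `messages` lists) can then feed the SFT framework of your choice.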
The stored data is also useful for observability:

- Track model usage across your organization
- Monitor response times and errors
- Analyze prompt patterns
- Debug API issues
- Compare different models' performance
- Track token usage
- Identify optimization opportunities
```bash
pip install -e ".[dev]"
```

```bash
pytest                      # run tests
black llm_intercept/        # format
ruff check llm_intercept/   # lint
```

MIT license. See the LICENSE file for details.
This app was vibe-coded with Claude Code (Sonnet 4.5) in under an hour. No guarantees, no warranties, use at your own risk!