An AI-powered terminal assistant similar to Claude Code and Gemini CLI that runs entirely offline using local models. No API keys, no subscriptions, no data leaving your machine.
While tools like Claude Code and Gemini CLI have set the standard for AI-powered terminal experiences, they require cloud connectivity and come with API costs. Oclai brings similar capabilities directly to your terminal with a key difference: it's completely offline, free, and runs on local AI models.
Built with Go (because let's face it, Go is perfect for CLI tools), Oclai offers a refreshingly simple, non-bloated alternative that respects your privacy and doesn't require an internet connection or subscription.
The AI development ecosystem has largely been dominated by Python and TypeScript/JavaScript implementations. Even industry-leading tools like Claude Code and Gemini CLI are built with TypeScript. As someone who appreciates Go's elegance and performance for building CLI applications, I wanted to shift the momentum—even if just a little—toward Go-based AI tooling. This project is my contribution to that effort.
- Lightweight CLI built with Go for blazing-fast performance
- Minimal dependencies and straightforward installation
- No bloat, just the features you need
Quick Query Mode
oclai q "Explain how goroutines work"- One-off queries to your AI model
- File-aware: Reference files in your query, and Oclai automatically reads and includes their content for context-aware responses
oclai q "Review this code for improvements" -f main.go
cat /path/file.txt | oclai q "Summerize this file"Interactive Chat Mode
```bash
oclai chat
```
- Start a continuous conversation with your AI model
- Switch models mid-conversation
- Maintain context throughout your session
Oclai integrates with the Model Context Protocol (MCP) using Anthropic's official Go MCP library, allowing your AI models to interact with external tools and services.
Default MCP Servers Included:
- Filesystem: Securely perform file operations with configurable access controls
- Sequential Thinking: Enable dynamic, reflective problem-solving through thought sequences
- Fetch: Retrieve web content using URLs
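For example, with the default Fetch server available and a model that supports tool calling, the model can pull in a page as part of answering a query (the URL here is only an illustration):

```bash
# Assumes the default Fetch MCP server is configured and the chosen model can call tools
oclai q "Fetch https://go.dev/doc/effective_go and summarize the section on channels"
```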
Manage Your Servers:
```bash
oclai mcp add      # Add new MCP servers
oclai mcp remove   # Remove configured servers
oclai mcp list     # View all configured servers
```

- Model Selection: Set a default model or switch between models during chat sessions
- Custom Ollama Instance: Configure a custom baseURL if you're running Ollama on a remote server
- Context Control: Adjust context limits for your models
- Service Monitoring: Check Ollama service status with `oclai status`
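For instance, if Ollama is listening on another machine, you can point Oclai at it and confirm the connection (the address below is a placeholder, not a default):

```bash
# Check a remote Ollama instance; swap in whatever host/port Ollama is bound to
oclai status --baseURL http://192.168.1.50:11434
```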
Built with GoReleaser for automated, reliable releases across:
- macOS (Darwin)
- Linux
- Windows
Before installing Oclai, ensure you have:
- Ollama - Install Ollama
- npx (Node.js) - Install Node.js (for MCP servers)
- Docker - Install Docker (for MCP servers)
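Oclai uses whichever models Ollama has pulled locally, so make sure at least one is available before your first query (llama3.2 below is just an example; any Ollama model works):

```bash
# Pull a model into Ollama (pick any model from the Ollama library)
ollama pull llama3.2
```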
macOS (Homebrew):
```bash
brew tap thejasmeetsingh/oclai https://github.com/thejasmeetsingh/oclai
brew install --cask oclai  # or: brew install oclai
```

Linux (Debian/Ubuntu):
```bash
wget https://github.com/thejasmeetsingh/oclai/releases/download/v{tag}/oclai_{tag}_linux_x86_64.deb
sudo dpkg -i oclai_{tag}_linux_x86_64.deb
```

Note:
Replace {tag} with the release version published on the GitHub repository (e.g., 0.1.0).
Windows (Scoop):
```powershell
scoop bucket add thejasmeetsingh https://github.com/thejasmeetsingh/oclai
scoop install thejasmeetsingh/oclai
```

Once installed, run `oclai` to see all available commands and options:
```bash
oclai
```

Ask a quick question:
```bash
oclai q "Tell me about the Roman Empire"
```

Start an interactive chat:
```bash
oclai chat
```

Check available models:
```bash
oclai models
```

Get help on any command:
```bash
oclai [command] --help
```

| Command | Aliases | Description |
|---|---|---|
| `chat` | `ch` | Start an interactive chat session |
| `completion` | - | Generate shell autocompletion scripts |
| `help` | - | Get help for any command |
| `mcp` | - | Manage MCP servers (see subcommands below) |
| `models` | - | List available models |
| `query` | `q` | Ask a query to the model |
| `status` | - | Check Ollama service status |
Add a new MCP server:
```bash
oclai mcp add [flags]
```

Available flags:
- `--args <string>` - Arguments for the server command
- `--cmd <string>` - Command to start the server
- `--endpoint <string>` - HTTP/SSE endpoint of the server
- `--env <strings>` - Specify environment variables (comma-separated) to run the server command with
- `--headers <strings>` - Add additional headers (comma-separated) for server connection
- `-n, --name <string>` - Server name (required)
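For example, a command-based server and an HTTP/SSE-based server might be added like this (the names, package, and exact `--args` formatting are illustrative assumptions; check `oclai mcp add --help` for the precise syntax your version expects):

```bash
# Hypothetical: a local MCP server launched through npx
oclai mcp add --name memory --cmd npx --args "-y @modelcontextprotocol/server-memory"

# Hypothetical: a remote MCP server exposed over HTTP/SSE
oclai mcp add --name remote-tools --endpoint http://localhost:8080/sse
```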
List configured MCP servers:
```bash
oclai mcp list  # or: oclai mcp ls
```

Remove an MCP server:
```bash
oclai mcp remove [name]  # or: oclai mcp rm [name]
```

These flags can be used with any command:
| Flag | Description |
|---|---|
| `--baseURL <value>` | Set Ollama base URL |
| `--ctx <value>` | Set context limit |
| `-h, --help` | Show help information |
| `--model <value>` | Set default model |
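As a quick sketch of how these combine with a command (the model name and context size below are placeholders, not defaults):

```bash
# One-off query with an explicit model and a larger context window
oclai q "Explain how the Go scheduler picks goroutines" --model llama3.1 --ctx 8192
```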
- No Streaming Responses: Currently, Oclai doesn't support streaming model responses.
- Local Model Performance: Since Oclai uses local models via Ollama, the intelligence and capabilities are constrained by the model you choose and your hardware. While these models can't match the power of proprietary models from larger companies, they're highly capable—especially with good hardware and effective prompting—and offer the benefits of privacy and zero API costs.
While this project isn't actively seeking contributions, pull requests are welcome! If you find a bug or have a feature suggestion, feel free to open an issue.
This project is licensed under the MIT License - see the LICENSE file for details.
If you encounter issues or have questions:
- Check the built-in help: `oclai --help` or `oclai [command] --help`
- Open an issue on GitHub
Built with ❤️ and Go
