A powerful, TypeScript-based AI agent framework built on top of LangChain for automated code generation, editing, and management. Designed to work seamlessly with modern language models and development workflows.
- 🚀 High-performance TypeScript execution engine
- 🔄 Real-time code generation and editing via chat
- 🧠 Advanced AI agent system with ReAct framework
- 🔒 Built-in safety controls and validation
- 📝 Streaming responses for real-time feedback
- 🛠️ Extensible tool system
- Bun runtime installed
- A compatible language model (e.g., Ollama with qwen2.5-coder)
```bash
# Clone the repository
git clone https://github.com/yourusername/llamautoma.git
cd llamautoma

# Install dependencies
bun install

# Start the server
bun run start
```
The server exposes HTTP endpoints for chat and workspace synchronization. All endpoints accept POST requests and expect JSON payloads.
Base URL:

```
http://localhost:3000/v1
```
All requests support these common properties:
- `threadId` (optional): Unique identifier for the conversation thread
- `safetyConfig` (optional): Configuration for safety controls
  - `maxInputLength`: Maximum allowed input length (default: 8192)
  - `requireToolConfirmation`: Whether to require confirmation for tool usage
  - `requireToolFeedback`: Whether to require feedback for tool usage
  - `dangerousToolPatterns`: Array of patterns to flag as potentially dangerous
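The common properties above can be sketched as TypeScript types. The interface and field names below simply mirror the documented properties; the server's actual internal types are an assumption:

```typescript
// Sketch of the common request properties; field names follow the docs,
// but the exact server-side types are assumptions.
interface SafetyConfig {
  maxInputLength?: number; // default: 8192
  requireToolConfirmation?: boolean;
  requireToolFeedback?: boolean;
  dangerousToolPatterns?: string[];
}

interface CommonRequest {
  threadId?: string;
  safetyConfig?: SafetyConfig;
}

// Example payload using the defaults mentioned above.
const request: CommonRequest = {
  threadId: "demo-thread",
  safetyConfig: { maxInputLength: 8192, requireToolConfirmation: true },
};
```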
All endpoints return Server-Sent Events (SSE) streams with the following event types:
- `start`: Initial response start with metadata
- `content`: Main response content (may be sent multiple times)
- `end`: Response completion with final results
The specific content of each event varies by endpoint but follows this general structure:
```jsonc
{
  "event": "start" | "content" | "end",
  "threadId": string,
  "data": {
    // Endpoint-specific data structure
  }
}
```
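A client consuming the stream needs to turn each SSE message back into this envelope. Below is a minimal, illustrative parser for the simple single-`data:`-line case (real SSE streams can also carry `event:`/`id:` fields and multi-line data, which this sketch does not handle):

```typescript
// Shape of the event envelope documented above.
interface StreamEvent {
  event: "start" | "content" | "end";
  threadId: string;
  data: Record<string, unknown>;
}

// Minimal sketch: parse one SSE message (lines like `data: {...}`)
// into the envelope. Returns null for messages with no data lines
// (e.g. keep-alive comments).
function parseSSEMessage(raw: string): StreamEvent | null {
  const dataLines = raw
    .split("\n")
    .filter((line) => line.startsWith("data:"))
    .map((line) => line.slice("data:".length).trim());
  if (dataLines.length === 0) return null;
  return JSON.parse(dataLines.join("\n")) as StreamEvent;
}
```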
POST /v1/chat
Interactive chat endpoint with streaming responses. Handles all code operations through natural language.
Request:

```json
{
  "messages": [
    {
      "role": "user",
      "content": "Create a React component for a user profile"
    }
  ],
  "threadId": "optional-thread-id",
  "safetyConfig": {
    "maxInputLength": 8192
  }
}
```
Response: Server-Sent Events (SSE) stream with the following events:
- `start`: Initial response start
- `content`: Main response content
- `end`: Response completion
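Putting the request and response together, a client call might look like the sketch below. The payload shape follows the example above; the local URL assumes the server from the installation steps is running with default settings:

```typescript
// Build the /v1/chat request body documented above.
function buildChatBody(content: string, threadId?: string) {
  return {
    messages: [{ role: "user" as const, content }],
    ...(threadId ? { threadId } : {}),
  };
}

// Sketch: POST to the chat endpoint and accumulate the SSE stream.
// Assumes a locally running server on the default port.
async function chat(content: string): Promise<string> {
  const res = await fetch("http://localhost:3000/v1/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildChatBody(content)),
  });
  let text = "";
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    text += decoder.decode(value, { stream: true });
  }
  return text;
}
```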
POST /v1/sync
Synchronize and analyze codebase structure with progress updates.
Request:

```json
{
  "root": "/path/to/project",
  "excludePatterns": ["node_modules", "dist", ".git"]
}
```
Response: Server-Sent Events (SSE) stream with file content and status updates.
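One way to think about `excludePatterns` is as path-segment filters. The helper below is purely illustrative (it treats each pattern as a literal directory name); the server's actual matching rules are not specified here:

```typescript
// Illustrative sketch: skip any path that contains one of the exclude
// patterns as a path segment. The real matching semantics may differ.
function isExcluded(path: string, excludePatterns: string[]): boolean {
  const segments = path.split("/");
  return excludePatterns.some((pattern) => segments.includes(pattern));
}
```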
Llamautoma includes several safety features to prevent potentially harmful operations:
- Input length validation
- Dangerous pattern detection
- Tool execution confirmation
- Execution feedback collection
- Rate limiting
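The first two checks above can be sketched in a few lines. This is an illustrative stand-in, not the server's actual implementation:

```typescript
// Illustrative sketch of input-length validation and dangerous-pattern
// detection; the real checks are internal to the server.
function validateInput(
  input: string,
  opts: { maxInputLength: number; dangerousToolPatterns: string[] }
): { ok: boolean; reason?: string } {
  if (input.length > opts.maxInputLength) {
    return { ok: false, reason: "input exceeds maxInputLength" };
  }
  const flagged = opts.dangerousToolPatterns.find((p) => input.includes(p));
  if (flagged) {
    return { ok: false, reason: `dangerous pattern detected: ${flagged}` };
  }
  return { ok: true };
}
```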
The server can be configured through environment variables or the config file:
```jsonc
{
  "modelName": "qwen2.5-coder:7b",
  "host": "http://localhost:11434",
  "maxIterations": 10,
  "userInputTimeout": 30000,
  "safetyConfig": {
    "requireToolConfirmation": true,
    "requireToolFeedback": true,
    "maxInputLength": 8192,
    "dangerousToolPatterns": [
      "rm -rf /",
      "DROP TABLE",
      "sudo rm",
      // ... more patterns
    ]
  }
}
```
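Since environment variables and the config file can both set these values, a typical pattern is to merge user overrides into the documented defaults. The defaults below come from the example config; the merge helper itself is an assumption:

```typescript
// Defaults taken from the example config above.
const defaultConfig = {
  modelName: "qwen2.5-coder:7b",
  host: "http://localhost:11434",
  maxIterations: 10,
  userInputTimeout: 30000,
};

// Sketch: apply partial overrides (e.g. parsed from env vars or a
// config file) on top of the defaults.
function withOverrides(overrides: Partial<typeof defaultConfig>) {
  return { ...defaultConfig, ...overrides };
}
```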
MIT License - see the LICENSE file for details.
Contributions are welcome! Please feel free to submit a Pull Request.