A comprehensive Spring Boot application that integrates Spring AI, LangChain4j, custom LangGraph-style workflows, and LangSmith tracing for advanced AI-powered applications.
- Spring AI Integration: Native Spring Boot AI capabilities with OpenAI and Ollama support
- LangChain4j Integration: Advanced language model chaining and conversation memory
- LangGraph-style Workflows: Custom Java implementation of workflow patterns inspired by LangGraph
- LangSmith Tracing: Comprehensive monitoring and tracing of AI operations
- REST API: Clean RESTful endpoints for all AI operations
- Async Processing: Non-blocking trace submissions to LangSmith
┌─────────────────────────────────────────────────────────────────────────┐
│ REST API Layer │
├─────────────────────────────────────────────────────────────────────────┤
│ Spring AI │ LangChain4j │ LangGraph │ LangSmith │
│ Integration │ Service │ Workflows │ Tracing │
├─────────────────────────────────────────────────────────────────────────┤
│ Business Logic Layer │
├─────────────────────────────────────────────────────────────────────────┤
│ Configuration & Infrastructure │
└─────────────────────────────────────────────────────────────────────────┘
- Java 17+
- Gradle 8.5+
- Docker (optional: for containerized deployment)
- Kubernetes cluster (optional: for Kubernetes deployment)
- OpenAI API Key (optional: for AI features)
- LangSmith API Key (optional: for tracing)
Create a .env file or set the following environment variables:
OPENAI_API_KEY=your-openai-api-key-here
LANGSMITH_API_KEY=your-langsmith-api-key-here
LANGSMITH_PROJECT=spring-ai-demo
- Clone the repository:
git clone <repository-url>
cd spring-ai-langchain
- Build the application:
./gradlew build
- Run the application:
./gradlew bootRun
The application will start on http://localhost:8080
- Build the Docker image:
docker build -t spring-ai-langchain:latest .
- Run with Docker:
docker run -p 8080:8080 \
-e OPENAI_API_KEY=your-openai-api-key \
-e LANGSMITH_API_KEY=your-langsmith-api-key \
spring-ai-langchain:latest
# Copy environment file and update with your API keys
cp .env.example .env
# Edit .env file with your API keys
# Start all services
docker-compose up -d
# View logs
docker-compose logs -f spring-ai-langchain
# Stop services
docker-compose down
The Compose stack includes:
- Spring AI application
- Ollama (local LLM server)
- Redis (optional caching)
- PostgreSQL (optional database)
- Build and push the Docker image:
docker build -t your-registry/spring-ai-langchain:1.0.0 .
docker push your-registry/spring-ai-langchain:1.0.0
- Install with Helm:
# Add your API keys to values
helm install spring-ai-langchain ./helm/spring-ai-langchain \
--set image.repository=your-registry/spring-ai-langchain \
--set config.openai.apiKey="your-openai-api-key" \
--set config.langsmith.apiKey="your-langsmith-api-key"
- For development environment:
helm install spring-ai-dev ./helm/spring-ai-langchain \
-f ./helm/spring-ai-langchain/values-dev.yaml \
--set config.openai.apiKey="your-openai-api-key" \
--set config.langsmith.apiKey="your-langsmith-api-key"
- For production environment:
helm install spring-ai-prod ./helm/spring-ai-langchain \
-f ./helm/spring-ai-langchain/values-prod.yaml \
--set config.openai.apiKey="your-openai-api-key" \
--set config.langsmith.apiKey="your-langsmith-api-key"
GET /api/ai/health
POST /api/ai/chat/spring-ai
Content-Type: application/json
{
"message": "Hello, how can you help me today?"
}
POST /api/ai/chat/langchain
Content-Type: application/json
{
"message": "Explain quantum computing in simple terms"
}
POST /api/ai/workflow
Content-Type: application/json
{
"input": "Can you help me understand machine learning?"
}
POST /api/ai/embedding
Content-Type: application/json
{
"text": "This is a sample text for embedding generation"
}
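For quick smoke tests from Java instead of curl, the endpoints above can be called with the JDK's built-in HTTP client. A minimal sketch; only the path and request body come from this README, while the class name and everything else is illustrative:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ChatClientDemo {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // POST /api/ai/chat/spring-ai with a JSON body, as documented above
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/api/ai/chat/spring-ai"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"message\": \"Hello, how can you help me today?\"}"))
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + ": " + response.body());
    }
}

The application uses application.yml for configuration: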
spring:
  ai:
    openai:
      api-key: ${OPENAI_API_KEY:your-openai-api-key}
      chat:
        options:
          model: gpt-4
          temperature: 0.7

langchain4j:
  open-ai:
    api-key: ${OPENAI_API_KEY:your-openai-api-key}
    chat-model:
      model-name: gpt-4
      temperature: 0.7

langsmith:
  api-key: ${LANGSMITH_API_KEY:your-langsmith-api-key}
  project-name: ${LANGSMITH_PROJECT:spring-ai-demo}
  endpoint: ${LANGSMITH_ENDPOINT:https://api.smith.langchain.com}
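Because langsmith is a custom namespace rather than one managed by a starter, it can be bound to a typed bean. A minimal sketch assuming a record-based @ConfigurationProperties class; the class name and shape are assumptions, not the project's actual LangSmithConfig:

import org.springframework.boot.context.properties.ConfigurationProperties;

// Hypothetical typed view of the custom `langsmith` block above.
// Relaxed binding maps api-key -> apiKey and project-name -> projectName.
@ConfigurationProperties(prefix = "langsmith")
public record LangSmithProperties(String apiKey, String projectName, String endpoint) {}

Registering the record takes @ConfigurationPropertiesScan (or @EnableConfigurationProperties) on a configuration class.

- Native Spring Boot AI capabilities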
- OpenAI and Ollama model support
- Automatic configuration and beans
- Advanced conversation management
- Memory-enabled chat sessions
- Embedding generation capabilities
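A memory-enabled chat session in LangChain4j might be wired as follows. This is a sketch assuming the 0.x-style API (class and builder names such as chatLanguageModel have shifted across langchain4j releases), and the Assistant interface is hypothetical:

import dev.langchain4j.memory.chat.MessageWindowChatMemory;
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.service.AiServices;

public class MemoryChatDemo {

    // AiServices generates an implementation of this interface at runtime
    interface Assistant {
        String chat(String message);
    }

    public static void main(String[] args) {
        OpenAiChatModel model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("gpt-4")
                .temperature(0.7)
                .build();

        // Keep the last 10 messages so follow-up questions retain context
        Assistant assistant = AiServices.builder(Assistant.class)
                .chatLanguageModel(model)
                .chatMemory(MessageWindowChatMemory.withMaxMessages(10))
                .build();

        System.out.println(assistant.chat("My name is Ada."));
        System.out.println(assistant.chat("What is my name?")); // answered from memory
    }
}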
- WorkflowState: Manages state throughout workflow execution
- WorkflowNode: Functional interface for workflow steps (see the sketch after the sample workflow below)
- Workflow: Orchestrates node execution with conditional edges
- WorkflowService: Pre-built workflows for common patterns
- TraceData: Structured trace information
- LangSmithTracer: Async trace submission to LangSmith
- LangSmithConfig: Configuration and HTTP client setup
The sample workflow demonstrates a multi-step AI processing pipeline:
- Input Processing: Cleans and analyzes input text
- Content Analysis: Determines whether the input is a question or a request, and detects its sentiment
- Response Generation: Creates AI responses using LangChain
- Quality Review: Evaluates response quality and triggers regeneration if needed
Workflow workflow = new Workflow()
        .addNode("input", this::processInput)
        .addNode("analyze", this::analyzeContent)
        .addNode("generate", this::generateResponse)
        .addNode("review", this::reviewResponse)
        .addEdge("input", "analyze")
        .addEdge("analyze", "generate")
        .addConditionalEdge("generate", this::shouldReview)
        .setEntryPoint("input");
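Since these classes live in this project (under src/main/java/com/example/springai/langgraph/) rather than in a published library, the following is only a sketch of what the two core abstractions might look like; the method shapes are assumptions:

import java.util.HashMap;
import java.util.Map;

// Sketch: mutable state handed from node to node during execution
class WorkflowState {
    private final Map<String, Object> data = new HashMap<>();

    public void put(String key, Object value) {
        data.put(key, value);
    }

    @SuppressWarnings("unchecked")
    public <T> T get(String key) {
        return (T) data.get(key);
    }
}

// Sketch: each step is a function over the shared state, which is why
// nodes can be plain method references such as this::processInput
@FunctionalInterface
interface WorkflowNode {
    WorkflowState apply(WorkflowState state);
}

A conditional edge such as this::shouldReview would then map the current WorkflowState to the name of the next node, which is how the quality-review step can route back to response generation.

- Automatic trace generation for all AI operations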
- Async trace submission for performance (see the sketch after this list)
- Configurable project organization
- Error tracking and debugging support
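A minimal sketch of non-blocking submission using the JDK HTTP client; the endpoint path, header name, and payload handling are assumptions about the LangSmith REST API, not verified details:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

public class LangSmithTracerSketch {

    private final HttpClient http = HttpClient.newHttpClient();
    private final String apiKey = System.getenv("LANGSMITH_API_KEY");

    // Fire-and-forget: the caller's request thread never blocks on tracing
    public CompletableFuture<Void> submitTrace(String runJson) {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.smith.langchain.com/runs")) // assumed path
                .header("x-api-key", apiKey) // assumed header name
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(runJson))
                .build();

        return http.sendAsync(request, HttpResponse.BodyHandlers.discarding())
                .thenAccept(response -> {
                    if (response.statusCode() >= 400) {
                        // Log and move on; tracing must never fail the AI call itself
                        System.err.println("LangSmith trace rejected: " + response.statusCode());
                    }
                });
    }
}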
- Health checks and metrics at /actuator/*
- Application status monitoring
- Performance metrics
- Build Stage: Uses Eclipse Temurin JDK 17 for compilation
- Runtime Stage: Uses Eclipse Temurin JRE 17 for minimal footprint
- Security: Runs as non-root user (appuser:1001)
- Health Checks: Built-in health monitoring
- JVM Optimization: Container-aware memory settings
The docker-compose.yml provides a complete development environment:
Services:
├── spring-ai-langchain # Main application
├── ollama # Local LLM server
├── ollama-init # Model initialization
├── redis # Caching (optional)
└── postgres # Database (optional)
- Multi-environment: Separate values for dev/prod
- Security: Pod security contexts and non-root execution
- Scalability: HPA with CPU/Memory metrics
- Monitoring: Prometheus metrics and health checks
- Configuration: ConfigMaps and Secrets management
- Ingress: TLS termination and routing
helm/spring-ai-langchain/
├── Chart.yaml # Chart metadata
├── values.yaml # Default configuration
├── values-dev.yaml # Development overrides
├── values-prod.yaml # Production overrides
└── templates/
    ├── deployment.yaml # Main application deployment
    ├── service.yaml # Service definition
    ├── ingress.yaml # Ingress configuration
    ├── configmap.yaml # Application configuration
    ├── secret.yaml # API keys and secrets
    ├── serviceaccount.yaml # Service account
    ├── hpa.yaml # Horizontal Pod Autoscaler
    └── _helpers.tpl # Template helpers
- High Availability: 3+ replicas with pod anti-affinity
- Auto-scaling: HPA with CPU and memory targets
- Security: Network policies, security contexts, TLS
- Monitoring: Prometheus integration, detailed health checks
- Resource Management: Optimized CPU/memory requests and limits
- Single replica for resource efficiency
- NodePort service for easy access
- Debug logging enabled
- Cheaper GPT-3.5-turbo model
- Ollama integration enabled
- Multiple replicas for high availability
- ClusterIP with Ingress and TLS
- Minimal logging for security
- GPT-4 model for quality
- External secret management
- Pod disruption budgets
- Resource quotas and limits
# Application health
curl http://localhost:8080/api/ai/health
# Spring Actuator endpoints
curl http://localhost:8080/actuator/health
curl http://localhost:8080/actuator/metrics
curl http://localhost:8080/actuator/info
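Custom checks can be surfaced through the same actuator endpoint. A sketch of a Spring Boot HealthIndicator that could back an AI-readiness check; the bean and its detail keys are illustrative, not components of this project:

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

// Appears under /actuator/health as the "aiServices" component
@Component("aiServices")
public class AiServicesHealthIndicator implements HealthIndicator {

    @Override
    public Health health() {
        String key = System.getenv("OPENAI_API_KEY");
        if (key == null || key.isBlank()) {
            return Health.down().withDetail("reason", "OPENAI_API_KEY not set").build();
        }
        return Health.up().withDetail("provider", "openai").build();
    }
}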
# Check pod status
kubectl get pods -l app.kubernetes.io/name=spring-ai-langchain
# View logs
kubectl logs -f deployment/spring-ai-langchain
# Check HPA status
kubectl get hpa
# Monitor resource usage
kubectl top pods -l app.kubernetes.io/name=spring-ai-langchain
spring-ai-langchain/
├── src/main/java/com/example/springai/
│ ├── config/ # Configuration classes
│ ├── controller/ # REST API endpoints
│ ├── dto/ # Data transfer objects
│ ├── langgraph/ # LangGraph-style workflow components
│ ├── langsmith/ # LangSmith tracing integration
│ └── service/ # Business logic services
├── helm/ # Kubernetes Helm charts
│ └── spring-ai-langchain/
│ ├── templates/ # Kubernetes manifests
│ └── values*.yaml # Environment configurations
├── Dockerfile # Container build instructions
├── docker-compose.yml # Local development stack
└── .env.example # Environment variables template
- Using Docker Compose (Recommended):
cp .env.example .env
# Edit .env with your API keys
docker-compose up -d
- Using Gradle:
./gradlew bootRun
# Unit tests
./gradlew test
# Integration tests with Docker
docker-compose -f docker-compose.test.yml up --build --abort-on-container-exit
# Load testing
curl -X POST http://localhost:8080/api/ai/chat/spring-ai \
-H "Content-Type: application/json" \
-d '{"message": "Hello World"}'./gradlew build# Development build
docker build -t spring-ai-langchain:dev .
# Production build with multi-platform support
docker buildx build --platform linux/amd64,linux/arm64 \
-t spring-ai-langchain:1.0.0 --push .
# Lint chart
helm lint ./helm/spring-ai-langchain
# Dry run
helm install --dry-run --debug spring-ai-test ./helm/spring-ai-langchain
# Deploy to development
helm upgrade --install spring-ai-dev ./helm/spring-ai-langchain \
-f ./helm/spring-ai-langchain/values-dev.yaml
# Deploy to production
helm upgrade --install spring-ai-prod ./helm/spring-ai-langchain \
-f ./helm/spring-ai-langchain/values-prod.yaml
- Fork the repository
- Create a feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
For questions and support:
- Create an issue in the GitHub repository
- Check the documentation in the /docs folder
- Review the API examples in the /examples folder