A comprehensive CDI (Contexts and Dependency Injection) extension that brings enterprise-grade AI capabilities to Jakarta EE and Eclipse MicroProfile applications through seamless LangChain4j integration.
This project provides a powerful bridge between the LangChain4j AI framework and enterprise Java applications, enabling developers to inject AI services directly into their CDI-managed beans with full support for enterprise features like fault tolerance, telemetry, and configuration management.
- Seamless Integration: Native CDI extension for LangChain4j with the `@RegisterAIService` annotation
- Enterprise Ready: Built-in support for MicroProfile Fault Tolerance, Config, and Telemetry
- Multiple Deployment Models: Support for both portable and build-compatible CDI extensions
- Framework Agnostic: Works with Quarkus, WildFly, Helidon, GlassFish, Liberty, Payara, and other Jakarta EE servers
- Configuration Driven: External configuration support through MicroProfile Config
- Observable: Comprehensive telemetry and monitoring capabilities
- Resilient: Built-in fault tolerance with retries, timeouts, and fallbacks
The project is structured into several modules, each serving a specific purpose:
- `langchain4j-cdi-core`: Fundamental CDI integration classes and SPI definitions
- `langchain4j-cdi-portable-ext`: Portable CDI extension implementation for runtime service registration
- `langchain4j-cdi-build-compatible-ext`: Build-time CDI extension for ahead-of-time compilation scenarios
- `langchain4j-cdi-config`: MicroProfile Config integration for external configuration
- `langchain4j-cdi-fault-tolerance`: MicroProfile Fault Tolerance integration for resilient AI services
- `langchain4j-cdi-telemetry`: MicroProfile Telemetry integration for observability and monitoring
Add the required dependencies to your `pom.xml`:
```xml
<dependency>
    <groupId>dev.langchain4j.cdi</groupId>
    <artifactId>langchain4j-cdi-portable-ext</artifactId>
    <version>1.0.0-SNAPSHOT</version>
</dependency>
<dependency>
    <groupId>dev.langchain4j.cdi</groupId>
    <artifactId>langchain4j-cdi-config</artifactId>
    <version>1.0.0-SNAPSHOT</version>
</dependency>
```
Create an AI service interface annotated with `@RegisterAIService`:
```java
@RegisterAIService(
    tools = BookingService.class,
    chatMemoryName = "chat-memory"
)
public interface ChatAiService {

    @SystemMessage("You are a helpful customer service assistant.")
    @Timeout(unit = ChronoUnit.MINUTES, value = 5)
    @Retry(maxRetries = 2)
    @Fallback(fallbackMethod = "chatFallback")
    String chat(String userMessage);

    default String chatFallback(String userMessage) {
        return "I'm temporarily unavailable. Please try again later.";
    }
}
```
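Under the hood, a CDI extension commonly backs such an interface with a dynamic proxy registered as a bean. The following plain-Java sketch illustrates the idea only; `createService` and the echo handler are hypothetical, not the extension's actual code:

```java
import java.lang.reflect.Proxy;

public class ProxyDemo {

    // Same shape as the AI service interface above (annotations omitted)
    interface ChatAiService {
        String chat(String userMessage);
    }

    // Hypothetical stand-in for the bean a CDI extension might register:
    // every interface call is routed through an InvocationHandler
    static ChatAiService createService() {
        return (ChatAiService) Proxy.newProxyInstance(
            ChatAiService.class.getClassLoader(),
            new Class<?>[] { ChatAiService.class },
            (proxy, method, callArgs) -> "Echo: " + callArgs[0]);
    }

    public static void main(String[] args) {
        System.out.println(createService().chat("hello")); // prints "Echo: hello"
    }
}
```

In the real extension, the invocation handler would forward the call (plus the system message, memory, and tools) to the configured chat model rather than echoing the input.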
Inject the AI service into your CDI beans:
```java
@ApplicationScoped
@Path("/")
public class ChatController {

    @Inject
    ChatAiService chatService;

    @POST
    @Path("/chat")
    public String chat(String message) {
        return chatService.chat(message);
    }
}
```
Configure your AI models through `microprofile-config.properties`:
```properties
# Chat model configuration
langchain4j.chat-model.provider=ollama
langchain4j.chat-model.ollama.base-url=http://localhost:11434
langchain4j.chat-model.ollama.model-name=llama3.1

# Memory configuration
langchain4j.chat-memory.chat-memory.max-messages=100
```
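The `max-messages` setting bounds how much conversation history is retained for the memory named `chat-memory` (the name referenced by `chatMemoryName` above). A message-window memory can be pictured as a simple bounded queue; this sketch uses a hypothetical `WindowChatMemory` class, not the actual LangChain4j implementation:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Sketch of a bounded message-window memory (hypothetical class,
// not the LangChain4j ChatMemory API): keep only the newest N messages
class WindowChatMemory {
    private final int maxMessages;
    private final Deque<String> messages = new ArrayDeque<>();

    WindowChatMemory(int maxMessages) {
        this.maxMessages = maxMessages;
    }

    void add(String message) {
        messages.addLast(message);
        while (messages.size() > maxMessages) {
            messages.removeFirst(); // evict the oldest message first
        }
    }

    List<String> messages() {
        return List.copyOf(messages);
    }
}

public class MemoryDemo {
    public static void main(String[] args) {
        WindowChatMemory memory = new WindowChatMemory(3);
        for (String m : List.of("m1", "m2", "m3", "m4")) {
            memory.add(m);
        }
        System.out.println(memory.messages()); // prints [m2, m3, m4]
    }
}
```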
The project includes comprehensive examples for various Jakarta EE servers:
- Quarkus: Native LangChain4j integration with Quarkus-specific optimizations
- Helidon: Both portable extension and standard LangChain4j usage examples
- GlassFish: Full Jakarta EE server implementation
- Liberty: IBM WebSphere Liberty integration
- Payara: Payara Server implementation
Each example demonstrates a car booking application with:
- Chat Service: Natural language customer support with RAG (Retrieval Augmented Generation)
- Fraud Detection: AI-powered fraud detection service
- Function Calling: Integration with business logic through tool calling
Using Mistral 7B Instruct v0.2 with a local model server: in the left panel, go to "Local Server", select the model from the dropdown at the top, then start the server.
Running Ollama with the llama3.1 model:
```shell
# Pick whichever container engine is available (note: --replace is Podman-specific)
CONTAINER_ENGINE=$(command -v podman || command -v docker)
$CONTAINER_ENGINE run -d --rm --name ollama --replace --pull=always -p 11434:11434 -v ollama:/root/.ollama --stop-signal=SIGKILL docker.io/ollama/ollama
$CONTAINER_ENGINE exec -it ollama ollama run llama3.1
```
See each example's README.md for instructions on how to run it.
This integration is perfect for enterprise applications that need:
- Customer Support Chatbots: AI-powered customer service with access to business data
- Document Analysis: RAG-enabled document processing and question answering
- Fraud Detection: AI-based risk assessment and fraud prevention
- Content Generation: Automated content creation with business context
- Decision Support: AI-assisted business decision making
- Process Automation: Intelligent workflow automation with natural language interfaces
The CDI extension supports extensive configuration through MicroProfile Config:
```properties
# Ollama configuration
langchain4j.chat-model.provider=ollama
langchain4j.chat-model.ollama.base-url=http://localhost:11434
langchain4j.chat-model.ollama.model-name=llama3.1
langchain4j.chat-model.ollama.timeout=60s

# OpenAI configuration (alternative provider; set only one provider at a time)
langchain4j.chat-model.provider=openai
langchain4j.chat-model.openai.api-key=${OPENAI_API_KEY}
langchain4j.chat-model.openai.model-name=gpt-4

# Chat memory configuration
langchain4j.chat-memory.default.max-messages=100
langchain4j.chat-memory.default.type=token-window

# Embedding model configuration
langchain4j.embedding-model.provider=ollama
langchain4j.embedding-model.ollama.model-name=nomic-embed-text
```
Leverage MicroProfile Fault Tolerance annotations:
```java
@RegisterAIService
public interface ResilientAiService {

    @Retry(maxRetries = 3, delay = 1000)
    @Timeout(5000)
    @CircuitBreaker(requestVolumeThreshold = 10)
    @Fallback(fallbackMethod = "handleFailure")
    String processRequest(String input);

    default String handleFailure(String input) {
        return "The AI service is currently unavailable.";
    }
}
```
Automatic telemetry integration provides metrics for:
- Request/response times
- Success/failure rates
- Token usage
- Model performance
Define business functions that AI can call:
```java
@ApplicationScoped
public class BookingTools {

    @Tool("Cancel a booking by ID")
    public String cancelBooking(@P("booking ID") String bookingId) {
        // Business logic here
        return "Booking " + bookingId + " cancelled successfully";
    }
}
```
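Conceptually, tool calling is a dispatch step: the model emits a tool name plus arguments, and the runtime invokes the matching method and feeds the result back into the conversation. A toy sketch of that dispatch follows; the registry and `dispatch` helper are hypothetical illustrations, not the LangChain4j machinery:

```java
import java.util.Map;
import java.util.function.Function;

public class ToolDispatchDemo {

    // Hypothetical registry: tool name -> handler, as an AI runtime might maintain
    static final Map<String, Function<String, String>> TOOLS = Map.of(
        "cancelBooking", id -> "Booking " + id + " cancelled successfully");

    // Invoke the tool a (simulated) model response asked for
    static String dispatch(String toolName, String argument) {
        return TOOLS.get(toolName).apply(argument);
    }

    public static void main(String[] args) {
        // Pretend the model responded with: call "cancelBooking" with argument "123"
        System.out.println(dispatch("cancelBooking", "123"));
    }
}
```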
If you want to contribute, please have a look at CONTRIBUTING.md.
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Ready to integrate AI into your enterprise Java application?
- Explore the examples: Start with the framework-specific examples in the `examples/` directory
- Read the documentation: Check out individual module documentation for detailed configuration options
- Join the community: Connect with other developers using LangChain4j CDI integration
- Contribute: Help improve the project by reporting issues or submitting pull requests
Built with ❤️ by the LangChain4j CDI Community