A lightweight Kotlin-based server using Ktor to proxy OpenAI API calls securely for use in Android or frontend apps — preventing direct exposure of API keys.
Built with:
- Ktor (Netty server, client-core, CIO, OkHttp, content negotiation)
- Jackson for JSON serialization
- dotenv-kotlin for secure environment variable management
- Logback for logging
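For reference, a dependency block wiring these libraries together in `build.gradle.kts` might look like the sketch below. The artifact coordinates are the libraries' published ones, but the version numbers are illustrative assumptions, not pinned by this project:

```kotlin
dependencies {
    // Ktor server (Netty engine) and JSON content negotiation via Jackson
    implementation("io.ktor:ktor-server-netty:2.3.12")
    implementation("io.ktor:ktor-server-content-negotiation:2.3.12")
    implementation("io.ktor:ktor-serialization-jackson:2.3.12")
    // Ktor HTTP client for calling the OpenAI API
    implementation("io.ktor:ktor-client-core:2.3.12")
    implementation("io.ktor:ktor-client-cio:2.3.12")
    implementation("io.ktor:ktor-client-okhttp:2.3.12")
    // Environment variables and logging
    implementation("io.github.cdimascio:dotenv-kotlin:6.4.1")
    implementation("ch.qos.logback:logback-classic:1.4.14")
}
```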
- 🔐 Hides your OpenAI API key securely on the backend
- 📤 Supports standard and streaming (`text/event-stream`) OpenAI responses
- 🌱 Lightweight and easy to deploy (runs via a single JAR)
- ⚙️ Built with modern Kotlin and coroutines
- 🧪 Designed for local dev or VPS deployment
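To show how such a proxy can be put together, here is a minimal sketch of a Ktor server that forwards a chat request upstream while keeping the key server-side. The `/chat` path and the pass-through request shape are illustrative assumptions, not necessarily this project's exact code:

```kotlin
import io.ktor.client.*
import io.ktor.client.engine.cio.*
import io.ktor.client.request.*
import io.ktor.client.statement.*
import io.ktor.http.*
import io.ktor.server.engine.*
import io.ktor.server.netty.*
import io.ktor.server.request.*
import io.ktor.server.response.*
import io.ktor.server.routing.*
import io.github.cdimascio.dotenv.dotenv

fun main() {
    val apiKey = dotenv()["OPENAI_API_KEY"]  // key never leaves the backend
    val client = HttpClient(CIO)

    embeddedServer(Netty, port = 8080) {
        routing {
            // Hypothetical endpoint: forwards the client's JSON to OpenAI
            post("/chat") {
                val body = call.receiveText()
                val upstream = client.post("https://api.openai.com/v1/chat/completions") {
                    header(HttpHeaders.Authorization, "Bearer $apiKey")
                    contentType(ContentType.Application.Json)
                    setBody(body)  // pass the JSON through as-is
                }
                call.respondText(upstream.bodyAsText(), ContentType.Application.Json)
            }
        }
    }.start(wait = true)
}
```

Because the Android or frontend client only ever sees `http://your-server:8080`, the OpenAI key cannot be extracted from the shipped app.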
Requirements:
- JDK 17 or higher
- Gradle
- An OpenAI API key
Clone the repository:
```
git clone https://github.com/your-username/kotlin-proxy-llm-server.git
cd kotlin-proxy-llm-server
```
Create a `.env` file in the project root:
```
OPENAI_API_KEY=your_openai_key_here
```
Start the server:
```
./gradlew run
```
The server starts on `http://localhost:8080`.
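The key is read with dotenv-kotlin, roughly like the sketch below (a hedged illustration; the project's actual loading code may differ). `ignoreIfMissing` lets real environment variables take over when no `.env` file exists, which is handy on a VPS:

```kotlin
import io.github.cdimascio.dotenv.dotenv

fun loadApiKey(): String {
    // Fall back to process environment variables if .env is absent
    val env = dotenv { ignoreIfMissing = true }
    return env["OPENAI_API_KEY"]
        ?: error("OPENAI_API_KEY is not set in .env or the environment")
}
```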
Build a fat JAR:
```
./gradlew shadowJar
```
The JAR will be located in `build/libs/`. Run it with:
```
java -jar build/libs/your-app-name-all.jar
```
Proxy a simple chat request to OpenAI.

Request body:
```json
{
  "message": "Hello, how are you?"
}
```

Response:
```json
{
  "response": "I'm doing great, thanks!"
}
```
Streaming responses are supported by setting `stream: true` in the request.
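As a usage illustration, here is how an app could call the proxy with only the JDK's built-in HTTP client; the `/chat` path is an assumption for illustration, so adjust it to the route the server actually exposes:

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun main() {
    val client = HttpClient.newHttpClient()
    // Hypothetical /chat endpoint on the locally running proxy
    val request = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:8080/chat"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString("""{"message": "Hello, how are you?"}"""))
        .build()
    val response = client.send(request, HttpResponse.BodyHandlers.ofString())
    println(response.body())  // the proxied OpenAI response JSON
}
```

Note that no API key appears anywhere in the client code; authorization happens entirely on the proxy.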
- ✅ Add streaming endpoint (`text/event-stream`)
- 🔐 Add token-based authentication for client apps
- 🧪 Add tests and sample Android client
- 🌐 Deploy with Docker + Nginx (optional HTTPS)
- 📊 Logging + rate limiting middleware
- 🛞 Switchable models (GPT-4, GPT-4-turbo, GPT-4o, etc.)