Kotlin Proxy LLM Server

A lightweight Kotlin server built on Ktor that securely proxies OpenAI API calls for Android and frontend apps, preventing direct exposure of API keys on the client.

Built with:

  • Ktor (Netty server, client-core, CIO, OkHttp, content negotiation)
  • Jackson for JSON serialization
  • dotenv-kotlin for secure environment variable management
  • Logback for logging
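
For orientation, these pieces wire together roughly as follows. This is a minimal sketch, not necessarily the repo's actual code; the module layout is illustrative, and routing is omitted here (the /ask route is sketched in the API section below).

import io.ktor.serialization.jackson.*
import io.ktor.server.application.*
import io.ktor.server.engine.*
import io.ktor.server.netty.*
import io.ktor.server.plugins.contentnegotiation.*

// Start the Netty engine on port 8080 with the application module below.
fun main() {
    embeddedServer(Netty, port = 8080) { module() }.start(wait = true)
}

// Register Jackson-based JSON (de)serialization for request and response bodies.
fun Application.module() {
    install(ContentNegotiation) {
        jackson()
    }
    // routing { ... }  // the /ask route is sketched in the API section below
}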

🚀 Features

  • 🔐 Hides your OpenAI API key securely on the backend
  • 📤 Supports standard and streaming (text/event-stream) OpenAI responses
  • 🌱 Lightweight and easy to deploy (runs via a single JAR)
  • ⚙️ Built with modern Kotlin and coroutines
  • 🧪 Designed for local dev or VPS deployment

📦 Requirements

  • JDK 17 or higher
  • Gradle
  • OpenAI API key

🛠 Setup & Run

1. Clone the repo

git clone https://github.com/your-username/kotlin-proxy-llm-server.git
cd kotlin-proxy-llm-server

2. Create a .env file

OPENAI_API_KEY=your_openai_key_here
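
The server reads this key at startup via dotenv-kotlin (listed under Built with). A minimal sketch of that lookup, assuming the .env file sits in the project root; keep .env out of version control.

import io.github.cdimascio.dotenv.dotenv

// Read .env from the working directory; fail fast if the key is absent.
val openAiKey: String = dotenv()["OPENAI_API_KEY"]
    ?: error("OPENAI_API_KEY is not set in .env")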

3. Run locally

./gradlew run

Server starts on http://localhost:8080

4. Build a fat JAR for deployment

./gradlew shadowJar

The JAR will be located in build/libs/.

Run it with:

java -jar build/libs/your-app-name-all.jar
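
For reference, the Gradle pieces behind shadowJar typically look like this in build.gradle.kts. This is a hedged sketch, not the repo's actual build file; the plugin versions and main class name are assumptions to adjust.

plugins {
    kotlin("jvm") version "1.9.24"                          // assumed Kotlin version
    id("com.github.johnrengelman.shadow") version "8.1.1"   // assumed Shadow plugin version
    application
}

application {
    // Assumed entry point; replace with the project's actual main class.
    mainClass.set("com.example.ApplicationKt")
}

With the Shadow plugin applied, ./gradlew shadowJar produces the *-all.jar used above.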

📡 API

POST /ask

Proxies a simple chat request to OpenAI and returns the model's reply.

Request Body (JSON):

{
  "message": "Hello, how are you?"
}

Response (JSON):

{
  "response": "I'm doing great, thanks!"
}
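
From a Kotlin or Android client, the call can be made with any HTTP client. A hypothetical sketch using the Ktor client with Jackson content negotiation (the data classes and base URL are illustrative):

import io.ktor.client.*
import io.ktor.client.call.*
import io.ktor.client.engine.okhttp.*
import io.ktor.client.plugins.contentnegotiation.*
import io.ktor.client.request.*
import io.ktor.http.*
import io.ktor.serialization.jackson.*

data class AskRequest(val message: String)
data class AskResponse(val response: String)

// Reuse one client instance per app rather than creating one per request.
val client = HttpClient(OkHttp) {
    install(ContentNegotiation) { jackson() }
}

suspend fun ask(message: String): String {
    // The app only knows the proxy URL; the OpenAI key stays on the server.
    val reply: AskResponse = client.post("http://localhost:8080/ask") {
        contentType(ContentType.Application.Json)
        setBody(AskRequest(message))
    }.body()
    return reply.response
}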

Streaming responses (text/event-stream) are supported by adding stream: true to the request body.
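
Server-side, the non-streaming path boils down to forwarding the message to OpenAI's chat completions endpoint and returning the assistant's reply. A simplified sketch of one way to implement it, not the repo's actual code; the data classes, route wiring, and model name are illustrative, and the streaming branch and error handling are omitted.

import com.fasterxml.jackson.databind.DeserializationFeature
import io.ktor.client.*
import io.ktor.client.call.*
import io.ktor.client.engine.cio.*
import io.ktor.client.plugins.contentnegotiation.*
import io.ktor.client.request.*
import io.ktor.http.*
import io.ktor.serialization.jackson.*
import io.ktor.server.application.*
import io.ktor.server.request.*
import io.ktor.server.response.*
import io.ktor.server.routing.*

data class AskRequest(val message: String)
data class AskResponse(val response: String)

// Trimmed-down shapes of the OpenAI chat completions payloads.
data class ChatMessage(val role: String, val content: String)
data class ChatRequest(val model: String, val messages: List<ChatMessage>)
data class ChatChoice(val message: ChatMessage)
data class ChatCompletion(val choices: List<ChatChoice>)

fun Route.askRoute(apiKey: String) {
    // Ktor client used to talk to OpenAI; ignore response fields we don't model.
    val openAi = HttpClient(CIO) {
        install(ContentNegotiation) {
            jackson { configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false) }
        }
    }
    post("/ask") {
        val req = call.receive<AskRequest>()  // parsed by the server's Jackson plugin
        // The API key is attached here, on the server, and never reaches the client.
        val completion: ChatCompletion = openAi.post("https://api.openai.com/v1/chat/completions") {
            header(HttpHeaders.Authorization, "Bearer $apiKey")
            contentType(ContentType.Application.Json)
            setBody(ChatRequest(model = "gpt-4o", messages = listOf(ChatMessage("user", req.message))))
        }.body()
        call.respond(AskResponse(completion.choices.first().message.content))
    }
}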


🛣 Roadmap / Future Ideas

  • ✅ Add streaming endpoint (text/event-stream)
  • 🔐 Add token-based authentication for client apps (one possible shape is sketched after this list)
  • 🧪 Add tests and sample Android client
  • 🌐 Deploy with Docker + Nginx (optional HTTPS)
  • 📊 Logging + rate limiting middleware
  • 🛞 Switchable models (GPT-4, GPT-4-turbo, GPT-4o, etc.)
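
As one possible shape for the token-based authentication item above (purely illustrative; the header name and how the expected token is provisioned are assumptions):

import io.ktor.http.*
import io.ktor.server.application.*
import io.ktor.server.response.*

// Reject any request whose X-App-Token header does not match a shared secret.
fun Application.installAppTokenCheck(expectedToken: String) {
    intercept(ApplicationCallPipeline.Plugins) {
        if (call.request.headers["X-App-Token"] != expectedToken) {
            call.respond(HttpStatusCode.Unauthorized)
            finish()  // stop the pipeline so the route handler never runs
        }
    }
}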

📃 License

MIT


✨ Credits
