The application consists of a Python/Flask backend and a React.js/Vite frontend, supporting multiple LLM providers.
- Backend: Python, Flask, UV
- Frontend: React.js, Vite
- LLM Providers:
  - Google (Gemini 1.5/2.5/Flash/Pro)
  - OpenAI (GPT-4, GPT-3.5)
  - Anthropic (Claude 2, Claude 3)
  - OpenRouter (multiple models)
  - Groq (Llama3, Mixtral, Gemma)
  - Mistral
  - Ollama (local models)
  - LM Studio (local models)
  - Cerebras
  - SambaNova
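The backend needs a way to map each of these providers to its credentials. A minimal sketch of what that lookup can look like is below; the provider names, environment-variable names, and function are illustrative assumptions, not MyPrompt's actual `api.py` code.

```python
# Hypothetical sketch of multi-provider credential lookup; the actual
# dispatch in MyPrompt's api.py may differ.
import os
from typing import Optional

# Map provider names to the environment variable holding their API key.
PROVIDER_KEYS = {
    "openai": "OPENAI_API_KEY",
    "google": "GOOGLE_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "openrouter": "OPENROUTER_API_KEY",
    "groq": "GROQ_API_KEY",
    "mistral": "MISTRAL_API_KEY",
    "cerebras": "CEREBRAS_API_KEY",
    "sambanova": "SAMBANOVA_API_KEY",
}

# Local providers (Ollama, LM Studio) expose an HTTP endpoint instead of a key.
LOCAL_PROVIDERS = {"ollama", "lmstudio"}

def resolve_credentials(provider: str) -> Optional[str]:
    """Return the API key for a provider, or None for local providers."""
    if provider in LOCAL_PROVIDERS:
        return None
    env_var = PROVIDER_KEYS[provider]
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set {env_var} in your .env file to use {provider}")
    return key
```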
```mermaid
graph TD
    A[Project Root] --> B(Flask Backend)
    A --> C(frontend-vite)
    A --> D(.env)
    B --> E(app.py)
    B --> F(api.py)
    B --> G(uv.lock)
    C --> H(src)
    H --> I(App.jsx)
    H --> J(PEAChat.jsx)
    C --> K(package.json)
```
To set up and run MyPrompt, follow these steps:
- Python 3.7+: Ensure Python is installed on your system.
- Node.js and npm: Ensure Node.js and npm are installed.
- UV: Install UV by following the instructions on the UV documentation.
- Clone the GitHub repo into a directory of your choosing:

  ```
  git clone https://github.com/AlexJ-StL/MyPrompt.git
  ```

- Navigate to the project root directory in your terminal:

  ```
  cd MyPrompt
  ```
- Create a virtual environment using UV:

  ```
  uv venv
  ```
- Activate the virtual environment:
  - On Windows:

    ```
    .\.venv\Scripts\activate
    ```

  - On macOS and Linux:

    ```
    source ./.venv/bin/activate
    ```
- Install the backend dependencies using UV:

  ```
  uv sync
  ```
- API Key Configuration:
  - Copy the `.example.env` file to `.env`:

    ```
    cp .example.env .env
    ```

  - Edit the `.env` file to include your API keys for the providers you want to use.
  - For OpenAI-compatible providers (OpenRouter, Groq, Mistral), you can use your OpenAI API key as a fallback:

    ```
    OPENAI_API_KEY=your_openai_api_key_here
    GOOGLE_API_KEY=your_google_api_key_here
    ```

  - For local providers (Ollama, LM Studio), you don't need API keys, but you do need to run the respective applications.
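For reference, loading a `.env` file can be done with the standard library alone; this is a minimal sketch, and the real backend may instead use a package such as python-dotenv.

```python
# Minimal sketch of parsing a .env file with only the standard library;
# the actual backend's loading mechanism is an assumption here.
import os

def load_env(path: str = ".env") -> None:
    """Load KEY=value lines from a .env file into os.environ."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines, comments, and lines without an '='.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # setdefault: real environment variables take precedence.
            os.environ.setdefault(key.strip(), value.strip())
```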
- Navigate to the `frontend-vite` directory:

  ```
  cd frontend-vite
  ```

- Install the frontend dependencies:

  ```
  npm install
  ```
- Start the Backend:
  - Navigate to the project root directory.
  - Activate the backend virtual environment (if not already active).
  - Run the Flask application:

    ```
    uv run app.py
    ```

  The backend server should start, typically on `http://127.0.0.1:5000`.
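As a rough mental model of what `app.py` exposes, here is a minimal Flask sketch; the route path `/api/optimize` and the payload shape are assumptions for illustration, not MyPrompt's actual API.

```python
# Hedged sketch of a Flask entry point like app.py; the route name and
# JSON fields are assumptions, not the project's real interface.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/optimize", methods=["POST"])
def optimize():
    """Accept a natural-language request and return an optimized prompt."""
    data = request.get_json(force=True)
    text = data.get("prompt", "")
    # Placeholder: the real handler forwards `text` to the selected LLM provider.
    return jsonify({"optimized": f"<prompt>{text}</prompt>"})

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=5000)
```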
- Start the Frontend:
  - Navigate to the `frontend-vite` directory:

    ```
    cd frontend-vite
    ```

  - Start the Vite development server:

    ```
    npm run dev
    ```

  The frontend development server should start, typically on `http://localhost:5173`.
- Use the Application:
  - Open your web browser and go to the frontend address (e.g., `http://localhost:5173`).
  - Enter your natural language request in the provided input area.
  - Click the button to trigger the prompt optimization.
  - The optimized XML prompt will be displayed on the page.
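You can also exercise the backend directly, bypassing the UI. The sketch below builds the HTTP request with the standard library; the endpoint path `/api/optimize` and the JSON field names are assumptions for illustration.

```python
# Hypothetical example of calling the backend directly; the endpoint
# path and payload fields are assumptions, not MyPrompt's real API.
import json
from urllib import request as urlrequest

def build_optimize_request(
    text: str, base_url: str = "http://127.0.0.1:5000"
) -> urlrequest.Request:
    """Construct the POST request that the frontend would send."""
    payload = json.dumps({"prompt": text}).encode("utf-8")
    return urlrequest.Request(
        f"{base_url}/api/optimize",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def optimize_prompt(text: str) -> dict:
    """Send the request to a running backend and return its JSON reply."""
    with urlrequest.urlopen(build_optimize_request(text)) as resp:
        return json.loads(resp.read())
```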
- Navigate to the project root directory.
- Activate the backend virtual environment.
- Run the pytest tests:

  ```
  pytest
  ```
- Navigate to the `frontend-vite` directory.
- Run the frontend tests (assuming you have a testing framework configured, e.g., Vitest or Jest):

  ```
  npm test
  ```

  (Note: Frontend tests are not fully implemented yet based on the provided files, but this is the general command.)
- Added multi-provider support for:
- OpenAI (GPT-4, GPT-3.5)
- Google Gemini (1.5, 2.5, Flash, Pro)
- Anthropic (Claude)
- OpenRouter (various models)
- Groq (Llama3, Mixtral, Gemma)
- Mistral
- Ollama (local models)
- LM Studio (local models)
- Cerebras
- SambaNova
- Implemented environment variable configuration via `.env` file
- Added `.example.env` with setup instructions
- Updated API routing to handle provider selection
- Improved error handling and input validation
- Updated documentation for new setup and usage
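The improved input validation mentioned above might resemble the following sketch; the field names and rules are illustrative assumptions, not the project's actual validation code.

```python
# Hedged sketch of request validation of the kind described above;
# field names and rules are illustrative, not MyPrompt's actual checks.
from typing import List

def validate_request(data: dict) -> List[str]:
    """Return a list of validation errors (empty when the payload is valid)."""
    errors = []
    prompt = data.get("prompt")
    if not isinstance(prompt, str) or not prompt.strip():
        errors.append("prompt must be a non-empty string")
    provider = data.get("provider", "openai")
    if not isinstance(provider, str):
        errors.append("provider must be a string")
    return errors
```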
- Implement comprehensive frontend tests.
- Add more advanced styling and UI/UX improvements.
- Explore additional LLM parameters and optimization techniques.
- Implement error handling and user feedback for API calls.
- Add user interface for selecting providers and models.
- Implement fallback mechanism for provider failures.
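The planned fallback mechanism could look something like the sketch below; `call_provider` and the provider ordering are illustrative assumptions, since this feature is not implemented yet.

```python
# Hedged sketch of the planned provider-fallback loop; call_provider
# and the provider list are assumptions for illustration only.
def call_with_fallback(prompt, providers, call_provider):
    """Try each provider in order; return the first successful response."""
    errors = {}
    for name in providers:
        try:
            return call_provider(name, prompt)
        except Exception as exc:  # real code would catch provider-specific errors
            errors[name] = exc
    raise RuntimeError(f"All providers failed: {errors}")
```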