Git Commit Message Auto-Generation Tool Using Python
auto-commit is a Python-based utility that automatically generates meaningful Git commit messages. It analyzes the changes in your Git repository and produces descriptive commit messages, saving you time and ensuring consistent commit message formatting.
- Automatic Analysis: Scans staged changes (after `git add` or `git rm`) to generate relevant commit messages, aligning with standard Git workflows.
- Git Integration: Seamlessly integrates with the Git workflow through the `git auto-commit` alias after installation.
- Interactive CLI: Generates commit messages through an interactive command-line interface.
- Multiple Templates: Supports different commit message templates to suit various workflows.
- Customizable Rules: Allows users to define custom rules for commit message generation.
 
❗ System Requirements: Currently only Linux is supported. Contributions adding Windows or macOS compatibility are welcome.
We provide an installation script (`install.sh`) that handles:
- Creating a Python virtual environment (`venv`) and installing dependencies
- Deploying the `auto-commit.sh` script to your home directory (`~/.auto-commit.sh`)
- Setting up the Git alias `git auto-commit`
Run the installer with:
```sh
chmod +x install.sh && ./install.sh
```

For custom setups, you can:
- Copy and modify the script template from `scripts/auto-commit.sh`:
  ```sh
  cp scripts/auto-commit.sh ~/.auto-commit.sh
  vi ~/.auto-commit.sh  # Make your custom edits
  ```
- Set execute permissions:
  ```sh
  chmod +x ~/.auto-commit.sh
  ```
- Create the Git alias manually:
  ```sh
  git config --global alias.auto-commit '!f() { ~/.auto-commit.sh; }; f'
  ```

To use the auto-commit tool, run the following command:
```sh
git auto-commit
```

This will open an interactive CLI where you can select the commit message template and provide any additional information.
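Message generation starts from the staged diff. As an illustration (not the project's actual source), the staged changes the tool analyzes can be collected and inspected like this, where `changed_files` is a hypothetical helper:

```python
import re
import subprocess

def get_staged_diff() -> str:
    """Return the diff of staged changes, i.e. what `git auto-commit` analyzes."""
    result = subprocess.run(
        ["git", "diff", "--cached"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def changed_files(diff_text: str) -> list[str]:
    """List the file paths named in the `diff --git` headers of a diff."""
    return re.findall(r"^diff --git a/(\S+) b/\S+$", diff_text, flags=re.M)
```

An empty result from `get_staged_diff()` means nothing is staged, so there is nothing to describe and no message to generate.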
Example workflow:
- Running `git auto-commit` after staging changes
- Reviewing the generated commit message in interactive mode
- Pressing `Tab` to accept or `Enter` to regenerate
The file `.env` in your project root configures AI parameters:

```sh
# Local AI Settings
LLM_MODEL = "qwen2.5:0.5b"       # Model name/version
LLM_HOST = "http://localhost:11434"  # Local AI server URL

# OpenAI Settings (alternative)
OPENAI_API_KEY = "your-api-key-here"           # Required for OpenAI API
OPENAI_API_BASE = "https://api.openai.com/v1"  # OpenAI endpoint

# Generation Parameters
SYSTEM_PROMPT = ""               # Custom instruction template
TIMEOUT = 60                     # API timeout in seconds
MAX_TOKENS = 2000                # Max response length
TEMPERATURE = 0.8                # Creativity vs determinism (0-1)
```

The system automatically detects which backend to use based on:
- If `LLM_MODEL` starts with `gpt` AND `OPENAI_API_KEY` exists → uses OpenAI
- Otherwise → uses the local LLM at the specified `LLM_HOST`
You can switch between them just by changing the `.env` file - no code changes needed!
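Both backends are reached over HTTP. For the local case, a minimal non-streaming request to Ollama's `/api/generate` endpoint could be assembled as in this stdlib-only sketch (the project itself may use a client library instead):

```python
import json
import urllib.request

def build_generate_request(host: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        f"{host}/api/generate",
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# Sending it requires a running local server (`ollama serve`):
# with urllib.request.urlopen(build_generate_request(
#         "http://localhost:11434", "qwen2.5:0.5b", "Summarize this diff")) as resp:
#     print(json.loads(resp.read())["response"])
```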
Here's how to set up a local LLM model via Ollama:
```sh
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull your desired model
ollama pull llama2

# Run your local LLM server
ollama serve
```

Modify `src/template/system_prompt.j2` to change AI behavior:
```jinja
{# Example customization - Add new commit type rule #}
You are an expert software engineer analyzing Git changes...
```

This project is licensed under the MIT License. See the LICENSE file for details.
