A Docker container for SimpleTuner, a general fine-tuning kit for diffusion models. This repository provides a pre-configured environment for training Stable Diffusion, SDXL, Flux, and other diffusion models.
- Docker installed on your system
- Git installed on your system
- Basic familiarity with command line operations
- GPU with at least 8GB VRAM (16GB+ recommended)
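The VRAM requirement can be verified with `nvidia-smi`; a small sketch (the `check_vram` helper is my naming, and its thresholds simply mirror the 8 GB minimum / 16 GB recommendation above):

```shell
#!/usr/bin/env bash
# Compare a GPU's reported memory (in MiB) against the requirements above.
check_vram() {
  local mib="$1"
  if [ "$mib" -ge 16384 ]; then
    echo "OK: ${mib} MiB (meets the 16 GB recommendation)"
  elif [ "$mib" -ge 8192 ]; then
    echo "OK: ${mib} MiB (meets the 8 GB minimum)"
  else
    echo "WARNING: ${mib} MiB is below the 8 GB minimum"
  fi
}

# Query the first GPU when nvidia-smi is available.
if command -v nvidia-smi >/dev/null 2>&1; then
  check_vram "$(nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits | head -n1)"
fi
```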
1. Clone this repository:

   ```bash
   git clone https://github.com/chrevdog/SimpleTuner.git
   cd SimpleTuner
   ```

2. Set up the upstream connection (one-time setup):

   ```bash
   git remote add upstream https://github.com/bghira/SimpleTuner.git
   ```

3. Build the Docker image:

   ```bash
   docker build -t theloupedevteam/simpletuner-docker:latest .
   ```

4. Run the container:

   ```bash
   docker run -it --gpus all -p 8888:8888 \
     -v "$(pwd)/storage:/workspace/storage" \
     theloupedevteam/simpletuner-docker:latest
   ```

5. Access JupyterLab:
   - Open your browser and go to http://localhost:8888
   - No password or token is required
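If you prefer Docker Compose, the same `docker run` invocation can be expressed as a `docker-compose.yml` sketch (the service name is my choice; GPU access uses Compose's `deploy.resources` device-reservation syntax and still requires the NVIDIA Container Toolkit on the host):

```yaml
services:
  simpletuner:
    image: theloupedevteam/simpletuner-docker:latest
    ports:
      - "8888:8888"
    volumes:
      - ./storage:/workspace/storage
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

Start it with `docker compose up`, then open http://localhost:8888 as before.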
When you want to update to the latest version of SimpleTuner:
```bash
# Fetch the latest changes from upstream
git fetch upstream

# Switch to the main branch
git checkout main

# Merge the latest changes
git merge upstream/main

# Update the bundled SimpleTuner checkout
cd SimpleTuner
git pull origin main
cd ..

# Rebuild the image with the new version
docker build -t theloupedevteam/simpletuner-docker:latest .

# Tag with a version number (optional but recommended)
docker tag theloupedevteam/simpletuner-docker:latest theloupedevteam/simpletuner-docker:v2.1.2

# Log in to Docker Hub (if not already logged in)
docker login

# Push the latest tag
docker push theloupedevteam/simpletuner-docker:latest

# Push the versioned tag
docker push theloupedevteam/simpletuner-docker:v2.1.2

# Commit and push the repository changes
git add .
git commit -m "Update SimpleTuner to v2.1.2"
git push origin main
```
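The steps above can be collected into one script so the version tag is supplied only once; a sketch (the `update_simpletuner` function and its tag-format check are my additions, not part of the repository):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical wrapper around the manual update steps documented above.
update_simpletuner() {
  local version="$1"
  # Refuse tags that do not look like vX.Y.Z before touching git or Docker.
  if ! [[ "$version" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
    echo "usage: update_simpletuner vX.Y.Z" >&2
    return 1
  fi
  git fetch upstream
  git checkout main
  git merge upstream/main
  docker build -t theloupedevteam/simpletuner-docker:latest .
  docker tag theloupedevteam/simpletuner-docker:latest \
    "theloupedevteam/simpletuner-docker:${version}"
  docker push theloupedevteam/simpletuner-docker:latest
  docker push "theloupedevteam/simpletuner-docker:${version}"
}

# Example: update_simpletuner v2.1.2
```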
- RAM: 24 GB minimum; aim for 48 GB system RAM
- Out of Memory (OOM) errors:
  - Reduce the batch size: `--train_batch_size 1`
  - Use LoRA: `--use_lora`
  - Enable mixed precision: `--mixed_precision "fp16"`
  - Use gradient accumulation: `--gradient_accumulation_steps 4`
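Lowering `--train_batch_size` while raising `--gradient_accumulation_steps` trades wall-clock speed for memory without shrinking the effective batch the optimizer sees; a quick sanity check of that arithmetic:

```shell
# Effective batch size per GPU = micro-batch size * accumulation steps,
# so a batch of 1 accumulated over 4 steps still optimizes over 4 samples.
train_batch_size=1
gradient_accumulation_steps=4
effective=$(( train_batch_size * gradient_accumulation_steps ))
echo "effective batch size: ${effective}"   # prints: effective batch size: 4
```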
- Slow training:
  - Increase the batch size if memory allows
  - Use `--mixed_precision "fp16"`
  - Consider using multiple GPUs with DeepSpeed
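For the multi-GPU suggestion, a minimal DeepSpeed configuration sketch (ZeRO stage 2 plus fp16; the `"auto"` values assume you launch through Hugging Face Accelerate, which resolves them from the training arguments, so adjust them if you configure DeepSpeed directly):

```json
{
  "train_micro_batch_size_per_gpu": "auto",
  "gradient_accumulation_steps": "auto",
  "zero_optimization": {
    "stage": 2
  },
  "fp16": {
    "enabled": true
  }
}
```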
- Poor training results:
  - Check your dataset quality and captions
  - Adjust the learning rate (try 1e-4 to 1e-5)
  - Increase the number of training steps
  - Use proper data augmentation
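For the dataset check, a hypothetical helper that flags images missing a sidecar `.txt` caption file (the image-plus-`.txt` naming convention is an assumption; adapt it to however your captions are stored):

```shell
#!/usr/bin/env bash
# List images in a dataset directory that have no matching .txt caption file.
check_captions() {
  local dir="$1" missing=0
  for img in "$dir"/*.png "$dir"/*.jpg "$dir"/*.jpeg; do
    [ -e "$img" ] || continue            # skip glob patterns that matched nothing
    if [ ! -f "${img%.*}.txt" ]; then
      echo "missing caption: $img"
      missing=$((missing + 1))
    fi
  done
  echo "${missing} image(s) without captions"
}
```

Running `check_captions path/to/dataset` prints one line per uncaptioned image plus a summary count.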
For detailed information about SimpleTuner features and capabilities, refer to the upstream SimpleTuner documentation.
- Fork this repository
- Create a feature branch: `git checkout -b feature-name`
- Make your changes
- Commit your changes: `git commit -m 'Add feature'`
- Push to the branch: `git push origin feature-name`
- Submit a pull request
This project is licensed under the AGPL-3.0 License - see the LICENSE file for details.
For issues related to:
- SimpleTuner functionality: Original SimpleTuner Issues
- Docker container: Create an issue in this repository
- Training problems: Check the SimpleTuner Documentation