Run this command to instantly set up and launch Unsloth for local (non-swarm) Docker setups:
```bash
curl -fsSL https://raw.githubusercontent.com/liveaverage/unsloth-launch/refs/heads/main/oneshot.sh | bash
```

What happens:
- The Unsloth repo is cloned to `/tmp/unsloth-launch`
- The environment is configured for no Jupyter password
- The Docker container is started with GPU support (if available)
- Jupyter Lab is available at http://localhost:8888
Note:
- This setup is for local Docker Compose only (not Swarm mode)
- GPU support is enabled via `gpus: all` in `docker-compose.yml`
- No password is set for Jupyter Lab by default
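If you prefer not to pipe a remote script straight into bash, an equivalent two-step variant of the same command lets you inspect the script first:

```bash
# Download the one-shot script, review it, then run it
curl -fsSL -o /tmp/oneshot.sh \
  https://raw.githubusercontent.com/liveaverage/unsloth-launch/refs/heads/main/oneshot.sh
less /tmp/oneshot.sh   # inspect before executing
bash /tmp/oneshot.sh
```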
| GPU Configuration | Brev Launchable |
|---|---|
| NVIDIA L4 x1 | |
| NVIDIA L40S x1 | |
| NVIDIA H100 x1 | |
Note: The configuration automatically detects and uses all available GPUs via `gpus: all` in `docker-compose.yml`.
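To confirm the container actually sees your GPUs once it is running, you can run `nvidia-smi` inside it (assuming the default container name, `unsloth-notebook`):

```bash
# List GPUs visible inside the running container
docker exec unsloth-notebook nvidia-smi
```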
A comprehensive Docker Compose framework for running Unsloth notebooks locally, with environment-variable-based configuration for automatic notebook loading and model selection.
- Docker and Docker Compose - Install Docker
  - Modern Docker installations include `docker compose` as a subcommand
  - Legacy `docker-compose` (a separate binary) is also supported
- NVIDIA Container Toolkit (for GPU support) - Install Guide
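A quick way to confirm the prerequisites before starting (illustrative checks; either Compose variant is fine):

```bash
# Verify Docker and a Compose implementation are installed
docker --version
docker compose version || docker-compose --version

# Verify the NVIDIA driver and Container Toolkit (GPU setups only)
nvidia-smi
docker run --rm --gpus all ubuntu nvidia-smi
```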
1. Clone and navigate to the repo:

   ```bash
   git clone https://github.com/your-repo/unsloth-launch.git
   cd unsloth-launch
   ```

2. Start with the default configuration (no setup needed!):

   ```bash
   docker-compose up -d
   ```

3. Access Jupyter Lab: open http://localhost:8888 in your browser
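To confirm the stack came up cleanly, you can check the container status and recent logs (illustrative commands; either Compose variant works):

```bash
# Check container status and review startup output
docker compose ps
docker compose logs --tail=50
```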
This framework provides multiple ways to run Unsloth:
- Official image: uses the official `unsloth/unsloth` Docker image directly with minimal customization.
- Custom image: builds on top of the official image with additional tools and automation.
```
brev-unsloth/
├── docker-compose.yml        # Main orchestration file
├── Dockerfile                # Custom image (optional)
├── .env.template             # Environment configuration template
├── scripts/
│   ├── launcher.sh           # Main launcher script
│   ├── start-jupyter.sh      # Jupyter startup enhancement
│   └── download-notebooks.sh # Notebook downloader
├── configs/                  # Pre-configured environments
│   ├── llama3-8b.env         # Llama 3 8B configuration
│   ├── mistral-7b.env        # Mistral 7B configuration
│   ├── qwen2-7b.env          # Qwen 2 7B configuration
│   └── codellama-7b.env      # CodeLlama 7B configuration
├── work/                     # Your main workspace
├── custom-notebooks/         # Additional custom notebooks
├── data/                     # Dataset storage
├── models/                   # Model cache
└── outputs/                  # Training outputs
```
```bash
# Start with basic Unsloth environment
./scripts/launcher.sh start

# Access: http://localhost:8888
# Official notebooks: /workspace/unsloth-notebooks/
# Your work: /workspace/work/
```

```bash
# Start with Llama 3 configuration
./scripts/launcher.sh start llama3-8b

# Or manually with environment file:
docker-compose --env-file configs/llama3-8b.env up -d
```

```bash
# Set up your environment
export MODEL_NAME=unsloth/llama-3-8b-bnb-4bit
export JUPYTER_PASSWORD=mysecurepassword
export HF_TOKEN=your_huggingface_token

# Start container
./scripts/launcher.sh start
```

```bash
# Set notebook URL in .env
NOTEBOOK_URL=https://raw.githubusercontent.com/unslothai/unsloth/main/examples/Llama-3-8b_Alpaca.ipynb
AUTO_START_NOTEBOOK=true

./scripts/launcher.sh start
```

All configuration is via environment variables, with sensible defaults built into `docker-compose.yml`.
Override any setting by exporting before starting:
```bash
export JUPYTER_HOST_PORT=9000
export HF_TOKEN=hf_xxxxx
docker compose up -d
```

- `CONTAINER_NAME`: Container name (default: `unsloth-notebook`)
- `JUPYTER_PORT`: Jupyter port (default: `8888`)
- `JUPYTER_PASSWORD`: Jupyter password protection
- `SSH_HOST_PORT`: SSH access port (default: `2222`)
- `HF_TOKEN`: Hugging Face token for private models
- `WANDB_API_KEY`: Weights & Biases API key
- `SSH_KEY`: SSH public key for container access
- `USER_PASSWORD`: Container user password (default: `unsloth2024`)
- `MODEL_NAME`: Model to pre-load (e.g., `unsloth/llama-3-8b-bnb-4bit`)
- `MODEL_CACHE_DIR`: Model storage directory (default: `/workspace/models`)
- `DATASET_NAME`: Dataset to use
- `MAX_SEQ_LENGTH`: Maximum sequence length (default: `2048`)
- `BATCH_SIZE`: Training batch size (default: `2`)
- `LEARNING_RATE`: Learning rate (default: `2e-4`)
- `NUM_TRAIN_EPOCHS`: Number of training epochs (default: `1`)
- `MEMORY_LIMIT`: Container memory limit (default: `16G`)
- `GPU_COUNT`: Number of GPUs to use (default: `all`)
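Putting several of these together, a minimal `.env` might look like the following (the values are illustrative, not recommendations):

```bash
# .env — illustrative overrides; copy from .env.template and adjust
CONTAINER_NAME=unsloth-notebook
JUPYTER_PORT=8888
JUPYTER_PASSWORD=change-me
HF_TOKEN=hf_xxxxx
MODEL_NAME=unsloth/llama-3-8b-bnb-4bit
MAX_SEQ_LENGTH=2048
BATCH_SIZE=2
MEMORY_LIMIT=32G
```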
```bash
./scripts/launcher.sh start llama3-8b
```

- Model: `unsloth/llama-3-8b-bnb-4bit`
- Optimized for Alpaca dataset
- Sequence length: 2048

```bash
./scripts/launcher.sh start mistral-7b
```

- Model: `unsloth/mistral-7b-bnb-4bit`
- Optimized for instruction following

```bash
./scripts/launcher.sh start qwen2-7b
```

- Model: `unsloth/qwen2-7b-bnb-4bit`
- Multi-language support

```bash
./scripts/launcher.sh start codellama-7b
```

- Model: `unsloth/codellama-7b-bnb-4bit`
- Optimized for code generation
- Longer sequence length: 4096
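To define your own configuration, one approach (assuming the launcher resolves config names to files under `configs/`, as it does for the bundled ones) is to copy an existing environment file and edit it:

```bash
# Hypothetical custom config based on the Llama 3 template
cp configs/llama3-8b.env configs/my-model.env
# Edit MODEL_NAME, MAX_SEQ_LENGTH, etc. in configs/my-model.env, then:
./scripts/launcher.sh start my-model
```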
The official Unsloth image includes SSH server support:
```bash
# Generate SSH key pair
ssh-keygen -t rsa -b 4096 -f ~/.ssh/unsloth_key

# Set SSH_KEY environment variable
export SSH_KEY="$(cat ~/.ssh/unsloth_key.pub)"

# Start container
./scripts/launcher.sh start

# Connect via SSH
ssh -i ~/.ssh/unsloth_key -p 2222 unsloth@localhost
```
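SSH access also makes it easy to reach Jupyter from another machine without exposing port 8888 directly. A sketch using local port forwarding (the host name is a placeholder):

```bash
# Forward local port 8888 to Jupyter inside the container over SSH
ssh -i ~/.ssh/unsloth_key -p 2222 -N -L 8888:localhost:8888 unsloth@your-host
# Then browse to http://localhost:8888 on your local machine
```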
To use the enhanced image with additional tools:

1. Edit `docker-compose.yml`:

   ```yaml
   services:
     unsloth-jupyter:
       # Comment out the image line
       # image: unsloth/unsloth:latest
       # Uncomment the build section
       build:
         context: .
         dockerfile: Dockerfile
   ```
2. Build and start:

   ```bash
   ./scripts/launcher.sh build
   ./scripts/launcher.sh start
   ```
```bash
# Download the official Unsloth notebooks
./scripts/launcher.sh download
```

```bash
# Load a custom notebook by URL
export NOTEBOOK_URL=https://example.com/my-notebook.ipynb
./scripts/launcher.sh start
```

- `/workspace/unsloth-notebooks/` - Official example notebooks
- `/workspace/work/` - Your main workspace (mounted to `./work/`)
- `/home/unsloth/` - User home directory
- `/workspace/custom-notebooks/` - Custom downloaded notebooks
- `/workspace/data/` - Dataset storage
- `/workspace/models/` - Model cache
- `/workspace/outputs/` - Training outputs
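Assuming these paths follow the same bind-mount pattern as `./work/`, files placed on the host appear inside the container immediately; for example (file names are illustrative):

```bash
# Make a dataset available inside the container at /workspace/data/
cp ~/datasets/my_dataset.jsonl ./data/

# Training outputs written to /workspace/outputs/ appear on the host
ls ./outputs/
```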
✅ FIXED: This framework includes an automatic fix for the GPU detection issue in the official Unsloth image.
The fix:
- Dynamically detects your CUDA version (11.x, 12.x, etc.)
- Automatically configures Jupyter kernels with CUDA paths
- No manual configuration needed - works out of the box!
See GPU_CUDA_FIX.md for technical details.
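A quick way to confirm the fix is working is to check CUDA visibility from inside the container (this assumes the default container name and that PyTorch is present, as it is in the official Unsloth image):

```bash
# Should print True if the GPU is detected
docker exec unsloth-notebook python -c "import torch; print(torch.cuda.is_available())"
```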
Common GPU issues:

- Verify NVIDIA drivers on the host:

  ```bash
  nvidia-smi
  ```

- Check Docker GPU support and the NVIDIA Container Toolkit:

  ```bash
  docker run --rm --gpus all nvidia/cuda:12.8.0-base-ubuntu22.04 nvidia-smi
  ```

- Test `LD_LIBRARY_PATH` inside the container (single-quote the command so the variable expands in the container rather than on the host):

  ```bash
  docker exec unsloth-notebook bash -c 'echo $LD_LIBRARY_PATH'
  ```
Adjust memory limits in `.env`:

```bash
MEMORY_LIMIT=32G
MEMORY_RESERVATION=16G
```

Change port mappings:

```bash
JUPYTER_HOST_PORT=8889
SSH_HOST_PORT=2223
```

Set a password for security:

```bash
JUPYTER_PASSWORD=your_secure_password
```

```bash
# Enable Weights & Biases logging
export WANDB_API_KEY=your_wandb_key
./scripts/launcher.sh start
```

```bash
# Use a gated or private Hugging Face model
export HF_TOKEN=your_hf_token
export MODEL_NAME=meta-llama/Llama-2-7b-hf
./scripts/launcher.sh start
```

```bash
# Place dataset in ./data/ directory
export DATASET_NAME=my_custom_dataset
./scripts/launcher.sh start
```

```bash
# Start container
./scripts/launcher.sh start [config]

# View logs
./scripts/launcher.sh logs

# Stop container
./scripts/launcher.sh stop

# Download notebooks
./scripts/launcher.sh download

# Clean up
./scripts/launcher.sh clean

# List configurations
./scripts/launcher.sh list

# Build custom image
./scripts/launcher.sh build

# Run tests
./scripts/test.sh
```

Run the test suite to verify your setup:

```bash
./scripts/test.sh
```

This checks:
- ✅ Required files present
- ✅ Shell script syntax
- ✅ Docker Compose configuration
- ✅ Sudoers file permissions
- ✅ Directory structure
This project uses GitHub Actions for CI. On every push/PR:
- ShellCheck linting
- Docker build validation
- Syntax checks
See .github/workflows/ci.yml for details.
- Container runs as the non-root `unsloth` user
- Use strong passwords for Jupyter and SSH
- SSH access requires public key authentication
- Consider firewall rules for production deployments
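As one hedged example of such a rule: because Docker publishes ports through its own iptables chains (which can bypass frontends like `ufw`), filtering is best done in the `DOCKER-USER` chain. The subnet below is a placeholder for your trusted network:

```bash
# Drop traffic to the published Jupyter port unless it comes from a trusted subnet.
# Docker evaluates DOCKER-USER before its own forwarding rules.
sudo iptables -I DOCKER-USER -p tcp --dport 8888 ! -s 192.168.1.0/24 -j DROP
```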
This framework is provided as-is. The official Unsloth software has its own licensing terms.
- Fork the repository
- Create your feature branch
- Make your changes
- Test with different configurations
- Submit a pull request
Note: This framework is designed to work with the official Unsloth Docker image and provides additional automation and configuration management on top of it.