
Modify the distributed backend to support Windows and add a PowerShell script for training.

* **Distributed Backend**:
  - Modify `initialize_global_process_group` in `verl/utils/distributed.py` to detect Windows and select the Gloo backend instead of NCCL, since NCCL is not supported on Windows (a sketch follows).
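
A minimal sketch of the backend switch; the real signature of `initialize_global_process_group` in verl may differ:

```python
import platform
from datetime import timedelta

import torch.distributed as dist


def initialize_global_process_group(timeout_second=36000):
    # NCCL has no Windows build, so fall back to Gloo on Windows.
    backend = "gloo" if platform.system() == "Windows" else "nccl"
    if not dist.is_initialized():
        dist.init_process_group(
            backend=backend, timeout=timedelta(seconds=timeout_second)
        )
    return dist.get_rank(), dist.get_world_size()
```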

* **Training Script**:
  - Create `scripts/train_tiny_zero.ps1` to set environment variables and invoke the Python training module, parameterized to accept paths and GPU counts (see the sketch below).
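
A hypothetical sketch of `scripts/train_tiny_zero.ps1`; the parameter names, environment variables, and entry module are assumptions, not the repository's settled interface:

```powershell
param(
    [string]$DataDir   = ".\data",              # hypothetical default
    [string]$BaseModel = "Qwen/Qwen2.5-0.5B",   # hypothetical default
    [int]$NGpus        = 1,
    [int]$TpSize       = 1
)

# Expose the settings the trainer and the updated config are expected to read.
$env:N_GPUS = $NGpus
$env:DATA_DIR = $DataDir
$env:BASE_MODEL = $BaseModel
$env:TENSOR_MODEL_PARALLEL_SIZE = $TpSize

python -m verl.trainer.main_ppo trainer.n_gpus_per_node=$NGpus
```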

* **Configuration**:
  - Update `verl/trainer/config/ppo_trainer.yaml` so that `tensor_model_parallel_size` is read from an environment variable when one is set.
  - Introduce a `data_parallel_size` setting (see the snippet below).
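
A hypothetical snippet of the config change, using OmegaConf's `oc.env` resolver (`oc.env` returns strings, so `oc.decode` casts the value to an int); the exact key paths in `ppo_trainer.yaml` may differ:

```yaml
actor_rollout_ref:
  rollout:
    # Read from the environment when set; otherwise keep a default of 2.
    tensor_model_parallel_size: ${oc.decode:${oc.env:TENSOR_MODEL_PARALLEL_SIZE,2}}

# New setting introduced by this PR; the default of 1 is an assumption.
data_parallel_size: ${oc.decode:${oc.env:DATA_PARALLEL_SIZE,1}}
```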

* **Model Loading**:
  - Wrap the vLLM imports in `verl/models/transformers/llama.py` in try/except and fall back to a simpler model-loading path when vLLM isn't available (see the sketch below).
  - Log a warning when vLLM is not found.
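
A sketch of the guarded import; the fallback loader shown here is an assumption, not verl's actual code path:

```python
import logging

logger = logging.getLogger(__name__)

try:
    from vllm import LLM
    VLLM_AVAILABLE = True
except ImportError:
    VLLM_AVAILABLE = False
    logger.warning("vLLM not found; falling back to plain transformers loading.")


def load_rollout_model(model_path: str):
    if VLLM_AVAILABLE:
        return LLM(model=model_path)
    # Hypothetical fallback: load the model with Hugging Face transformers.
    from transformers import AutoModelForCausalLM
    return AutoModelForCausalLM.from_pretrained(model_path)
```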

* **Attention Mechanism**:
  - Add checks for FlashAttention availability in `verl/models/transformers/monkey_patch.py` and take an alternate code path when it is unavailable (a sketch follows).
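
A sketch of the availability gate; which attention functions verl actually patches is not shown:

```python
try:
    from flash_attn import flash_attn_varlen_func  # noqa: F401
    FLASH_ATTN_AVAILABLE = True
except ImportError:
    FLASH_ATTN_AVAILABLE = False


def apply_monkey_patch(model):
    if not FLASH_ATTN_AVAILABLE:
        # Alternate path: keep PyTorch's built-in scaled_dot_product_attention.
        return model
    # ... patch the attention forward to call flash_attn_varlen_func ...
    return model
```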

* **Documentation**:
  - Add a Windows Setup section in `README.md` explaining how to install and run.
  - Include an example for running on 8 GPUs (see below).
  - Document the new PowerShell script.
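
The 8-GPU example in the README might look like this, using the hypothetical script parameters sketched above:

```powershell
.\scripts\train_tiny_zero.ps1 -NGpus 8 -TpSize 2 -BaseModel "Qwen/Qwen2.5-3B"
```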