Update deepspeed requirement from <0.6.0 to <0.7.0 #13191

Description

@carmocca

🚀 Feature

Re-land #13048
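
For context, the re-landed change bumps the upper bound of the deepspeed pin from <0.6.0 to <0.7.0. A one-line sketch of the intended pin (the requirements file name below is an assumption; see #13048 for the actual diff):

    # requirements/strategies.txt -- assumed location of the pin
    deepspeed<0.7.0  # was: deepspeed<0.6.0
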
It is currently blocked by a failing Lite test:

pytorch_lightning/lite/lite.py:407: in _run_impl
    return run_method(*args, **kwargs)
pytorch_lightning/lite/lite.py:412: in _run_with_strategy_setup
    return run_method(*args, **kwargs)
tests/lite/test_lite.py:402: in run
    model, optimizer = self.setup(model, optimizer)
pytorch_lightning/lite/lite.py:173: in setup
    model, optimizers = self._strategy._setup_model_and_optimizers(model, list(optimizers))
pytorch_lightning/strategies/deepspeed.py:414: in _setup_model_and_optimizers
    self.model, optimizer = self._setup_model_and_optimizer(model, optimizers[0])
pytorch_lightning/strategies/deepspeed.py:426: in _setup_model_and_optimizer
    deepspeed_engine, deepspeed_optimizer, _, _ = deepspeed.initialize(
/usr/local/lib/python3.9/dist-packages/deepspeed/__init__.py:120: in initialize
    engine = DeepSpeedEngine(args=args,
/usr/local/lib/python3.9/dist-packages/deepspeed/runtime/zero/partition_parameters.py:377: in wrapper
    if not hasattr(module, "_ds_child_entered"):
/usr/local/lib/python3.9/dist-packages/deepspeed/runtime/engine.py:432: in __getattr__
    if name in dir(self):
/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py:1847: in __dir__
    parameters = list(self._parameters.keys())
/usr/local/lib/python3.9/dist-packages/deepspeed/runtime/engine.py:432: in __getattr__
    if name in dir(self):
/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py:1847: in __dir__
    parameters = list(self._parameters.keys())
E   RecursionError: maximum recursion depth exceeded while calling a Python object
!!! Recursion detected (same locals & position)
=========================== short test summary info ============================
FAILED tests/lite/test_lite.py::test_deepspeed_multiple_models - RecursionErr...
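
For anyone triaging the failure: the loop is a mutual recursion between DeepSpeedEngine.__getattr__ (which calls dir(self)) and torch.nn.Module.__dir__ (which reads self._parameters and thereby re-enters __getattr__ while the engine is only partially constructed). A minimal, self-contained sketch that reproduces the same pattern outside DeepSpeed (the Engine class below is a hypothetical stand-in, not DeepSpeed's actual code):

    import torch.nn as nn

    class Engine(nn.Module):  # hypothetical stand-in for DeepSpeedEngine
        def __init__(self):
            # Deliberately skip nn.Module.__init__ so self._parameters does
            # not exist yet, mimicking a half-constructed engine.
            pass

        def __getattr__(self, name):
            # Same shape as deepspeed/runtime/engine.py:432 in the traceback.
            if name in dir(self):
                return super().__getattr__(name)
            raise AttributeError(name)

    # Any attribute probe recurses: dir() dispatches to nn.Module.__dir__,
    # which evaluates list(self._parameters.keys()); the missing "_parameters"
    # lookup re-enters __getattr__, which calls dir() again, and so on.
    Engine().some_attribute  # raises RecursionError

Note that even the hasattr(module, "_ds_child_entered") check in partition_parameters.py:377 above is enough to trigger it, since hasattr performs exactly this kind of attribute probe.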

If you enjoy Lightning, check out our other projects! ⚡

  • Metrics: Machine learning metrics for distributed, scalable PyTorch applications.

  • Lite: Enables pure PyTorch users to scale their existing code on any kind of device while retaining full control over their own loops and optimization logic.

  • Flash: The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, fine-tuning, and solving problems with deep learning.

  • Bolts: Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.

  • Lightning Transformers: Flexible interface for high-performance research using SOTA Transformers, leveraging PyTorch Lightning, Transformers, and Hydra.

cc @carmocca @akihironitta @Borda @SeanNaren @awaelchli @rohitgr7
