Conversation

@stadlmax stadlmax commented Oct 2, 2025

Description

Minor improvements around extended use of torch.compile, where applicable.

What problem does this change solve?

  • increase the recompile limit in cases where graph sizes, despite being static across training, change frequently within a single batch (e.g. with chunking active), so that dynamic=False can be used more often
  • allow compilation of LayerNorm while remaining compatible with Autocast

What issue or task does this change relate to?

NA

Additional notes

As a contributor to the Anemoi framework, please ensure that your changes include unit tests, updates to any affected dependencies and documentation, and have been tested in a parallel setting (i.e., with multiple GPUs). As a reviewer, you are also responsible for verifying these aspects and requesting changes if they are not adequately addressed. For guidelines about those please refer to https://anemoi.readthedocs.io/en/latest/

By opening this pull request, I affirm that all authors agree to the Contributor License Agreement.

@mchantry mchantry changed the title [DRAFT] misc changes around torch.compile [DRAFT] ITT380 misc changes around torch.compile Oct 9, 2025
@HCookie HCookie moved this from To be triaged to On Pause in Anemoi-dev Nov 17, 2025

Projects

Status: On Pause
