 atoms exceeds available GPU memory. The `torch_sim.autobatching` module solves this by:
 
 1. Automatically determining optimal batch sizes based on GPU memory constraints
-2. Providing two complementary strategies: chunking and hot-swapping
+2. Providing two complementary strategies: binning and in-flight
 3. Efficiently managing memory resources during large-scale simulations
 
 Let's explore how to use these powerful features!
@@ -120,9 +120,9 @@ def mock_determine_max_batch_size(*args, **kwargs):
 This is a verbose way to determine the max memory metric; we'll see a simpler way
 shortly.
 
-## ChunkingAutoBatcher: Fixed Batching Strategy
+## BinningAutoBatcher: Fixed Batching Strategy
 
-Now on to the exciting part, autobatching! The `ChunkingAutoBatcher` groups states into
+Now on to the exciting part, autobatching! The `BinningAutoBatcher` groups states into
 batches with a binpacking algorithm, ensuring that we minimize the total number of
 batches while maximizing the GPU utilization of each batch. This approach is ideal for
 scenarios where all states need to be processed the same number of times, such as
@@ -132,7 +132,7 @@ def mock_determine_max_batch_size(*args, **kwargs):
 """
 
 # %% Initialize the batcher; the max memory scaler will be computed automatically
-batcher = ts.ChunkingAutoBatcher(
+batcher = ts.BinningAutoBatcher(
     model=mace_model,
     memory_scales_with="n_atoms",
 )
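The binpacking grouping described in this hunk can be sketched independently of TorchSim. The snippet below uses first-fit decreasing, a common binpacking heuristic (not necessarily the library's exact algorithm), to group per-state atom counts into batches whose summed metric stays under the budget; the function and variable names are illustrative, not part of the TorchSim API:

```python
def bin_states(n_atoms_per_state: list[int], max_memory_scaler: float) -> list[list[int]]:
    """Group state indices into batches; no batch's total metric exceeds the budget."""
    # First-fit decreasing: place the largest states first, each into the
    # first batch that still has room, opening a new batch when none does.
    order = sorted(range(len(n_atoms_per_state)), key=lambda i: -n_atoms_per_state[i])
    batches: list[list[int]] = []
    loads: list[float] = []
    for i in order:
        metric = n_atoms_per_state[i]
        for b, load in enumerate(loads):
            if load + metric <= max_memory_scaler:
                batches[b].append(i)
                loads[b] += metric
                break
        else:
            batches.append([i])
            loads.append(metric)
    return batches


# Five states with differing atom counts, packed under a budget of 80.
batches = bin_states([60, 40, 30, 20, 10], max_memory_scaler=80)
# Every state appears exactly once across the batches.
assert sorted(i for batch in batches for i in batch) == [0, 1, 2, 3, 4]
```

Minimizing the number of batches while keeping each under the memory budget is exactly the trade-off the text describes: fewer batches means fewer model calls, and fuller batches mean higher GPU utilization.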
@@ -167,11 +167,11 @@ def process_batch(batch):
 maximum safe batch size through test runs on your GPU. However, the max memory scaler
 is typically fixed for a given model and simulation setup. To avoid calculating it
 every time, which is a bit slow, you can calculate it once and then include it in the
-`ChunkingAutoBatcher` constructor.
+`BinningAutoBatcher` constructor.
 """
 
 # %%
-batcher = ts.ChunkingAutoBatcher(
+batcher = ts.BinningAutoBatcher(
     model=mace_model,
     memory_scales_with="n_atoms",
     max_memory_scaler=max_memory_scaler,
@@ -192,7 +192,7 @@ def process_batch(batch):
 nvt_state = nvt_init(state)
 
 # Initialize the batcher
-batcher = ts.ChunkingAutoBatcher(
+batcher = ts.BinningAutoBatcher(
     model=mace_model,
     memory_scales_with="n_atoms",
 )
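The outer loop this fixed strategy implies can be mocked without TorchSim: every batch is advanced the same number of steps, then the per-batch results are collected. Here the "state" is just a list of numbers and the "update" a toy increment standing in for one NVT step; none of these names come from the TorchSim API:

```python
def evolve_batches(batches: list[list[float]], n_steps: int) -> list[list[float]]:
    """Advance every batch the same fixed number of steps (mock MD loop)."""
    results = []
    for batch in batches:
        state = list(batch)
        for _ in range(n_steps):              # identical iteration count per batch
            state = [x + 1.0 for x in state]  # stand-in for one integrator step
        results.append(state)
    return results


# Two batches, three steps each: each value is incremented three times.
assert evolve_batches([[0.0, 1.0], [2.0]], n_steps=3) == [[3.0, 4.0], [5.0]]
```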
@@ -217,13 +217,13 @@ def process_batch(batch):
 
 # %% [markdown]
 """
-## HotSwappingAutoBatcher: Dynamic Batching Strategy
+## InFlightAutoBatcher: Dynamic Batching Strategy
 
-The `HotSwappingAutoBatcher` optimizes GPU utilization by dynamically removing
+The `InFlightAutoBatcher` optimizes GPU utilization by dynamically removing
 converged states and adding new ones. This is ideal for processes like geometry
 optimization where different states may converge at different rates.
 
-The `HotSwappingAutoBatcher` is more complex than the `ChunkingAutoBatcher` because
+The `InFlightAutoBatcher` is more complex than the `BinningAutoBatcher` because
 it requires the batch to be dynamically updated. The swapping logic is handled internally,
 but the user must regularly provide a convergence tensor indicating which batches in
 the state have converged.
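The swap-on-convergence idea in this hunk can be illustrated with a plain-Python scheduler, with no TorchSim dependency: a pool of active jobs is stepped together, and whenever one converges it is replaced by a pending job so the batch stays full. Convergence is mocked here as a fixed number of remaining steps per job; the function is a hypothetical sketch, not the library's implementation:

```python
def run_in_flight(steps_needed: list[int], max_active: int) -> list[int]:
    """Return job indices in completion order; at most max_active run at once."""
    pending = list(range(len(steps_needed)))
    remaining = {i: steps_needed[i] for i in pending}
    active = [pending.pop(0) for _ in range(min(max_active, len(pending)))]
    finished: list[int] = []
    while active:
        # One "step" for every active job, then a convergence check: the
        # analog of the convergence tensor the user supplies each iteration.
        for i in active:
            remaining[i] -= 1
        converged = [i for i in active if remaining[i] <= 0]
        for i in converged:
            active.remove(i)
            finished.append(i)
            if pending:  # swap a pending job into the freed slot
                active.append(pending.pop(0))
    return finished


# Jobs converging at different rates finish out of submission order,
# but the active pool stays as full as possible throughout.
assert run_in_flight([3, 1, 2, 5], max_active=2) == [1, 0, 2, 3]
```

Fast-converging jobs free their slots early instead of idling until the slowest member of a fixed batch finishes, which is exactly why this strategy suits optimization workloads with varying convergence rates.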
@@ -236,7 +236,7 @@ def process_batch(batch):
 fire_state = fire_init(state)
 
 # Initialize the batcher
-batcher = ts.HotSwappingAutoBatcher(
+batcher = ts.InFlightAutoBatcher(
     model=mace_model,
     memory_scales_with="n_atoms",
     max_memory_scaler=1000,
@@ -296,7 +296,7 @@ def process_batch(batch):
 """
 
 # %% Initialize with return_indices=True
-batcher = ts.ChunkingAutoBatcher(
+batcher = ts.BinningAutoBatcher(
     model=mace_model,
     memory_scales_with="n_atoms",
     max_memory_scaler=80,
@@ -317,8 +317,8 @@ def process_batch(batch):
 TorchSim's autobatching provides powerful tools for GPU-efficient simulation of
 multiple systems:
 
-1. Use `ChunkingAutoBatcher` for simpler workflows with fixed iteration counts
-2. Use `HotSwappingAutoBatcher` for optimization problems with varying convergence
+1. Use `BinningAutoBatcher` for simpler workflows with fixed iteration counts
+2. Use `InFlightAutoBatcher` for optimization problems with varying convergence
    rates
 3. Let the library handle memory management automatically, or specify limits manually
 