
Conversation

@junrushao
Member


This PR adds a pass `LiftThreadBinding` to TIR.

Previously, during GPU cross-thread reduction, a temporary local buffer
is created as the RF (rfactor) buffer, as in the following concrete example:

```python
rf_local = T.alloc_buffer(..., scope="local")

# Step 1. Data-parallel RF block
for tx in T.thread_binding(..., thread="threadIdx.x"):
    rf_local[tx, ...] = ...

# Step 2. Cross-thread reduction accumulating rf_local
for ...:
    for tx_2 in T.thread_binding(..., thread="threadIdx.x"):
        ... += rf_local[tx_2, ...]
```

In this case, the buffer `rf_local` is only ever accessed at a single
point `tx` (or `tx_2`). However, during the pass `CompactBufferRegion`
the two thread-binding variables are treated as separate variables, i.e.
the information that `tx` and `tx_2` are always identical to each other
is discarded. As a result, the accessed region of `rf_local` is estimated
as `Union({tx}, {tx_2})` rather than `{tx}`, leading to over-allocation
of local registers.

This pass addresses the issue by lifting thread bindings to their lowest
common ancestors (LCAs).
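
For intuition, here is a rough sketch of the lifted form (illustrative only;
the exact output of the pass may differ): with both `threadIdx.x` loops lifted
to their LCA, a single variable `tx` indexes `rf_local` everywhere, so
`CompactBufferRegion` can estimate the accessed region as `{tx}` and shrink
the allocation.

```python
rf_local = T.alloc_buffer(..., scope="local")

# After lifting, a single threadIdx.x binding dominates both steps.
for tx in T.thread_binding(..., thread="threadIdx.x"):
    # Step 1. Data-parallel RF block
    rf_local[tx, ...] = ...
    # Step 2. Cross-thread reduction accumulating rf_local
    for ...:
        ... += rf_local[tx, ...]
```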
@tvm-bot
Collaborator

tvm-bot commented Jul 2, 2023

Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from Reviewers by @-ing them in a comment.

Generated by tvm-bot

@junrushao junrushao marked this pull request as ready for review July 2, 2023 23:44
@jinhongyii jinhongyii merged commit 5931cf1 into apache:main Jul 3, 2023
junrushao added a commit to junrushao/tvm that referenced this pull request Jul 4, 2023
This PR enhances the Decode-GEMV rule with the following changes:
- Normalize the GEMV iter domain to S-R-C via transform-block-layout.
  This helps with further analysis and scheduling, for example in cases
  where the original reduction block had no spatial loop.
- Get rid of the ad hoc iter type analysis, including the logic calling
  into a TVM packed func `tir.schedule.GetLoopIterType` via
  `tvm._ffi.get_global_func`.
- Split out the logic for the two separate scheduling cases, where the
  innermost dimension is spatial or reduction.
- Introduce `suggest_threads_per_block` to guess the number of threads to
  allocate to each threadblock; see the sketch below. This helps avoid the
  previous case where dlight allocates 256 threads for a workload whose
  degree of parallelism is only 128.
- Misc improvements.

The rest of the changes were split out into separate PRs that are already
merged to main:
- [x] Pass the hints to arithmetic analyzer that shape variables should
be positive ones (apache#15210)
- [x] Eliminate unnecessary block predicate generation - should be
provable via affine analysis (apache#15193)
- [x] Shrink local memory allocation if only one element `X[threadIdx.x]`
is used (apache#15207)
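
As a rough illustration of the `suggest_threads_per_block` idea mentioned
above, here is a minimal sketch of the kind of heuristic it could implement,
capping the threads per block by the workload's degree of parallelism. This
is hypothetical: the parameter names and the rounding policy are assumptions,
and the actual dlight implementation may differ.

```python
def suggest_threads_per_block(parallelism: int,
                              max_threads: int = 256,
                              warp_size: int = 32) -> int:
    """Hypothetical heuristic: never launch more threads per block than the
    workload's degree of parallelism, rounded up to a whole warp."""
    threads = min(max_threads, parallelism)
    # Round up to a multiple of the warp size so a block never ends mid-warp.
    return max(warp_size, -(-threads // warp_size) * warp_size)


# With max_threads=256, a workload whose parallelism is only 128 gets
# 128 threads per block instead of 256.
assert suggest_threads_per_block(128) == 128
```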