🚀 Feature
As PyTorch/XLA migrates to LTC (Lazy Tensor Core), we need to clean up the existing stub code (which spans 6+ files) that was used for op lowering. The complete process and file structure of the old op lowering can be found in this doc. Replacing a supported op with codegen SHOULD NOT introduce any new behavior; it is purely a cleanup.
TODO: migrate @JackCaoG's codegen guide doc to a publicly available .md file in the repo.
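For context, "full codegen" means an op no longer needs hand-written stubs spread across several files; instead it is listed under full_codegen: in xla_native_functions.yaml, the IR node class is generated, and only a lowering function and a shape-inference function remain hand-written. Below is a minimal sketch of those two pieces for a unary op such as abs, following the pattern used in torch_xla/csrc/ops/ops_lower_fn.cpp and ops_xla_shape_fn.cpp; treat the exact signatures as illustrative rather than definitive.

```cpp
// Sketch only -- the op is first added under `full_codegen:` in
// xla_native_functions.yaml, which generates the IR node class (here: Abs).

// torch_xla/csrc/ops/ops_lower_fn.cpp: lower the generated node to XLA HLO.
torch_xla::XlaOpVector Abs::Lower(LoweringContext* loctx) const {
  xla::XlaOp xla_input = loctx->GetOutputOp(operand(0));
  return ReturnOp(xla::Abs(xla_input), loctx);
}

// torch_xla/csrc/ops/ops_xla_shape_fn.cpp: infer the output shape.
// An elementwise unary op keeps the input shape.
xla::Shape AbsOutputShape(const torch::lazy::Value& input) {
  return GetXlaShape(input);
}
```

Once these land, the old ATen stub, IR node, and tensor-method plumbing for the op can be deleted, which is the cleanup this issue tracks.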
Ops
List of ops that we will replace with full codegen. Please add the following information under each op that you choose to work on: the status of the task, a link to the PR, and the owner. For example:
  - Status: Merged
  - PR: https://github.com/pytorch/xla/pull/3565
  - Owner: @wonjoolee95
Once the corresponding PR is merged, feel free to check the box to mark completion.
- __ilshift__.Scalar
- __ilshift__.Tensor
- __irshift__.Scalar
- __irshift__.Tensor
- __lshift__.Scalar
- __lshift__.Tensor
- __rshift__.Scalar
- __rshift__.Tensor
- _adaptive_avg_pool2d #3763
- _adaptive_avg_pool2d_backward #3764
- _adaptive_avg_pool3d #3787
- _adaptive_avg_pool3d_backward #3788
- _amp_foreach_non_finite_check_and_unscale_
- _amp_update_scale_
- _copy_from
- _copy_from_and_resize
- _index_put_impl_
- _local_scalar_dense
- _log_softmax
- _log_softmax_backward_data
- _pack_padded_sequence
- _softmax
- _softmax_backward_data
- _to_cpu
- _trilinear
- _unsafe_view
- abs
  - Status: Merged
  - PR: Codegen Abs and Maximum op #3544
  - Owner: @JackCaoG
- acos
  - Status: Merged
  - PR: Codegen acos and acosh #3564
  - Owner: @wonjoolee95
- acosh
  - Status: Merged
  - PR: Codegen acos and acosh #3564
  - Owner: @wonjoolee95
- adaptive_max_pool2d
- adaptive_max_pool2d_backward
- add.Scalar
- add.Tensor
- addcdiv #3765
- addcdiv_ #3766
- addcmul #3767
- addmm
- alias
- all
- all.dim
- amax #3769
- amin #3770
- any
- any.dim
- arange.start_out
- argmax
  - Status: Merged
  - PR: Update op argmin and argmax from manual lowering to codegen. #5586
  - Owner: @zpcore
- argmin
  - Status: Merged
  - PR: Update op argmin and argmax from manual lowering to codegen. #5586
  - Owner: @zpcore
- as_strided
- as_strided_
- asin
  - Status: Merged
  - PR: Full codegen asin, asinh, atan, and atanh #3565
  - Owner: @wonjoolee95
- asinh
  - Status: Merged
  - PR: Full codegen asin, asinh, atan, and atanh #3565
  - Owner: @wonjoolee95
- atan
  - Status: Merged
  - PR: Full codegen asin, asinh, atan, and atanh #3565
  - Owner: @wonjoolee95
- atan2
  - Status: Merged
  - PR: Partially Codegen Atan2 #4184
  - Owner: @ManfeiBai
- atanh
  - Status: Merged
  - PR: Full codegen asin, asinh, atan, and atanh #3565
  - Owner: @wonjoolee95
- avg_pool2d #3848
- avg_pool2d_backward #3849
- avg_pool3d #3850
- avg_pool3d_backward #3851
- baddbmm
  - Status: Merged
  - PR: Partially Codegen Baddbmm #4222
  - Owner: @ManfeiBai
- bernoulli
- bernoulli_.float
- bernoulli_.Tensor
- binary_cross_entropy #3799
- binary_cross_entropy_backward #3800
- binary_cross_entropy_with_logits #3801
- bitwise_and.Scalar
  - Status: Merged
  - PR: Codegen for bitwise and, or, xor, and not #3815
  - Owner: @steventk-g
- bitwise_and.Tensor
  - Status: Merged
  - PR: Codegen for bitwise and, or, xor, and not #3815
  - Owner: @steventk-g
- bitwise_not.out
  - Status: Merged
  - PR: Codegen for bitwise and, or, xor, and not #3815
  - Owner: @steventk-g
- bitwise_or.Scalar_out
  - Status: Merged
  - PR: Codegen for bitwise and, or, xor, and not #3815
  - Owner: @steventk-g
- bitwise_or.Tensor_out
  - Status: Merged
  - PR: Codegen for bitwise and, or, xor, and not #3815
  - Owner: @steventk-g
- bitwise_xor.Scalar_out
  - Status: Merged
  - PR: Codegen for bitwise and, or, xor, and not #3815
  - Owner: @steventk-g
- bitwise_xor.Tensor_out
  - Status: Merged
  - PR: Codegen for bitwise and, or, xor, and not #3815
  - Owner: @steventk-g
- bmm
- cat
- ceil
  - Status: Merged
  - PR: Codegen for Ceil #3750
  - Owner: @AlexWertheim
- celu
- celu_
- cholesky
- clamp
- clamp.Tensor
- clamp_max
- clamp_max.Tensor_out
- clamp_min
- clamp_min.Tensor_out
- clone
- constant_pad_nd
- convolution_backward_overrideable
- convolution_overrideable
- cos
  - Status: Merged
  - PR: Full codegen for cos and cosh #3574
  - Owner: @miladm
- cosh
  - Status: Merged
  - PR: Full codegen for cos and cosh #3574
  - Owner: @miladm
- cross
- cumprod
- cumsum
- diag
- diagonal
- div.Scalar
- div.Tensor
- div.Tensor_mode
- dot
- elu
- elu_
- elu_backward
- embedding
- embedding_dense_backward
- empty.memory_format
- empty_strided
- eq.Scalar #3877
- eq.Tensor #3878
- erf
- erfc
- erfinv
- exp
  - Status: Merged
  - PR: Full codegen erf, erfc, erfinv, and exp #3659
  - Owner: @wonjoolee95
- expand
- expm1
- exponential_
- eye.m_out
- eye.out
- fill_.Scalar #4029
- fill_.Tensor #4028
- flip
- floor
- fmod.Scalar
- fmod.Tensor
- frac
- gather
- ge.Scalar #3857
- ge.Tensor #3858
- gelu
- gelu_backward
- ger #3852
- gt.Scalar #3853
- gt.Tensor #3854
- hardshrink
  - Status: Merged
  - PR: Codegen hardshrink #3999
  - Owner: @vanbasten23
- hardshrink_backward
- hardsigmoid #3730
- hardsigmoid_backward #3731
- hardswish #3732
- hardswish_backward #3733
- hardtanh
- hardtanh_backward
- index.Tensor
- index_add
- index_copy
- index_fill_.int_Scalar
- index_fill_.int_Tensor
- index_put_
- index_select
- inverse
  - Status: Blocked
  - PR: Full codegen for Inverse #3575
  - Owner: @miladm
- isnan
  - Status: Blocked
  - PR: Full codegen for isnan #3758
  - Owner: @steventk-g
- kl_div
- kl_div_backward
- kthvalue
- l1_loss
- l1_loss_backward
- le.Scalar #3874
- le.Tensor #3875
- leaky_relu
  - Status: WIP
  - Owner: @lsiyuan
- leaky_relu_backward
- lerp.Scalar
- lerp.Tensor
- linspace
- log
  - Status: Merged
  - PR: Full codegen for log, log2, and log10 #3573
  - Owner: @miladm
- log1p
- log2
  - Status: Merged
  - PR: Full codegen for log, log2, and log10 #3573
  - Owner: @miladm
- log10
  - Status: In progress
  - PR: Full codegen for log, log2, and log10 #3573
  - Owner: @miladm
- log_sigmoid_backward #3737
- log_sigmoid_forward #3736
- logdet
  - Status: Blocked
  - PR: Full codegen for logdet #3576
  - Owner: @miladm
- logical_and
  - Status: Merged
  - PR: Full codegen logical boolean ops #3594
  - Owner: @wonjoolee95
- logical_not
  - Status: Merged
  - PR: Full codegen logical boolean ops #3594
  - Owner: @wonjoolee95
- logical_or
  - Status: Merged
  - PR: Full codegen logical boolean ops #3594
  - Owner: @wonjoolee95
- logical_xor
  - Status: Merged
  - PR: Full codegen logical boolean ops #3594
  - Owner: @wonjoolee95
- logsumexp
- lt.Scalar #3872
- lt.Tensor #3873
- masked_fill_.Scalar
- masked_fill_.Tensor
- masked_scatter_
- masked_select
- max
- max.dim
- max.dim_max
- max_pool2d_with_indices
- max_pool2d_with_indices_backward
- max_pool3d_with_indices
- max_pool3d_with_indices_backward
- max_unpool2d
- max_unpool3d
- maximum
  - Status: Merged
  - PR: Codegen Abs and Maximum op #3544
  - Owner: @JackCaoG
- mean
- mean.dim
- min
- min.dim
- min.dim_min
- minimum
  - PR: Full codegen minimum #3597
- mish
- mm
- mse_loss
- mse_loss_backward
- mul.Scalar
- mul.Tensor
- mv
- mv.out
- nan_to_num
- native_batch_norm
- native_batch_norm_backward
- ne.Scalar #3880
- ne.Tensor #3881
- neg
- nll_loss2d_backward
- nll_loss2d_forward
- nll_loss_backward
- nll_loss_forward
- nonzero
- norm.Scalar
- norm.ScalarOpt_dim
- norm.ScalarOpt_dim_dtype
- norm.ScalarOpt_dtype
- normal.float_Tensor
- normal.Tensor_float
- normal.Tensor_Tensor
- normal_
- permute
- pow.Scalar
- pow.Tensor_Scalar
- pow.Tensor_Tensor
- prelu
- prod
- prod.dim_int
- put_
- qr
- random_
- random_.from
- random_.to
- reciprocal
- reflection_pad2d
- reflection_pad2d_backward
- relu
- relu_
- remainder.Scalar
- remainder.Tensor
- repeat
- replication_pad1d
- replication_pad1d_backward
- replication_pad2d
- replication_pad2d_backward
- resize_
- roll
- round
  - Status: New
  - PR: TBD
  - Owner: @li-yi-dong
- rrelu_with_noise
- rrelu_with_noise_backward
- rsqrt
  - Status: In progress
  - PR: TBD
  - Owner: @vanbasten23
- rsub.Scalar
- rsub.Tensor
- scatter.reduce
- scatter.src
- scatter.value
- scatter.value_reduce
- scatter_add
- select.int
- selu #3775
- selu_ #3774
- sgn
  - Status: Merged
  - PR: Full codegen for sgn and sign #3577
  - Owner: @miladm
- sigmoid
- sigmoid_backward
- sign
  - Status: Merged
  - PR: Full codegen for sgn and sign #3577
  - Owner: @miladm
- silu.out #3776
- silu_backward #3777
- sin
  - Status: Merged
  - PR: Full codegen sin, sinh, and tan #3581
  - Owner: @wonjoolee95
- sinh
  - Status: Merged
  - PR: Full codegen sin, sinh, and tan #3581
  - Owner: @wonjoolee95
- slice.Tensor
- slogdet
  - Status: Blocked
  - PR: Full codegen for logdet #3576
  - Owner: @miladm
- smooth_l1_loss
- smooth_l1_loss_backward
- softplus
- softplus_backward
- softshrink
- softshrink_backward
- sort
- sort.stable
- split.Tensor
- split_with_sizes
- sqrt
  - Status: In progress
  - PR: TBD
  - Owner: @AlexWertheim
- squeeze
- squeeze.dim
- squeeze_
- squeeze_.dim
- stack
- std
- std.correction
- std.dim
- std_mean.correction
- sub.Scalar
- sub.Tensor
- sum
- sum.dim_IntList
- svd
- symeig
- t
- t_
- take
- tan
  - Status: Merged
  - PR: Full codegen sin, sinh, and tan #3581
  - Owner: @wonjoolee95
- tanh
  - Status: Merged
  - PR: Codegen for Tanh #3724
  - Owner: @steventk-g
- tanh_backward
- threshold
- threshold_backward
- topk
- trace
- transpose.int
- transpose_
- triangular_solve
- tril
- tril_
- triu
- triu_
- trunc
- unbind.int
- uniform_
- unsqueeze
- unsqueeze_
- upsample_bilinear2d
- upsample_bilinear2d_backward
- upsample_nearest2d
- upsample_nearest2d.vec
- upsample_nearest2d_backward
- upsample_nearest2d_backward.vec
- var.correction
  - Status: In progress
  - PR: TBD
  - Owner: @yeounoh
- var_mean.correction
  - Status: In progress
  - PR: TBD
  - Owner: @yeounoh
- view
- where.self
- xlogy.Tensor
- zero_
Won't do's
This section lists ops that are not suitable for full codegen at the moment. Please add each op along with the corresponding reason.