[Relay] Fix an adaptive_max_pool1d operator conversion bug #15386
Conversation
Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from Reviewers by @-ing them in a comment. Generated by tvm-bot.

cc @jikechao
@haoyang9804 LGTM, thanks for the fix.
@echuraev @Hzfengsy @vvchernov

Please add a regression test for it.
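A minimal regression-test sketch along those lines, assuming the usual PyTorch-frontend test pattern; the input name, shapes, and tolerances below are illustrative, not taken from the actual PR:

```python
# Hypothetical regression test for adaptive_max_pool1d conversion.
# Input name "input0", shapes, and tolerances are illustrative assumptions.
import torch
import tvm
import tvm.testing
from tvm import relay
from tvm.contrib import graph_executor


def test_adaptive_max_pool1d():
    class Model(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.pool = torch.nn.AdaptiveMaxPool1d(output_size=5)

        def forward(self, x):
            return self.pool(x)

    model = Model().eval()
    inp = torch.rand(1, 3, 16)
    scripted = torch.jit.trace(model, inp)

    # Convert the traced module to Relay.
    mod, params = relay.frontend.from_pytorch(scripted, [("input0", list(inp.shape))])

    # Build and run on CPU, then compare against the PyTorch result.
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target="llvm", params=params)
    rt = graph_executor.GraphModule(lib["default"](tvm.cpu()))
    rt.set_input("input0", inp.numpy())
    rt.run()

    torch_out = model(inp).detach().numpy()
    tvm.testing.assert_allclose(rt.get_output(0).numpy(), torch_out, rtol=1e-5, atol=1e-5)
```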
On one hand LGTM; on the other hand, it concerns me that the fix for a specific op was done at a very high level (and the CI test was constructed for that specific op, not for all cases). It looks like it has been assumed that outputs exist at any …

I agree that the "fix for a specific op was done at a very high level". That is why I made the fix specific only to the buggy PyTorch op; this is only a workaround for now. I believe there are deeper issues that make this bug happen.
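For reference, and without claiming this is the root cause discussed above: the PyTorch op has both a single-output and a two-output (values plus argmax indices) form, which is the kind of detail a frontend's output handling has to account for. A quick PyTorch-only illustration with arbitrary shapes:

```python
# Illustration only: adaptive_max_pool1d has a two-output form in PyTorch.
import torch
import torch.nn.functional as F

x = torch.rand(1, 3, 16)

values = F.adaptive_max_pool1d(x, output_size=5)                              # single output
values2, indices = F.adaptive_max_pool1d(x, output_size=5, return_indices=True)

print(values.shape)   # torch.Size([1, 3, 5])
print(indices.shape)  # torch.Size([1, 3, 5]) -- argmax positions along the length axis
```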
cc @vvchernov
echuraev left a comment:
LGTM. Thanks!
This is a code patch fixing issue #15004. Please refer to my comments in the linked issue for the root cause.