Conversation

@padreofthegame
Contributor

Bug observed in TFLite frontend when working with quantized conv2d operator with parameter groups != 1

@tvm-bot
Collaborator

tvm-bot commented Aug 3, 2023

Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from Reviewers by @-ing them in a comment.

Generated by tvm-bot

@padreofthegame padreofthegame force-pushed the tflite_qnn_conv2d branch 2 times, most recently from 19014bd to 0a30d26 Compare August 4, 2023 10:58
@padreofthegame padreofthegame changed the title [Relay]{TFLite] Fix in qnn.conv2d when parameter groups not equal to 1 [Relay][TFLite] Fix in qnn.conv2d when parameter groups not equal to 1 Aug 4, 2023
@padreofthegame
Contributor Author

@tvm-bot rerun

@ekalda
Contributor

ekalda commented Aug 10, 2023

Thanks @padreofthegame for fixing this! Would you mind providing more detail in the commit message body as per our commit message guideline, as in, what is the expected behaviour and what was happening before that bugfix?

@padreofthegame
Contributor Author

Hello @ekalda, thank you for the comment.

I will try to explain my observations on this problem.

Basically, I was working with a simple quantized TFLite model with a conv2d and a bias_add layer, which worked fine with parameter groups == 1, but failed, for example, with groups == 2 with the error message:

TVMError: Traceback (most recent call last):
  12: TVMFuncCall
  11: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::TypedPackedFunc<tvm::IRModule (tvm::transform::Pass, tvm::IRModule)>::AssignTypedLambda<tvm::transform::{lambda(tvm::transform::Pass, tvm::IRModule)#7}>(tvm::transform::{lambda(tvm::transform::Pass, tvm::IRModule)#7}, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
  10: tvm::transform::Pass::operator()(tvm::IRModule) const
  9: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  8: tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  7: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  6: tvm::transform::ModulePassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
  5: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::TypedPackedFunc<tvm::IRModule (tvm::IRModule, tvm::transform::PassContext)>::AssignTypedLambda<tvm::relay::transform::InferType()::{lambda(tvm::IRModule, tvm::transform::PassContext const&)#1}>(tvm::relay::transform::InferType()::{lambda(tvm::IRModule, tvm::transform::PassContext const&)#1})::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
  4: tvm::relay::transform::InferType()::{lambda(tvm::IRModule, tvm::transform::PassContext const&)#1}::operator()(tvm::IRModule, tvm::transform::PassContext const&) const [clone .isra.0]
  3: tvm::DiagnosticContext::Render()
  2: tvm::DiagnosticRenderer::Render(tvm::DiagnosticContext const&)
  1: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::TypedPackedFunc<void (tvm::DiagnosticContext)>::AssignTypedLambda<tvm::TerminalRenderer(std::ostream&)::{lambda(tvm::DiagnosticContext const&)#1}>(tvm::TerminalRenderer(std::ostream&)::{lambda(tvm::DiagnosticContext const&)#1})::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
  0: tvm::ReportAt(tvm::DiagnosticContext const&, std::ostream&, tvm::Span const&, tvm::Diagnostic const&)
  File "/home/padre/TVM_full_repo/tvm/src/ir/diagnostic.cc", line 264
TVMError: The source maps are not populated for this module. Please use tvm.relay.transform.AnnotateSpans to attach source maps for error reporting.

Error: The Relay type checker is unable to show the following types match:
  Tensor[(16), float32]
  Tensor[(2), float32]
In particular:
  dimension 0 conflicts: 16 does not match 2.

While testing further I realized that the problem occurs every time the parameter groups differs from 1 or from the depth of the input tensor (the latter being equivalent to depthwise convolution). Looking deeper into the code, I tracked the problem down to qnn/op/convolution.cc, specifically this part:


AssignType(types[5], DataType::Float(32), weight->shape[i_axis] * weight->shape[o_axis],
           reporter);  // weight_scale

Since the types[5] parameter corresponds to kernel_scale and is a tensor of length num_kernels, weight->shape[i_axis] * weight->shape[o_axis] will generally differ from num_kernels, so the tensor type matching error occurs.

As a solution, I added an additional if statement that checks whether the convolution is depthwise (similar to the code in Conv2DRel) and keeps the current behaviour in that case, while otherwise the shape of types[5] is set according to the conv2d requirements.

@ekalda
Contributor

ekalda commented Aug 11, 2023

Thanks for the explanation, makes sense to me :) What I was asking was to amend the commit message body as I don't think "There was a bug" is particularly informative in the commit history and doesn't really comply with the commit message guidelines. Some version of what you wrote in the explaining comment (without the stack trace) would do :)

…to 1

Currently, qnn.conv2d only supports the parameter GROUPS being equal to 1 or to the depth of the input tensor,
while any other value (which is valid for the parameter GROUPS) results in a Relay type checker
mismatch error related to the types[5] argument.

This fix expands qnn.conv2d functionality for other valid values of parameter GROUPS.
Contributor

@ekalda ekalda left a comment

LGTM!

@ekalda ekalda merged commit 94f6b37 into apache:main Aug 14, 2023
@ekalda
Contributor

ekalda commented Aug 14, 2023

Thanks @padreofthegame!

@padreofthegame padreofthegame deleted the tflite_qnn_conv2d branch August 14, 2023 11:58