
Conversation

@zytx121 (Collaborator) commented on Oct 2, 2022:

  • Align test
  • Align train
  • UT

Experiments

rotated_retinanet_obb_csl_gaussian_r50_fpn_fp16_1x_dota_le90.py
  • Reported AP50: 69.51
  • AP50: 0.6804708695155224
  • AP75: 0.39561891292657414
  • mAP: 0.3920965305629459

@zytx121 added the dev-1.x label on Oct 2, 2022.

Tensor: Angle offset for each scale level.
Has shape (num_anchors * H * W, 1)
"""
angle_cls_inds = torch.argmax(angle_preds, dim=1)
Collaborator:

The shape in the returns docstring is wrong; I think the return here is (num_anchors * H * W). To support batch input like (B, N, coding_len), the argmax dim can be -1.

In RetinaNet, CSL directly assigns the angle into the rbox:

# `rbbox` has shape (N, 5)
# `ang` has shape (N)
rbbox[..., 4] = ang

Some angle-branch heads, such as Rotated FCOS and Rotated YOLOX, instead use concat to get the final bbox:

# `hbbox` has shape (N, 4)
# `ang` has shape (N, 1)
rbbox = torch.cat([hbbox, ang], dim=-1)
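
For concreteness, a small self-contained toy demo of both styles (shapes as above; values are dummies and N = 3 is arbitrary):

import torch

N = 3
hbbox = torch.rand(N, 4)   # horizontal box params per prediction
ang = torch.rand(N)        # decoded angles, shape (N,)

# RetinaNet/CSL style: write the angle into an existing rbox buffer.
# This needs `ang` with shape (N,).
rbbox = torch.empty(N, 5)
rbbox[..., :4] = hbbox
rbbox[..., 4] = ang

# Angle-branch style (Rotated FCOS / YOLOX): concat needs a trailing
# dim of size 1, i.e. `ang` with shape (N, 1).
rbbox2 = torch.cat([hbbox, ang.unsqueeze(-1)], dim=-1)  # (N, 5)

assert torch.allclose(rbbox, rbbox2)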

I think we can add a keepdim arg to the decode method and use it like this:

angle_cls_inds = torch.argmax(angle_preds, dim=-1, keepdim=keepdim)

When keepdim is True, the return shape is (N, 1) or (B, N, 1); otherwise it is (N,) or (B, N).
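
A minimal sketch of what the decode change could look like, assuming a CSL-style coder where coding_len is the number of angle bins; the bin-to-angle mapping below (le90 range, uniform bin width) is illustrative only, not the coder's exact formula:

import math
import torch

def decode(angle_preds: torch.Tensor, keepdim: bool = False) -> torch.Tensor:
    """Decode CSL angle logits back to continuous angles.

    Args:
        angle_preds: Logits of shape (N, coding_len) or (B, N, coding_len).
        keepdim: Whether to keep the trailing dim of size 1.

    Returns:
        (N,) / (B, N) when keepdim is False, else (N, 1) / (B, N, 1).
    """
    coding_len = angle_preds.size(-1)
    angle_cls_inds = torch.argmax(angle_preds, dim=-1, keepdim=keepdim)
    # Illustrative le90 mapping from bin index to radians; the real coder's
    # omega / angle_version settings determine the actual conversion.
    return angle_cls_inds.float() * (math.pi / coding_len) - math.pi / 2

With keepdim=False the result can be assigned into rbbox[..., 4] directly; with keepdim=True it concatenates onto an hbbox without an extra unsqueeze, so both head styles are served by one decode.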

Collaborator (Author):

Done.

@RangiLyu merged commit fa87c7c into open-mmlab:dev-1.x on Oct 9, 2022.
triple-Mu pushed a commit to triple-Mu/mmrotate that referenced this pull request on Jan 31, 2023:
* add UT

* fix lint

* update

* Update test_oriented_reppoints.py

* fix UT

* init

* Update angle_branch_retina_head.py

* add ut

* Update test_angle_coder.py

* add UT

* Update test_angle_branch_retina_head.py

* fix

* fix