Object detection has recently experienced substantial progress. Yet, the widely adopted horizontal bounding box representation is not appropriate for ubiquitous oriented objects such as objects in aerial images and scene texts. In this paper, we propose a simple yet effective framework to detect multi-oriented objects. Instead of directly regressing the four vertices, we glide the vertex of the horizontal bounding box on each corresponding side to accurately describe a multi-oriented object. Specifically, we regress four length ratios characterizing the relative gliding offset on each corresponding side. This facilitates offset learning and avoids the confusion of sequential label points for oriented objects. To further remedy this confusion for nearly horizontal objects, we also introduce an obliquity factor based on the area ratio between an object and its horizontal bounding box, which guides the selection of horizontal or oriented detection for each object. We add these five extra target variables to the regression head of Faster R-CNN, incurring negligible extra computation time. Extensive experimental results demonstrate that, without bells and whistles, the proposed method achieves superior performance on multiple multi-oriented object detection benchmarks, including object detection in aerial images, scene text detection, and pedestrian detection in fisheye images.
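The target encoding described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the quadrilateral's four vertices touch the top, right, bottom, and left sides of its horizontal bounding box, and measures each gliding ratio clockwise from the preceding corner of that box; the function name `encode_gliding_vertex` is our own.

```python
import numpy as np

def encode_gliding_vertex(quad):
    """Encode an oriented quadrilateral as a horizontal box, four gliding
    length ratios, and an obliquity factor (illustrative sketch only).

    quad: (4, 2) array of vertices in sequential (clockwise or
    counter-clockwise) order, assumed to touch the four sides of the
    horizontal bounding box.
    """
    quad = np.asarray(quad, dtype=float)
    xmin, ymin = quad.min(axis=0)
    xmax, ymax = quad.max(axis=0)
    w, h = xmax - xmin, ymax - ymin

    top    = quad[np.argmin(quad[:, 1])]  # vertex on the top side
    right  = quad[np.argmax(quad[:, 0])]  # vertex on the right side
    bottom = quad[np.argmax(quad[:, 1])]  # vertex on the bottom side
    left   = quad[np.argmin(quad[:, 0])]  # vertex on the left side

    # Length ratios: relative offset of each gliding vertex along its
    # side, measured clockwise from the preceding corner of the box.
    alphas = np.array([
        (top[0] - xmin) / w,      # from the top-left corner
        (right[1] - ymin) / h,    # from the top-right corner
        (xmax - bottom[0]) / w,   # from the bottom-right corner
        (ymax - left[1]) / h,     # from the bottom-left corner
    ])

    # Obliquity factor: area(quad) / area(horizontal box),
    # with the quadrilateral area from the shoelace formula.
    x, y = quad[:, 0], quad[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    r = area / (w * h)
    return (xmin, ymin, xmax, ymax), alphas, r
```

For a square rotated by 45 degrees, every gliding ratio is 0.5 and the obliquity factor is 0.5; for an axis-aligned rectangle the obliquity factor approaches 1, which is where the factor would steer the detector toward the plain horizontal box.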