
AAAI 2022

LGD: Label-Guided Self-Distillation for Object Detection

Conference Paper AAAI Technical Track on Computer Vision III Artificial Intelligence

Abstract

In this paper, we propose the first self-distillation framework for general object detection, termed LGD (Label-Guided self-Distillation). Previous studies rely on a strong pretrained teacher to provide instructive knowledge that could be unavailable in real-world scenarios. Instead, we generate instructive knowledge based only on student representations and regular labels. Our framework includes a sparse label-appearance encoder, an inter-object relation adapter, and an intra-object knowledge mapper that jointly form an implicit teacher during the training phase, dynamically dependent on labels and evolving student representations. They are trained end-to-end with the detector and discarded at inference. Experimentally, LGD obtains decent results on various detectors, datasets, and extended tasks like instance segmentation. For example, on the MS-COCO dataset, LGD improves RetinaNet with ResNet-50 under 2× single-scale training from 36.2% to 39.0% mAP (+2.8%). It also boosts much stronger detectors like FCOS with ResNeXt-101 DCN v2 under 2× multi-scale training from 46.1% to 47.9% (+1.8%). Compared with the classical teacher-based method FGFI, LGD not only performs better without requiring a pretrained teacher but also reduces training cost beyond inherent student learning by 51%. Code is available at https://github.com/megvii-research/LGD.
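As a rough illustration of the training-time data flow the abstract describes (label encoding → inter-object relations → mapping back onto student features → a distillation loss), here is a minimal NumPy sketch. All shapes, module internals, and function names are hypothetical toy stand-ins, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_labels(boxes, num_classes=3, dim=8):
    # Sparse label-appearance encoding (toy): embed each (class, box)
    # pair into a per-object vector via a random linear projection.
    feats = []
    proj = rng.standard_normal((num_classes + 4, dim))
    for cls, box in boxes:
        onehot = np.zeros(num_classes)
        onehot[cls] = 1.0
        feats.append(np.concatenate([onehot, box]) @ proj)
    return np.stack(feats)                      # (num_objects, dim)

def relation_adapter(obj_feats):
    # Inter-object relation modelling (toy): one softmax-attention step
    # over the object embeddings.
    scores = obj_feats @ obj_feats.T / np.sqrt(obj_feats.shape[1])
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ obj_feats                     # (num_objects, dim)

def knowledge_mapper(obj_feats, student_feat):
    # Intra-object knowledge mapping (toy stand-in): broadcast the mean
    # object embedding over the student's dense feature map to form the
    # implicit-teacher target.
    teacher = obj_feats.mean(axis=0)
    return np.broadcast_to(teacher, student_feat.shape)

def distill_loss(student_feat, teacher_feat):
    # Feature-imitation loss between student and implicit teacher.
    return float(np.mean((student_feat - teacher_feat) ** 2))

# Toy forward pass: two labelled objects, a 4x4x8 student feature map.
boxes = [(0, np.array([0.1, 0.1, 0.5, 0.5])),
         (2, np.array([0.4, 0.4, 0.9, 0.9]))]
obj = relation_adapter(encode_labels(boxes))
student = rng.standard_normal((4, 4, 8))
loss = distill_loss(student, knowledge_mapper(obj, student))
```

At inference, only the student detector would remain; everything above the loss is training-time machinery that gets discarded, mirroring the abstract's description.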

Authors

Keywords

No keywords are indexed for this paper.

Context

Venue
AAAI Conference on Artificial Intelligence
Archive span
1980-2026
Indexed papers
28718
Paper id
917206731915307528