
When Object Detection Meets Knowledge Distillation: A Survey

Published: 01 August 2023

Abstract

Object detection (OD) is a crucial computer vision task for which many algorithms and models have been developed over the years. While the performance of current OD models has improved, they have also grown more complex, and their large parameter counts make them impractical for industrial applications. To tackle this problem, knowledge distillation (KD) was proposed in 2015 for image classification and subsequently extended to other visual tasks, owing to its ability to transfer the knowledge learned by complex teacher models to lightweight student models. This paper presents a comprehensive survey of KD-based OD models developed in recent years, with the aim of providing researchers with an overview of recent progress in the field. We conduct an in-depth analysis of existing works, highlighting their advantages and limitations, and explore future research directions to inspire the design of models for related tasks. We summarize the basic principles of designing KD-based OD models and describe related KD-based OD tasks, including performance improvement of lightweight models, catastrophic forgetting in incremental OD, small object detection, and weakly/semi-supervised OD. We also analyze novel distillation techniques, e.g., different types of distillation loss and forms of feature interaction between teacher and student models. Additionally, we provide an overview of extended applications of KD-based OD models to specific datasets, such as remote sensing images and 3D point clouds. Finally, we compare and analyze the performance of different models on several common datasets and discuss promising directions for solving specific OD problems.
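To make the core mechanism concrete, here is a minimal PyTorch sketch of the two loss families that recur throughout the surveyed works: the original soft-target (response-based) distillation loss proposed in 2015, and a plain feature-imitation loss of the kind used for teacher-student feature interaction. This is an illustrative sketch, not code from any surveyed model; the function names, the temperature `T`, the weight `alpha`, and the 1x1-convolution adapter are assumptions chosen for clarity.

```python
import torch.nn as nn
import torch.nn.functional as F

def soft_target_kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Response-based KD: hard-label cross-entropy plus a softened KL term
    that pulls the student's class distribution toward the teacher's.
    T and alpha are illustrative hyperparameters, not values from the survey."""
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # the T^2 factor keeps the KD gradient scale comparable to CE
    return alpha * ce + (1.0 - alpha) * kd

def feature_imitation_loss(student_feat, teacher_feat, adapter):
    """Feature-based KD: project the (usually thinner) student feature map
    to the teacher's channel width and penalize the L2 gap. Many surveyed
    detectors weight this loss with foreground masks or attention maps
    rather than applying it uniformly."""
    return F.mse_loss(adapter(student_feat), teacher_feat)

# Usage sketch (shapes and channel widths are illustrative):
# adapter = nn.Conv2d(in_channels=128, out_channels=256, kernel_size=1)
# loss = soft_target_kd_loss(s_logits, t_logits.detach(), labels) \
#        + feature_imitation_loss(s_feat, t_feat.detach(), adapter)
```

In detection pipelines these terms are typically computed per anchor or per region proposal and added to the detector's usual classification and localization losses; detaching the teacher outputs keeps gradients from flowing into the frozen teacher.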


Information

Published In

IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 45, Issue 8
Aug. 2023
1338 pages

Publisher

IEEE Computer Society

United States

Publication History

Published: 01 August 2023

Qualifiers

  • Research-article


Bibliometrics

Article Metrics

  • Downloads (last 12 months): 0
  • Downloads (last 6 weeks): 0
Reflects downloads up to 02 Mar 2025


Cited By

  • SEGSID: A Semantic-Guided Framework for Sonar Image Despeckling. IEEE Transactions on Image Processing, vol. 34, pp. 652–666, Jan. 2025. DOI: 10.1109/TIP.2024.3512378
  • TSID-Net: A two-stage single image dehazing framework with style transfer and contrastive knowledge transfer. The Visual Computer: International Journal of Computer Graphics, vol. 41, no. 3, pp. 1921–1938, Feb. 2025. DOI: 10.1007/s00371-024-03511-2
  • Multi-Scale Feature Attention Fusion for Image Splicing Forgery Detection. ACM Transactions on Multimedia Computing, Communications, and Applications, vol. 21, no. 1, pp. 1–20, Oct. 2024. DOI: 10.1145/3698770
  • HideMIA: Hidden Wavelet Mining for Privacy-Enhancing Medical Image Analysis. In Proceedings of the 32nd ACM International Conference on Multimedia, pp. 8110–8119, Oct. 2024. DOI: 10.1145/3664647.3680806
  • DCMSTRD: End-to-End Dense Captioning via Multi-Scale Transformer Decoding. IEEE Transactions on Multimedia, vol. 26, pp. 7581–7593, Feb. 2024. DOI: 10.1109/TMM.2024.3369863
  • Context Matters: Distilling Knowledge Graph for Enhanced Object Detection. IEEE Transactions on Multimedia, vol. 26, pp. 487–500, Jan. 2024. DOI: 10.1109/TMM.2023.3266897
  • Real-Time Adaptive Partition and Resource Allocation for Multi-User End-Cloud Inference Collaboration in Mobile Environment. IEEE Transactions on Mobile Computing, vol. 23, no. 12, pp. 13076–13094, Dec. 2024. DOI: 10.1109/TMC.2024.3430103
  • Monitoring-Based Traffic Participant Detection in Urban Mixed Traffic: A Novel Dataset and a Tailored Detector. IEEE Transactions on Intelligent Transportation Systems, vol. 25, no. 1, pp. 189–202, Jan. 2024. DOI: 10.1109/TITS.2023.3304288
  • Neighborhood Multi-Compound Transformer for Point Cloud Registration. IEEE Transactions on Circuits and Systems for Video Technology, vol. 34, no. 9, pp. 8469–8480, Sep. 2024. DOI: 10.1109/TCSVT.2024.3383071
  • End-Edge-Cloud Collaborative Computing for Deep Learning: A Comprehensive Survey. IEEE Communications Surveys & Tutorials, vol. 26, no. 4, pp. 2647–2683, Oct. 2024. DOI: 10.1109/COMST.2024.3393230
