DOI: 10.1145/3384419.3430450

Teacher, trainee, and student based knowledge distillation technique for monitoring indoor activities: poster abstract

Published: 16 November 2020

Abstract

Recent years have witnessed unprecedented growth in sensor-based indoor activity recognition, and incorporating Deep Neural Network (DNN) models has significantly improved recognition performance for indoor activities. In this paper, we propose a knowledge distillation based, economical, and efficient indoor activity recognition approach for low-cost, resource-constrained devices. We transfer knowledge from a teacher and a trainee (both cumbersome DNN models) to train a student (a compressed DNN model). Initially, the student and the trainee are both beginners, and the trainee helps the student learn from the teacher. After a certain number of steps, the student is mature enough to learn directly from the teacher. We also introduce an early halting mechanism that simultaneously reduces the floating-point operations and the training time of the student model.
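
The poster does not include an implementation; the sketch below is a minimal, hypothetical PyTorch rendering of the described scheme, in which a trainee mediates the student's early learning from the teacher, the student later distills directly from the teacher, and training halts early once the student's loss plateaus. The model architectures, the maturity threshold `warmup_steps`, the temperature `T`, the weighting `alpha`, and the plateau-based halting rule are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch of teacher-trainee-student distillation with early halting.
# All hyper-parameters and the halting criterion are assumptions for illustration.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, guide_logits, labels, T=4.0, alpha=0.7):
    """Soft-target distillation loss combined with hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(guide_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

def train_student(teacher, trainee, student, loader, epochs=20,
                  warmup_steps=1000, patience=3, device="cpu"):
    """Trainee guides the student's early steps; the student later learns
    directly from the teacher. Training halts once the loss stops improving."""
    teacher.eval()  # the teacher (cumbersome model) stays fixed
    opt_trainee = torch.optim.Adam(trainee.parameters(), lr=1e-3)
    opt_student = torch.optim.Adam(student.parameters(), lr=1e-3)

    step, best_loss, stale_epochs = 0, float("inf"), 0
    for epoch in range(epochs):
        epoch_loss = 0.0
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            with torch.no_grad():
                t_logits = teacher(x)

            # The trainee keeps learning from the teacher throughout.
            trainee_logits = trainee(x)
            loss_trainee = kd_loss(trainee_logits, t_logits, y)
            opt_trainee.zero_grad()
            loss_trainee.backward()
            opt_trainee.step()

            # Early on the student follows the trainee; once "mature",
            # it distills directly from the teacher.
            guide = trainee_logits.detach() if step < warmup_steps else t_logits
            s_logits = student(x)
            loss_student = kd_loss(s_logits, guide, y)
            opt_student.zero_grad()
            loss_student.backward()
            opt_student.step()

            epoch_loss += loss_student.item()
            step += 1

        # Early halting: stop when the student's loss plateaus, saving
        # floating-point operations and training time.
        epoch_loss /= max(len(loader), 1)
        if epoch_loss < best_loss - 1e-4:
            best_loss, stale_epochs = epoch_loss, 0
        else:
            stale_epochs += 1
            if stale_epochs >= patience:
                break
    return student
```

In this sketch the switch from trainee guidance to teacher guidance is a fixed step count and halting is triggered by a loss plateau; the abstract does not specify the actual maturity or halting criteria used by the authors.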

    Published In

    SenSys '20: Proceedings of the 18th Conference on Embedded Networked Sensor Systems
    November 2020
    852 pages
    ISBN:9781450375900
    DOI:10.1145/3384419
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Author Tags

    1. deep neural network
    2. knowledge distillation
    3. sensors

    Qualifiers

    • Short-paper

    Funding Sources

    • SERB

    Acceptance Rates

    Overall Acceptance Rate 174 of 867 submissions, 20%
