DOI: 10.1145/3666025.3699339

AdaShadow: Responsive Test-time Model Adaptation in Non-stationary Mobile Environments

Published: 04 November 2024

Abstract

On-device adaptation to continual, unpredictable domain shifts is essential for mobile applications such as autonomous driving and augmented reality to deliver seamless user experiences in evolving environments. Test-time adaptation (TTA) emerges as a promising solution by tuning model parameters with unlabeled live data immediately before prediction. However, TTA's unique forward-backward-reforward pipeline notably increases latency over standard inference, undermining responsiveness in time-sensitive mobile applications. This paper presents AdaShadow, a responsive test-time adaptation framework for non-stationary mobile data distributions and resource dynamics that selectively updates only adaptation-critical layers. Although selective updating is an established tactic in generic on-device training, TTA's unsupervised and online setting poses unique challenges in estimating layer importance and latency, and in scheduling the optimal layer-update plan. AdaShadow addresses these challenges with a backpropagation-free assessor that rapidly identifies critical layers, a unit-based runtime predictor that accounts for resource dynamics in latency estimation, and an online scheduler for prompt layer-update planning. AdaShadow also incorporates a memory I/O-aware computation reuse scheme to further reduce latency in the reforward pass. Results show that AdaShadow achieves the best accuracy-latency balance under continual shifts. At low memory and energy cost, AdaShadow delivers a 2x to 3.5x (millisecond-level) speedup over state-of-the-art TTA methods at comparable accuracy, and a 14.8% to 25.4% accuracy gain over efficient supervised methods at similar latency.
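The selective-update idea in the abstract can be sketched in a few lines: minimize prediction entropy on unlabeled test inputs (in the style of Tent-like TTA), but apply the gradient step only to layers flagged as adaptation-critical while the rest stay frozen. The two-layer toy model, the layer names, and the numerical gradient below are illustrative assumptions for a self-contained sketch, not AdaShadow's actual implementation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def entropy_loss(params, x):
    # Toy two-layer model standing in for a deep network.
    h = np.tanh(x @ params["layer0"])
    p = softmax(h @ params["layer1"])
    # Mean prediction entropy over the unlabeled test batch.
    return -np.mean(np.sum(p * np.log(p + 1e-12), axis=-1))

def tta_step(params, x, critical, lr=0.05, eps=1e-5):
    """One selective test-time adaptation step: descend the entropy
    loss on the layers flagged adaptation-critical; freeze the rest."""
    new = {k: v.copy() for k, v in params.items()}
    for name in critical:
        w = new[name]
        grad = np.zeros_like(w)
        # Central-difference gradient keeps this sketch dependency-free;
        # a real system would backpropagate instead.
        for idx in np.ndindex(*w.shape):
            old = w[idx]
            w[idx] = old + eps
            lp = entropy_loss(new, x)
            w[idx] = old - eps
            lm = entropy_loss(new, x)
            w[idx] = old
            grad[idx] = (lp - lm) / (2 * eps)
        new[name] = w - lr * grad
    return new

rng = np.random.default_rng(0)
params = {"layer0": rng.normal(size=(4, 8)), "layer1": rng.normal(size=(8, 3))}
x = rng.normal(size=(16, 4))          # unlabeled "live" test batch
before = entropy_loss(params, x)
adapted = tta_step(params, x, critical=["layer1"])  # update only one layer
after = entropy_loss(adapted, x)      # entropy should drop; layer0 untouched
```

Updating only the critical subset skips the backward pass (and activation storage) for the frozen layers, which is the source of the latency savings the abstract claims; the paper's contribution is deciding that subset, and its latency budget, online and without labels.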

      Published In

      SenSys '24: Proceedings of the 22nd ACM Conference on Embedded Networked Sensor Systems
      November 2024
      950 pages
ISBN: 9798400706974
DOI: 10.1145/3666025

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Author Tags

      1. latency-efficient test-time adaptation
      2. mobile environments

      Qualifiers

      • Research-article

      Acceptance Rates

      Overall Acceptance Rate 174 of 867 submissions, 20%
