  • Liu M, Suh S, Vargas J, Zhou B, Grünerbl A and Lukowicz P. (2025). A Wearable Multi-modal Edge-Computing System for Real-Time Kitchen Activity Recognition. Human Activity Recognition and Anomaly Detection. 10.1007/978-981-97-9003-6_9. (132-145).

    https://link.springer.com/10.1007/978-981-97-9003-6_9

  • Li F, Liu M, Kane S and Carrington P. (2024). A Contextual Inquiry of People with Vision Impairments in Cooking. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems. (1-14).

    https://doi.org/10.1145/3613904.3642233

  • Guo X, Wang Y, Cheng J and Chen Y. (2024). Contactless Activity Identification Using Commodity WiFi. Mobile Technologies for Smart Healthcare System Design. 10.1007/978-3-031-57345-3_2. (13-47).

    https://link.springer.com/10.1007/978-3-031-57345-3_2

  • Knoefel F, Wallace B, Thomas N, Sveistrup H, Goubran R and Laurin C. (2024). Activities of Daily Living. Supportive Smart Homes. 10.1007/978-3-031-37337-4_10. (113-125).

    https://link.springer.com/10.1007/978-3-031-37337-4_10

  • Li X, Yin M, Zhang Y, Yang P, Wan C, Guo X and Tan H. Back-Guard: Wireless Backscattering Based User Sensing With Parallel Attention Model. IEEE Transactions on Mobile Computing. 10.1109/TMC.2022.3215012. 22:12. (7466-7481).

    https://ieeexplore.ieee.org/document/9920948/

  • Ma N, Wu Z, Cheung Y, Guo Y, Gao Y, Li J and Jiang B. A Survey of Human Action Recognition and Posture Prediction. Tsinghua Science and Technology. 10.26599/TST.2021.9010068. 27:6. (973-1001).

    https://ieeexplore.ieee.org/document/9802874/

  • Wang L, Zhou Y, Li R and Ding L. (2022). A fusion of a deep neural network and a hidden Markov model to recognize the multiclass abnormal behavior of elderly people. Knowledge-Based Systems. 252:C. Online publication date: 27-Sep-2022.

    https://doi.org/10.1016/j.knosys.2022.109351

  • Liu M, Suh S, Zhou B, Gruenerbl A and Lukowicz P. (2022). Smart-Badge: A wearable badge with multi-modal sensors for kitchen activity recognition. Adjunct Proceedings of the 2022 ACM International Joint Conference on Pervasive and Ubiquitous Computing and the 2022 ACM International Symposium on Wearable Computers. (356-363).

    https://doi.org/10.1145/3544793.3560391

  • Bhattacharya S, Adaimi R and Thomaz E. (2022). Leveraging Sound and Wrist Motion to Detect Activities of Daily Living with Commodity Smartwatches. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. 6:2. (1-28). Online publication date: 4-Jul-2022.

    https://doi.org/10.1145/3534582

  • Hu Z, Zhang Y, Yu T and Pan S. (2022). VMA: Domain Variance- and Modality-Aware Model Transfer for Fine-Grained Occupant Activity Recognition. 2022 21st ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN). 10.1109/IPSN54338.2022.00028. 978-1-6654-9624-7. (259-270).

    https://ieeexplore.ieee.org/document/9826076/

  • Kresge K, Martino S, Zhao T and Wang Y. (2021). WiFi-based Contactless Gesture Recognition Using Lightweight CNN. 2021 IEEE 18th International Conference on Mobile Ad Hoc and Smart Systems (MASS). 10.1109/MASS52906.2021.00093. 978-1-6654-4935-9. (645-650).

    https://ieeexplore.ieee.org/document/9637809/

  • Su M, Tai P, Chen J, Hsieh Y, Lee S and Yeh Z. (2021). A Projection-Based Human Motion Recognition Algorithm Based on Depth Sensors. IEEE Sensors Journal. 10.1109/JSEN.2021.3079983. 21:15. (16990-16996).

    https://ieeexplore.ieee.org/document/9430536/

  • Nakamura K, Nitta N, Babaguchi N, Fujii K, Matsumura S and Nabata E. (2021). Semi-Supervised Temporal Segmentation of Manufacturing Work Video by Automatically Building a Hierarchical Tree of Category Labels. IEEE Access. 10.1109/ACCESS.2021.3076849. 9. (68017-68027).

    https://ieeexplore.ieee.org/document/9420050/

  • Boujebli M, Drira H, Mestiri M and Farah I. (2020). Rate-Invariant Modeling in Lie Algebra for Activity Recognition. Electronics. 10.3390/electronics9111888. 9:11. (1888).

    https://www.mdpi.com/2079-9292/9/11/1888

  • Kraus M, Schiller M, Behnke G, Bercher P, Dorna M, Dambier M, Glimm B, Biundo S and Minker W. (2020). "Was that successful?" On Integrating Proactive Meta-Dialogue in a DIY-Assistant using Multimodal Cues. Proceedings of the 2020 International Conference on Multimodal Interaction. (585-594).

    https://doi.org/10.1145/3382507.3418818

  • Bacharidis K and Argyros A. (2020). Improving Deep Learning Approaches for Human Activity Recognition based on Natural Language Processing of Action Labels. 2020 International Joint Conference on Neural Networks (IJCNN). 10.1109/IJCNN48605.2020.9207397. 978-1-7281-6926-2. (1-8).

    https://ieeexplore.ieee.org/document/9207397/

  • Yin M, Li X, Zhang Y, Yang P and Wan C. (2020). Back-Guard: Wireless Backscattering based User Activity Recognition and Identification with Parallel Attention Model. 2020 IEEE/ACM 28th International Symposium on Quality of Service (IWQoS). 10.1109/IWQoS49365.2020.9213006. 978-1-7281-6887-6. (1-10).

    https://ieeexplore.ieee.org/document/9213006/

  • Wu J, Harrison C, Bigham J and Laput G. (2020). Automated Class Discovery and One-Shot Interactions for Acoustic Activity Recognition. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. (1-14).

    https://doi.org/10.1145/3313831.3376875

  • Lilija K, Pohl H and Hornbæk K. (2020). Who Put That There? Temporal Navigation of Spatial Recordings by Direct Manipulation. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. (1-11).

    https://doi.org/10.1145/3313831.3376604

  • Sahu A, Bhattacharya R, Bhura P and Chowdhury A. (2020). Action Recognition from Egocentric Videos Using Random Walks. Proceedings of 3rd International Conference on Computer Vision and Image Processing. 10.1007/978-981-32-9291-8_31. (389-402).

    http://link.springer.com/10.1007/978-981-32-9291-8_31

  • Wang Z, Lin L, Wang R, Wei B, Xu Y, Jiang Z and Li R. (2020). Activity Recognition and Classification via Deep Neural Networks. Testbeds and Research Infrastructures for the Development of Networks and Communications. 10.1007/978-3-030-43215-7_15. (213-228).

    http://link.springer.com/10.1007/978-3-030-43215-7_15

  • Wang L, Ding Z, Tao Z, Liu Y and Fu Y. (2019). Generative Multi-View Human Action Recognition. 2019 IEEE/CVF International Conference on Computer Vision (ICCV). 10.1109/ICCV.2019.00631. 978-1-7281-4803-8. (6211-6220).

    https://ieeexplore.ieee.org/document/9008775/

  • Diete A and Stuckenschmidt H. (2019). Fusing Object Information and Inertial Data for Activity Recognition. Sensors. 10.3390/s19194119. 19:19. (4119).

    https://www.mdpi.com/1424-8220/19/19/4119

  • Takeda S, Lago P, Okita T and Inoue S. (2019). Reduction of marker-body matching work in activity recognition using motion capture. Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers. (835-842).

    https://doi.org/10.1145/3341162.3345591

  • Sanal Kumar K and Bhavani R. (2017). Human activity recognition in egocentric video using PNN, SVM, kNN and SVM+kNN classifiers. Cluster Computing. 10.1007/s10586-017-1131-x. 22:S5. (10577-10586). Online publication date: 1-Sep-2019.

    http://link.springer.com/10.1007/s10586-017-1131-x

  • Andò B, Baglio S, Castorina S, Crispino R and Marletta V. (2019). Advanced Solutions Aimed at the Monitoring of Falls and Human Activities for the Elderly Population. Technologies. 10.3390/technologies7030059. 7:3. (59).

    https://www.mdpi.com/2227-7080/7/3/59

  • Arshad S, Feng C, Yu R and Liu Y. (2019). Leveraging Transfer Learning in Multiple Human Activity Recognition Using WiFi Signal. 2019 IEEE 20th International Symposium on "A World of Wireless, Mobile and Multimedia Networks" (WoWMoM). 10.1109/WoWMoM.2019.8793019. 978-1-7281-0270-2. (1-10).

    https://ieeexplore.ieee.org/document/8793019/

  • Ramirez-Amaro K, Dean-Leon E, Bergner F and Cheng G. (2019). A Semantic-Based Method for Teaching Industrial Robots New Tasks. KI - Künstliche Intelligenz. 10.1007/s13218-019-00582-5. 33:2. (117-122). Online publication date: 1-Jun-2019.

    http://link.springer.com/10.1007/s13218-019-00582-5

  • Kubota A, Iqbal T, Shah J and Riek L. (2019). Activity recognition in manufacturing: The roles of motion capture and sEMG+inertial wearables in detecting fine vs. gross motion. 2019 International Conference on Robotics and Automation (ICRA). (6533-6539).

    https://doi.org/10.1109/ICRA.2019.8793954

  • Cherian A and Gould S. (2019). Second-order Temporal Pooling for Action Recognition. International Journal of Computer Vision. 127:4. (340-362). Online publication date: 1-Apr-2019.

    https://doi.org/10.1007/s11263-018-1111-5

  • Flores Vázquez C, Aranda J, Icaza D, Pulla S, Flores-Vázquez M and Cordova N. (2019). Human Activity Recognition without Vision Tracking. Intelligent Video Surveillance. 10.5772/intechopen.82660.

    https://www.intechopen.com/books/intelligent-video-surveillance/human-activity-recognition-without-vision-tracking

  • Diete A, Sztyler T and Stuckenschmidt H. (2019). Vision and Acceleration Modalities: Partners for Recognizing Complex Activities. 2019 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops). 10.1109/PERCOMW.2019.8730690. 978-1-5386-9151-9. (101-106).

    https://ieeexplore.ieee.org/document/8730690/

  • Chen L, Nakamura Y, Kondo K and Mayol-Cuevas W. (2019). Hotspot Modeling of Hand-Machine Interaction Experiences from a Head-Mounted RGB-D Camera. IEICE Transactions on Information and Systems. 10.1587/transinf.2018EDP7146. E102.D:2. (319-330). Online publication date: 1-Feb-2019.

    https://www.jstage.jst.go.jp/article/transinf/E102.D/2/E102.D_2018EDP7146/_article

  • Uddin M, Khaksar W and Torresen J. (2018). Activity Recognition Using Deep Recurrent Neural Network on Translation and Scale-Invariant Features. 2018 25th IEEE International Conference on Image Processing (ICIP). 10.1109/ICIP.2018.8451319. 978-1-4799-7061-2. (475-479).

    https://ieeexplore.ieee.org/document/8451319/

  • Wu T, Lin J, Wang T, Hu C, Niebles J and Sun M. (2018). Liquid Pouring Monitoring via Rich Sensory Inputs. Computer Vision – ECCV 2018. (352-369).

    https://doi.org/10.1007/978-3-030-01252-6_21

  • Clapés A, Pardo À, Pujol Vila O and Escalera S. (2018). Action detection fusing multiple Kinects and a WIMU. Machine Vision and Applications. 29:5. (765-788). Online publication date: 1-Jul-2018.

    https://doi.org/10.1007/s00138-018-0931-1

  • Garcia-Hernando G, Yuan S, Baek S and Kim T. (2018). First-Person Hand Action Benchmark with RGB-D Videos and 3D Hand Pose Annotations. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 10.1109/CVPR.2018.00050. 978-1-5386-6420-9. (409-419).

    https://ieeexplore.ieee.org/document/8578148/

  • Ali H, Moftah H and Youssif A. (2018). Depth-based human activity recognition: A comparative perspective study on feature extraction. Future Computing and Informatics Journal. 10.1016/j.fcij.2017.11.002. 3:1. (51-67). Online publication date: 1-Jun-2018.

    https://linkinghub.elsevier.com/retrieve/pii/S231472881730051X

  • Saudabayev A, Rysbek Z, Khassenova R and Varol H. (2018). Human grasping database for activities of daily living with depth, color and kinematic data streams. Scientific Data. 10.1038/sdata.2018.101. 5:1.

    https://www.nature.com/articles/sdata2018101

  • Nguyen V, Ibrahim M, Rupavatharam S, Jawahar M, Gruteser M and Howard R. (2018). Eyelight: Light-and-Shadow-Based Occupancy Estimation and Room Activity Recognition. IEEE INFOCOM 2018 - IEEE Conference on Computer Communications. (351-359).

    https://doi.org/10.1109/INFOCOM.2018.8485867

  • Korpela J and Maekawa T. (2018). Privacy preserving recognition of object-based activities using near-infrared reflective markers. Personal and Ubiquitous Computing. 22:2. (365-377). Online publication date: 1-Apr-2018.

    https://doi.org/10.1007/s00779-017-1070-9

  • Hsieh Y and Jeng Y. Development of Home Intelligent Fall Detection IoT System Based on Feedback Optical Flow Convolutional Neural Network. IEEE Access. 10.1109/ACCESS.2017.2771389. 6. (6048-6057).

    http://ieeexplore.ieee.org/document/8101471/

  • Li L, Bai R, Xie B, Peng Y, Wang A, Wang W, Jiang B, Liang J and Chen X. R&P: An Low-Cost Device-Free Activity Recognition for E-Health. IEEE Access. 10.1109/ACCESS.2017.2749323. 6. (81-90).

    http://ieeexplore.ieee.org/document/8107493/

  • Gu Y, Sheng W, Crick C and Ou Y. (2018). Automated assembly skill acquisition and implementation through human demonstration. Robotics and Autonomous Systems. 10.1016/j.robot.2017.10.002. 99. (1-16). Online publication date: 1-Jan-2018.

    https://linkinghub.elsevier.com/retrieve/pii/S0921889016303888

  • Yamazaki W, Ding M, Takamatsu J and Ogasawara T. (2017). Hand pose estimation and motion recognition using egocentric RGB-D video. 2017 IEEE International Conference on Robotics and Biomimetics (ROBIO). 10.1109/ROBIO.2017.8324409. 978-1-5386-3742-5. (147-152).

    http://ieeexplore.ieee.org/document/8324409/

  • Hu J, Zheng W, Lai J and Zhang J. (2017). Jointly Learning Heterogeneous Features for RGB-D Activity Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence. 39:11. (2186-2200). Online publication date: 1-Nov-2017.

    https://doi.org/10.1109/TPAMI.2016.2640292

  • Hashimoto A, Fujino T, Harashima J, Iiyama M and Minoh M. (2017). Learning Food Appearance by a Supervision with Recipe Text. Proceedings of the 9th Workshop on Multimedia for Cooking and Eating Activities in conjunction with The 2017 International Joint Conference on Artificial Intelligence. (39-44).

    https://doi.org/10.1145/3106668.3106675

  • Yu B and Chan K. (2017). Discovering Knowledge by Behavioral Analytics for Elderly Care. 2017 IEEE International Conference on Big Knowledge (ICBK). 10.1109/ICBK.2017.18. 978-1-5386-3120-1. (284-289).

    http://ieeexplore.ieee.org/document/8023431/

  • Arshad S, Feng C, Liu Y, Hu Y, Yu R, Zhou S and Li H. (2017). Wi-chase: A WiFi based human activity recognition system for sensorless environments. 2017 IEEE 18th International Symposium on "A World of Wireless, Mobile and Multimedia Networks" (WoWMoM). 10.1109/WoWMoM.2017.7974315. 978-1-5386-2723-5. (1-6).

    http://ieeexplore.ieee.org/document/7974315/

  • Uddin M. (2017). Human activity recognition using segmented body part and body joint features with hidden Markov models. Multimedia Tools and Applications. 76:11. (13585-13614). Online publication date: 1-Jun-2017.

    https://doi.org/10.1007/s11042-016-3742-2

  • Emi I, Mondol M and Stankovic J. QuActive. Proceedings of the 8th International Conference on Cyber-Physical Systems. (281-291).

    https://doi.org/10.1145/3055004.3055005

  • Wang Z, Yang Z and Dong T. (2017). A Review of Wearable Technologies for Elderly Care that Can Accurately Track Indoor Position, Recognize Physical Activities and Monitor Vital Signs in Real Time. Sensors. 10.3390/s17020341. 17:2. (341).

    http://www.mdpi.com/1424-8220/17/2/341

  • Uddin M, Hassan M, Almogren A, Alamri A, Alrubaian M and Fortino G. (2017). Facial Expression Recognition Utilizing Local Direction-Based Robust Features and Deep Belief Network. IEEE Access. 10.1109/ACCESS.2017.2676238. 5. (4525-4536).

    http://ieeexplore.ieee.org/document/7867858/

  • Nakahara K, Yamaguchi H and Higashino T. In-home Activity and Micro-motion Logging Using Mobile Robot with Kinect. Adjunct Proceedings of the 13th International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services. (106-111).

    https://doi.org/10.1145/3004010.3004027

  • Awwad S and Piccardi M. (2016). Local depth patterns for fine-grained activity recognition in depth videos. 2016 International Conference on Image and Vision Computing New Zealand (IVCNZ). 10.1109/IVCNZ.2016.7804453. 978-1-5090-2748-4. (1-6).

    http://ieeexplore.ieee.org/document/7804453/

  • Ibrahim M, Nguyen V, Rupavatharam S, Jawahar M, Gruteser M and Howard R. Visible light based activity sensing using ceiling photosensors. Proceedings of the 3rd Workshop on Visible Light Communication Systems. (43-48).

    https://doi.org/10.1145/2981548.2981554

  • Ni B, Paramathayalan V, Li T and Moulin P. (2016). Multiple Granularity Modeling. International Journal of Computer Vision. 120:1. (28-43). Online publication date: 1-Oct-2016.

    https://doi.org/10.1007/s11263-016-0891-8

  • Mohanty A, Vaishnavi P, Jana P, Majumdar A, Ahmed A, Goswami T and Sahay R. (2016). Nrityabodha. Image Communication. 47:C. (529-548). Online publication date: 1-Sep-2016.

    https://doi.org/10.1016/j.image.2016.05.019

  • Kumar K and Bhavani R. Analysis of SVM and kNN Classifiers For Egocentric Activity Recognition. Proceedings of the International Conference on Informatics and Analytics. (1-5).

    https://doi.org/10.1145/2980258.2980433

  • Uddin M. (2016). A depth video-based facial expression recognition system utilizing generalized local directional deviation-based binary pattern feature discriminant analysis. Multimedia Tools and Applications. 75:12. (6871-6886). Online publication date: 1-Jun-2016.

    https://doi.org/10.1007/s11042-015-2614-5

  • León O, Cuéllar M, Delgado M, Hunter G, Kymäläinen T and Herrera-Acuña R. (2016). A novel approach for real-time learning and recognition of human activities. Journal of Ambient Intelligence and Smart Environments. 8:4. (423-436). Online publication date: 1-Jan-2016.

    https://doi.org/10.3233/AIS-160387

  • Lea C, Reiter A, Vidal R and Hager G. (2016). Segmental Spatiotemporal CNNs for Fine-Grained Action Segmentation. Computer Vision – ECCV 2016. 10.1007/978-3-319-46487-9_3. (36-52).

    http://link.springer.com/10.1007/978-3-319-46487-9_3

  • Cantarero R, Santofimia M, Villa D, Requena R, Campos M, Florez-Revuelta F, Nebel J, Martinez-del-Rincon J and Lopez J. (2016). Kinect and Episodic Reasoning for Human Action Recognition. Distributed Computing and Artificial Intelligence, 13th International Conference. 10.1007/978-3-319-40162-1_16. (147-154).

    http://link.springer.com/10.1007/978-3-319-40162-1_16

  • Hashimoto A, Harashima J, Yamakata Y and Mori S. (2016). Design in Everyday Cooking: Challenges for Assisting with Menu Planning and Food Preparation. Cross-Cultural Design. 10.1007/978-3-319-40093-8_19. (182-192).

    http://link.springer.com/10.1007/978-3-319-40093-8_19

  • Yan Y, Ricci E, Liu G and Sebe N. (2015). Egocentric Daily Activity Recognition via Multitask Clustering. IEEE Transactions on Image Processing. 24:10. (2984-2995). Online publication date: 1-Oct-2015.

    https://doi.org/10.1109/TIP.2015.2438540

  • Matsumura Y, Hashimoto A, Mori S, Mukunoki M and Minoh M. (2015). Clustering scenes in cooking video guided by object access. 2015 IEEE International Conference on Multimedia & Expo Workshops (ICMEW). 10.1109/ICMEW.2015.7169812. 978-1-4799-7079-7. (1-6).

    https://ieeexplore.ieee.org/document/7169812

  • Hu J, Zheng W, Lai J and Zhang J. (2015). Jointly learning heterogeneous features for RGB-D activity recognition. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 10.1109/CVPR.2015.7299172. 978-1-4673-6964-0. (5344-5352).

    http://ieeexplore.ieee.org/document/7299172/

  • Cha E, Forlizzi J and Srinivasa S. Robots in the Home. Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction. (319-326).

    https://doi.org/10.1145/2696454.2696465

  • Wang S and Zhou G. (2015). A review on radio based activity recognition. Digital Communications and Networks. 10.1016/j.dcan.2015.02.006. 1:1. (20-29). Online publication date: 1-Feb-2015.

    https://linkinghub.elsevier.com/retrieve/pii/S2352864815000115

  • Lea C, Hager G and Vidal R. (2015). An Improved Model for Segmentation and Recognition of Fine-Grained Activities with Application to Surgical Training Tasks. Proceedings of the 2015 IEEE Winter Conference on Applications of Computer Vision. (1123-1129).

    https://doi.org/10.1109/WACV.2015.154

  • Cippitelli E, Gasparrini S, De Santis A, Montanini L, Raffaeli L, Gambi E and Spinsante S. (2015). Comparison of RGB-D Mapping Solutions for Application to Food Intake Monitoring. Ambient Assisted Living. 10.1007/978-3-319-18374-9_28. (295-305).

    https://link.springer.com/10.1007/978-3-319-18374-9_28

  • Yan Y, Ricci E, Liu G and Sebe N. (2015). Recognizing Daily Activities from First-Person Videos with Multi-task Clustering. Computer Vision -- ACCV 2014. 10.1007/978-3-319-16817-3_34. (522-537).

    https://link.springer.com/10.1007/978-3-319-16817-3_34

  • Qi J, Yang Z and Cristani M. (2014). Learning Dictionaries of Sparse Codes of 3D Movements of Body Joints for Real-Time Human Activity Understanding. PLoS ONE. 10.1371/journal.pone.0114147. 9:12. (e114147).

    https://dx.plos.org/10.1371/journal.pone.0114147

  • Uddin M. (2014). An Efficient Local Feature-Based Facial Expression Recognition System. Arabian Journal for Science and Engineering. 10.1007/s13369-014-1396-9. 39:11. (7885-7893). Online publication date: 1-Nov-2014.

    http://link.springer.com/10.1007/s13369-014-1396-9

  • Aggarwal J and Xia L. (2014). Human activity recognition from 3D data: A review. Pattern Recognition Letters. 10.1016/j.patrec.2014.04.011. 48. (70-80). Online publication date: 1-Oct-2014.

    https://linkinghub.elsevier.com/retrieve/pii/S0167865514001299

  • Wang Y, Liu J, Chen Y, Gruteser M, Yang J and Liu H. (2014). E-eyes. Proceedings of the 20th Annual International Conference on Mobile Computing and Networking. (617-628).

    https://doi.org/10.1145/2639108.2639143

  • Jalal A, Kamal S and Kim D. (2014). A Depth Video Sensor-Based Life-Logging Human Activity Recognition System for Elderly Care in Smart Indoor Environments. Sensors. 10.3390/s140711735. 14:7. (11735-11759).

    https://www.mdpi.com/1424-8220/14/7/11735

  • Ni B, Paramathayalan V and Moulin P. (2014). Multiple Granularity Analysis for Fine-Grained Action Detection. Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition. (756-763).

    https://doi.org/10.1109/CVPR.2014.102

  • Sato A, Watanabe K and Rekimoto J. Shadow Cooking. Proceedings of the 8th International Conference on Universal Access in Human-Computer Interaction. Aging and Assistive Environments - Volume 8515. (558-566).

    https://doi.org/10.1007/978-3-319-07446-7_54

  • Sato A, Watanabe K and Rekimoto J. MimiCook. Proceedings of the 8th International Conference on Tangible, Embedded and Embodied Interaction. (121-124).

    https://doi.org/10.1145/2540930.2540952

  • Blasco R, Marco Á, Casas R, Cirujano D and Picking R. (2014). A Smart Kitchen for Ambient Assisted Living. Sensors. 10.3390/s140101629. 14:1. (1629-1653).

    https://www.mdpi.com/1424-8220/14/1/1629

  • Song Y, Kautz H, Allen J, Swift M, Li Y, Luo J and Zhang C. (2013). A Markov logic framework for recognizing complex events from multimodal data. Proceedings of the 15th ACM International Conference on Multimodal Interaction. (141-148).

    https://doi.org/10.1145/2522848.2522883

  • Borges P, Conci N and Cavallaro A. (2013). Video-Based Human Behavior Understanding. IEEE Transactions on Circuits and Systems for Video Technology. 23:11. (1993-2008). Online publication date: 1-Nov-2013.

    https://doi.org/10.1109/TCSVT.2013.2270402

  • Gong X, Liu J, Zhou W and Liu J. (2013). Guided depth enhancement via a fast marching method. Image and Vision Computing. 31:10. (695-703). Online publication date: 1-Oct-2013.

    https://doi.org/10.1016/j.imavis.2013.07.006

  • Ye M, Zhang Q, Wang L, Zhu J, Yang R and Gall J. (2013). A Survey on Human Motion Analysis from Depth Data. Time-of-Flight and Depth Imaging. Sensors, Algorithms, and Applications. 10.1007/978-3-642-44964-2_8. (149-187).

    http://link.springer.com/10.1007/978-3-642-44964-2_8

  • Coronato A and Paragliola G. (2013). A Safe Kitchen for Cognitive Impaired People. Ubiquitous Computing and Ambient Intelligence. Context-Awareness and Context-Driven Interaction. 10.1007/978-3-319-03176-7_3. (17-25).

    http://link.springer.com/10.1007/978-3-319-03176-7_3