Easy Interface and Control of Tele-education Robots

Published in International Journal of Social Robotics

Abstract

In this paper, we propose a new form of teaching robot system for English classes that uses a tele-operated robot controlled by a teacher from a remote site. A dedicated operation interface built on non-contact vision recognition technologies allows the teacher to control the robot easily from a distance while giving lectures. The robot mechanism, combined with a 3D facial avatar, also enables dynamic behavior and movement to be controlled from the remote site. Because the robot can act in a human-like way, students show great interest and feel at ease learning English from it. In a field pilot study, participating elementary school students improved their scores on standardized tests, which suggests that the tele-operated robot system can contribute effectively to education, particularly English education.
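The full text is not included in this preview, so the sketch below is only a loose illustration of the idea summarized in the abstract: a vision-based tracker estimates the teacher's head pose without contact, and those estimates are mapped to motion commands for the remote robot and its avatar. The names HeadPose and pose_to_command, the gain, and the angle limit are assumptions made for illustration and do not come from the paper.

# Hypothetical sketch: one way a non-contact vision interface could map a
# teacher's tracked head pose to pan/tilt commands for a tele-operated robot.
# All names and parameters here are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class HeadPose:
    """Head orientation estimated by a vision-based tracker, in degrees."""
    yaw: float    # left/right rotation of the teacher's head
    pitch: float  # up/down rotation of the teacher's head


def pose_to_command(pose: HeadPose, gain: float = 0.8,
                    limit: float = 45.0) -> tuple[float, float]:
    """Scale the estimated head pose and clamp it to the robot's joint range."""
    pan = max(-limit, min(limit, gain * pose.yaw))
    tilt = max(-limit, min(limit, gain * pose.pitch))
    return pan, tilt


if __name__ == "__main__":
    # A tracker would supply poses frame by frame; a fixed sample is used here.
    pan, tilt = pose_to_command(HeadPose(yaw=20.0, pitch=-10.0))
    print(f"pan={pan:.1f} deg, tilt={tilt:.1f} deg")  # pan=16.0 deg, tilt=-8.0 deg

In a real deployment the pose estimates would arrive continuously from the vision tracker and the resulting commands would be streamed over the network to the robot at the classroom site.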

Acknowledgements

We are grateful to John Stalbaum for his help and insightful discussions. This research was performed for the Intelligent Robotics Development Program, one of the 21st Century Frontier R&D Programs funded by the Ministry of Knowledge and Economy of Korea.

Author information

Corresponding author

Correspondence to Sang-Seok Yun.

About this article

Cite this article

Yun, SS., Kim, M. & Choi, MT. Easy Interface and Control of Tele-education Robots. Int J of Soc Robotics 5, 335–343 (2013). https://doi.org/10.1007/s12369-013-0192-0

