
WO2015064991A2 - Smart device enabling non-contact operation control and non-contact operation control method using same - Google Patents

Smart device enabling non-contact operation control and non-contact operation control method using same

Info

Publication number
WO2015064991A2
Authority
WO
WIPO (PCT)
Prior art keywords
target object
information
camera
shape
smart device
Prior art date
Application number
PCT/KR2014/010151
Other languages
French (fr)
Korean (ko)
Other versions
WO2015064991A3 (en)
Inventor
차재상 (Jae-sang Cha)
Original Assignee
서울과학기술대학교 산학협력단 (Seoul National University of Science and Technology Industry-Academic Cooperation Foundation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 서울과학기술대학교 산학협력단 (Seoul National University of Science and Technology Industry-Academic Cooperation Foundation)
Publication of WO2015064991A2 publication Critical patent/WO2015064991A2/en
Publication of WO2015064991A3 publication Critical patent/WO2015064991A3/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/042 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
    • G06F3/0425 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means using a single imaging device like a video camera for tracking the absolute position of a single or a plurality of objects with respect to an imaged reference surface, e.g. video camera imaging a display or a projection screen, a table or a wall surface, on which a computer generated image is displayed or projected
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16 Constructional details or arrangements
    • G06F1/1613 Constructional details or arrangements for portable computers
    • G06F1/1633 Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1684 Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
    • G06F1/1686 Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675 the I/O peripheral being an integrated camera
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/593 Depth or shape recovery from multiple images from stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/041 Indexing scheme relating to G06F3/041 - G06F3/045
    • G06F2203/04108 Touchless 2D-digitiser, i.e. digitiser detecting the X/Y position of the input means, finger or stylus, also when it does not touch, but is proximate to the digitiser's interaction surface without distance measurement in the Z direction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person

Definitions

  • the present invention relates to a smart device, and more particularly, to a smart device capable of non-contact motion control and a non-contact motion control method using the same.
  • MMI man-machine interface
  • the MMI method inputs information directly using an intuitive tool, the hand, so it requires no separate input device and can be used regardless of user training.
  • a touch screen in which a touch pad is combined with a display module has recently been adopted in smart devices to provide diverse user interface environments, allowing the user to input various commands or select functions while viewing the screen implemented on the display module.
  • the compact display module of the smart device displays the text, graphics, images, video, and Flash content of a web page densely, so a given image is usually smaller than the contact area between the user's finger and the touch screen. As a result, other images or text arranged adjacent to the intended image are often touched by mistake, and the finger frequently obscures the screen.
  • the touch screen can only be operated through direct contact with a finger or a touch pen, so frequent touches can damage the liquid crystal or degrade recognition in certain areas. A new input method is therefore needed that lets the user execute a desired function without touching the touch screen.
  • the present invention solves the above problems of the prior art. Its object is to provide a smart device capable of non-contact motion control, and a non-contact motion control method using the same, in which the non-contact motion of an object is recognized through at least three cameras so that motion recognition is more precise and, being contact-free, offers livelier user interaction beyond the input limits of the conventional touch method.
  • a smart device capable of non-contact motion control of the present invention includes a device main body, a camera sensor unit, a non-contact motion recognition unit, and a control unit.
  • the display unit is disposed on one surface of the device main body.
  • the camera sensor unit includes at least three cameras disposed at different corners of the device main body, and acquires image information on the target object through the camera.
  • the camera sensor unit may be disposed on the front, the rear, or both the front and rear of the device body, and the image information includes the captured image of the target object, the capture time, and the identity of the capturing camera.
  • the non-contact motion recognition unit analyzes the shape, position, and size change of the target object based on the image information acquired through the camera sensor unit, and generates non-contact motion recognition information according to the change analysis.
  • the non-contact motion recognition unit may include an object detector that computes a depth map by matching the per-camera image information and detects the target object, a shape recognition unit that recognizes the shape of the target object detected by the object detector, a position measuring unit that measures the position of the target object using its per-camera parallax averages, and an information generating unit that generates the non-contact motion recognition information from changes in the target object's shape, position, and size.
  • the controller displays a screen controlled according to the generated non-contact gesture recognition information on the display unit.
  • the smart device capable of non-contact motion control of the present invention may include a storage unit that stores shape information for recognizing the shape of the target object and preset information for generating non-contact motion recognition information according to changes in the target object's shape, position, and size.
  • it may also include a sensing device for determining whether the camera sensor unit should be activated.
  • the non-contact motion control method using a smart device comprises: acquiring image information on a target object through each of at least three cameras; computing a depth map by matching the per-camera image information and detecting the target object in the image information based on the computed depth map; measuring the position of the target object using its per-camera parallax averages; generating non-contact motion recognition information from changes in the target object's shape, position, and size; and controlling the screen according to the generated non-contact motion recognition information.
  • the method may further include recognizing a shape of the detected target object to recognize a specific shape or change in shape.
  • in the step of acquiring image information on the target object, when a sensing device provided on one side of the device detects no target object for a preset time, the cameras are switched to a standby state; when the sensing device then detects the presence of a target object, the cameras are activated.
  • the smart device recognizes the non-contact motion of an object with at least three cameras, so the object's motion can be recognized more precisely, giving the user a convenient and lively experience of the device in environments that demand diverse user interfaces.
  • conventionally, an object's motion could be recognized only when the target object was within close proximity of the smart device.
  • the present invention determines the target object's position using cameras, so there is no limit on the separation distance between the smart device and the target object, and continuous motion recognition is possible in real time. It also prevents damage to the mobile device's screen liquid crystal caused by frequent contact.
  • FIG. 1 is a schematic internal configuration diagram of a smart device according to an embodiment of the present invention.
  • FIG. 2 is an external perspective view illustrating an example in which a camera sensor unit is mounted on a smart device according to an embodiment of the present invention.
  • FIG. 3 is a conceptual diagram illustrating an example of target object positioning according to an embodiment of the present invention.
  • FIG. 4 is a flowchart sequentially illustrating a method for controlling contactless operation of a smart device according to an embodiment of the present invention.
  • first and second may be used to describe various components, but the components should not be limited by the terms. The terms are used only for the purpose of distinguishing one component from another.
  • the first component may be referred to as the second component, and similarly, the second component may also be referred to as the first component.
  • the smart device of the present invention may be a smartphone, a mobile phone, an MP3 player, a game console, a navigation device, a TV, a display device, a notebook computer, a tablet computer, a personal media player (PMP), a personal digital assistant (PDA), and the like.
  • PMP personal media player
  • PDA personal digital assistant
  • FIG. 1 is a schematic internal configuration diagram of a smart device according to an embodiment of the present invention.
  • the smart device of the present invention includes a device main body 100; a display unit 200 disposed on the front of the main body 100; a camera sensor unit 300 provided with at least three cameras 310 disposed at different ends of the main body 100, which acquires image information on the target object through each camera 310; a non-contact motion recognition unit 400 that analyzes changes in the shape, position, and size of the target object based on the image information acquired through the camera sensor unit 300 and generates non-contact motion recognition information according to the change analysis; and a controller 500 that controls the device according to the generated non-contact motion recognition information.
  • the display unit 200 displays the various screens provided to the user, controlled according to screen control signals received from the controller 500. For example, when the user moves a hand from right to left, the display unit 200 receives from the controller 500 a screen control signal instructing that the screen page advance to the next page, and displays the screen moved to that next page.
  • the display unit 200 of the present invention may be configured as a touch screen having a conventional multi-touch function.
  • the camera sensor unit 300 acquires image information on the target object and transmits it to the non-contact motion recognition unit 400, where the image information includes the captured image of the target object, the capture time, and the identity of the capturing camera.
  • the camera sensor unit 300 of the present invention requires at least three cameras to compute the position of the target object accurately, and the cameras are preferably located at different ends of one surface of the device body 100.
  • the four cameras 310a–310d are located at the corners of the front of the device body 100, so all four camera-equipped corner regions can be used without creating a dead zone on the display unit.
  • the camera sensor unit 300 of the present invention is preferably built as a parallel-camera structure so that it does not protrude from the surface of the device main body 100, and it may be placed on both the front and the rear of the device main body 100.
  • the non-contact motion recognition unit 400 analyzes the shape, position, and size change of the target object based on the image information acquired through the camera sensor unit 300, and generates non-contact motion recognition information according to the change analysis.
  • the non-contact motion recognition unit 400 includes an object detector 410, a shape recognition unit 420, a position measuring unit 430, and an information generating unit 440.
  • the present invention provides a storage unit 600 that stores the shape information used by the non-contact motion recognition unit 400 to recognize the shape of the target object and the preset information used to generate the non-contact motion recognition information according to changes in the target object's shape, position, and size.
  • the present invention estimates the distance of the target object according to the stereo vision method.
  • a depth map is generated by matching images captured by cameras.
  • the target object is detected using the depth map and the captured images, and the parallax average in the detected target object is obtained.
  • the distance of an object may be estimated using the obtained parallax.
  • the technique of estimating the distance of an object using the stereo vision technique is a widely used position tracking technique, and a detailed description thereof will be omitted.
  • the object detector 410 computes a depth map by matching the per-camera image information, and detects the target object in the image information based on the computed depth map. This allows the shape recognition unit 420 to recognize the specific shape of the target object, and the position measuring unit 430 to obtain the parallax value of the central portion of the detected target object and compute the distance between the camera and the target object.
  • as a matching method for computing the depth map, methods such as DP (Dynamic Programming), BP (Belief Propagation), and Graph-cut may be used.
  • DP Dynamic Programming
  • BP Belief Propagation
  • Graph-cut: a graph-based optimization method usable, like DP and BP, for computing the depth map.
  • an object may be extracted using various methods.
  • a vertical disparity map (V-disparity) may be used, detecting objects in the image information according to the distribution of disparity values mapped onto the vertical disparity map. That is, the union of regions whose disparity values deviate from the decreasing trend of the vertical disparity map may be detected as the target object region, or a region of pixels sharing the same disparity value may be detected as the target object region.
  • the shape recognition unit 420 recognizes the shape of the target object detected by the object detector 410. The present invention can therefore derive non-contact motion recognition information from the recognition of a specific shape of the detected target object or from a change in its shape. For example, specific shapes such as scissors, rock, and paper, and the shape change when a finger is bent from an open palm, make it possible to distinguish extended fingers from folded ones; the user can thus play instruments such as a guitar or piano as if handling the real instrument without touching the screen, or perform lively game control.
  • the position measuring unit 430 measures the position of the target object using the parallax average for each camera of the target object.
  • the position measuring unit 430 of the present invention computes the position of the target object based on image information acquired through at least three cameras: it estimates the distance from the midpoint between each pair of adjacent cameras to the target object, and derives exact position coordinates (x, y, z) for the target object from the point of closest approach of the estimated distances. This makes it possible to localize the target object more accurately than when only two (left and right) cameras are used.
  • the information generating unit 440 generates non-contact motion recognition information from the changes in the target object's shape, position, and size obtained through the shape recognition unit 420 and the position measuring unit 430. That is, the direction, pattern, and speed of up, down, left, right, and rotational movements can be estimated from changes in the target object's position, and shape change, perspective, and speed can be estimated from changes in the target object's size, so the corresponding non-contact motion recognition information can be generated.
  • when the control unit 500 receives the non-contact motion recognition information from the non-contact motion recognition unit, it interprets that information, generates motion control information, and transmits a screen control signal according to the generated motion control information to the display unit 200, thereby controlling the screen of the display unit. The controller 500 also performs operation control of the smart device according to the motion control information.
  • the present invention may further include a sensing device 700 which is a device for determining whether the camera sensor unit 300 is activated.
  • the sensing device 700 of the present invention may be configured as an infrared, ultrasonic, or proximity sensor; the camera sensor unit 300 remains in a standby state when no target object is present for a predetermined time, and switches to the active state when the sensing device 700 detects the presence of a target object.
  • FIG. 4 is a flowchart sequentially illustrating a method for controlling contactless operation of a smart device according to an embodiment of the present invention.
  • the present invention first obtains image information on a target object through at least three cameras (S810). Subsequently, a depth map is generated by matching images photographed for each camera using a stereo vision method (S820).
  • the target object in the image information is detected based on the generated depth map (S830).
  • the present invention may recognize the shape of the detected target object, so that non-contact motion recognition information can also be derived from the recognition of a specific shape or from a change in shape. For example, specific shapes such as scissors, rock, and paper motions, and the shape change when a finger is bent from an open palm, make it possible to distinguish extended fingers from folded ones; the user can thus play instruments as if handling the real thing, or perform lively game control, without touching the screen.
  • the position of the target object is measured using the parallax average for each camera of the target object (S840).
  • the position of the target object is computed based on the image information acquired through at least three cameras: the distance from the midpoint between each pair of adjacent cameras to the target object is estimated, and the exact position coordinates (x, y, z) of the target object are derived from the point of closest approach of the estimated distances.
  • the non-contact motion recognition information is generated by changing the shape, position, and size of the target object (S850).
  • the direction, pattern, and speed of up, down, left, right, and rotational movements can be estimated from changes in the target object's position, and shape change, perspective, and speed can be estimated from changes in the target object's size, so the corresponding non-contact motion recognition information can be generated.
  • the generated non-contact motion recognition information is identified to generate motion control information corresponding thereto (S860), and screen control according to the generated motion control information is performed (S870).
  • the present invention can recognize non-contact motions according to the up, down, left, right, and rotational movement direction, perspective, pattern, speed, and shape change of the target object through three or more cameras.
  • the screen and the operation of the device can be controlled according to the recognized motion. Accordingly, the present invention places no restriction on motion recognition based on the separation distance between the smart device and the target object, and enables continuous real-time recognition of an object's motion, providing livelier user interaction.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Hardware Design (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)
  • Selective Calling Equipment (AREA)

Abstract

Disclosed are a smart device enabling non-contact operation control and a non-contact operation control method using the same. The smart device comprises: a device body, a camera sensor unit, a non-contact operation recognition unit, and a control unit. The display unit is arranged on a surface of the device body. The camera sensor unit has at least three cameras arranged on different corner portions of the device body, respectively, and acquires image information regarding a target object through the cameras, respectively. The non-contact operation recognition unit performs analysis on changes of the shape, position, and size of the target object, on the basis of the image information acquired through the camera sensor unit, and generates non-contact operation recognition information according to the analysis on the changes. The control unit displays a screen, which is controlled according to the generated non-contact operation recognition information, on the display unit. As such, according to the present invention, at least three cameras recognize a non-contact operation of an object, thereby recognizing the operation of the object more precisely, and the user is provided with a convenient and lively feeling of using the device in an environment requiring various user interfaces.

Description

Smart device capable of non-contact motion control and non-contact motion control method using the same
The present invention relates to a smart device, and more particularly, to a smart device capable of non-contact motion control and a non-contact motion control method using the same.
As the use of information devices becomes commonplace with the development of computer information technology, the importance of natural interaction between devices and humans is growing. Information devices are accordingly evolving from text-oriented interfaces toward man-machine interface (MMI) methods that use various human gestures and emphasize user experience.
Because the MMI method inputs information directly with an intuitive tool, the hand, it requires no separate input device and can be used regardless of user training. Recently, to provide diverse user interface environments, touch screens combining a touch pad with a display module have been adopted in smart devices, allowing the user to input various commands or select functions while viewing the screen implemented on the display module.
However, in a smart device with a conventional touch screen, the user can merely select a function associated with a menu or icon by touching it on the screen with a finger; such devices cannot provide diverse user interface environments that respond to the user's motions.
Moreover, with the recent development of wireless Internet technology, full-browsing services are provided through smart devices, allowing web pages to be viewed in the same form as on a personal computer and enabling searches of all wired websites.
However, because the text, graphics, images, video, and Flash content of a web page are displayed densely on the smart device's compact display module, a given image is usually smaller than the contact area between the user's finger and the touch screen. As a result, other images or text arranged adjacent to the intended image are often touched by mistake, and the finger frequently obscures the screen.
In addition, such a touch screen can only be operated through direct contact with a finger or a touch pen, so frequent touches can damage the liquid crystal or degrade recognition in certain areas. A new input method is therefore needed that lets the user execute a desired function without touching the touch screen.
The present invention addresses the above problems of the prior art. Its object is to provide a smart device capable of non-contact motion control, and a non-contact motion control method using the same, in which the non-contact motion of an object is recognized through at least three cameras so that motion recognition is more precise and, being contact-free, offers livelier user interaction beyond the input limits of the conventional touch method.
As a technical idea for achieving the above object, the smart device capable of non-contact motion control of the present invention includes a device main body, a camera sensor unit, a non-contact motion recognition unit, and a control unit.
The display unit is disposed on one surface of the device main body.
The camera sensor unit includes at least three cameras respectively disposed at different corners of the device main body and acquires image information on the target object through each camera. The camera sensor unit may be disposed on the front, the rear, or both the front and rear of the device body, and the image information includes the captured image of the target object, the capture time, and the identity of the capturing camera.
The non-contact motion recognition unit analyzes changes in the shape, position, and size of the target object based on the image information acquired through the camera sensor unit, and generates non-contact motion recognition information according to the change analysis. The non-contact motion recognition unit may include an object detector that computes a depth map by matching the per-camera image information and detects the target object, a shape recognition unit that recognizes the shape of the target object detected by the object detector, a position measuring unit that measures the position of the target object using its per-camera parallax averages, and an information generating unit that generates the non-contact motion recognition information from changes in the target object's shape, position, and size.
The controller displays a screen controlled according to the generated non-contact motion recognition information on the display unit.
The smart device capable of non-contact motion control of the present invention may include a storage unit that stores shape information for recognizing the shape of the target object and preset information for generating non-contact motion recognition information according to changes in the target object's shape, position, and size. It may also include a sensing device for determining whether the camera sensor unit should be activated.
The non-contact motion control method using a smart device according to an embodiment of the present invention comprises: acquiring image information on a target object through each of at least three cameras; computing a depth map by matching the per-camera image information and detecting the target object in the image information based on the computed depth map; measuring the position of the target object using its per-camera parallax averages; generating non-contact motion recognition information from changes in the target object's shape, position, and size; and controlling the screen according to the generated non-contact motion recognition information.
After the step of detecting the target object in the image information, the method may further include recognizing the shape of the detected target object so that a specific shape or a change in shape is recognized.
In the step of acquiring image information on the target object, when a sensing device provided on one side of the device detects no target object for a preset time, the cameras are switched to a standby state; when the sensing device then detects the presence of a target object, the cameras are activated.
In the smart device according to the present invention, at least three cameras recognize the non-contact motion of an object, so the object's motion can be recognized more precisely, and the user enjoys a convenient and lively experience of the device in environments that demand diverse user interfaces.
Furthermore, whereas an object's motion could conventionally be recognized only when the target object was within close proximity of the smart device, the present invention determines the target object's position using cameras, so there is no limit on the separation distance between the smart device and the target object, and continuous motion recognition is possible in real time. It also prevents damage to the mobile device's screen liquid crystal caused by frequent contact.
FIG. 1 is a schematic internal configuration diagram of a smart device according to an embodiment of the present invention.
FIG. 2 is an external perspective view illustrating an example in which a camera sensor unit is mounted on a smart device according to an embodiment of the present invention.
FIG. 3 is a conceptual diagram illustrating an example of target object positioning according to an embodiment of the present invention.
FIG. 4 is a flowchart sequentially illustrating a non-contact operation control method of a smart device according to an embodiment of the present invention.
As the inventive concept allows for various changes and numerous embodiments, particular embodiments are illustrated in the drawings and described in detail in the text.
However, this is not intended to limit the present invention to the specific disclosed forms; it should be understood to include all modifications, equivalents, and substitutes within the spirit and scope of the present invention. Terms such as first and second may be used to describe various components, but the components should not be limited by these terms; the terms serve only to distinguish one component from another. For example, without departing from the scope of the present invention, a first component may be named a second component, and similarly a second component may be named a first component.
The terminology used herein is for describing particular embodiments only and is not intended to limit the invention. Singular expressions include plural expressions unless the context clearly indicates otherwise. In this application, terms such as "comprise" or "have" indicate the presence of the features, numbers, steps, operations, components, parts, or combinations thereof described in the specification, and should not be understood to exclude in advance the possible presence or addition of one or more other features, numbers, steps, operations, components, parts, or combinations thereof.
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings.
The smart device of the present invention may be a smartphone, mobile phone, MP3 player, game console, navigation device, TV, display device, notebook computer, tablet computer, personal media player (PMP), personal digital assistant (PDA), or the like; the embodiments of the present invention are described using a smartphone, a representative smart device. In describing the present invention, detailed descriptions of well-known functions or configurations are omitted where they would unnecessarily obscure the subject matter of the invention.
FIG. 1 is a schematic internal configuration diagram of a smart device according to an embodiment of the present invention.
As shown in FIG. 1, the smart device of the present invention comprises a device main body 100; a display unit 200 disposed on the front of the main body 100; a camera sensor unit 300 provided with at least three cameras 310 disposed at different ends of the main body 100, which acquires image information on the target object through each camera 310; a non-contact motion recognition unit 400 that analyzes changes in the shape, position, and size of the target object based on the image information acquired through the camera sensor unit 300 and generates non-contact motion recognition information according to the change analysis; and a controller 500 that controls the device according to the generated non-contact motion recognition information.
First, the display unit 200 displays the various screens provided to the user, controlled according to the screen control signals received from the controller 500. For example, when the user moves a hand from right to left, the display unit 200 receives from the controller 500 a screen control signal instructing that the current screen page advance to the next page, and displays the screen moved to that next page. The display unit 200 of the present invention may also be configured as a touch screen with a conventional multi-touch function.
The camera sensor unit 300 acquires image information on the target object and transmits it to the non-contact motion recognition unit 400. Here the image information includes the captured image of the target object, the capture time, and the identity of the capturing camera. The camera sensor unit 300 of the present invention requires at least three cameras to compute the position of the target object accurately, and the cameras are preferably located at different ends of one surface of the device body 100.
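To make the structure of this image information concrete, the sketch below bundles one captured frame with its capture time and camera identity. The Python representation and field names are illustrative assumptions; the patent specifies only what the image information contains, not how it is encoded.

```python
# Hypothetical container for one unit of image information: the captured
# frame, the capture time, and the identity of the capturing camera.
# Field names are illustrative, not taken from the patent.
from dataclasses import dataclass
import numpy as np

@dataclass
class ImageInfo:
    frame: np.ndarray     # captured image of the target object
    captured_at: float    # capture time, e.g. epoch seconds
    camera_id: str        # which of the three or more cameras captured it

def acquire(camera_id: str, frame: np.ndarray, timestamp: float) -> ImageInfo:
    """Package one capture for transmission to the recognition unit 400."""
    return ImageInfo(frame=frame, captured_at=timestamp, camera_id=camera_id)
```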
In the present invention, as shown in FIG. 2, four cameras 310a–310d are located at the corners of the front of the device body 100, so that all four camera-equipped corner regions can be used without creating a dead zone on the display unit. The camera sensor unit 300 of the present invention is also preferably built as a parallel-camera structure so that it does not protrude from the surface of the device main body 100, and it may be placed on both the front and the rear of the device main body 100.
The non-contact motion recognition unit 400 analyzes changes in the shape, position, and size of the target object based on the image information acquired through the camera sensor unit 300, and generates non-contact motion recognition information according to the change analysis. To this end, the non-contact motion recognition unit 400 includes an object detector 410, a shape recognition unit 420, a position measuring unit 430, and an information generating unit 440. The present invention also provides a storage unit 600 that stores the shape information used by the non-contact motion recognition unit 400 to recognize the shape of the target object and the preset information used to generate the non-contact motion recognition information according to changes in the target object's shape, position, and size.
The present invention estimates the distance of the target object using a stereo vision method. When estimating distance with a stereo vision system, a depth map is generated by matching the images captured by the cameras; the target object is then detected using the depth map and the captured images, and the parallax average within the detected target object is obtained. In a stereo vision system, the distance of an object can be estimated from this parallax. Estimating object distance with stereo vision is a widely used position-tracking technique, so a detailed description is omitted here.
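Although the patent leaves the details to known stereo vision techniques, the minimal sketch below illustrates the pipeline for one camera pair, assuming OpenCV. The block matcher stands in for whichever matching method is chosen, and the focal length and baseline constants are illustrative placeholders, not values from the patent.

```python
# Stereo-distance sketch for one camera pair using OpenCV block matching.
# FOCAL_PX and BASELINE_M are assumed calibration values.
import cv2
import numpy as np

FOCAL_PX = 700.0     # assumed focal length in pixels
BASELINE_M = 0.12    # assumed spacing between the paired cameras (m)

def disparity_map(left_gray: np.ndarray, right_gray: np.ndarray) -> np.ndarray:
    """Match a camera pair and return a float disparity map."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # StereoBM returns fixed-point disparities scaled by 16.
    return matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0

def distance_from_region(disp: np.ndarray, mask: np.ndarray) -> float:
    """Estimate distance (m) from the parallax average inside the
    detected target-object region."""
    vals = disp[mask & (disp > 0)]
    if vals.size == 0:
        return float("inf")  # no reliable parallax in the region
    return FOCAL_PX * BASELINE_M / float(vals.mean())
```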
The object detector 410 computes a depth map by matching the per-camera image information, and detects the target object in the image information based on the computed depth map. This allows the shape recognition unit 420 to recognize the specific shape of the target object, and the position measuring unit 430 to obtain the parallax value of the central portion of the detected target object and compute the distance between the camera and the target object. Matching methods such as DP (Dynamic Programming), BP (Belief Propagation), and Graph-cut may be used to compute the depth map. Objects can be extracted from the depth map in various ways; the present invention uses a vertical disparity map (V-disparity) and detects the object contained in the image information according to the distribution of disparity values mapped onto it. That is, the union of regions whose disparity values deviate from the decreasing trend of the vertical disparity map may be detected as the target object region, or a region of pixels sharing the same disparity value may be detected as the target object region.
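One simple reading of this V-disparity detection is sketched below: a per-row disparity histogram is built, and pixels whose disparity value is strongly represented in their row are kept as the candidate target object region. The thresholds are assumptions, and a full implementation would also handle the deviating-trend case the patent mentions.

```python
# V-disparity object-detection sketch over the disparity map above.
import numpy as np

def v_disparity(disp: np.ndarray, max_disp: int = 64) -> np.ndarray:
    """Histogram disparity values per image row: shape (rows, max_disp)."""
    vmap = np.zeros((disp.shape[0], max_disp), dtype=np.int32)
    for r in range(disp.shape[0]):
        vals = disp[r][disp[r] > 0].astype(int)
        np.add.at(vmap[r], vals[vals < max_disp], 1)
    return vmap

def object_mask(disp: np.ndarray, vmap: np.ndarray, min_count: int = 50) -> np.ndarray:
    """Keep pixels sharing a disparity value that is common in their row,
    i.e. regions of pixels with the same disparity."""
    d = disp.astype(int)
    rows = np.broadcast_to(np.arange(d.shape[0])[:, None], d.shape)
    valid = (d > 0) & (d < vmap.shape[1])
    mask = np.zeros(d.shape, dtype=bool)
    mask[valid] = vmap[rows[valid], d[valid]] >= min_count
    return mask
```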
The shape recognition unit 420 recognizes the shape of the target object detected by the object detector 410. The present invention can therefore derive non-contact motion recognition information from the recognition of a specific shape of the detected target object or from a change in its shape. For example, specific shapes such as scissors, rock, and paper, and the shape change when a finger is bent from an open palm, make it possible to distinguish extended fingers from folded ones; the user can thus play instruments such as a guitar or piano as if handling the real instrument without touching the screen, or perform lively game control.
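The patent names the goal here, distinguishing extended fingers from folded ones, but not an algorithm. Convexity-defect counting on the detected object mask, sketched below with OpenCV, is one conventional stand-in.

```python
# Illustrative finger counting via convex-hull defects; the depth
# threshold is an assumption, not a value from the patent.
import cv2
import numpy as np

def count_extended_fingers(mask: np.ndarray) -> int:
    contours, _ = cv2.findContours(mask.astype(np.uint8) * 255,
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)       # largest blob = hand
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    # Deep hull defects correspond to gaps between extended fingers;
    # n gaps imply roughly n + 1 extended fingers.
    deep = sum(1 for i in range(defects.shape[0])
               if defects[i, 0, 3] / 256.0 > 20.0)  # defect depth in px
    return deep + 1 if deep > 0 else 0
```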
The position measuring unit 430 measures the position of the target object using the target object's per-camera parallax averages. The position measuring unit 430 of the present invention computes the position of the target object based on image information acquired through at least three cameras: it estimates the distance from the midpoint between each pair of adjacent cameras to the target object, and derives exact position coordinates (x, y, z) for the target object from the point of closest approach of the estimated distances. This makes it possible to localize the target object more accurately than when only two (left and right) cameras are used.
For example, as shown in FIG. 3, when image information is acquired through four cameras 310a–310d, the distance (a) from the midpoint between the first camera 310a and the second camera 310b to the target object, the distance (b) from the midpoint between the second camera 310b and the third camera 310c to the target object, the distance (c) from the midpoint between the third camera 310c and the fourth camera 310d to the target object, and the distance (d) from the midpoint between the fourth camera 310d and the first camera 310a to the target object are each computed by triangulation using a stereo vision algorithm, and the exact position coordinates of the target object are derived from them.
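One way to realize "the point of closest approach of the estimated distances" is to treat each pair midpoint and its estimated range as a sphere and solve for the best-fitting point by least squares, as sketched below. The solver choice, the SciPy dependency, and the example device geometry are assumptions.

```python
# Least-squares multilateration of the target object from the four
# camera-pair midpoints and their range estimates a, b, c, d.
import numpy as np
from scipy.optimize import least_squares

def locate(midpoints: np.ndarray, ranges: np.ndarray) -> np.ndarray:
    """midpoints: (4, 3) pair-midpoint coordinates; ranges: (4,) distances.
    Returns the (x, y, z) minimizing the range residuals."""
    def residuals(p: np.ndarray) -> np.ndarray:
        return np.linalg.norm(midpoints - p, axis=1) - ranges
    # Start off the camera plane so the solver picks the user-facing side.
    guess = midpoints.mean(axis=0) + np.array([0.0, 0.0, ranges.mean()])
    return least_squares(residuals, guess).x

# Hypothetical 16 cm x 9 cm device with cameras 310a-310d at the corners:
corners = np.array([[0, 0, 0], [0.16, 0, 0], [0.16, 0.09, 0], [0, 0.09, 0]])
mids = np.array([(corners[i] + corners[(i + 1) % 4]) / 2 for i in range(4)])
```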
The information generating unit 440 generates the non-contact motion recognition information from the changes in the target object's shape, position, and size obtained through the shape recognition unit 420 and the position measuring unit 430. That is, the direction, pattern, and speed of up, down, left, right, and rotational movements can be estimated from changes in the target object's position, and shape change, perspective, and speed can be estimated from changes in the target object's size, so the corresponding non-contact motion recognition information can be generated.
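A sketch of this estimation from a short history of position and size samples follows; the direction rule and the returned fields are illustrative simplifications of the patent's direction, pattern, speed, and perspective estimates.

```python
# Derive direction, speed, and approach rate from recent
# (time, x, y, z, size) samples of the target object.
import numpy as np

def motion_info(history: list[tuple[float, float, float, float, float]]) -> dict:
    t, x, y, z, s = np.asarray(history, dtype=float).T
    dt = t[-1] - t[0]
    vx, vy = (x[-1] - x[0]) / dt, (y[-1] - y[0]) / dt
    growth = (s[-1] - s[0]) / dt   # positive: approaching, negative: receding
    direction = ("right" if vx > 0 else "left") if abs(vx) > abs(vy) \
                else ("down" if vy > 0 else "up")
    return {"direction": direction,
            "speed": float(np.hypot(vx, vy)),
            "approach_rate": float(growth)}
```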
When the controller 500 receives the non-contact motion recognition information from the non-contact motion recognition unit, it interprets that information, generates motion control information, and transmits a screen control signal according to the generated motion control information to the display unit 200, thereby controlling the screen of the display unit. The controller 500 also performs operation control of the smart device according to the motion control information.
In addition, the present invention may further comprise a sensing device 700 for determining whether the camera sensor unit 300 should be activated. The sensing device 700 of the present invention may be configured as an infrared, ultrasonic, or proximity sensor; the camera sensor unit 300 enters a standby state when no target object is present for a set time, and switches back to the active state when the sensing device 700 detects the presence of a target object.
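This standby/activation behavior amounts to a small state machine, sketched below. The timeout value and the boolean proximity reading are assumptions about how the sensing device 700 would be exposed to software.

```python
# Gate the camera sensor unit on the proximity sensor's readings.
import time

class CameraGate:
    def __init__(self, idle_timeout_s: float = 10.0):
        self.idle_timeout_s = idle_timeout_s  # assumed idle period
        self.active = True
        self.last_seen = time.monotonic()

    def update(self, proximity_detected: bool) -> bool:
        """Feed one sensor reading; return whether the cameras should run."""
        now = time.monotonic()
        if proximity_detected:
            self.last_seen = now
            self.active = True       # wake the cameras on presence
        elif now - self.last_seen > self.idle_timeout_s:
            self.active = False      # standby after the idle timeout
        return self.active
```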
FIG. 4 is a flowchart sequentially illustrating a non-contact operation control method of a smart device according to an embodiment of the present invention.
As shown in FIG. 4, the present invention first acquires image information on the target object through each of at least three cameras (S810). Next, a depth map is generated by matching the images captured by each camera using the stereo vision method (S820).
Next, the target object is detected in the image information based on the generated depth map (S830). At this point, the present invention may recognize the shape of the detected target object, so that non-contact motion recognition information can also be derived from the recognition of a specific shape or from a change in shape. For example, specific shapes such as scissors, rock, and paper motions, and the shape change when a finger is bent from an open palm, make it possible to distinguish extended fingers from folded ones; the user can thus play instruments such as a guitar or piano as if handling the real instrument, or perform lively game control, without touching the screen.
Next, the position of the target object is measured using its per-camera parallax averages (S840). In the present invention, the position of the target object is computed based on the image information acquired through at least three cameras: the distance from the midpoint between each pair of adjacent cameras to the target object is estimated, and the exact position coordinates (x, y, z) of the target object are derived from the point of closest approach of the estimated distances.
Next, non-contact motion recognition information is generated from changes in the target object's shape, position, and size (S850). In other words, the direction, pattern, and speed of up, down, left, right, and rotational movements are estimated from changes in the target object's position, and shape change, perspective, and speed are estimated from changes in the target object's size, so the corresponding non-contact motion recognition information can be generated.
Next, the generated non-contact motion recognition information is interpreted, corresponding motion control information is generated (S860), and screen control is performed according to the generated motion control information (S870).
As described above, the present invention can recognize non-contact motions according to the up, down, left, right, and rotational movement direction, perspective, pattern, speed, and shape change of the target object through three or more cameras, and can control the screen and the operation of the device according to the recognized motion. Accordingly, the present invention places no restriction on motion recognition based on the separation distance between the smart device and the target object, and enables continuous real-time recognition of an object's motion, providing livelier user interaction.
The present invention described above is not limited to the foregoing embodiments and the accompanying drawings, and it will be apparent to those of ordinary skill in the art to which the present invention pertains that various substitutions, modifications, and changes are possible without departing from the technical spirit of the present invention.

Claims (9)

  1. A smart device capable of non-contact operation control, comprising:
    a device body;
    a display unit disposed on one surface of the device body;
    a camera sensor unit including at least three cameras respectively disposed at different ends of the device body, the camera sensor unit acquiring image information on a target object through each of the cameras;
    a non-contact motion recognition unit configured to analyze changes in the shape, position, and size of the target object on the basis of the image information acquired through the camera sensor unit, and to generate non-contact motion recognition information according to the change analysis; and
    a controller configured to display, on the display unit, a screen controlled according to the generated non-contact motion recognition information.
  2. The smart device of claim 1, wherein the camera sensor unit is disposed on the front surface, the rear surface, or both the front and rear surfaces of the device body.
  3. The smart device of claim 1, wherein the image information includes image data obtained by capturing the target object, time information indicating when the image was captured, and information identifying the camera that captured it.
  4. The smart device of claim 1, wherein the non-contact motion recognition unit comprises:
    an object detection unit configured to calculate a depth map by matching the image information of the respective cameras and to detect the target object;
    a shape recognition unit configured to recognize the shape of the target object detected by the object detection unit;
    a position measurement unit configured to measure the position of the target object using the average disparity of the target object across the cameras; and
    an information generation unit configured to generate non-contact motion recognition information from changes in the shape, position, and size of the target object.
  5. The smart device of claim 3, further comprising a storage unit storing shape information for recognizing the shape of the target object and preset information for generating non-contact motion recognition information according to changes in the shape, position, and size of the target object.
  6. The smart device of claim 1, further comprising a sensing device for determining whether to activate the camera sensor unit.
  7. A non-contact operation control method using a smart device having at least three cameras respectively disposed at different ends of a device body, the method comprising:
    acquiring image information on a target object through each of the at least three cameras;
    calculating a depth map by matching the image information of the respective cameras, and detecting the target object in the image information on the basis of the calculated depth map;
    measuring the position of the target object using the average disparity of the target object across the cameras;
    generating non-contact motion recognition information from changes in the shape, position, and size of the target object; and
    controlling a screen according to the generated non-contact motion recognition information.
  8. The method of claim 7, further comprising, after the detecting of the target object in the image information, recognizing the shape of the detected target object so as to recognize a specific shape or a change in shape.
  9. The method of claim 7, wherein in the acquiring of the image information on the target object, the camera is switched to a standby state when no target object is present for a preset time as determined by a sensing device provided on one side of the device, and the camera is activated when the presence of a target object is detected by the sensing device.
PCT/KR2014/010151 2013-10-28 2014-10-28 Smart device enabling non-contact operation control and non-contact operation control method using same WO2015064991A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR20130128420 2013-10-28
KR10-2013-0128420 2013-10-28

Publications (2)

Publication Number Publication Date
WO2015064991A2 true WO2015064991A2 (en) 2015-05-07
WO2015064991A3 WO2015064991A3 (en) 2015-06-25

Family

ID=53005344

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2014/010151 WO2015064991A2 (en) 2013-10-28 2014-10-28 Smart device enabling non-contact operation control and non-contact operation control method using same

Country Status (2)

Country Link
KR (1) KR101535738B1 (en)
WO (1) WO2015064991A2 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102187965B1 (en) * 2020-03-31 2020-12-07 주식회사 가난한동지들 A Display Device with A Object Camera and A Sensor
KR20210123156A (en) * 2020-04-02 2021-10-13 삼성전자주식회사 An electronic device and method for operating functions of the electronic device using uncontact gesture

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110132076A (en) * 2010-06-01 2011-12-07 금오공과대학교 산학협력단 System for controling non-contact screen and method for controling non-contact screen in the system
KR20120123487A (en) * 2010-02-10 2012-11-08 마이크로칩 테크놀로지 저머니 Ⅱ 게엠베하 운트 콤파니 카게 System and method for contactless detection and recognition of gestures in a three-dimensional space
KR20130053153A (en) * 2011-11-15 2013-05-23 전자부품연구원 Mobile terminal of non-contact type and method thereof
KR20130088104A (en) * 2013-04-09 2013-08-07 삼성전자주식회사 Mobile apparatus and method for providing touch-free interface
WO2013147501A1 (en) * 2012-03-26 2013-10-03 실리콤텍(주) Motion gesture sensing module and motion gesture sensing method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101194883B1 (en) * 2010-03-19 2012-10-25 김은주 system for controling non-contact screen and method for controling non-contact screen in the system
KR101708696B1 (en) * 2010-09-15 2017-02-21 엘지전자 주식회사 Mobile terminal and operation control method thereof
EP2620849B1 (en) * 2010-09-22 2019-08-28 Shimane Prefectural Government Operation input apparatus, operation input method, and program

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018097483A1 (en) * 2016-11-23 2018-05-31 삼성전자주식회사 Motion information generating method and electronic device supporting same
US10796439B2 (en) 2016-11-23 2020-10-06 Samsung Electronics Co., Ltd. Motion information generating method and electronic device supporting same

Also Published As

Publication number Publication date
KR101535738B1 (en) 2015-07-09
WO2015064991A3 (en) 2015-06-25
KR20150048608A (en) 2015-05-07

Similar Documents

Publication Publication Date Title
US10761610B2 (en) Vehicle systems and methods for interaction detection
CN106104434B (en) User's handedness and orientation are determined using touch panel device
KR101844390B1 (en) Systems and techniques for user interface control
KR101872426B1 (en) Depth-based user interface gesture control
US20110109577A1 (en) Method and apparatus with proximity touch detection
WO2018151449A1 (en) Electronic device and methods for determining orientation of the device
JP2016173831A (en) Remote control of computer device
EP2341418A1 (en) Device and method of control
KR20120068253A (en) Method and apparatus for providing response of user interface
EP2558924B1 (en) Apparatus, method and computer program for user input using a camera
TWI496094B (en) Gesture recognition module and gesture recognition method
CN103677240A (en) Virtual touch interaction method and equipment
WO2012145142A2 (en) Control of electronic device using nerve analysis
KR20140104597A (en) Mobile devices of transmitting and receiving data using gesture
US11886643B2 (en) Information processing apparatus and information processing method
WO2015064991A2 (en) Smart device enabling non-contact operation control and non-contact operation control method using same
TWI499938B (en) Touch control system
TW201439813A (en) Display device, system and method for controlling the display device
KR101019255B1 (en) wireless apparatus and method for space touch sensing and screen apparatus using depth sensor
JP7279975B2 (en) Method, system, and non-transitory computer-readable recording medium for supporting object control using two-dimensional camera
KR20130085094A (en) User interface device and user interface providing thereof
US10558270B2 (en) Method for determining non-contact gesture and device for the same
KR20150076574A (en) Method and apparatus for space touch
JP2013109538A (en) Input method and device
TWI444875B (en) Multi-touch input apparatus and its interface method using data fusion of a single touch sensor pad and imaging sensor

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14856872

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14856872

Country of ref document: EP

Kind code of ref document: A2