
WO2016151869A1 - Information processing apparatus, information processing method, and program - Google Patents

Information processing apparatus, information processing method, and program

Info

Publication number
WO2016151869A1
Authority
WO
WIPO (PCT)
Prior art keywords
projector
cameras
input
controlling
external display
Prior art date
Application number
PCT/JP2015/059822
Other languages
French (fr)
Inventor
Xiao Peng
Original Assignee
Nec Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nec Corporation
Priority to PCT/JP2015/059822
Publication of WO2016151869A1

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/163Wearable computers, e.g. on a belt
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/014Hand-worn input/output arrangements, e.g. data gloves
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304Detection arrangements using opto-electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/042Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3141Constructional details thereof
    • H04N9/3173Constructional details thereof wherein the projection device is specially adapted for enhanced portability
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3191Testing thereof
    • H04N9/3194Testing thereof including sensor feedback

Definitions

  • the present invention relates to an information processing apparatus, an information processing method, and program.
  • wearable devices such as smart watches have become the new hot spot in consumer electronics.
  • the smart watch is computerized and can perform many other functions besides telling the time, such as making phone calls, playing audio and video and so on.
  • the wearable device usually contains strong computing devices.
  • many sensors are integrated into the wearable devices for various features, such as camera (image sensor), accelerometer and so on.
  • the wearable device can also connect to the outside world through communication devices and protocols, such as WiFi and near field communication (NFC).
  • the human-machine interface (HMI) in the wearable devices is very important for the application scope and the user experience.
  • since wearable devices should be easy to fix on the human body (for example, a smart watch should be fastened on the wrist), they must be small and light enough.
  • this size and weight limitation means that the HMI of a wearable device cannot be made as complex as that of a smartphone. Thus, current wearable devices fall into the simple HMI problem.
  • the HMI is limited to the simple button press operation.
  • the simple HMI problem mainly includes two aspects: the first is that little information can be displayed on the screen at one time and the second is that only simple touch operations are allowed on the screen.
  • the drawback of this invention includes two aspects:
  • the purpose of this invention is to solve the simple HMI problem for wearable devices.
  • One aspect of the present invention provides an information processing apparatus comprising at least one projector, at least two cameras and a control unit for controlling said at least one projector to display an image on a surface outside of the apparatus and controlling the at least two cameras to generate a virtual input.
  • Another aspect of the present invention provides a method for controlling an information processing apparatus comprising at least one projector and at least two cameras, comprising controlling said at least one projector to display an image on a surface outside of the apparatus and controlling said at least two cameras to generate a virtual input.
  • FIG. 1 is a schematic diagram illustrating the basic architecture of the first embodiment.
  • FIG. 2 is a schematic diagram illustrating the working process of the first embodiment.
  • FIG. 3 is a schematic diagram illustrating the construction of the second embodiment.
  • FIG. 4 is a schematic diagram illustrating the architecture of the second embodiment.
  • FIG. 5 is a flow chart illustrating the working process of the second embodiment.
  • FIG. 6 is a schematic diagram illustrating the continuous and discontinuous area in the second embodiment.
  • FIG. 7A is a schematic diagram illustrating changing the external display area and shape according to the user and application requirement.
  • FIG. 7B is a schematic diagram illustrating changing the external display area and shape according to the user and application requirement.
  • FIG. 7C is a schematic diagram illustrating changing the external display area and shape according to the user and application requirement.
  • FIG. 8A is a schematic diagram illustrating the relative position calculation of the projector and the surface in the surface plane.
  • FIG. 8B is a schematic diagram illustrating the relative position calculation of the projector and the surface in the vertical plane.
  • FIG. 9 is a schematic diagram illustrating making the projection into the required shape with the mask setting in the second embodiment.
  • FIG. 10 is a schematic diagram illustrating the feature point detection.
  • FIG. 11A is a flow chart illustrating the working process for detecting the input operation start.
  • FIG. 11B is a flow chart illustrating the working process for detecting the input operation end.
  • FIG. 12 is a schematic diagram illustrating the tracking for recognizing the handwriting.
  • FIG. 13 is a schematic diagram illustrating the construction of the third embodiment.
  • FIG. 14 is a schematic diagram illustrating the architecture of the third embodiment.
  • FIG. 15 is a flow chart illustrating the working process of the third embodiment.
  • FIG. 16 is a schematic diagram illustrating making the projection into the required shape with the background imitation in the third embodiment.
  • FIG. 17 is a schematic diagram illustrating the construction of the fourth embodiment.
  • FIG. 18 is a schematic diagram illustrating the architecture of the fourth embodiment.
  • FIG. 19 is a schematic diagram illustrating making the projection into the required shape with the background imitation in the fourth embodiment.
  • FIG. 20 is a schematic diagram illustrating the difference between the fingertip and the actual operation point.
  • FIG. 21 is a schematic diagram illustrating the detection error when the fingertip does not face towards the light transmitter and receiver.
  • Fig. 1 illustrates the basic block diagram of a wearable device 100 according to the first embodiment of the present invention.
  • in the wearable device 100, there are at least two cameras 101 for recognizing the operation in virtual input. There is some space between the cameras.
  • At least one projector 102 is used to project an image on an external display area 120.
  • An internal HMI 104 includes the internal output device, such as the screen of the wearable device 100, and the internal input devices, such as the buttons of the wearable device 100.
  • the internal HMI can also be realized as wireless or voice control.
  • a control unit 103 is used to control the cameras, the projector and the internal HMI.
  • the control unit 103 receives the image signals captured by the cameras 101 and recognizes the operation information from the virtual input. If necessary, the control unit 103 can modify the parameters of the cameras 101.
  • the apparatus can include other devices for assisting the external display and virtual input.
  • the communication device can help the wearable device to communicate with the outer device.
  • the various sensors can provide the information for controlling the cameras and projectors.
  • control unit 103 controls the area, the angle, the shape and other parameters of the projection. Control unit 103 transmits the projection content to the projector.
  • control unit 103 receives the input information from the internal HMI 104 and transmits the output information to the internal HMI 104.
  • the control unit 103 is implemented in a computing device, such as a Central Processing Unit (CPU). A memory device stores the control program.
  • Fig. 2 illustrates the working process of the external display and virtual input.
  • the cameras capture the images in which the surface to be projected on is contained.
  • the continuous area which is suitable for external display on the surface is detected (S203).
  • the external display area and shape are determined according to user and application requirements (S205).
  • the relative position of the projector and the surface is calculated in the surface plane and vertical plane.
  • the parameters, such as the projection angle and the distortion correction, of the projector are set (S207).
  • the external display content is projected on the surface by the projector (S209).
  • the cameras capture the images in order to detect the feature points of the virtual input object, such as the fingertip (S211).
  • the feature points are detected by calculating the relative positions of the input object and the surface.
  • a predefined number of points that have the smallest distances between the input object and the surface are regarded as the feature points. From the viewpoint of the cameras, since the input object will always appear in the operation area, how to judge the start and end of the actual operation is quite important. It is difficult to recognize whether the input object touches the surface or not. Instead, the relative positions of the feature points and the surface are considered. When the input object is approaching the surface, the relative distances between the feature points and the surface become smaller.
  • the input operation start is detected.
  • the relative distances between the feature points and the surface become larger.
  • the input operation end is detected.
  • the feature points are tracked for recognizing the operation of the virtual input object between the operation start and end (S213).
  • input information which is input into the wearable device is recognized (S215). In an actual implementation, this process is repeated continuously in real time.
  • Fig. 3 illustrates the architecture of the second embodiment for the case of smart watch 300.
  • Fig. 4 illustrates a block diagram of the smart watch 300.
  • smart watch 300 contains the computing device, the memory device and sensors 408 in the watch body.
  • One projector 102 is mounted in the watch body for projecting an image on the external display area 120.
  • the mask 307 is mounted on the projector 102 for changing the shape of external display area 120.
  • Two cameras 301, 302 are mounted at both sides of the projector 102 for recognizing the operation in virtual input.
  • a screen 304 and the buttons 305 in the watch body form the internal HMI.
  • the external display area 120 is on the surface of the back of the hand 310.
  • the virtual input object is the opposite hand 320.
  • the control unit 303 is implemented in the computing device and memory device. The control unit 303 can use the information from sensors 408 for assisting the external display and virtual input.
  • Fig. 5 illustrates the working process of this embodiment.
  • the cameras 301, 302 capture the images in which the back of the hand to be projected on is contained.
  • the continuous area which is suitable for external display on the surface is detected (S503).
  • the external display area 120 and its shape are determined according to user and application requirements (S505).
  • the relative position of the projector 102 and the surface is calculated in the surface plane and vertical plane. Based on the external display area and shape determination, and the relative position of the projector and the surface, the parameters of the projector are set (S507).
  • An image is projected to the surface by the projector 102 (S509).
  • the shape of the external display is determined by setting the mask.
  • the cameras capture the images in order to detect the feature points of the virtual input object which is the fingertip (S511).
  • the feature points are tracked for recognizing the operation of the virtual input object (S513).
  • the input information input into the smart watch is recognized (S515). In an actual implementation, this process is repeated continuously in real time.
  • Fig. 6 shows the example of continuous area 601 and discontinuous area 602.
  • the continuous area means an area which has no edge or boundary in the closed area.
  • the area in the solid line is the continuous area of the back of hand.
  • the area in the dashed line is the example of the discontinuous area. Since the discontinuous area 602 contains the space between the fingers, it will bring distortion.
  • the shape and the area of the external display is determined in the continuous area.
  • Figs. 7A-7C shows the example of changing the position and shape of external display area 120.
  • the email is projected on the back of the hand 310.
  • the projection area 702 is enlarged according to the email content (see Fig. 7B).
  • the shape is set from rectangle 702 to circle 703 (see Fig. 7C).
  • FIGs. 8A and 8B illustrate the example that the relative position is calculated in the surface plane 801 and the vertical plane 802.
  • camera 301 and camera 302 capture different images of the back of hand.
  • the relative position of projector in the surface plane 801 is calculated through analyzing the two different images.
  • camera 301 and camera 302 capture different images of the back of hand.
  • the relative position of projector in the vertical plane 802 is calculated through analyzing the two different images.
  • the parameters such as the projection angle and the distortion correction are set.
  • the mask 307 mounted on the projector 102 is configured manually or automatically. As shown in the example in Fig. 9, with the mask 307, the required hexagonal external display area 901 is projected on the back of the hand 310.
  • the index finger of the opposite hand 320 is detected.
  • the relative positions of the feature points in the surface plane 801 and the vertical plane 802 are calculated with the two cameras 301, 302 using the same principle as calculating the relative position of the projector 102.
  • a predefined number of points that have the smallest distances between the finger 1001 and the surface 1002 are detected as the feature points.
  • the input operation start is detected (S1105).
  • the input operation end is detected (S1115).
  • the input information is recognized. For example in Fig. 12, the handwriting input "A" 1201 can be recognized and input to the smart watch 300.
  • Fig. 14 illustrates a block diagram of the smart watch 1300.
  • the smart watch 1300 contains the computing device, the memory device, the sensors 408 and communication unit 1301 in the watch body.
  • One projector is mounted in the watch body for projecting the external display.
  • Two cameras 301, 302 are mounted at both sides of the projector 102 for recognizing the operation in virtual input.
  • the screen and the buttons in the watch body form the internal HMI 104.
  • the external display area is on the surface of the back of the hand.
  • the virtual input object is the opposite hand.
  • the control unit 303 is implemented in the computing device and memory device. The control unit 303 can use the information from sensors for assisting the external display and virtual input.
  • the communication unit 1401 connects to a remote computing device 1410, such as the cloud, which assists with the computing and with controlling the external display and virtual input.
  • Fig. 15 illustrates the working process of this embodiment.
  • the cameras 301, 302 capture the images in which the back of the hand to be projected on is contained.
  • through detecting and analyzing the feature of the back of the hand (S1501), the continuous area which is suitable for external display on the surface is detected (S1503).
  • the external display area and shape are determined according to user and application requirements (S1505).
  • the image information, such as the color and brightness, of the selected area on the surface is recorded.
  • the relative position of the projector 102 and the surface is calculated in the surface plane and vertical plane.
  • the parameters of the projector are set (S1507).
  • the external display content (image) is projected to the surface by the projector 102 (S1509).
  • the cameras 301, 302 capture the images in order to detect the feature points of the virtual input object which is the fingertip (S1511).
  • the feature points are tracked for recognizing the operation of the virtual input object (S1513).
  • the input information input into the smart watch 1300 is recognized (S1515). In an actual implementation, this process is repeated continuously in real time.
  • Fig. 16 illustrates the example of the background imitation for projecting the required shape on the surface.
  • the background imitation does not need to change the actual projection area.
  • the content to be displayed in the external display area 1601 is projected.
  • the imitation image that is similar to the projection surface is projected on the rest of the area, which is called the "background imitation area".
  • the effect is that the background imitation area 1602 looks the same as the rest of the surface, which does not influence the user experience.
  • Fig. 18 illustrates a block diagram of the smart wristband 1700.
  • the smart wristband 1700 contains the computing device, the memory device and sensors in the wristband body 1707.
  • Two projectors 1701, 1702 are mounted in the wristband body 1707 for projecting the external display.
  • Two cameras 1703, 1704 are mounted for recognizing the operation in virtual input.
  • the indicator lights 1705 and the buttons 1706 in the wristband body 1707 form the internal HMI.
  • the external display area is positioned on the back of the hand.
  • the dedicated stylus 1710 can be used as a virtual input object.
  • the control unit 1803 is implemented in the computing device and memory device.
  • the control unit 1803 can use the information from sensors 1808 for assisting the external display and virtual input.
  • each camera is connected to one computing device (CD) 1801, 1802, which is used to distribute the computing and assist the control in the control unit 1803.
  • Fig. 19 illustrates the example of the background imitation for projecting the required shape on the surface.
  • the background imitation does not need to change the actual projection area.
  • the content to be displayed in the external display area 1901 is projected from projector 1701.
  • the imitation image that is similar to the projection surface is projected on the rest of the area, which is called the "background imitation area."
  • the content of the background imitation area 1902 is projected from projector 1702.
  • the effect is that the background imitation area 1902 looks the same as the rest of the surface, which does not influence the user experience.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An information processing apparatus is disclosed in the specification of this application. The information processing apparatus comprises at least one projector, at least two cameras and a control unit for controlling said at least one projector to display an image on a surface outside of the apparatus and controlling said at least two cameras to generate a virtual input.

Description

Description
Title of Invention: INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM Technical Field
[0001] The present invention relates to an information processing apparatus, an information processing method, and program.
Background
[0002] In recent years, wearable devices such as smart watches have become the new hot spot in consumer electronics. Taking the smart watch as an example, it is computerized and can perform many other functions besides telling the time, such as making phone calls, playing audio and video and so on. To realize these complex functions, the wearable device usually contains strong computing devices. In addition, many sensors are integrated into wearable devices for various features, such as a camera (image sensor), an accelerometer and so on. The wearable device can also connect to the outside world through communication devices and protocols, such as WiFi and near field communication (NFC).
[0003] Since the wearable devices provide various functions for daily life, the human-machine interface (HMI) in the wearable devices is very important for the application scope and the user experience.
Citation List
Patent Literature
[0004] PTL 1: US patent US6614422B1
PTL 2: US patent US7173605B2
PTL 3: US patent application US20130076633A1
PTL 4: US patent application US20130181905A1
PTL 5: US patent application US20130016070A1
PTL 6: JP patent 2002-318652
PTL 7: JP patent 2004-094653
PTL 8: JP patent 2010-049583
PTL 9: JP patent 2014-048691
Summary of Invention
Technical Problem
[0005] Since wearable devices should be easy to fix on the human body (for example, a smart watch should be fastened on the wrist), they must be small and light enough. This size and weight limitation means that the HMI of a wearable device cannot be made as complex as that of a smartphone. Thus, current wearable devices fall into the simple HMI problem.
[0006] For wearable devices with no screen, the HMI is limited to simple button press operations. For wearable devices with a screen, since the screen is not large enough, the simple HMI problem mainly includes two aspects: the first is that little information can be displayed on the screen at one time, and the second is that only simple touch operations are allowed on the screen.
[0007] To solve this problem, an external display and virtual input method is an efficient solution. However, the existing literature has drawbacks.
[0008] The common features of [PTL 1], [PTL 2], [PTL 3] and [PTL 4] are using a projector to display a virtual keyboard on some plane and using sensors to detect the operations on the virtual keyboard. The drawbacks of these inventions include two aspects:
(1) They need a flat and large plane for displaying the virtual keyboard, which greatly limits the application scope and the places where they can be used. This drawback means they can hardly be used in wearable devices.
(2) They can only display a virtual keyboard and detect keyboard hitting operations, which also greatly limits the application scope. For example, the dragging operation of a smartphone cannot be realized in these inventions.
[0009] The invention in [PTL 5] uses the projector to display a virtual button pattern on the hand or another surface and uses the camera to detect virtual button selection. The drawbacks of this invention include the following aspects:
(1) Since there is only one camera, it is difficult to make accurate recognition of the virtual input motion in some directions. If the direction of the finger or other operation body is the same as the direction of the camera, the movement is hard to recognize. In addition, it is difficult to calculate the relative position of the projector and the projection surface, which makes it difficult to control the external display accurately. This drawback means [PTL 5] can only be applied to simple operation recognition such as button selection.
(2) Since the virtual input detection method is based only on the brightness difference, accurate information about the operation object cannot be detected. The operation start and end also cannot be detected. This drawback also restricts [PTL 5] to simple operations.
(3) There is no solution for detecting which area of the projection surface is suitable for the projection. Since boundaries and edges bring significant distortion to the projection, the projection should avoid crossing them.
(4) There is no solution for changing the projection area properly according to the user requirement and the application requirement. This drawback also limits the application scope.
The inventions in [PTL 6], [PTL 7], [PTL 8] and [PTL 9] focus on the input of wearable devices. Their common drawbacks are summarized in two aspects:
(1) The methods of tracking the input object have restrictions. The invention in [PTL 6] does not describe how to track the input object (fingertip), but just gives a reference paper. In that reference paper, a sensor needs to be attached to the fingertip for the tracking. This method is difficult to realize in a real product and is quite inconvenient for the customer. The invention in [PTL 7] uses the camera to recognize the input object (fingertip). The method first obtains the outline of the hand and detects the first edge point of the outline as the fingertip while scanning the captured picture. The problem is that the fingertip point is not the actual input operation point, as shown in Fig. 20. This problem reduces the input object tracking accuracy. The inventions in [PTL 8] and [PTL 9] use light (infrared ray) reflection to track the input object (fingertip). Besides the same problem as [PTL 7], another problem is that the fingertip should always face towards the light transmitter and receiver; otherwise the detected position of the fingertip may be totally wrong. Fig. 21 shows this problem: the fingertip is at position A, but the sensor will recognize position B as the fingertip. As a result, [PTL 8] and [PTL 9] can only be used for rough operations.
(2) The methods of detecting the input operation start and end are not accurate enough. Since the input object will always appear in the operation area, how to judge the start and end of the actual operation is quite important. [PTL 6] does not mention this problem. [PTL 7] uses infrared photography to judge whether the input object touches the operation surface or not. The accuracy of this method is quite low, since infrared photography is easily influenced by many factors, such as the environment temperature, the temperature differences between different people, the pressure of the finger and so on. [PTL 8] uses a microphone to detect the touch sound of the finger. This method is also quite inaccurate, since the finger touch sound is very small and can easily be drowned out by environmental sound. [PTL 9] needs an additional touch on the device screen to tell the system that the virtual input process has started or ended. This greatly limits the application scope of the system and cannot support complicated operations.
[0010] The purpose of this invention is to solve the simple HMI problem for wearable devices without the drawbacks listed above.
Solution to Problem
[0011] One aspect of the present invention provides an information processing apparatus comprising at least one projector, at least two cameras and a control unit for controlling said at least one projector to display an image on a surface outside of the apparatus and controlling the at least two cameras to generate a virtual input.
[0012] Another aspect of the present invention provides a method for controlling an information processing apparatus comprising at least one projector and at least two cameras, comprising controlling said at least one projector to display an image on a surface outside of the apparatus and controlling said at least two cameras to generate a virtual input.
Brief Description of Drawings
[fig.1]Fig. 1 is a schematic diagram illustrating the basic architecture of the first embodiment.
[fig.2]Fig. 2 is a schematic diagram illustrating the working process of the first embodiment.
[fig.3]Fig. 3 is a schematic diagram illustrating the construction of the second embodiment.
[fig.4]Fig. 4 is a schematic diagram illustrating the architecture of the second embodiment.
[fig.5]Fig. 5 is a flow chart illustrating the working process of the second embodiment. [fig.6]Fig. 6 is a schematic diagram illustrating the continuous and discontinuous area in the second embodiment.
[fig.7A]Fig. 7A is a schematic diagram illustrating changing the external display area and shape according to the user and application requirement.
[fig.7B]Fig. 7B is a schematic diagram illustrating changing the external display area and shape according to the user and application requirement.
[fig.7C]Fig. 7C is a schematic diagram illustrating changing the external display area and shape according to the user and application requirement.
[fig.8A]Fig. 8A is a schematic diagram illustrating the relative position calculation of projector and the surface in the surface plane.
[fig.8B]Fig. 8B is a schematic diagram illustrating the relative position calculation of projector and the surface in the vertical plane.
[fig.9]Fig. 9 is a schematic diagram illustrating making the projection into the required shape with the mask setting in the second embodiment.
[fig.10]Fig. 10 is a schematic diagram illustrating the feature point detection.
[fig.11A]Fig. 11A is a flow chart illustrating the working process for detecting the input operation start.
[fig.11B]Fig. 11B is a flow chart illustrating the working process for detecting the input operation end.
[fig.12]Fig. 12 is a schematic diagram illustrating the tracking for recognizing the handwriting.
[fig.13]Fig. 13 is a schematic diagram illustrating the construction of the third embodiment.
[fig.14]Fig. 14 is a schematic diagram illustrating the architecture of the third embodiment.
[fig.15]Fig. 15 is a flow chart illustrating the working process of the third embodiment.
[fig.16]Fig. 16 is a schematic diagram illustrating making the projection into the required shape with the background imitation in the third embodiment.
[fig.17]Fig. 17 is a schematic diagram illustrating the construction of the fourth embodiment.
[fig.18]Fig. 18 is a schematic diagram illustrating the architecture of the fourth embodiment.
[fig.19]Fig. 19 is a schematic diagram illustrating making the projection into the required shape with the background imitation in the fourth embodiment.
[fig.20]Fig. 20 is a schematic diagram illustrating the difference between the fingertip and the actual operation point.
[fig.21]Fig. 21 is a schematic diagram illustrating the detection error when the fingertip does not face towards the light transmitter and receiver.
Description of Embodiments
[0014] Preferred embodiments of the present invention will now be described in detail with reference to the drawings. It should be noted that the relative arrangement of the components, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless it is specifically stated otherwise.
[0015] (First Embodiment)
Fig. 1 illustrates the basic block diagram of a wearable device 100 according to the first embodiment of the present invention. In the wearable device 100, there are at least two cameras 101 for recognizing the operation in virtual input. There is some space between the cameras. At least one projector 102 is used to project an image on an external display area 120. An internal HMI 104 includes the internal output device, such as the screen of the wearable device 100, and the internal input devices, such as the buttons of the wearable device 100. The internal HMI can also be realized as wireless or voice control. A control unit 103 is used to control the cameras, the projector and the internal HMI.
[0016] In the camera control process, the control unit 103 receives the image signals captured by the cameras 101 and recognizes the operation information from the virtual input. If necessary, the control unit 103 can modify the parameters of the cameras 101. Besides these parts, the apparatus can include other devices for assisting the external display and virtual input. For example, a communication device can help the wearable device communicate with outer devices. Various sensors can provide information for controlling the cameras and projectors.
[0017] In the projector control process, the control unit 103 controls the area, the angle, the shape and other parameters of the projection. The control unit 103 transmits the projection content to the projector.
[0018] In controlling the internal HMI, the control unit 103 receives the input information from the internal HMI 104 and transmits the output information to the internal HMI 104.
[0019] The control unit 103 is implemented in a computing device, such as a Central Processing Unit (CPU). A memory device stores the control program.
[0020] Fig. 2 illustrates the working process of the external display and virtual input. After the external display and virtual input start, the cameras capture the images in which the surface to be projected on is contained. Through detecting and analyzing the feature of the surface (S201), the continuous area which is suitable for external display on the surface is detected (S203). In the continuous area, the external display area and shape are determined according to user and application requirements (S205). The relative position of the projector and the surface is calculated in the surface plane and the vertical plane. Based on the external display area and shape determination, and the relative position of the projector and the surface, the parameters of the projector, such as the projection angle and the distortion correction, are set (S207). The external display content is projected on the surface by the projector (S209). At the same time, the cameras capture the images in order to detect the feature points of the virtual input object, such as the fingertip (S211). The feature points are detected by calculating the relative positions of the input object and the surface: a predefined number of points that have the smallest distances between the input object and the surface are regarded as the feature points. From the viewpoint of the cameras, since the input object will always appear in the operation area, how to judge the start and end of the actual operation is quite important. It is difficult to recognize whether the input object touches the surface or not. Instead, the relative positions of the feature points and the surface are considered. When the input object is approaching the surface, the relative distances between the feature points and the surface become smaller. During this process, when all the relative distances or the average relative distance is smaller than the threshold, the input operation start is detected. When the input object is leaving the surface, the relative distances between the feature points and the surface become larger. During this process, when all the relative distances or the average relative distance is larger than the threshold, the input operation end is detected. The feature points are tracked for recognizing the operation of the virtual input object between the operation start and end (S213). Input information which is input into the wearable device is recognized (S215). In an actual implementation, this process is repeated continuously in real time.
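The steps S201-S215 above form one real-time cycle. As a reading aid only, the following Python sketch lays out that cycle; every step is passed in as a callable, and every name (capture_frames, detect_continuous_area and so on) is a hypothetical placeholder, since the specification does not prescribe concrete algorithms for these steps.

```python
# Structural sketch of the Fig. 2 loop (S201-S215). All callables are
# hypothetical placeholders; the specification does not prescribe them.

def run_external_display_and_virtual_input(
        capture_frames,               # S201: grab one image per camera
        detect_continuous_area,       # S203: find an edge/boundary-free region
        choose_display_area,          # S205: pick area and shape from requirements
        estimate_relative_position,   # relative position in surface/vertical plane
        configure_projector,          # S207: projection angle, distortion correction
        project_content,              # S209: draw the external display content
        detect_feature_points,        # S211: e.g. fingertip points nearest the surface
        track_operation,              # S213: track points between operation start/end
        recognize_input,              # S215: turn the tracked stroke into input
        is_running):
    """Repeat the external display / virtual input cycle in real time."""
    while is_running():
        frames = capture_frames()
        area = detect_continuous_area(frames)
        display = choose_display_area(area)
        pose = estimate_relative_position(frames)
        configure_projector(display, pose)
        project_content(display)
        points = detect_feature_points(frames)
        stroke = track_operation(points)
        recognize_input(stroke)
```

Each callable corresponds to one numbered step of Fig. 2, so a concrete embodiment would supply its own implementations.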
[0021] Hereafter, embodiments of the present invention will be explained with reference to examples. The embodiments used to describe the principles of the present invention are by way of illustration only and should not be construed in any way to limit the scope of the disclosure. Those skilled in the art will understand that the principles of the present disclosure may be implemented in any suitably arranged architecture.
[0022] (Second Embodiment)
Fig. 3 illustrates the architecture of the second embodiment for the case of a smart watch 300. Fig. 4 illustrates a block diagram of the smart watch 300. In this embodiment, the smart watch 300 contains the computing device, the memory device and sensors 408 in the watch body. One projector 102 is mounted in the watch body for projecting an image on the external display area 120. The mask 307 is mounted on the projector 102 for changing the shape of the external display area 120. Two cameras 301, 302 are mounted at both sides of the projector 102 for recognizing the operation in virtual input. A screen 304 and the buttons 305 in the watch body form the internal HMI. The external display area 120 is on the surface of the back of the hand 310. The virtual input object is the opposite hand 320. In the watch body, the control unit 303 is implemented in the computing device and memory device. The control unit 303 can use the information from sensors 408 for assisting the external display and virtual input.
[0023] Fig. 5 illustrates the working process of this embodiment. After the external display and virtual input start, the cameras 301, 302 capture the images in which the back of the hand to be projected on is contained. Through detecting and analyzing the feature of the back of the hand (S501), the continuous area which is suitable for external display on the surface is detected (S503). In the continuous area, the external display area 120 and its shape are determined according to user and application requirements (S505). The relative position of the projector 102 and the surface is calculated in the surface plane and the vertical plane. Based on the external display area and shape determination, and the relative position of the projector and the surface, the parameters of the projector are set (S507). An image is projected onto the surface by the projector 102 (S509). The shape of the external display is determined by setting the mask. At the same time, the cameras capture the images in order to detect the feature points of the virtual input object, which is the fingertip (S511). The feature points are tracked for recognizing the operation of the virtual input object (S513). The input information input into the smart watch is recognized (S515). In an actual implementation, this process is repeated continuously in real time.
[0024] In determining the continuous area which is suitable for external display on the back of the hand, Fig. 6 shows an example of the continuous area 601 and the discontinuous area 602. A continuous area means an area that has no edge or boundary inside the closed area. The area in the solid line is the continuous area of the back of the hand. The area in the dashed line is an example of a discontinuous area. Since the discontinuous area 602 contains the space between the fingers, it will bring distortion.
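As an illustration of how the continuous/discontinuous distinction of Fig. 6 might be tested in software, the numpy sketch below checks whether a candidate rectangle contains any edge pixels; the edge map, the rectangular candidate and the Canny-style edge detection are assumptions for illustration, not part of the disclosed embodiment.

```python
import numpy as np

def is_continuous_area(edge_map: np.ndarray, top: int, left: int,
                       height: int, width: int) -> bool:
    """Return True if the rectangular candidate area contains no edge pixels.

    edge_map is a binary image (nonzero = edge), e.g. the output of a Canny
    edge detector applied to the camera image of the back of the hand.
    An area crossing no edge or boundary corresponds to the continuous
    area 601; an area covering the gaps between fingers would contain
    edge pixels and be rejected, like the discontinuous area 602.
    """
    region = edge_map[top:top + height, left:left + width]
    return not bool(np.any(region))

# Example: a synthetic 100x100 edge map with one vertical "finger gap" edge.
edges = np.zeros((100, 100), dtype=np.uint8)
edges[:, 60] = 255
print(is_continuous_area(edges, 10, 10, 40, 40))   # True  (no edge inside)
print(is_continuous_area(edges, 10, 40, 40, 40))   # False (crosses the edge column)
```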
[0025] According to the user and application requirements, the shape and the area of the external display are determined in the continuous area. Figs. 7A-7C show an example of changing the position and shape of the external display area 120. In this example, an email is projected on the back of the hand 310. The projection area 702 is enlarged according to the email content (see Fig. 7B). As the user likes, the shape is changed from the rectangle 702 to the circle 703 (see Fig. 7C).
[0026] Since, in an actual situation, the relative position of the smart watch and the back of the hand is always changing, the parameters of the projector should be adjusted in real time. Figs. 8A and 8B illustrate an example in which the relative position is calculated in the surface plane 801 and the vertical plane 802. In Fig. 8A, camera 301 and camera 302 capture different images of the back of the hand. Based on the fact that the distance between the cameras 301, 302 is fixed, the relative position of the projector in the surface plane 801 is calculated by analyzing the two different images. In Fig. 8B, camera 301 and camera 302 capture different images of the back of the hand. In the same way, based on the fact that the distance between the cameras 301, 302 is fixed, the relative position of the projector in the vertical plane 802 is calculated by analyzing the two different images. On the basis of the relative position information and the actual area with the required shape, the parameters such as the projection angle and the distortion correction are set.
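The specification only states that the relative position is obtained from the two camera images and the fixed camera spacing. A standard way to realize this is stereo triangulation on a rectified camera pair; the sketch below is such an illustration, and the focal length, baseline and principal point values are assumed example numbers rather than parameters of the smart watch 300.

```python
import numpy as np

def triangulate_point(u_left: float, u_right: float, v: float,
                      focal_px: float, baseline_mm: float,
                      cx: float, cy: float) -> np.ndarray:
    """Recover the 3-D position of a surface point from a rectified stereo pair.

    u_left / u_right : horizontal pixel coordinate of the same surface point
                       in the images of camera 301 and camera 302.
    v                : vertical pixel coordinate (shared after rectification).
    focal_px         : focal length in pixels; baseline_mm: camera spacing.
    cx, cy           : principal point of the rectified cameras.
    Returns [x, y, z] in millimetres: x and y locate the point roughly in the
    surface plane and vertical plane, z is the distance from the cameras.
    """
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("the point must have positive disparity")
    z = focal_px * baseline_mm / disparity
    x = (u_left - cx) * z / focal_px
    y = (v - cy) * z / focal_px
    return np.array([x, y, z])

# Example with assumed camera parameters: a disparity of 200 px at f = 700 px
# and a 40 mm baseline puts the back of the hand about 140 mm away.
print(triangulate_point(500, 300, 255, focal_px=700, baseline_mm=40, cx=320, cy=240))
```

Repeating this for several points on the surface gives the relative pose of the projector and the surface from which the projection angle and distortion correction can be derived.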
[0027] In order to project the required shape, the mask 307 mounted on the projector 102 is configured manually or automatically. As shown in the example in Fig. 9, with the mask 307, the required hexagonal external display area 901 is projected on the back of the hand 310.
[0028] In detecting the feature points of the virtual input object, the index finger of the opposite hand 320 is detected. In tracking the operation of the virtual input object, the relative positions of the feature points in the surface plane 801 and the vertical plane 802 are calculated with the two cameras 301, 302 using the same principle as calculating the relative position of the projector 102. As shown in Fig. 10, a predefined number of points that have the smallest distances between the finger 1001 and the surface 1002 are detected as the feature points.
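The "smallest distance" rule of Fig. 10 can be written compactly once per-point distances to the surface are available (for example, from the stereo calculation above). The following numpy sketch is an illustrative selection step; the number of feature points k and the sample data are assumptions.

```python
import numpy as np

def select_feature_points(candidate_points: np.ndarray,
                          distances_to_surface: np.ndarray,
                          k: int = 5) -> np.ndarray:
    """Return the k candidate points on the finger 1001 that lie closest
    to the surface 1002, following the rule illustrated in Fig. 10.

    candidate_points     : (N, 3) array of 3-D points on the input object,
                           e.g. from stereo triangulation of the fingertip.
    distances_to_surface : (N,) array of each point's distance to the surface.
    """
    order = np.argsort(distances_to_surface)
    return candidate_points[order[:k]]

# Example with made-up points: the three points nearest the surface are kept.
pts = np.array([[0, 0, 12.0], [1, 0, 3.0], [2, 1, 5.0], [3, 1, 1.5], [1, 2, 9.0]])
dist = pts[:, 2]                       # here the z value stands in for the distance
print(select_feature_points(pts, dist, k=3))
```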
[0029] In detecting the input operation start and end, the detecting process is shown in Figs. 11A and 11B. When the input object is approaching the surface (S1101), and all the relative distances or the average relative distance is smaller than the threshold (S1103), the input operation start is detected (S1105). When the input object is leaving the surface (S1111), and all the relative distances or the average relative distance is larger than the threshold (S1113), the input operation end is detected (S1115). Through tracking the operation of the feature points of the virtual input object, the input information is recognized. For example, in Fig. 12, the handwriting input "A" 1201 can be recognized and input to the smart watch 300.
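The threshold tests of Figs. 11A and 11B amount to a small state machine over the average feature point distance from frame to frame. The Python sketch below illustrates one possible form of it; the start/end thresholds and the simple approaching/leaving test are assumptions chosen for clarity, not values from the specification.

```python
import numpy as np

class OperationDetector:
    """Detect input operation start/end from feature point distances
    (Figs. 11A / 11B): start when the object approaches the surface and the
    average distance drops below a threshold, end when it leaves and the
    average distance rises above a threshold. Threshold values are illustrative."""

    def __init__(self, start_threshold_mm=2.0, end_threshold_mm=3.0):
        self.start_threshold = start_threshold_mm
        self.end_threshold = end_threshold_mm
        self.previous_avg = None
        self.operating = False

    def update(self, feature_point_distances) -> str:
        avg = float(np.mean(feature_point_distances))
        event = "none"
        if self.previous_avg is not None:
            approaching = avg < self.previous_avg
            leaving = avg > self.previous_avg
            if not self.operating and approaching and avg < self.start_threshold:
                self.operating, event = True, "start"    # S1101-S1105
            elif self.operating and leaving and avg > self.end_threshold:
                self.operating, event = False, "end"     # S1111-S1115
        self.previous_avg = avg
        return event

# Example: the finger approaches, touches, then leaves the surface.
detector = OperationDetector()
for frame in ([9.0, 8.5], [4.0, 3.5], [1.5, 1.0], [1.2, 1.1], [4.0, 4.5]):
    print(detector.update(frame))   # none, none, start, none, end
```

Tracking the feature points between the "start" and "end" events then yields the stroke, such as the handwriting "A" of Fig. 12, to be recognized as input.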
[0030] (Third embodiment)
The architecture of the third embodiment for the case of a smart watch 1300 is illustrated in Fig. 13. Fig. 14 illustrates a block diagram of the smart watch 1300. In this embodiment, the smart watch 1300 contains the computing device, the memory device, the sensors 408 and the communication unit 1301 in the watch body. One projector is mounted in the watch body for projecting the external display. Two cameras 301, 302 are mounted at both sides of the projector 102 for recognizing the operation in virtual input. The screen and the buttons in the watch body form the internal HMI 104. The external display area is on the surface of the back of the hand. The virtual input object is the opposite hand. In the watch body, the control unit 303 is implemented in the computing device and memory device. The control unit 303 can use the information from the sensors for assisting the external display and virtual input. In this embodiment, the communication unit 1401 connects to a remote computing device 1410, such as the cloud, which assists with the computing and with controlling the external display and virtual input.
[0031] Fig. 15 illustrates the working process of this embodiment. The cameras 301, 302 capture the images in which the back of the hand to be projected on is contained. Through detecting and analyzing the feature of the back of the hand (S1501), the continuous area which is suitable for external display on the surface is detected (S1503). In the continuous area, the external display area and shape are determined according to user and application requirements (S1505). The image information, such as the color and brightness, of the selected area on the surface is recorded. The relative position of the projector 102 and the surface is calculated in the surface plane and the vertical plane. Based on the external display area and shape determination, and the relative position of the projector and the surface, the parameters of the projector are set (S1507). The external display content (image) is projected onto the surface by the projector 102 (S1509). In order to project the required shape, the method of background imitation is utilized. At the same time, the cameras 301, 302 capture the images in order to detect the feature points of the virtual input object, which is the fingertip (S1511). The feature points are tracked for recognizing the operation of the virtual input object (S1513). The input information input into the smart watch 1300 is recognized (S1515). In an actual implementation, this process is repeated continuously in real time.
[0032] Fig. 16 illustrates an example of the background imitation for projecting the required shape on the surface. The background imitation does not need to change the actual projection area. In the required area, the content to be displayed in the external display area 1601 is projected. An imitation image that is similar to the projection surface is projected on the rest of the area, which is called the "background imitation area". The effect is that the background imitation area 1602 looks the same as the rest of the surface, which does not influence the user experience.
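In software terms, the background imitation of Fig. 16 can be seen as composing the full projector frame from the display content and a fill derived from the recorded surface appearance (step S1505). The numpy sketch below shows one such composition under the assumption that a single average surface colour is an adequate imitation; a real imitation image could of course be more elaborate.

```python
import numpy as np

def compose_projection_frame(content: np.ndarray,
                             content_mask: np.ndarray,
                             surface_image: np.ndarray) -> np.ndarray:
    """Build the projector frame for background imitation (Fig. 16).

    content       : (H, W, 3) image to show in the external display area 1601.
    content_mask  : (H, W) boolean mask, True where the content should appear.
    surface_image : (H, W, 3) camera image of the projection surface, whose
                    colour/brightness was recorded in step S1505.
    Outside the mask, the frame imitates the surface so that the background
    imitation area 1602 looks the same as the rest of the surface.
    """
    imitation_colour = surface_image[~content_mask].mean(axis=0)  # average surface tone
    frame = np.empty_like(content, dtype=np.float64)
    frame[content_mask] = content[content_mask]
    frame[~content_mask] = imitation_colour
    return frame.astype(np.uint8)

# Example: a 4x4 frame whose upper-left 2x2 block is the display content.
content = np.full((4, 4, 3), 255, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
surface = np.full((4, 4, 3), 180, dtype=np.uint8)
print(compose_projection_frame(content, mask, surface)[:, :, 0])
```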
[0033] (Fourth embodiment)
The architecture of the fourth embodiment for the case of a smart wristband 1700 is illustrated in Fig. 17. Fig. 18 illustrates a block diagram of the smart wristband 1700. In this embodiment, the smart wristband 1700 contains the computing device, the memory device and sensors in the wristband body 1707. Two projectors 1701, 1702 are mounted in the wristband body 1707 for projecting the external display. Two cameras 1703, 1704 are mounted for recognizing the operation in virtual input. The indicator lights 1705 and the buttons 1706 in the wristband body 1707 form the internal HMI. The external display area is positioned on the back of the hand. The dedicated stylus 1710 can be used as a virtual input object. In the wristband body 1707, the control unit 1803 is implemented in the computing device and memory device. The control unit 1803 can use the information from sensors 1808 for assisting the external display and virtual input.
[0034] The remaining components and operations are the same as those in the third embodiment and are denoted by the same reference numerals, and a detailed description thereof will be omitted.
[0035] In this embodiment, each camera is connected to one computing device (CD) 1801, 1802, which is used to distribute the computing and assist the control performed by the control unit 1803.
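The division of labour between the computing devices 1801, 1802 and the control unit 1803 can be mimicked in software with one worker per camera whose partial results the control unit merges. The threaded Python sketch below is only a structural illustration; the per-camera analysis shown is a trivial stand-in for the actual stereo and feature point processing.

```python
from concurrent.futures import ThreadPoolExecutor

def process_camera_frame(camera_id: int, frame) -> dict:
    """Stand-in for a per-camera computing device (CD) 1801 / 1802: each CD
    analyzes its own camera's frame and reports a partial result."""
    # Hypothetical lightweight analysis; a real CD would run the feature point
    # detection on its camera's image.
    return {"camera": camera_id, "num_pixels": sum(len(row) for row in frame)}

def control_unit_merge(partial_results) -> dict:
    """Stand-in for control unit 1803 combining the two CDs' outputs."""
    return {r["camera"]: r["num_pixels"] for r in partial_results}

frames = {1801: [[0] * 4] * 3, 1802: [[0] * 4] * 3}   # two dummy camera frames
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(process_camera_frame, cid, f) for cid, f in frames.items()]
    merged = control_unit_merge(f.result() for f in futures)
print(merged)   # {1801: 12, 1802: 12}
```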
[0036] Fig. 19 illustrates an example of the background imitation for projecting the required shape on the surface. The background imitation does not need to change the actual projection area. In the required area, the content to be displayed in the external display area 1901 is projected from projector 1701. An imitation image that is similar to the projection surface is projected on the rest of the area, which is called the "background imitation area." The content of the background imitation area 1902 is projected from projector 1702. The effect is that the background imitation area 1902 looks the same as the rest of the surface, which does not influence the user experience.

Claims

Claims
An information processing apparatus comprising:
at least one projector;
at least two cameras; and
a control unit for controlling said at least one projector to display an image on a surface outside of the apparatus and controlling said at least two cameras to generate a virtual input.
The apparatus according to claim 1, wherein a relative position of projector and the surface is calculated in the surface plane and the vertical plane.
The apparatus according to claim 1, wherein a feature of the surface is analyzed for detecting a continuous area which is suitable for display on the surface.
The apparatus according to claim 1, wherein an external display area is determined according to a user and/or application requirement.
The apparatus according to claim 4, wherein parameters of said at least one projector are set based on the external display area, and a relative position of the projector and the surface.
The apparatus according to claim 5, wherein a shape of the external display area is determined by setting a mask or by using a background imitation.
The apparatus according to any one of claims 1 to 6, wherein feature points of the virtual input are detected as points which have smallest distance between an input object and the surface.
The apparatus according to claim 1, wherein when the input object is approaching the surface, and all the relative distances or the average relative distance is smaller than the threshold, the input operation start is detected, and when the input object is leaving the surface, and all the relative distances or the average relative distance is larger than the threshold, the input operation end is detected.
A method for controlling an information processing apparatus having at least one projector and at least two cameras, comprising:
controlling said at least one projector to display an image on a surface outside of the apparatus and controlling said at least two cameras to generate a virtual input.
A program for causing an information processing apparatus having at least one projector and at least two cameras, to execute: controlling said at least one projector to display an image on a surface outside of the apparatus and controlling said at least two cameras to generate a virtual input.
PCT/JP2015/059822 2015-03-23 2015-03-23 Information processing apparatus, information processing method, and program WO2016151869A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2015/059822 WO2016151869A1 (en) 2015-03-23 2015-03-23 Information processing apparatus, information processing method, and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2015/059822 WO2016151869A1 (en) 2015-03-23 2015-03-23 Information processing apparatus, information processing method, and program

Publications (1)

Publication Number Publication Date
WO2016151869A1 true WO2016151869A1 (en) 2016-09-29

Family

ID=56978046

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/059822 WO2016151869A1 (en) 2015-03-23 2015-03-23 Information processing apparatus, information processing method, and program

Country Status (1)

Country Link
WO (1) WO2016151869A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3644162A2 (en) * 2018-10-18 2020-04-29 Karl Storz SE & Co. KG Method and system for controlling devices in a sterile environment
JP2022131024A (en) * 2021-02-26 2022-09-07 セイコーエプソン株式会社 Display method and program

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120293402A1 (en) * 2011-05-17 2012-11-22 Microsoft Corporation Monitoring interactions between two or more objects within an environment
WO2013028280A2 (en) * 2011-08-19 2013-02-28 Qualcomm Incorporated Dynamic selection of surfaces in real world for projection of information thereon
US20140078378A1 (en) * 2011-05-25 2014-03-20 Obzerv Technologies Unc. Active Imaging Device Having Field of View and Field of Illumination With Corresponding Rectangular Aspect Ratios
US20140292648A1 (en) * 2013-04-02 2014-10-02 Fujitsu Limited Information operation display system, display program, and display method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120293402A1 (en) * 2011-05-17 2012-11-22 Microsoft Corporation Monitoring interactions between two or more objects within an environment
US20140078378A1 (en) * 2011-05-25 2014-03-20 Obzerv Technologies Unc. Active Imaging Device Having Field of View and Field of Illumination With Corresponding Rectangular Aspect Ratios
WO2013028280A2 (en) * 2011-08-19 2013-02-28 Qualcomm Incorporated Dynamic selection of surfaces in real world for projection of information thereon
US20140292648A1 (en) * 2013-04-02 2014-10-02 Fujitsu Limited Information operation display system, display program, and display method

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3644162A2 (en) * 2018-10-18 2020-04-29 Karl Storz SE & Co. KG Method and system for controlling devices in a sterile environment
JP2022131024A (en) * 2021-02-26 2022-09-07 セイコーエプソン株式会社 Display method and program
JP7287409B2 (en) 2021-02-26 2023-06-06 セイコーエプソン株式会社 Display method and program

Similar Documents

Publication Publication Date Title
EP3090331B1 (en) Systems with techniques for user interface control
CN110199251B (en) Display device and remote operation control device
WO2017215375A1 (en) Information input device and method
US20160320855A1 (en) Touch fee interface for augmented reality systems
WO2020103526A1 (en) Photographing method and device, storage medium and terminal device
WO2017036035A1 (en) Screen control method and device
CN106406710A (en) Screen recording method and mobile terminal
WO2018072339A1 (en) Virtual-reality helmet and method for switching display information of virtual-reality helmet
WO2021035646A1 (en) Wearable device and control method therefor, gesture recognition method, and control system
JP2014048937A (en) Gesture recognition device, control method thereof, display equipment, and control program
US11886643B2 (en) Information processing apparatus and information processing method
WO2019033322A1 (en) Handheld controller, and tracking and positioning method and system
CN105306819B (en) A kind of method and device taken pictures based on gesture control
CN109839827B (en) Gesture recognition intelligent household control system based on full-space position information
US11816924B2 (en) Method for behaviour recognition based on line-of-sight estimation, electronic equipment, and storage medium
WO2018198499A1 (en) Information processing device, information processing method, and recording medium
JP4870651B2 (en) Information input system and information input method
JP4985531B2 (en) Mirror system
WO2016151869A1 (en) Information processing apparatus, information processing method, and program
JPWO2018150569A1 (en) Gesture recognition device, gesture recognition method, projector including gesture recognition device, and video signal supply device
JP2016071401A (en) Position detection apparatus, projector, and position detection method
WO2021004413A1 (en) Handheld input device and blanking control method and apparatus for indication icon of handheld input device
US20220244788A1 (en) Head-mounted display
US9761009B2 (en) Motion tracking device control systems and methods
US11054941B2 (en) Information processing system, information processing method, and program for correcting operation direction and operation amount

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15886421

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15886421

Country of ref document: EP

Kind code of ref document: A1