
WO2016199248A1 - Information presentation system and information presentation method - Google Patents


Info

Publication number
WO2016199248A1
WO2016199248A1 (PCT/JP2015/066756)
Authority
WO
WIPO (PCT)
Prior art keywords
information
information presentation
user
processing
presentation system
Prior art date
Application number
PCT/JP2015/066756
Other languages
French (fr)
Japanese (ja)
Inventor
竜志 鵜飼
大西 邦一
瀬尾 欣穂
大内 敏
佑哉 大木
Original Assignee
日立マクセル株式会社 (Hitachi Maxell, Ltd.)
Priority date
Filing date
Publication date
Application filed by 日立マクセル株式会社 (Hitachi Maxell, Ltd.)
Priority to PCT/JP2015/066756
Publication of WO2016199248A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61F: FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
    • A61F 9/00: Methods or devices for treatment of the eyes; Devices for putting in contact-lenses; Devices to correct squinting; Apparatus to guide the blind; Protective devices for the eyes, carried on the body or in the hand
    • A61F 9/08: Devices or methods enabling eye-patients to replace direct visual perception by another kind of perception
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H: PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H 3/00: Appliances for aiding patients or disabled persons to walk about
    • A61H 3/06: Walking aids for blind persons
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00: Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/26: Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00 specially adapted for navigation in a road network
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G 1/00: Traffic control systems for road vehicles
    • G08G 1/005: Traffic control systems for road vehicles including pedestrian guidance indicator

Definitions

  • The present invention relates to an information presentation system and an information presentation method for presenting information on a user's surrounding environment.
  • As a conventional technique of this kind, the object discrimination notification apparatus disclosed in Patent Document 1 below is known.
  • This object discrimination notification device has the wearer wear right-eye and left-eye television cameras facing the direction of the face, takes in the right-eye and left-eye video data from those cameras, extracts the outline of an object based on that data, and generates an audio information signal representing the position and name of the object based on the right-eye and left-eye outline data.
  • The present invention has been made in view of the actual situation of such conventional techniques, and an object of the present invention is to provide an information presentation system and an information presentation method capable of sufficiently conveying information on the user's surrounding environment.
  • To achieve the above object, an information presentation system of the present invention is an information presentation system for presenting information on a user's surrounding environment, and comprises: an imaging device that images the surrounding environment; a recording device that records predetermined possession information held by objects included in the surrounding environment; a processing device that, based on the captured image captured by the imaging device and the possession information recorded in the recording device, recognizes a preset object in the captured image, extracts the possession information represented by the object, and generates a predetermined information signal; and an output device that performs output corresponding to the information signal generated by the processing device.
  • Similarly, the information presentation method of the present invention is an information presentation method used in an information presentation system that includes an imaging device that images a user's surrounding environment, a recording device that records predetermined possession information held by objects included in the surrounding environment, a processing device connected to the imaging device and the recording device to process information on the surrounding environment, and an output device connected to the processing device. The method comprises: a step in which the processing device acquires the captured image from the imaging device; a step in which the processing device acquires the predetermined possession information from the recording device; a step in which the processing device, based on the captured image and the predetermined possession information acquired in these steps, recognizes a preset object in the captured image, extracts the possession information represented by the object, and generates a predetermined information signal; and a step in which the output device performs output corresponding to the information signal generated by the processing device.
  • Brief description of the drawings (recoverable captions): a diagram showing an example of the communication relationship between the information presentation apparatus shown in FIG. 1 and an external apparatus; a flowchart showing the flow of operation; a diagram showing a look-up table given as another example of the possession information recorded in the recording device; functional block diagrams showing the main functions of the processing devices of the information presentation device and the information processing device according to the second embodiment; a functional block diagram showing the main functions of the processing device of the information presentation device according to the fourth embodiment; and a diagram explaining the direction determination processing by the direction determination unit shown in FIG. 13, showing an example of the captured image of the camera when the user's face is not facing the traveling direction.
  • The information presentation system and the information presentation method of the present invention can provide visual assistance not only to visually impaired persons but also to a wide variety of people, such as persons with visual characteristics such as myopia, hyperopia, and astigmatism, persons who have difficulty distinguishing colors, and persons with standard visual characteristics.
  • FIG. 2 is a diagram showing a configuration of the information presentation system 100 according to the first embodiment of the present invention.
  • As shown in FIGS. 1(A) and 1(B), the information presentation system 100 according to the first embodiment of the present invention is composed of, for example, an information presentation device 1 that is worn by the user 2 and performs processing for presenting information on the surrounding environment of the user 2.
  • The information presentation device 1 includes a camera 11A, a recording device 12, a processing device 13, an audio output device 14A, a communication device 15, an I/F (interface) 16, and a bus 17. The camera 11A functions as an imaging device that captures the environment around the user 2, and is mounted on the information presentation apparatus 1 so that it can capture an image in the direction in which the face of the user 2 is facing.
  • For example, the camera 11A may be disposed around the face of the user 2 as shown in FIG. 1(A), or may be attached directly to the collar of the clothes worn by the user 2 as shown in FIG. 1(B).
  • As the camera 11A, a color camera configured to capture visible-light images can be used. With this configuration, a full-color image of the environment surrounding the user 2 can be acquired.
  • The recording device 12 records information necessary for the processing device 13 to process information on the surrounding environment of the user 2, such as the predetermined possession information held by objects included in the surrounding environment. The recording device 12 includes hardware such as an HDD 12A, which is a nonvolatile storage medium capable of reading and writing information, and stores information to be referred to by the object recognition unit 132 and the object possession information recognition unit 133 (described later) of the processing device 13 during processing, an OS (Operating System), various control and processing programs such as an information presentation program, application programs, and the like.
  • The predetermined possession information recorded in the recording device 12 will be described in detail later.
  • The processing device 13 processes information on the surrounding environment of the user 2. This processing includes recognizing a preset object in the captured image based on the captured image captured by the camera 11A and the possession information recorded in the recording device 12, extracting the possession information represented by the object, and generating a predetermined information signal.
  • As the predetermined information signal to be presented to the user, for example, an audio signal obtained by synthesizing speech to be presented to the user 2 is used. The audio signal generation processing by the processing device 13 will be described in detail later.
  • The processing device 13 includes a CPU (Central Processing Unit) 13A that performs various calculations for processing information on the environment of the user 2, a ROM (Read Only Memory) 13B that stores programs for the calculations executed by the CPU 13A, and a RAM (Random Access Memory) 13C used as a working area.
  • The output device 14 performs output corresponding to the information signal generated by the processing device 13. In the present embodiment, the output device 14 is configured by, for example, an audio output device 14A that outputs audio corresponding to the audio signal generated by the processing device 13.
  • As this audio output device 14A, for example, headphones (first headphones) that transmit sound by air vibration, as shown in FIGS. 1(A) and 1(B), or, although not shown, an earphone (first earphone) that transmits sound by air vibration, or a speaker that transmits sound by air vibration, can be used.
  • With these, the user 2 can easily recognize the information on the surrounding environment presented by the information presentation device 1 by perceiving the air vibration from the headphones 14A, earphones, or speakers.
  • As the audio output device 14A, for example, headphones (second headphones) that transmit sound by bone conduction, earphones (second earphones) that output sound by bone conduction, or the like can also be used. In this case, the user 2 can easily recognize the information on the surrounding environment presented by the information presentation device 1 by perceiving bone vibration from the bone-conduction headphones or earphones. The audio output device 14A may be configured using the above-described headphones, earphones, or speakers alone, or by combining headphones, earphones, speakers, and the like.
  • The communication device 15 functions as the communication device on the information presentation device side that communicates with external devices, including the information processing device 3 (see FIG. 4) and the external command device 5 (see FIG. 4) described later.
  • The CPU 13A, ROM 13B, RAM 13C, and HDD 12A described above are connected to the I/F 16 via the bus 17, and the camera 11A, the audio output device 14A, and the communication device 15 are also connected to the I/F 16. Accordingly, various signals and information are exchanged among the camera 11A, the recording device 12, the processing device 13, the audio output device 14A, and the communication device 15 inside the information presentation device 1.
  • An information presentation program or the like stored in a recording medium such as the ROM 13B, the HDD 12A, or an optical disk (not shown) is read into the RAM 13C and operates under the control of the CPU 13A. Such software and hardware cooperate to form the functional blocks that realize the functions of the processing device 13 of the information presentation device 1.
  • FIG. 3 is a functional block diagram showing the main functions of the processing device 13.
  • the processing device 13 includes a control unit 131, an object recognition unit 132, an object possession information recognition unit 133, a voice synthesis unit 134, and a communication processing unit 135.
  • The control unit 131 controls the operation of each component of the information presentation device 1, namely the camera 11A, the recording device 12, the processing device 13, and the audio output device 14A, and transfers various signals and information between these components.
  • The object recognition unit 132 receives, from the control unit 131, the captured image that the control unit 131 acquired from the camera 11A, and recognizes the objects displayed in the captured image, that is, the targets. The object recognition unit 132 then aggregates the recognized object names into a set of object names and transmits this set of object names to the control unit 131.
  • The object possession information recognition unit 133 receives from the control unit 131 the captured image acquired from the camera 11A, the set of object names received from the object recognition unit 132, and the possession information recorded in the recording device 12, and recognizes and extracts the information held by each object recognized by the object recognition unit 132. Based on the extracted possession information, the object possession information recognition unit 133 then creates, for each recognized object, an object possession information character string that expresses the possession information represented by that object in words, aggregates these character strings into a set of object possession information character strings, and transmits the set to the control unit 131.
  • The speech synthesis unit 134 functions as a signal generator that generates the information signal to be presented to the user 2. Specifically, the speech synthesis unit 134 generates an audio signal by synthesizing the speech to be presented to the user 2 based on the set of object possession information character strings that the control unit 131 received from the object possession information recognition unit 133, and transmits the generated audio signal to the control unit 131. The control unit 131 then transmits the audio signal to the audio output device 14A, and audio is output from the audio output device 14A. By listening to this audio, the user 2 can grasp the information presented by the information presentation device 1, that is, information such as the state of objects existing in the surrounding environment of the user 2 and the contents written on those objects.
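  • The flow through these functional blocks can be sketched in code. The following Python sketch is illustrative only: the function names (recognize_objects, extract_possession_strings, synthesize_and_play) are hypothetical stand-ins for the object recognition unit 132, the object possession information recognition unit 133, and the speech synthesis unit 134 with the audio output device 14A, and the placeholder bodies are not part of the patent.

```python
from typing import List

# Hypothetical stand-ins for the functional blocks of FIG. 3; a real
# implementation would wrap the camera 11A, the look-up table 120,
# and a text-to-speech engine.
def recognize_objects(image) -> List[str]:
    return ["pedestrian traffic light", "signboard"]   # placeholder result

def extract_possession_strings(image, names: List[str]) -> List[str]:
    return [f"{name} recognized" for name in names]    # placeholder result

def synthesize_and_play(text: str) -> None:
    print(f"[audio] {text}")                           # stands in for 14A

def present_surroundings(image) -> None:
    """One pass of the pipeline coordinated by the control unit 131."""
    names = recognize_objects(image)                   # object recognition (S502)
    if not names:                                      # zero-object case (S503)
        return
    for text in extract_possession_strings(image, names):  # extraction (S504)
        synthesize_and_play(text)                      # synthesis and output (S505, S506)

present_surroundings(image=None)  # placeholder image for illustration
```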
  • The communication processing unit 135 performs communication processing for transmitting and receiving various signals and information to and from external devices via the communication device 15, in accordance with communication commands from the control unit 131. Examples of such external devices include the external command device 5 (see FIG. 4), which transmits commands to the information presentation device 1, and the information processing device 3 (see FIG. 4), which processes various types of information including information on the surrounding environment of the user 2.
  • FIG. 4 is a diagram showing an example of the communication relationship between the information presentation device 1 worn by the user 2 and the external devices 3 and 5.
  • The communication device 15 of the information presentation device 1 is communicatively connected to the external information processing device 3 via, for example, the Internet 401. The information presentation apparatus 1 can therefore periodically receive from the information processing apparatus 3 the information necessary for processing information on the environment of the user 2 and update the information in the recording apparatus 12 to the latest version, which increases the accuracy of the surrounding environment information processing by the processing device 13.
  • FIG. 5 is a flowchart showing an operation flow of the information presentation apparatus 1 according to the first embodiment of the present invention.
  • First, the camera 11A images the surrounding environment of the user 2 in an imaging step (hereinafter, “step” is abbreviated as “S”) S501.
  • FIG. 6 is a diagram illustrating an example of an image captured by the camera 11A.
  • As shown in FIG. 6, the captured image 110 captured by the camera 11A shows a vehicle traffic light 111, a pedestrian traffic light 112, a store 113, a signboard 114, a sidewalk 115, a roadway 116, and a road center line 117.
  • The camera 11A transmits the captured image obtained by imaging such a surrounding environment to the control unit 131, and the control unit 131 transmits the captured image received from the camera 11A to the object recognition unit 132.
  • Upon receiving the captured image from the control unit 131, the object recognition unit 132 recognizes the objects displayed in the captured image, for example as follows (S502).
  • First, the object recognition unit 132 performs edge detection on the captured image of the camera 11A. To do so, the object recognition unit 132 takes the RGB value of each pixel constituting the captured image as (r, g, b) and calculates, for each pixel, the value of a function f(r, g, b) that takes (r, g, b) as arguments. The function f(r, g, b) calculates a feature quantity of each pixel from (r, g, b). The object recognition unit 132 calculates the feature quantity of each pixel and identifies locations where the feature quantity changes sharply. As a specific method, for each pixel, the maximum of the differences between its feature quantity and the feature quantities of the pixels adjacent above, below, left, and right can be defined as the maximum difference amount, and a line connecting pixels with large maximum difference amounts can be used as an edge.
  • In the above description, edge detection by the object recognition unit 132 is performed by obtaining the value of a single feature quantity f(r, g, b) for each pixel of the captured image. However, edges may instead be detected from a plurality of feature quantities, for example by obtaining the values of feature quantities f1(r, g, b), f2(r, g, b), and f3(r, g, b) for each pixel and detecting edges from each of them, as sketched in the example below. This can improve the accuracy of edge detection by the object recognition unit 132.
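  • As a concrete illustration of the maximum-difference-amount criterion described above, the following sketch computes a per-pixel feature quantity and marks pixels whose feature value differs sharply from one of the four adjacent pixels. The luminance-style feature function and the threshold are assumptions made for illustration; the patent does not specify a particular f(r, g, b) or threshold.

```python
import numpy as np

def edge_map(img_rgb: np.ndarray, f, threshold: float = 30.0) -> np.ndarray:
    """Mark pixels whose feature quantity f(r, g, b) differs sharply from
    at least one of the pixels adjacent above, below, left, and right
    (the "maximum difference amount" criterion)."""
    r = img_rgb[..., 0].astype(np.float64)
    g = img_rgb[..., 1].astype(np.float64)
    b = img_rgb[..., 2].astype(np.float64)
    feat = f(r, g, b)
    diff = np.zeros_like(feat)
    # Maximum absolute difference against each of the four neighbours.
    diff[1:, :] = np.maximum(diff[1:, :], np.abs(feat[1:, :] - feat[:-1, :]))
    diff[:-1, :] = np.maximum(diff[:-1, :], np.abs(feat[:-1, :] - feat[1:, :]))
    diff[:, 1:] = np.maximum(diff[:, 1:], np.abs(feat[:, 1:] - feat[:, :-1]))
    diff[:, :-1] = np.maximum(diff[:, :-1], np.abs(feat[:, :-1] - feat[:, 1:]))
    return diff > threshold

# Example with a single luminance-like feature quantity as f(r, g, b).
img = np.random.randint(0, 256, size=(120, 160, 3), dtype=np.uint8)
edges = edge_map(img, lambda r, g, b: 0.299 * r + 0.587 * g + 0.114 * b)
```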
  • FIG. 7 is a diagram showing an example of an image 110a obtained by performing edge detection on the captured image 110 of the camera 11A shown in FIG.
  • As a result of this edge detection, the image 110a illustrated in FIG. 7 is obtained. In the image 110a, an edge 111a of the vehicle traffic light, an edge 112a of the pedestrian traffic light, an edge 113a of the store, an edge 114a of the signboard, an edge 115a of the sidewalk, an edge 116a of the roadway, and an edge 117a of the road center line are detected. Next, the object recognition unit 132 recognizes the objects included in the captured image based on the result of the edge detection on the captured image of the camera 11A.
  • Specifically, the object recognition unit 132 groups the edges that constitute the edge-detection result image. For example, a cluster of edges that lies in one part of the image and has few points of contact with other edges is treated as one group. This makes it possible to detect the edges of each object included in the captured image of the camera 11A.
  • When no object edge is detected from the captured image, the object recognition unit 132 transmits a zero-object detection signal indicating that fact to the control unit 131. In the following, the description assumes that the object recognition unit 132 has detected one or more object edges from the captured image.
  • Next, the object recognition unit 132 acquires, for example, the lookup table 120 (see FIG. 8) as the predetermined possession information from the recording device 12 via the control unit 131. To do so, the object recognition unit 132 transmits a lookup-table request signal requesting the lookup table 120 to the control unit 131. In response to the lookup-table request signal, the control unit 131 receives the lookup table 120 from the recording device 12 and transmits it to the object recognition unit 132.
  • FIG. 8 is a diagram showing a look-up table 120 given as an example of possession information recorded in the recording device 12.
  • The lookup table 120 includes, for example, an edge pattern item indicating the edge of an object, an object name item indicating the name of the object, an object possession information type item indicating the type of information held by the object, a color acquisition mode item indicating, when the object possession information type is a color, from which area of the object the color is acquired, an object information possession area item indicating the area in which the object holds its information, and a character string template item indicating the character string template used when creating the object possession information character string; these items are associated with one another. As the color acquisition mode, for example, a specific area color acquisition mode, in which the color of a specific area of the object is acquired, or a bright area color acquisition mode, in which the color of the brightest area of the object is acquired, can be used. In the lookup table 120 shown in FIG. 8, when the object possession information type is not a color, there is no need to specify a color acquisition mode, so “-” is shown.
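  • As an illustration only, the lookup table 120 could be modeled as records like the following; the field names and the two sample entries are assumptions based on the pedestrian traffic light and signboard examples in this description, not the patent's actual data format.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class LookupEntry:
    edge_pattern: str          # identifier of the stored edge pattern
    object_name: str           # e.g. "pedestrian traffic light"
    info_type: str             # "character" or "color"
    color_mode: Optional[str]  # "specific_area", "bright_area", or None ("-")
    info_area: str             # area in which the object holds its information
    template: str              # e.g. "pedestrian traffic light is <color>."

LOOKUP_TABLE: List[LookupEntry] = [
    LookupEntry("edge_ped_signal", "pedestrian traffic light", "color",
                "bright_area", "red/blue lighting areas",
                "pedestrian traffic light is <color>."),
    LookupEntry("edge_signboard", "signboard", "character",
                None, "front surface", "there is a <character>."),
]
```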
  • Upon acquiring such a lookup table 120 from the recording device 12, the object recognition unit 132 searches, for each edge of each object detected from the captured image, for the most similar edge pattern among the edge patterns in the lookup table 120. Next, the object recognition unit 132 arranges the object names corresponding to the edge patterns found for each object in ascending order of the distance between the center of the captured image and the center of the object, aggregates them as a set of object names, and transmits the set of object names to the control unit 131. In the following, the object names included in the set are referred to, in this order, as the first object name, the second object name, and so on; a sketch of this ordering step follows.
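  • A minimal sketch of the ordering step, assuming each recognized object carries the center coordinates of its edge group; the data shapes are illustrative, not taken from the patent.

```python
import math
from typing import List, Tuple

def order_by_center_distance(
    objects: List[Tuple[str, Tuple[float, float]]],  # (name, (cx, cy))
    image_size: Tuple[int, int],                     # (width, height)
) -> List[str]:
    """Sort object names by the distance between the object's center and
    the center of the captured image, closest first."""
    icx, icy = image_size[0] / 2.0, image_size[1] / 2.0
    ordered = sorted(
        objects, key=lambda o: math.hypot(o[1][0] - icx, o[1][1] - icy)
    )
    return [name for name, _center in ordered]

# The object nearest the image center becomes the first object name.
names = order_by_center_distance(
    [("signboard", (300.0, 90.0)), ("pedestrian traffic light", (170.0, 115.0))],
    image_size=(320, 240),
)  # -> ["pedestrian traffic light", "signboard"]
```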
  • Next, the control unit 131 determines whether a zero-object detection signal has been received from the object recognition unit 132 (S503). If the control unit 131 determines that the zero-object detection signal has been received (S503 / Yes), the operation of the information presentation apparatus 1 according to the first embodiment of the present invention ends. On the other hand, if the control unit 131 determines that it has not received the zero-object detection signal, that is, that it has received a set of object names (S503 / No), it transmits the captured image received from the camera 11A and the set of object names received from the object recognition unit 132 to the object possession information recognition unit 133.
  • Upon receiving the captured image and the set of object names from the control unit 131, the object possession information recognition unit 133 recognizes and extracts the possession information represented by the objects shown in the captured image, for example as follows (S504).
  • First, the object possession information recognition unit 133 acquires the lookup table 120 from the recording device 12 via the control unit 131: it transmits a lookup-table request signal to the control unit 131, and in response the control unit 131 receives the lookup table 120 from the recording device 12 and transmits it to the object possession information recognition unit 133.
  • Next, the object possession information recognition unit 133 searches the lookup table 120 for the first object name included in the set of object names, and obtains the object possession information type, color acquisition mode, object information possession area, and character string template corresponding to the found first object name. The object possession information recognition unit 133 then recognizes and extracts the possession information represented by the object from the captured image in accordance with the acquired object possession information type, color acquisition mode, object information possession area, and character string template.
  • FIG. 9 is a flowchart showing the flow of object possession information extraction processing by the object possession information recognition unit 133.
  • First, the object possession information recognition unit 133 determines whether the acquired object possession information type is a character (S901). If it determines that the object possession information type is a character (S901 / Yes), the object possession information recognition unit 133 recognizes, in accordance with the acquired object information possession area, the characters present in the specific area of the object corresponding to the first object name displayed in the captured image (S902). The object possession information recognition unit 133 then creates an object possession information character string based on the acquired character string template (S903).
  • For example, the object possession information recognition unit 133 recognizes the characters on the front surface 121 of the signboard displayed in the captured image. If the object possession information recognition unit 133 recognizes, for example, “florist” as the characters on the front surface 121 of the signboard, it refers to the character string template “there is a <character>.” and creates the character string “there is a florist.” by replacing the <character> placeholder in the template with the recognized “florist”.
  • Another example is a sign board attached to a vehicle traffic signal or a pedestrian traffic signal. If the object possession information recognition unit 133 recognizes, for example, “push-button type” or “please press the push button” as the characters on such a sign board, it creates the character string “there is a push-button signal.” by replacing the <character> placeholder in the template with “push-button signal”, which corresponds to the recognized characters.
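  • The template substitution of S903 can be sketched as below; the <placeholder> syntax follows the examples above, and the helper name is a hypothetical choice, not from the patent.

```python
def fill_template(template: str, **values: str) -> str:
    """Replace <placeholder> markers in a character string template with
    the values recognized from the captured image."""
    for key, value in values.items():
        template = template.replace(f"<{key}>", value)
    return template

# Usage with the examples from the text:
s1 = fill_template("there is a <character>.", character="florist")
s2 = fill_template("pedestrian traffic light is <color>.", color="red")
```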
  • On the other hand, if the object possession information recognition unit 133 determines that the object possession information type is not a character, that is, that it is a color (S901 / No), it determines whether the acquired color acquisition mode is the bright area color acquisition mode (S904). If the object possession information recognition unit 133 determines that the color acquisition mode is the bright area color acquisition mode (S904 / Yes), it acquires, in accordance with the acquired object information possession area, the colors of a plurality of predetermined areas of the object corresponding to the first object name displayed in the captured image, together with the brightness of those areas (S905).
  • As the brightness index of the areas acquired in S905, for example, the luminance of each area can be used. As another brightness index, it is also possible to first identify a plurality of predetermined colors in the object and then use a value obtained by multiplying the luminance of each area by a predetermined constant for each color. This can improve the accuracy with which the object possession information recognition unit 133 extracts object possession information.
  • Next, the object possession information recognition unit 133 compares the brightness of the acquired areas and determines the color of the brightest area among them (S906). The object possession information recognition unit 133 then creates an object possession information character string based on the acquired character string template (S907).
  • For example, in the lookup table 120, a color is set as the object possession information type corresponding to the object name “pedestrian traffic light”, and the bright area color acquisition mode is set as its color acquisition mode. As the object information possession area, a red lighting area 122 indicating that crossing is prohibited and a blue lighting area 123 indicating that crossing is permitted are set for the pedestrian traffic light. The object possession information recognition unit 133 therefore acquires the color and brightness of the red lighting area 122 and of the blue lighting area 123 of the pedestrian traffic light shown in the captured image.
  • The object possession information recognition unit 133 recognizes red and blue as the colors of the red lighting area 122 and the blue lighting area 123 of the pedestrian traffic light, respectively, and then determines whether the brightness of the red lighting area 122 is greater than the brightness of the blue lighting area 123. If it determines that the red lighting area 122 is brighter than the blue lighting area 123, the object possession information recognition unit 133 refers to the character string template “pedestrian traffic light is <color>.” and creates the character string “pedestrian traffic light is red.” by replacing the <color> placeholder with “red”, the brighter of the recognized colors. On the other hand, if it determines that the red lighting area 122 is not brighter, that is, darker, than the blue lighting area 123, the object possession information recognition unit 133 creates the character string “pedestrian traffic light is blue.” by replacing the <color> placeholder in the template with “blue”, the brighter of the recognized colors.
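  • A minimal sketch of the bright area color acquisition mode (S905 to S907) for the pedestrian traffic light example, assuming the two lighting areas are given as RGB pixel arrays; mean luminance is used as the brightness index, which is one of the options mentioned above.

```python
import numpy as np

def brightest_area_color(areas: dict) -> str:
    """areas maps a recognized color name ("red", "blue") to the RGB pixels
    of the corresponding lighting area; return the color of the brightest
    area, using mean luminance as the brightness index."""
    def mean_luminance(pixels: np.ndarray) -> float:
        r, g, b = pixels[..., 0], pixels[..., 1], pixels[..., 2]
        return float(np.mean(0.299 * r + 0.587 * g + 0.114 * b))
    return max(areas, key=lambda color: mean_luminance(areas[color]))

# A lit red lighting area 122 and a dark blue lighting area 123:
red_area = np.ones((10, 10, 3)) * np.array([230.0, 40.0, 40.0])
blue_area = np.ones((10, 10, 3)) * np.array([20.0, 20.0, 60.0])
color = brightest_area_color({"red": red_area, "blue": blue_area})
text = f"pedestrian traffic light is {color}."  # -> "... is red."
```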
  • If the object possession information recognition unit 133 determines in S904 that the color acquisition mode is not the bright area color acquisition mode, that is, that it is the specific area color acquisition mode (S904 / No), it acquires, in accordance with the acquired object information possession area, the color of the specific area of the object corresponding to the first object name displayed in the captured image (S908), and then creates an object possession information character string based on the acquired character string template (S909). For example, in the lookup table 120, a color is set as the object possession information type corresponding to the object name “road center line”, the specific area color acquisition mode is set as its color acquisition mode, and the entire area of the road center line is set as its object information possession area. If the object possession information recognition unit 133 recognizes, for example, white as the color of the road center line, it refers to the character string template “the center line of the road is <color>.” and creates the character string “the center line of the road is white.” by replacing the <color> placeholder with the recognized “white”.
  • When the object possession information recognition unit 133 has created an object possession information character string in S903, S907, or S909, it determines whether the creation of object possession information character strings has been completed for all objects corresponding to the first object name, the second object name, and so on displayed in the captured image (S910). If it determines that the creation of character strings for all objects has not been completed (S910 / No), the process returns to S901 and the processing of S901 to S909 described above is repeated for the objects corresponding to the remaining object names. On the other hand, if the object possession information recognition unit 133 determines in S910 that the creation of object possession information character strings for all objects has been completed (S910 / Yes), it ends the object possession information extraction processing.
  • Then, the object possession information recognition unit 133 aggregates the object possession information character strings created for each object name included in the set of object names into a set of object possession information character strings, and transmits this set of object possession information character strings to the control unit 131.
  • The control unit 131 transmits the set of object possession information character strings received from the object possession information recognition unit 133 to the speech synthesis unit 134.
  • The speech synthesis unit 134 generates audio signals by synthesizing speech that reads out each object possession information character string included in the set received from the control unit 131 (S505). The speech synthesis unit 134 then aggregates the audio signals generated for each object possession information character string into a set of synthesized speech and transmits this set of synthesized speech to the control unit 131. The control unit 131 transmits the set of synthesized speech received from the speech synthesis unit 134 to the audio output device 14A.
  • The audio output device 14A sequentially outputs the synthesized voices corresponding to the audio signals included in the set of synthesized speech, starting from the synthesized voice corresponding to the first object name (S506). By listening to the voice output from the audio output device 14A, the user 2 recognizes the information on the surrounding environment presented by the information presentation device 1.
  • The processes of S501 to S506 described above are performed repeatedly at predetermined time intervals. Since the information presentation apparatus 1 can thus present surrounding environment information to the user 2 at predetermined time intervals, convenience for the user 2 can be enhanced.
  • As described above, based on the captured image of the camera 11A and the lookup table 120 recorded in the recording device 12, information such as the state of the objects recognized by the object recognition unit 132 and the contents written on those objects can easily be conveyed to the user 2 via the audio output device 14A. Since information on the surrounding environment of the user 2 can thereby be sufficiently transmitted, excellent information presentation can be provided to the user 2.
  • In the above description, the object recognition unit 132 searches the edge patterns stored in the lookup table 120 for each edge of each object detected from the captured image, but the present invention is not limited to this case. A specific example is described in detail below with reference to FIG. 10.
  • FIG. 10 is a diagram showing a look-up table 120A given as another example of the retained information recorded in the recording device 12.
  • The lookup table 120A is obtained by replacing the edge pattern item of the lookup table 120 shown in FIG. 8 described above with an edge-and-color pattern item indicating the edge and color of an object.
  • That is, the object recognition unit 132 may detect not only the edge of each object but also the colors within the edge, and recognize each object by searching, for each detected edge of each object, for the most similar edge-and-color pattern among the edge-and-color patterns in the lookup table 120A shown in FIG. 10.
  • Specifically, after detecting the edges of each object, the object recognition unit 132 recognizes from the captured image the colors within the edges constituting each object. The object recognition unit 132 then acquires the lookup table 120A from the recording device 12 via the control unit 131: it transmits a lookup-table request signal to the control unit 131, and in response the control unit 131 receives the lookup table 120A from the recording device 12 and transmits it to the object recognition unit 132. Upon receiving the lookup table 120A, the object recognition unit 132 searches, for each edge of each object detected from the captured image, for the most similar edge-and-color pattern among the edge-and-color patterns in the lookup table 120A. Next, the object recognition unit 132 aggregates the object names corresponding to the most similar edge-and-color patterns found for each object into a set of object names and transmits the set to the control unit 131. Using not only the edge pattern but also the colors within the edge in this way can improve the object recognition accuracy of the object recognition unit 132.
  • Further, the object possession information recognition unit 133 may recognize the position of an object based on the captured image and create an object possession information character string that reflects the position of the object. For example, the object possession information recognition unit 133 can recognize from the captured image 110 that the imaged pedestrian traffic light 112 is ahead of the user 2, and create the character string “the pedestrian traffic light ahead is blue.” by replacing the <position> placeholder in the character string template “the <position> pedestrian traffic light is <color>.” with the recognized “ahead”. Likewise, the object possession information recognition unit 133 can recognize from the captured image 110 that the signboard 114 is on the left-hand side of the user 2 and reflect that position in the character string.
  • Further, the object possession information recognition unit 133 may have a temporary storage area that temporarily stores only one set of object possession information character strings, and may transmit a set of object possession information character strings to the control unit 131 as follows. First, after creating a set of object possession information character strings (hereinafter referred to as the first object possession information character string set for convenience), the object possession information recognition unit 133 determines whether a set of object possession information character strings created a predetermined time earlier (hereinafter referred to as the second object possession information character string set for convenience) is stored in the temporary storage area.
  • If the object possession information recognition unit 133 determines that the second object possession information character string set is stored in the temporary storage area, it determines whether the first object possession information character string set differs from the second object possession information character string set. If it determines that the first object possession information character string set differs from the second, the object possession information recognition unit 133 transmits to the control unit 131 a set consisting of the object possession information character strings that are included in the first set but not in the second set, that is, the character strings of the first set that are not shared with the second set. The object possession information recognition unit 133 then updates the temporary storage area to reflect these object possession information character strings.
  • On the other hand, if the object possession information recognition unit 133 determines that the first and second object possession information character string sets are the same, it transmits to the control unit 131 a zero object possession information detection signal indicating that possession information was not extracted for any object. Upon receiving the zero object possession information detection signal from the object possession information recognition unit 133, the control unit 131 ends the operation of the information presentation apparatus 1 according to the first embodiment of the present invention. If the object possession information recognition unit 133 determines that no second object possession information character string set is stored in the temporary storage area, it transmits the first set of object possession information character strings to the control unit 131 and stores the first set in the temporary storage area as the second object possession information character string set.
  • When the control unit 131 receives a set of object possession information character strings from the object possession information recognition unit 133, it transmits the received set to the speech synthesis unit 134. Since the information presentation apparatus 1 can thereby avoid the annoyance of presenting the same surrounding environment information to the user 2 multiple times, the efficiency of its presentation of surrounding environment information can be increased; the sketch below illustrates this filtering.
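  • A minimal sketch of this duplicate-suppression logic, modeling the sets of character strings as Python sets; the class and method names are illustrative, not from the patent.

```python
from typing import Optional, Set

class PossessionStringFilter:
    """Suppress object possession information character strings that were
    already presented in the previous cycle; the temporary storage area
    holds exactly one earlier set."""

    def __init__(self) -> None:
        self._previous: Optional[Set[str]] = None  # temporary storage area

    def filter(self, current: Set[str]) -> Set[str]:
        if self._previous is None:           # nothing stored yet: pass all
            self._previous = set(current)
            return set(current)
        if current == self._previous:        # same sets: zero possession info
            return set()
        new_only = current - self._previous  # strings not shared with before
        self._previous = set(current)        # update the temporary storage
        return new_only

f = PossessionStringFilter()
f.filter({"pedestrian traffic light is red."})   # presented
f.filter({"pedestrian traffic light is red."})   # suppressed (empty set)
f.filter({"pedestrian traffic light is blue."})  # presented as new
```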
  • In the above description, the object recognition unit 132 recognizes objects for all the edges detected from the captured image, and the object possession information extraction processing by the object possession information recognition unit 133 in S504, the audio signal generation processing by the speech synthesis unit 134 in S505, and the synthesized speech output processing by the audio output device 14A in S506 are performed for all of these objects. However, the present invention is not limited to this case. For example, the object recognition unit 132 may recognize objects for all edges detected from the captured image, while the object possession information extraction processing in S504, the audio signal generation processing in S505, and the synthesized speech output processing in S506 are performed for only one or several of the recognized objects.
  • In this case, it is desirable that the information presentation device 1 rank the importance of the plurality of objects using their object names, and perform the object possession information extraction processing by the object possession information recognition unit 133 in S504, the audio signal generation processing by the speech synthesis unit 134 in S505, and the synthesized speech output processing by the audio output device 14A in S506 in descending order of importance, as sketched below. This reduces the processing of S504 to S506 performed by the information presentation apparatus 1, and only information carefully selected by the information presentation apparatus 1 is presented to the user 2.
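  • One way such a ranking could work is a priority score per object name, as in the sketch below; the scores and the cutoff are purely illustrative assumptions, since the patent does not specify how importance is assigned.

```python
from typing import Dict, List

# Hypothetical importance scores per object name (higher = more important).
IMPORTANCE: Dict[str, int] = {
    "pedestrian traffic light": 100,
    "vehicle traffic light": 90,
    "road center line": 50,
    "signboard": 30,
}

def rank_by_importance(names: List[str], top_n: int = 3) -> List[str]:
    """Order recognized object names by importance and keep the top few,
    so that S504 to S506 run only for the most important objects."""
    ranked = sorted(names, key=lambda n: IMPORTANCE.get(n, 0), reverse=True)
    return ranked[:top_n]

targets = rank_by_importance(
    ["signboard", "pedestrian traffic light", "road center line"], top_n=2
)  # -> ["pedestrian traffic light", "road center line"]
```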
  • Alternatively, the object recognition unit 132 may recognize objects for all edges detected from the captured image and perform the object possession information extraction processing by the object possession information recognition unit 133 in S504 for all of these objects, while the audio signal generation processing by the speech synthesis unit 134 in S505 and the synthesized speech output processing by the audio output device 14A in S506 are performed for only one or several of the recognized objects. In this case, it is desirable that the information presentation apparatus 1 rank the importance of the objects using both the object names and the possession information, and perform the processing of S505 and S506 in descending order of importance. Since the recognition result of the information held by the objects is then used when selecting objects for the processing of S505 and S506, the accuracy of the importance ranking by the information presentation apparatus 1 can be improved.
  • In the above description, the lookup table 120 is recorded in the recording device 12, and the control unit 131 acquires the lookup table 120 from the recording device 12 in response to the lookup-table request signals from the object recognition unit 132 and the object possession information recognition unit 133 and transmits it to them. However, the present invention is not limited to this case. For example, the lookup table 120 may be recorded in the information processing apparatus 3, which is separate from the information presentation apparatus 1, and the control unit 131 may acquire the lookup table 120 from the information processing apparatus 3 via the Internet 401 in response to the lookup-table request signals from the object recognition unit 132 and the object possession information recognition unit 133. If the lookup table 120 recorded in the information processing device 3 is regularly updated, the information presentation device 1 can then use the latest lookup table 120 without updating the recorded contents of the recording device 12.
  • Also, although a color camera is used as the camera 11A in the above description, the present invention is not limited to this case. For example, a monochrome camera can be used as the camera 11A. Even with this configuration, the information presentation apparatus 1 can recognize character strings held by objects, such as the characters written on the signboard 114 shown in FIG. 6. Moreover, compared with a configuration using a color camera, the power consumption of the information presentation apparatus 1 is reduced, and the life of a battery (not shown) that drives the information presentation apparatus 1 can be extended.
  • The camera 11A may also be configured to further capture an image of heat distribution. For example, a thermography camera can be used as the camera 11A, or a color camera or a monochrome camera can be used in combination with a thermography camera.
  • In this case, when the object possession information recognition unit 133 receives the image captured by the thermography camera and the set of object names from the control unit 131, it acquires temperature information indicating the temperatures of the objects corresponding to the object names based on the thermography image and associates it with the possession information. The object possession information recognition unit 133 then creates object possession information character strings based on the acquired object temperatures. The information presentation apparatus 1 can thereby recognize the temperature of an object and present that temperature to the user 2. Further, the object possession information recognition unit 133 may determine whether the temperature of a recognized object satisfies a predetermined temperature condition and create an object possession information character string only when it determines that the condition is satisfied. The information on the surrounding environment that the information presentation apparatus 1 presents to the user 2 can thereby be selected more carefully.
  • As the predetermined temperature condition described above, for example, if a preset temperature is defined as TD and the temperature recognized by the object possession information recognition unit 133 is defined as T, either the condition that the temperature T is equal to or higher than the temperature TD (T ≥ TD) or the condition that the temperature T is equal to or lower than the temperature TD (T ≤ TD) can be used, as in the small example below. The information presentation apparatus 1 can then present surrounding environment information to the user 2 only for objects at or above the temperature TD, or only for objects at or below the temperature TD.
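  • A minimal sketch of the temperature condition; the mode names are assumptions made for illustration.

```python
def satisfies_temperature_condition(t: float, t_d: float,
                                    mode: str = "at_least") -> bool:
    """True when T >= TD (mode "at_least") or T <= TD (mode "at_most")."""
    return t >= t_d if mode == "at_least" else t <= t_d

# Present only objects at or above 50 degrees, e.g. to warn of hot surfaces.
hot = satisfies_temperature_condition(t=63.0, t_d=50.0, mode="at_least")  # True
```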
  • In the above description, the camera 11A captures the direction the user 2 faces when the user 2 wears the information presentation apparatus 1, stands upright, and looks forward. However, the camera 11A may instead be configured to capture images in all directions around the user 2. In this case, it is desirable that the information presentation apparatus 1 rank the importance of the objects using the object names, or the object names and the possession information, and perform the audio signal generation processing by the speech synthesis unit 134 in S505 and the synthesized speech output processing by the audio output device 14A in S506 in descending order of importance. With this configuration, the information presentation device 1 can present to the user 2 objects in directions the user 2 is not facing, such as a car approaching the user 2 from the direction opposite to the user 2's direction of travel, together with the possession information of those objects.
  • In the above description, the color of an object and the characters represented by an object are used as the object possession information types of the lookup table 120 recorded in the recording device 12, but the present invention is not limited to this case. For example, a graphic shown on the object can be used as an object possession information type of the lookup table 120. In this case, the object possession information recognition unit 133 can extract possession information with high accuracy by, for example, recognizing the shape of the person displayed on the pedestrian traffic signal.
  • The second embodiment of the present invention differs from the first embodiment described above in the following respect. In the information presentation system 100 according to the first embodiment, the information presentation apparatus 1, which performs the processing for presenting information on the surrounding environment of the user 2, performs the object recognition processing by the object recognition unit 132, the object possession information extraction processing by the object possession information recognition unit 133, and the audio signal generation processing by the speech synthesis unit 134. In contrast, the information presentation system 100 according to the second embodiment includes the information presentation device 1 and the information processing device 3 described above, and the information processing device 3 performs at least one of the object recognition processing by the object recognition unit 132, the object possession information extraction processing by the object possession information recognition unit 133, and the audio signal generation processing by the speech synthesis unit 134.
  • FIG. 11 is a functional block diagram showing main functions of the processing device 13 of the information presentation device 1 according to the second embodiment of the present invention.
  • As shown in FIG. 11, the information presentation device 1 according to the second embodiment of the present invention is basically the same as the information presentation device 1 according to the first embodiment, except that its processing device 13 does not include the object recognition unit 132, the object possession information recognition unit 133, or the speech synthesis unit 134.
  • In the second embodiment, the control unit 131 transmits the captured image received from the camera 11A to the information processing device 3 via the communication device 15. Specifically, the control unit 131 transmits the captured image and a communication command for the information processing device 3 to the communication processing unit 135, and the communication processing unit 135 transmits the captured image to the information processing device 3 via the communication device 15 and the Internet 401 in accordance with the communication command received from the control unit 131.
  • FIG. 12 is a functional block diagram showing main functions of the processing device 33 of the information processing device 3 according to the second embodiment of the present invention.
  • As shown in FIG. 12, the information processing apparatus 3 includes a recording device 12, a processing device 33, and a communication device 35. The processing device 33 includes a control unit 331 and a communication processing unit 335, as well as the object recognition unit 132, the object possession information recognition unit 133, and the speech synthesis unit 134 described above.
  • The communication processing unit 335 receives the captured image from the information presentation device 1 via the Internet 401 and the communication device 35 in accordance with a communication command from the control unit 331, and transmits the received captured image to the control unit 331. The control unit 331 transmits the captured image received from the communication processing unit 335 to the object recognition unit 132.
  • control unit 331 When the control unit 331 receives a set of synthesized speech from the speech synthesis unit 134, the control unit 331 transmits a communication command with the information presentation device 1 and a set of synthesized speech to the communication processing unit 335.
  • the communication processing unit 335 performs processing for transmitting a set of synthesized speech to the information presentation device 1 via the communication device 35 and the Internet 401 in accordance with a communication command from the control unit 331.
  • the communication processing unit 135 of the information presentation device 1 receives a set of synthesized speech from the information processing device 3 and transmits it to the control unit 131 in accordance with a communication command from the control unit 131.
  • the control unit 131 transmits the set of synthesized speech received from the communication processing unit 135 to the audio output device 14A.
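As a rough illustration of this round trip, the Python sketch below posts one captured frame to a remote recognition service and returns the synthesized speech payload it sends back. This is a minimal sketch only: the endpoint URL, the JPEG/WAV payload formats, and the function name are all assumptions, since the description specifies only that the captured image travels to the information processing device 3 over the Internet 401 and a set of synthesized speech comes back.

```python
import urllib.request

# Hypothetical endpoint standing in for the information processing device 3.
SERVER_URL = "http://example.com/recognize"

def offload_frame(jpeg_bytes: bytes) -> bytes:
    """Send one captured frame; receive synthesized speech (e.g. WAV bytes)
    to be handed to the audio output device 14A."""
    req = urllib.request.Request(
        SERVER_URL,
        data=jpeg_bytes,
        headers={"Content-Type": "image/jpeg"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read()
```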
  • According to the information presentation system 100 of the second embodiment configured as described above, the same operational effects as the first embodiment can be obtained. In addition, since the external information processing device 3 connected to the information presentation apparatus 1 via the Internet 401 performs the object recognition processing by the object recognition unit 132, the extraction processing of object possession information by the object possession information recognition unit 133, and the generation processing of the voice signal by the speech synthesis unit 134, the processing performed by the information presentation device 1 can be reduced. Further, as the processing performed by the information presentation apparatus 1 is reduced, its power consumption falls and the life of the battery driving the information presentation apparatus 1 can be extended.
  • Note that although the case where the information processing apparatus 3 performs all of the object recognition processing, the object possession information extraction processing, and the voice signal generation processing has been described, the information processing apparatus 3 may perform at least one of these processes while the information presentation apparatus 1 performs the remainder. Even with this configuration, the same effects as described above can be obtained.
  • Further, the control unit 131 may dynamically determine which of the object recognition processing by the object recognition unit 132, the extraction processing of object possession information by the object possession information recognition unit 133, and the generation processing of the voice signal by the speech synthesis unit 134 is performed by the information presentation device 1 and which by the information processing device 3, as sketched below. Thereby, for example, when the remaining charge of the battery mounted on the information presentation device 1 becomes low, the control unit 131 can have the information processing device 3 execute the high-load processing, thereby extending the driving time of the information presentation device 1.
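A minimal sketch of such a dynamic split is shown below, assuming a battery-level reading is available on the device; the 30% threshold, the stage names, and the choice to keep speech synthesis local are illustrative assumptions, not taken from the description.

```python
LOW_BATTERY = 0.30  # illustrative threshold, not from the description

def plan_processing(battery_level: float) -> dict:
    """Decide per stage whether to run on device 1 ('local') or device 3 ('remote')."""
    offload = battery_level < LOW_BATTERY
    return {
        "object_recognition": "remote" if offload else "local",          # heavy
        "possession_info_extraction": "remote" if offload else "local",  # heavy
        "speech_synthesis": "local",  # assumed lightweight; kept on the device
    }

print(plan_processing(0.15))  # heavy stages go remote when the battery is low
```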
  • The third embodiment of the present invention differs from the first embodiment in that, whereas the camera 11A of the information presentation apparatus 1 according to the first embodiment captures a visible-light image, the camera 11A of the information presentation device 1 according to the third embodiment further captures a distance image. The processing device 13 of the information presentation apparatus 1 according to the third embodiment is configured to acquire, based on the distance image captured by the camera 11A, distance information indicating the distance between a recognized object and the user 2, and to associate the distance information with the possession information of the object.
  • When the camera 11A of the information presentation apparatus 1 according to the third embodiment receives an imaging command from the control unit 131, it starts imaging the surrounding environment of the user 2. The camera 11A is mounted on the information presentation device 1 so as to capture images in the direction the face of the user 2 is facing when the user 2 wears the information presentation device 1.
  • For example, a camera that can capture both a full-color image and a distance image is used as the camera 11A.
  • a stereo camera composed of two color cameras arranged at a predetermined interval can be used as the camera 11A according to the third embodiment of the present invention.
  • the camera 11A creates a full-color image and a distance image from the captured images of the two color cameras of the stereo camera.
  • For example, the captured image of one of the two color cameras can be used as the full-color image.
  • Alternatively, a combination of a single color camera and a Time-of-Flight distance image sensor can be used instead of a stereo camera. In this case, the full-color image and the distance image are captured by the color camera and the distance image sensor, respectively.
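For reference, a rectified stereo pair yields distance through the standard relation Z = f·B/d (focal length f in pixels, baseline B in meters, disparity d in pixels). The sketch below assumes rectified cameras with known calibration; the numeric values are illustrative only.

```python
def depth_from_disparity(disparity_px: float,
                         focal_px: float = 700.0,    # illustrative focal length
                         baseline_m: float = 0.06    # illustrative 6 cm baseline
                         ) -> float:
    """Distance in meters to a point with the given disparity (rectified stereo)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A feature shifted 42 px between the left and right images lies about 1 m away:
print(round(depth_from_disparity(42.0), 2))  # -> 1.0
```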
  • the camera 11A transmits a full color image and a distance image captured using a stereo camera to the control unit 131.
  • the control unit 131 transmits the full color image and the distance image received from the camera 11A to the object recognition unit 132.
  • the object recognition unit 132 recognizes an object using a full color image.
  • the object recognition unit 132 acquires the distance from the user 2 to the object by using the distance image received from the control unit 131 for each recognized object.
  • The object recognition unit 132 aggregates, for each recognized object, the object name and the distance into a set of object name/distance pairs, and transmits the set to the control unit 131.
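One plausible way to realize this per-object distance lookup, which the description does not spell out, is to take a robust statistic of the distance image inside each recognized object's bounding box:

```python
import statistics

def object_distance(distance_image, box):
    """Median depth (m) inside a bounding box (x0, y0, x1, y1).
    distance_image is assumed to be a row-major 2D list of depths in meters,
    with 0 marking invalid pixels."""
    x0, y0, x1, y1 = box
    samples = [distance_image[y][x]
               for y in range(y0, y1)
               for x in range(x0, x1)
               if distance_image[y][x] > 0]
    return statistics.median(samples) if samples else float("nan")

def name_distance_pairs(detections, distance_image):
    """Build the (object name, distance) pairs that unit 132 sends onward."""
    return [(name, object_distance(distance_image, box))
            for name, box in detections]
```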
  • The control unit 131 transmits the full-color image received from the camera 11A and the set of object name/distance pairs received from the object recognition unit 132 to the object possession information recognition unit 133.
  • The object possession information recognition unit 133 acquires the lookup table 120 from the recording device 12, recognizes the possession information held by each object recognized by the object recognition unit 132, and then creates the object possession information character string according to the character string template of the lookup table 120.
  • Suppose, for example, that the object possession information recognition unit 133 determines that the brightness of the red lighting area 122 is greater than that of the blue lighting area 123.
  • In that case, the object possession information recognizing unit 133 follows the character string template "The pedestrian traffic light <distance> ahead is <color>.", acquires "10 m" and "red" for the <distance> and <color> placeholders included in the template, and creates the character string "The pedestrian traffic light 10 m ahead is red."
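The template filling itself amounts to a simple placeholder substitution. A sketch (the <name> placeholder syntax follows the description; the function name is our own):

```python
def fill_template(template: str, fields: dict) -> str:
    """Replace <name> placeholders in a lookup-table template with recognized values."""
    for key, value in fields.items():
        template = template.replace(f"<{key}>", value)
    return template

template = "The pedestrian traffic light <distance> ahead is <color>."
print(fill_template(template, {"distance": "10 m", "color": "red"}))
# -> The pedestrian traffic light 10 m ahead is red.
```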
  • According to the information presentation system 100 of the third embodiment configured as described above, the same operational effects as the first embodiment can be obtained, and since the information presentation device 1 also presents the distance information obtained from the distance image of the camera 11A, the user 2 can easily grasp the distance between an object and the user 2 in addition to the object name and the possession information of the object. The information presentation system 100 according to the third embodiment can therefore present more detailed information about an object to the user 2.
  • Note that although the case where the object recognition unit 132 performs object recognition using only the full-color image has been described, the present invention is not limited to this. The object recognition unit 132 can recognize an object using the distance image, or using both the full-color image and the distance image. Thereby, when the camera 11A images a plurality of objects at different distances from the user 2, the object recognition accuracy of the object recognition unit 132 can be improved.
  • The fourth embodiment of the present invention differs from the first embodiment in that the information presentation apparatus 1 according to the fourth embodiment is configured to recognize, from the image captured by the camera 11A, objects existing in a predetermined direction from the user 2. Accordingly, in the information presentation apparatus 1 according to the fourth embodiment, the extraction of object possession information by the object possession information recognizing unit 133, the voice signal generation processing by the speech synthesis unit 134, and the synthesized voice output processing by the voice output device 14A are performed only for objects existing in the predetermined direction from the user 2.
  • As described above, the camera 11A is attached to the information presentation device 1 so that, when the user 2 wears the information presentation device 1, it captures images in the direction the face of the user 2 is facing. The camera 11A is therefore not always imaging the predetermined direction in which the user 2 needs information. For example, when the user 2 is moving, such as when walking or running, the predetermined direction in which the user 2 needs information is the traveling direction of the user 2; however, since the face of the user 2 does not necessarily face the traveling direction, the camera 11A may capture a direction other than the traveling direction of the user 2.
  • Hereinafter, the configuration and operation of the information presentation apparatus 1 according to the fourth embodiment of the present invention will be described in detail with reference to FIGS. 13 to 16, taking the case where the user 2 is walking as an example; the traveling direction of the user 2 is set as the predetermined direction in which the user 2 needs information. Note that the same or corresponding parts as those in the first embodiment are denoted by the same reference numerals, and redundant description is omitted.
  • FIG. 13 is a functional block diagram showing main functions of the processing device 13 of the information presentation device 1 according to the fourth embodiment of the present invention.
  • the processing device 13 of the information presentation device 1 according to the fourth embodiment of the present invention includes a direction determination unit 136 in addition to the configuration of the processing device 13 according to the first embodiment.
  • In the fourth embodiment, when the control unit 131 receives a set of object names (hereinafter referred to as the first set of object names for convenience) from the object recognition unit 132, it transmits the first set of object names and the captured image received from the camera 11A to the direction determination unit 136.
  • the direction determination unit 136 determines whether each object corresponding to the first set of object names exists in the traveling direction of the user 2 based on the captured image received from the control unit 131.
  • When the direction determination unit 136 determines that objects are present in the traveling direction of the user 2, it collects the names of the objects existing in the traveling direction of the user 2 into a new set of object names (hereinafter referred to as the second set of object names for convenience) and transmits the second set of object names to the control unit 131.
  • the control unit 131 transmits the second set of object names received from the direction determination unit 136 and the captured image received from the camera 11A to the object possession information recognition unit 133.
  • The direction determination unit 136 can determine, for example, as follows whether each object corresponding to the first set of object names received from the control unit 131 exists in the traveling direction of the user 2. Specifically, the direction determination unit 136 determines whether the first set of object names includes at least one of a sidewalk and a roadway, for example, both the sidewalk and the roadway, and then determines whether the angle θ formed by the boundary between the sidewalk and the roadway and the lower edge of the captured image is equal to or greater than a predetermined angle, for example, 45 degrees.
  • When the direction determination unit 136 determines that the formed angle θ is 45 degrees or more, it can determine that the face of the user 2 is facing the traveling direction and that each object corresponding to the first set of object names exists in the traveling direction of the user 2. Conversely, when it determines that the formed angle θ is less than 45 degrees, it can determine that the face of the user 2 is not facing the traveling direction and that the objects corresponding to the first set of object names do not exist in the traveling direction of the user 2. A sketch of this decision appears below.
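A sketch of the angle test, assuming the sidewalk/roadway boundary has already been extracted as a line segment in image coordinates (the boundary detection itself is not detailed here):

```python
import math

def facing_travel_direction(boundary_p1, boundary_p2, threshold_deg=45.0) -> bool:
    """True if the angle between the sidewalk/roadway boundary and the lower
    edge of the captured image is at least the threshold (45 degrees here)."""
    (x1, y1), (x2, y2) = boundary_p1, boundary_p2
    angle = math.degrees(math.atan2(abs(y2 - y1), abs(x2 - x1)))
    return angle >= threshold_deg

print(facing_travel_direction((100, 480), (300, 100)))  # True, ~62 degrees
print(facing_travel_direction((100, 480), (600, 380)))  # False, ~11 degrees
```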
  • For example, both the sidewalk 115 and the roadway 116 are shown in the captured image 110 of the camera 11A shown in FIG. 6. Since the angle θ1 formed by the boundary line 118 between the sidewalk 115 and the roadway 116 and the lower edge 119 of the captured image 110 is 45 degrees or more, the direction determination unit 136 determines that the face of the user 2 is facing the traveling direction.
  • FIG. 14 is a diagram illustrating a captured image 110A of the camera 11A cited as a comparative example of the captured image 110. In the captured image 110A, the angle θ2 formed by the boundary line 118 between the sidewalk 115 and the roadway 116 and the lower edge 119 is less than 45 degrees, so the direction determination unit 136 determines that the face of the user 2 is not facing the traveling direction.
  • According to the information presentation apparatus 1 of the fourth embodiment configured as described above, the same effects as the first embodiment can be obtained. In addition, since the extraction of object possession information by the object possession information recognition unit 133, the voice signal generation processing by the speech synthesis unit 134, and the synthesized voice output processing by the voice output device 14A are performed only for objects existing in the predetermined direction from the user 2, only carefully selected information is presented to the user 2. Moreover, since the processing load on the information presentation device 1 is reduced, the power consumption of the information presentation device 1 can be reduced and the life of the battery driving the information presentation device 1 can be extended.
  • Note that although the case where the direction determination unit 136 determines, based on the captured image received from the control unit 131, whether each object corresponding to the first set of object names exists in the traveling direction of the user 2 has been described, the present invention is not limited to this case.
  • For example, the information presentation apparatus 1 may include an angular velocity sensor (not shown) that measures the angular velocity in the direction in which the user 2 swings his or her head from left to right from an upright posture (hereinafter referred to as the rotation direction for convenience). While the user 2 walks with the face facing the traveling direction, rotation of the neck in the rotation direction is not detected from the measurement result of the angular velocity sensor; when the user 2 swings his or her head, the rotation of the neck in the rotation direction is detected. Accordingly, the direction determination unit 136 can determine that the face of the user 2 is not facing the traveling direction when rotation of the neck of the user 2 is detected from the measurement result of the angular velocity sensor, and can determine that the face of the user 2 is facing the traveling direction when rotation in the rotation direction has not been detected continuously for a predetermined time (for example, 5 seconds).
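A minimal sketch of this gating rule: the face is treated as turned away whenever neck rotation is detected, and as facing the traveling direction only after a quiet period with no rotation (5 seconds, as in the description). The sampling interface and the noise threshold are assumptions.

```python
class HeadDirectionGate:
    """Judge 'facing the traveling direction' from a neck angular-velocity stream."""

    def __init__(self, quiet_period_s: float = 5.0, threshold_rad_s: float = 0.3):
        self.quiet_period_s = quiet_period_s    # 5 s, per the description
        self.threshold_rad_s = threshold_rad_s  # assumed sensor noise floor
        self.last_rotation_t = None

    def update(self, t: float, omega: float) -> bool:
        """Feed one timestamped sample (seconds, rad/s); True = facing forward."""
        if abs(omega) > self.threshold_rad_s:
            self.last_rotation_t = t  # head swing detected
        if self.last_rotation_t is None:
            return True               # no swing observed yet
        return (t - self.last_rotation_t) >= self.quiet_period_s
```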
  • Alternatively, the direction determination unit 136 may determine as follows whether each object corresponding to the first set of object names exists in the traveling direction of the user 2. In this case, the direction determination unit 136 includes a temporary storage area that temporarily stores one pair of a captured image of the camera 11A and a set of object names.
  • The information presentation apparatus 1 performs imaging with the camera 11A at a predetermined time interval ΔT. The set of object names (the first set of object names) and the captured image (hereinafter referred to as the first captured image for convenience) are received by the direction determination unit 136 from the control unit 131 at time t, while the set of object names stored in the temporary storage area (hereinafter referred to as the third set of object names for convenience) and the stored captured image (hereinafter referred to as the second captured image for convenience) are the set of object names and the captured image that the direction determination unit 136 received from the control unit 131 at time t − ΔT.
  • First, the direction determination unit 136 determines whether a pair of the second captured image and the third set of object names is stored in the temporary storage area. When it determines that such a pair is stored, the direction determination unit 136 acquires at which positions, in the first captured image and in the second captured image, the objects common to the first set of object names and the third set of object names are captured.
  • When, for example, more than half of the objects common to the first set of object names and the third set of object names can be regarded as having moved, at their positions in the first captured image, concentrically away from the vicinity of the center of the image compared with their positions in the second captured image, the direction determination unit 136 can determine that the face of the user 2 is facing the traveling direction and that each object corresponding to the first set of object names exists in the traveling direction of the user 2. Conversely, when the positions of the objects in the first captured image cannot be regarded as having moved concentrically away from the vicinity of the center of the image compared with their positions in the second captured image, the direction determination unit 136 can determine that the face of the user 2 is not facing the traveling direction and that the objects corresponding to the first set of object names do not exist in the traveling direction of the user 2.
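The description gives this criterion but no algorithm; one way to code it is to call an object "moved concentrically away from the center" when its distance from the image center grew between the two frames, and to require that more than half of the common objects did so:

```python
import math

def moved_outward(center, pos_prev, pos_now) -> bool:
    """Did the object move radially away from the image center?"""
    return math.dist(center, pos_now) > math.dist(center, pos_prev)

def facing_travel_direction(center, prev_positions, now_positions) -> bool:
    """prev/now_positions: {object_name: (x, y)} for the objects common to the
    second and first captured images, respectively."""
    common = prev_positions.keys() & now_positions.keys()
    if not common:
        return False
    outward = sum(moved_outward(center, prev_positions[n], now_positions[n])
                  for n in common)
    return outward > len(common) / 2  # more than half expand outward
```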
  • FIG. 15 is a diagram illustrating an example of the time transition of the captured image of the camera 11A, the left diagram illustrates the second captured image 110B, and the right diagram illustrates the first captured image 110C.
  • the objects 110B1, 110B2, etc. are projected on the second captured image 110B of the camera 11A, and the objects 110C1, 110C2, etc. are projected on the first captured image 110C of the camera 11A.
  • the object 110B1 in the second captured image 110B and the object 110C1 in the first captured image 110C are the same object, and the object 110B2 in the second captured image 110B and the object 110C2 in the first captured image 110C are The same object.
  • the positions of the objects 110C1 and 110C2 in the first captured image 110C are moved in a direction that is concentrically separated from the vicinity of the center of the captured image as compared with the positions of the objects 110B1 and 110B2 in the second captured image 110B. Therefore, the direction determination unit 136 determines that the face of the user 2 is facing the traveling direction.
  • FIG. 16 is a diagram illustrating another example of the time transition of the captured image of the camera 11A, the left diagram illustrates the second captured image 110D, and the right diagram illustrates the first captured image 110E.
  • the objects 110D1, 110D2, etc. are projected on the second captured image 110D of the camera 11A, and the objects 110E1, 110E2, etc. are projected on the first captured image 110E of the camera 11A. ing.
  • the object 110D1 in the second captured image 110D and the object 110E1 in the first captured image 110E are the same object, and the object 110D2 in the second captured image 110D and the object 110E2 in the first captured image 110E are The same object.
  • Since the positions of the objects 110E1 and 110E2 in the first captured image 110E have not moved concentrically away from the vicinity of the center of the image compared with the positions of the objects 110D1 and 110D2 in the second captured image 110D, the direction determination unit 136 determines that the face of the user 2 is not facing the traveling direction.
  • the direction determination unit 136 may determine whether the face of the user 2 is facing the traveling direction based on the scales of the first captured image and the second captured image.
  • Note that although the case where the predetermined direction in which the user 2 needs information is the traveling direction of the walking user 2 has been described, the present invention is not limited to this case. For example, even when the user 2 is not walking, the extraction of object possession information by the object possession information recognizing unit 133, the generation of the speech signal by the speech synthesizer 134, and the output of the synthesized speech by the speech output device 14A may be performed only for objects that exist in a predetermined direction in which the user 2 needs information.
  • In this case, the direction determination unit 136 can determine, for example, as follows whether each object corresponding to the set of object names received from the control unit 131 exists in the predetermined direction.
  • the direction determination unit 136 includes a temporary storage area that temporarily stores n sets of captured images and object names of the camera 11A.
  • The temporary storage area stores the sets of object names and captured images received from the control unit 131 at the predetermined time interval ΔT; at the time t when the first set of object names and the first captured image are received from the control unit 131, the sets of object names and captured images received from the control unit 131 at times (t − k · ΔT), with k a natural number from 1 to n, are stored.
  • The direction determination unit 136 then judges, for each object name constituting the set of object names received at time t, whether it is included in at least (n/2) of the n sets of object names stored in the temporary storage area. When an object name is included in (n/2) or more of the stored sets, the direction determination unit 136 determines that the object corresponding to that object name exists in the predetermined direction; when it is not, the direction determination unit 136 determines that the object corresponding to that object name does not exist in the predetermined direction.
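A sketch of this (n/2) majority rule over a rolling buffer of the last n sets of object names; the buffer handling and the n = 10 default are assumptions:

```python
from collections import deque

class PersistenceFilter:
    """Keep only object names seen in at least n/2 of the last n frames."""

    def __init__(self, n: int = 10):
        self.history = deque(maxlen=n)

    def update(self, names_now: set) -> set:
        """Feed the current frame's object names; return the persistent ones.
        Nothing qualifies until the buffer has filled up sufficiently."""
        threshold = self.history.maxlen / 2
        persistent = {name for name in names_now
                      if sum(name in past for past in self.history) >= threshold}
        self.history.append(names_now)
        return persistent
```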
  • Alternatively, the information presentation apparatus 1 may include a line-of-sight detector (not shown) that detects the direction of the line of sight of the user 2, and the direction determination unit 136 may determine, based on the detection result, whether each object corresponding to the first set of object names received from the control unit 131 exists in the predetermined direction in which the user 2 needs information.
  • As the line-of-sight detector, for example, a camera that images the eyeball of the user 2 can be used, and the direction determination unit 136 detects the direction of the line of sight of the user 2 using the captured image of that camera.
  • When the direction determination unit 136 determines that an object corresponding to the first set of object names exists in the detected direction of the line of sight of the user 2, it determines that the object exists in the predetermined direction; when it determines that no object corresponding to the set of object names exists in the detected direction of the line of sight, it determines that the object does not exist in the predetermined direction.
  • By using the line-of-sight detector for the direction determination processing by the direction determination unit 136, the user 2 can quickly obtain the information on the surrounding environment that the user 2 needs.
  • The fifth embodiment of the present invention differs from the first embodiment in that, whereas the information presentation device 1 according to the first embodiment presents the predetermined information to the user 2 using a voice signal obtained by synthesizing speech, the information presentation apparatus 1 according to the fifth embodiment uses an information signal other than a voice signal.
  • FIG. 17 is an overall view schematically showing the information presentation apparatus 1 according to the fifth embodiment of the present invention.
  • As the output device 14 of the information presentation apparatus 1 according to the fifth embodiment, it is possible to use, for example, a vibration element 14B that vibrates according to the information signal generated by the signal generation unit 134.
  • the signal generation unit 134 creates a vibration pattern based on the set of object possession information character strings received from the control unit 131.
  • the signal generation unit 134 transmits the created vibration pattern to the control unit 131 as an information signal.
  • the vibration element 14B vibrates according to the vibration pattern indicated by the information signal received from the control unit 131. Thereby, the user 2 recognizes the information on the surrounding environment presented by the information presentation device 1 by perceiving the vibration of the vibration element 14B.
  • As the vibration pattern, it is possible to use, for example, the Morse code of the object possession information character string included in the set of object possession information character strings. Thereby, the user 2 can recognize the object possession information character string accurately.
  • Alternatively, it is possible to use a vibration pattern defined for a specific object possession information character string, for example, a continuous vibration pattern generated continuously for a predetermined time, or a discontinuous vibration pattern generated intermittently at preset time intervals.
  • For example, a continuous vibration pattern of 1 second can be assigned to the character string "The pedestrian traffic light is blue.", and a discontinuous vibration pattern of 5 periods, in which 0.1 seconds of vibration followed by 0.1 seconds of non-vibration constitutes one period, can be assigned to the character string "The pedestrian traffic light is red." By assigning such patterns to the possession information of specific objects that are urgent for the user 2, such as a pedestrian traffic light being red, the user 2 can immediately recognize the urgent possession information of the specific object. Thereby, the safety of walking of the user 2 can be further improved.
  • Note that the above-described discontinuous vibration pattern is desirably configured as a pattern whose total duration is 1 second or less.
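These patterns can be expressed as simple on/off timelines. The sketch below encodes the two examples above as lists of (vibrate, seconds) segments; the actual vibration driver interface is hypothetical.

```python
# (vibrate?, duration_s) segments per message, following the examples above.
PATTERNS = {
    "The pedestrian traffic light is blue.": [(True, 1.0)],  # 1 s continuous
    "The pedestrian traffic light is red.":
        # 5 periods of 0.1 s vibration followed by 0.1 s of rest
        [(on, 0.1) for _ in range(5) for on in (True, False)],
}

def total_duration(pattern) -> float:
    return sum(duration for _, duration in pattern)

# Both patterns respect the recommended 1-second budget:
for text, pattern in PATTERNS.items():
    assert total_duration(pattern) <= 1.0
    print(f"{text} -> {total_duration(pattern):.1f} s")
```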
  • FIG. 18 is an overall view schematically showing another example of the information presentation apparatus 1 according to the fifth embodiment of the present invention.
  • As another example of the output device 14, in addition to the vibration element 14B, an electronic braille device 14C that displays braille corresponding to the information signal generated by the signal generation unit 134 can also be used, as shown in FIG. 18. In this case, the information presentation apparatus 1 presents the possession information represented by the object to the user 2 using the electronic braille device 14C; the electronic braille device 14C displays braille by changing the arrangement of its surface irregularities according to the information signal. Further, the information presentation device 1 notifies the user 2, using the vibration element 14B, that it is presenting the possession information of the object using the electronic braille device 14C.
  • the signal generation unit 134 creates the vibration pattern of the vibration element 14B and the electronic braille character string of the electronic braille device 14C based on the object possession information character string received from the control unit 131.
  • As this vibration pattern, for example, the above-mentioned 1-second continuous vibration pattern can be used.
  • the signal generation unit 134 transmits the created vibration pattern of the vibration element 14B and the electronic braille character string of the electronic braille device 14C to the control unit 131 as information signals.
  • the vibration element 14B vibrates according to the vibration pattern indicated by the information signal received from the control unit 131, and the electronic braille device 14C displays the braille based on the electronic braille character string indicated by the information signal received from the control unit 131.
  • The user 2 perceives the vibration of the vibration element 14B, recognizes that the electronic braille device 14C is displaying the braille of the electronic braille character string, and, by reading the braille of the electronic braille device 14C, recognizes the information on the surrounding environment presented by the information presentation device 1.
  • FIG. 19 is an overall view schematically showing another example of the information presentation apparatus 1 according to the fifth embodiment of the present invention.
  • As still another example of the output device 14, as illustrated in FIG. 19, it is also possible to use, instead of the vibration element 14B, an image display element 14D that displays an image corresponding to the information signal generated by the signal generation unit 134.
  • the information presentation apparatus 1 is configured by, for example, a transmissive head-mounted display that is attached to the head of the user 2 and includes a video display element 14D.
  • the signal generation unit 134 creates an image of the image display element 14D based on the object possession information character string received from the control unit 131.
  • the signal generation unit 134 transmits the created video of the video display element 14D to the control unit 131 as an information signal.
  • The video display element 14D displays a video according to the information signal received from the control unit 131.
  • the user 2 recognizes information on the surrounding environment presented by the information presentation device 1 by perceiving the video displayed by the video display element 14D.
  • According to the information presentation apparatus 1 of the fifth embodiment configured as described above, the same effects as the first embodiment can be obtained, and since information is presented without using voice, the user 2 can recognize the possession information represented by an object using tactile sensation or vision rather than hearing. Thereby, the user 2 can recognize the possession information represented by the object while listening to surrounding sounds.
  • Note that although the case where the information presentation apparatus 1 includes, as the output device 14, the vibration element 14B and the electronic braille device 14C and presents the possession information represented by the object to the user 2 using the electronic braille device 14C has been described, the present invention is not limited to this case. The information presentation apparatus 1 may not only present the possession information represented by the object to the user 2 using the electronic braille device 14C, but may also present it to the user by vibrating the vibration element 14B according to the vibration pattern defined for the specific object possession information character string.
  • The sixth embodiment of the present invention differs from the fifth embodiment in that, whereas the information presentation system 100 according to the fifth embodiment consists of the information presentation device 1 that performs the processing of presenting information on the surrounding environment of the user 2, the information presentation system 100 according to the sixth embodiment includes the information presentation device 1 and an external output device 4 that is provided outside the information presentation device 1 and outputs the information processed by the information presentation device 1 (see FIG. 20).
  • Hereinafter, the configuration and operation of the information presentation system 100 according to the sixth embodiment of the present invention will be described in detail with reference to FIGS. 20 and 21. Note that the same or corresponding parts as those in the fifth embodiment are denoted by the same reference numerals, and redundant description is omitted.
  • The information presentation device 1 according to the sixth embodiment of the present invention is basically the same in configuration as the information presentation device 1 according to the fifth embodiment, but does not include the output device 14. Further, the communication device 15 according to the sixth embodiment functions as an information presentation device side communication device that communicates with the external output device 4, and is connected to the external output device 4 through, for example, a wireless communication line.
  • FIG. 20 is a functional block diagram showing main functions of the processing device 43 of the external output device 4 according to the sixth embodiment of the present invention.
  • the external output device 4 includes a processing device 43, an output device 14, and a communication device 45.
  • The processing device 43 includes a control unit 431 and a communication processing unit 435.
  • the communication device 45 functions as an external output device side communication device that communicates with the information presentation device 1.
  • In the sixth embodiment, when the control unit 131 receives an information signal from the signal generation unit 134, it transmits the information signal to the external output device 4.
  • the control unit 131 transmits a communication command and information signal with the external output device 4 to the communication processing unit 135.
  • the communication processing unit 135 transmits the information signal received from the control unit 131 to the external output device 4 via the communication device 15 in accordance with the communication command received from the control unit 131.
  • the communication processing unit 435 of the external output device 4 receives an information signal from the information presentation device 1 via the communication device 45 in accordance with a communication command from the control unit 431.
  • the control unit 431 transmits the information signal received from the communication processing unit 435 to the output device 14.
  • the output device 14 performs output corresponding to the information signal received from the control unit 431.
  • the user 2 recognizes information on the surrounding environment presented by the information presentation device 1 by perceiving the output from the output device 14.
  • FIG. 21 is an overall view showing a white cane 4A cited as an example of the external output device 4 according to the sixth embodiment of the present invention.
  • As the external output device 4, it is possible to use, for example, a white cane 4A held by the user 2 as shown in FIG. 21.
  • the white cane 4A includes, as the output device 14, for example, a vibration element 14B that vibrates according to the information signal generated by the signal generation unit 134, as in the fifth embodiment described above.
  • The vibration element 14B is provided in the grip portion that the user 2 holds by hand.
  • According to the information presentation system 100 configured as described above, the same effects as the fifth embodiment can be obtained, and since the information presentation device 1 can be linked with the external output device 4, various presentation forms of the surrounding environment information can be realized. Moreover, by using something the user 2 already carries, such as the white cane 4A, as the external output device 4, the number of items the user 2 must wear can be reduced, so that an information presentation system 100 that is convenient for the user 2 can be provided.
  • Note that although the case where the white cane 4A is used as the external output device 4 has been described, the present invention is not limited to this case. For example, a mobile terminal such as a smartphone carried by the user 2, a watch, a ring-shaped device, or the like may be used, and the above-described vibration element 14B may be provided in each of these devices.
  • Further, although the case where the white cane 4A includes the vibration element 14B that vibrates according to the information signal generated by the signal generation unit 134 has been described, the present invention is not limited to this case; the white cane 4A may include, for example, an electrical stimulation element (not shown) that applies electrical stimulation according to the information signal.
  • The seventh embodiment of the present invention differs from the first embodiment in that the information presentation apparatus 1 according to the seventh embodiment further includes an input device 18 for inputting a destination that the user 2 intends to reach (see FIG. 22).
  • FIG. 22 is a functional block diagram showing main functions of the processing device 13 of the information presentation device 1 according to the seventh embodiment of the present invention.
  • the processing device 13 of the information presentation device 1 according to the seventh embodiment of the present invention includes a navigation processing unit 137 in addition to the configuration of the processing device 13 according to the first embodiment.
  • the information presentation device 1 requests the user 2 to input a destination.
  • the control unit 131 acquires from the recording device 12 a destination input request character string for prompting input of the destination by words.
  • As this destination input request character string, for example, "Please enter a destination." can be used.
  • The control unit 131 transmits the destination input request character string acquired from the recording device 12 to the speech synthesis unit 134.
  • the voice synthesizer 134 generates a voice signal obtained by synthesizing voice from the destination input request character string, and transmits the generated voice signal to the control unit 131.
  • The control unit 131 transmits the audio signal received from the audio synthesis unit 134 to the audio output device 14A, and the audio output device 14A outputs the synthesized audio corresponding to the audio signal received from the control unit 131.
  • the input device 18 receives the destination input from the user 2 and recognizes the destination.
  • For example, the input device 18 can include a microphone that receives the voice of the destination uttered by the user 2, and can recognize the destination by applying voice recognition to the electrical signal converted from the received voice. The input device 18 then transmits the recognized destination information to the control unit 131.
  • the control unit 131 transmits the destination information received from the input device 18 to the navigation processing unit 137.
  • When the control unit 131 receives a set of object possession information character strings from the object possession information recognition unit 133, it transmits the set of object possession information character strings and the captured image received from the camera 11A to the navigation processing unit 137.
  • The navigation processing unit 137 creates a navigation character string for guiding the user 2 to the destination in words, based on the destination information and the set of object possession information character strings received from the control unit 131, and transmits the navigation character string to the control unit 131.
  • the control unit 131 transmits the navigation character string received from the navigation processing unit 137 to the speech synthesis unit 134.
  • the voice synthesis unit 134 generates a voice signal obtained by synthesizing voice from the navigation character string received from the control unit 131, and transmits the generated voice signal to the control unit 131.
  • the control unit 131 transmits the audio signal received from the audio synthesis unit 134 to the audio output device 14A.
  • The audio output device 14A outputs the synthesized audio corresponding to the audio signal received from the control unit 131.
  • Hereinafter, the navigation processing by the navigation processing unit 137 will be described in detail, taking as an example the case where the destination of the user 2 is station A, the current location of the user 2 is station B, the object recognition unit 132 recognizes an electronic bulletin board showing train departure guidance as an object used when the user 2 moves to the destination, and the object possession information recognition unit 133 recognizes the train departure guidance information described on the electronic bulletin board.
  • First, the navigation processing unit 137 acquires the boarding train information database recorded in the recording device 12 and, referring to the boarding train information database, searches for information on the trains to board in order to reach station A from station B.
  • Next, the navigation processing unit 137 searches the set of object possession information character strings for the train departure guidance information described on the electronic bulletin board.
  • the information on the departure guidance of the train includes the destination of the train leaving the station B, the departure time of the train, and the platform number from which the train leaves.
  • The navigation processing unit 137 then searches the train departure guidance information for optimal boarding train information, that is, information on the train that is best to board at station B in order to reach station A. For example, when the destination of that train is station C, the departure time is 12:34, and the departure platform is track 5, the navigation processing unit 137 creates the navigation character string "To go to station A, get on the train for station C, which departs from track 5 at 12:34." Then, the navigation processing unit 137 transmits the generated navigation character string information to the control unit 131.
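A compact sketch of this matching-and-phrasing step; the database shape and field names are assumptions, while the output sentence follows the example above.

```python
# Hypothetical rows of the boarding train information database.
BOARDING_DB = [
    {"from": "B", "to": "A", "ride": "train for station C"},
]

# Departure guidance as recognized from the electronic bulletin board.
departures = [
    {"destination": "station C", "time": "12:34", "track": "track 5"},
]

def make_navigation_string(origin: str, goal: str) -> str:
    plan = next(r for r in BOARDING_DB if r["from"] == origin and r["to"] == goal)
    dep = next(d for d in departures if d["destination"] in plan["ride"])
    return (f"To go to station {goal}, get on the {plan['ride']}, "
            f"which departs from {dep['track']} at {dep['time']}.")

print(make_navigation_string("B", "A"))
# -> To go to station A, get on the train for station C, which departs
#    from track 5 at 12:34.
```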
  • the navigation processing unit 137 can also add a character string that assists the navigation character string to the navigation character string.
  • the navigation processing unit 137 can add information that assists the departure time included in the navigation character string.
  • For example, the information presentation device 1 may include a clock, and the navigation processing unit 137 may calculate the time difference from the current time indicated by the clock to the departure time and add the calculated time difference to the navigation character string.
  • Further, the navigation processing unit 137 may, for example, add information that guides the route from the current position to the platform from which the train departs, based on the recognition results of the object recognition unit 132 and the object possession information recognition unit 133.
  • Next, the navigation processing by the navigation processing unit 137 will be described taking as an example the case where the destination of the user 2 is a flower shop, the current location of the user 2 is near the destination, and the destination is captured in the captured image of the camera 11A.
  • In this case, the object recognition unit 132 recognizes the destination or a signboard of the destination, and the object possession information recognition unit 133 recognizes the name of the destination.
  • Here, a color camera is used as the camera 11A.
  • the navigation processing unit 137 detects the destination from the set of object possession information character strings, and further detects the position of the destination in the captured image.
  • the navigation processing unit 137 estimates the positional relationship between the user 2 and the destination based on the position of the destination in the captured image.
  • the navigation processing unit 137 creates a navigation character string based on the estimated positional relationship.
  • For example, when the navigation processing unit 137 refers to the position, in the captured image, of the flower shop that is the destination and estimates that the flower shop is ahead on the left of the user 2, it creates the navigation character string "The destination flower shop is ahead on your left." Then, the navigation processing unit 137 transmits information on the created navigation character string to the control unit 131.
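The positional estimate can be as coarse as bucketing the destination's horizontal position in the frame; the split into thirds below is an assumption, not taken from the description.

```python
def describe_position(x_center: float, image_width: int, name: str) -> str:
    """Phrase the destination's bearing from its x position in the captured image."""
    ratio = x_center / image_width
    if ratio < 1 / 3:      # left third of the frame (assumed split)
        side = "ahead on your left"
    elif ratio > 2 / 3:    # right third
        side = "ahead on your right"
    else:
        side = "straight ahead"
    return f"The destination {name} is {side}."

print(describe_position(150, 640, "flower shop"))
# -> The destination flower shop is ahead on your left.
```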
  • According to the information presentation apparatus 1 of the seventh embodiment configured as described above, the same effects as the first embodiment can be obtained, and by using the input device 18, the user 2 can be guided to the destination without having to infer the surrounding situation and the current location from the surrounding environment information alone. Since the user 2 can thereby reach the destination smoothly, convenience for the user 2 is improved.
  • Note that although the case where a color camera is used as the camera 11A has been described, the present invention is not limited to this case; a camera capable of capturing a full-color image and a distance image may be used as the camera 11A. In that case, the navigation processing unit 137 can add a character string indicating the distance between the user 2 and the destination as a character string that assists the navigation character string. For example, when the navigation processing unit 137 estimates, based on the full-color image and the distance image of the camera 11A, that the flower shop is 10 m ahead on the left of the user 2, it creates the navigation character string "The destination flower shop is 10 m ahead on your left." and transmits information on the created navigation character string to the control unit 131. Thereby, the information presentation apparatus 1 can perform detailed navigation including distance information to the destination of the user 2.
  • The eighth embodiment of the present invention differs from the first embodiment in that, whereas the information presentation system 100 according to the first embodiment consists of the information presentation device 1 that performs the process of presenting information on the surrounding environment of the user 2, the information presentation system 100 according to the eighth embodiment includes the information presentation apparatus 1 and an external command device 5 that is provided outside the information presentation apparatus 1 and transmits commands to the information presentation apparatus 1.
  • The information presentation device 1 according to the eighth embodiment of the present invention is basically the same in configuration as the information presentation device 1 according to the first embodiment. The communication device 15 according to the eighth embodiment functions as an information presentation device side communication device that communicates with the external command device 5; for example, like the information processing device 3, the external command device 5 is connected via the Internet 401.
  • In the eighth embodiment, the communication processing unit 135 receives, from the external command device 5 via the communication device 15 in accordance with a communication command from the control unit 131, a possessed information recognition request command requesting recognition of the possession information of a specific object (hereinafter referred to as the recognition target object for convenience), and transmits the possessed information recognition request command to the control unit 131.
  • The control unit 131 transmits the captured image received from the camera 11A, the set of object names received from the object recognition unit 132, and the possessed information recognition request command received from the communication processing unit 135 to the object possession information recognition unit 133.
  • In accordance with the possessed information recognition request command received from the control unit 131, the object possession information recognition unit 133 searches the object names included in the set of object names to determine whether the name of the recognition target object is included in the set. When it determines that the name of the recognition target object is included in the set of object names, the object possession information recognition unit 133 recognizes and extracts the possession information represented by the recognition target object, creates an object possession information character string, and transmits information on the created object possession information character string to the control unit 131. On the other hand, when it determines that the name of the recognition target object is not included in the set of object names, the object possession information recognition unit 133 transmits a zero object possession information detection signal to the control unit 131.
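A sketch of this device-side request handling; the command and signal shapes are assumptions (the description names only the command and the zero-detection signal, not their formats):

```python
def handle_recognition_request(target_name, object_names, extract_info):
    """Return the possession-information string for the requested object,
    or None to stand in for the zero object possession information signal."""
    if target_name not in object_names:
        return None  # recognition target absent from the current set of names
    return extract_info(target_name)

# Example: the external command device 5 asks about the pedestrian traffic light.
result = handle_recognition_request(
    "pedestrian traffic light",
    {"pedestrian traffic light", "crosswalk"},
    lambda name: f"The {name} ahead is blue.",
)
print(result)  # -> The pedestrian traffic light ahead is blue.
```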
  • As the external command device 5, for example, a smartphone having a navigation function can be used. When, for example, the navigation function requires the state of a pedestrian traffic light, the external command device 5 transmits a possessed information recognition request command designating the pedestrian traffic light to the information presentation apparatus 1 via the Internet 401.
  • Then, in accordance with the possessed information recognition request command from the external command device 5, the object possession information recognition unit 133 of the information presentation device 1 recognizes and extracts the lighted color of the pedestrian traffic light and creates, for example, the object possession information character string "The pedestrian traffic light ahead is blue."
  • According to the information presentation system 100 of the eighth embodiment configured as described above, the same effects as the first embodiment can be obtained, and since the information presentation device 1 presents information on the surrounding environment to the user 2 in cooperation with the external command device 5, the user 2 can efficiently obtain the possession information represented by the object designated using the external command device 5. Therefore, an information presentation system 100 that is convenient for the user 2 can be provided.
  • Note that the information presentation device 1 can receive an imaging command from the external command device 5 via the communication device 15 and transmit a captured image of the camera 11A to the information processing device 3 according to the imaging command. Further, the information presentation device 1 can receive a voice output request command requesting voice output from the external command device 5 via the communication device 15 and execute voice output by the voice output device 14A according to the voice output request command. Thereby, information on the surrounding environment corresponding to the intention of the user 2 can be presented.

Abstract

Provided are an information presentation system and an information presentation method which can convey sufficient information about the surrounding environment of a user. The information presentation system according to the present invention presents information about the surrounding environment of a user 2 and is provided with: a camera 11A that captures an image of the surrounding environment of the user 2; a recording device 12 that records predetermined retention information retaining information of objects included in the surrounding environment of the user 2; a processing device 13 that recognizes preset objects in the image captured by the camera 11A on the basis of the captured image and the retention information recorded in the recording device 12, extracts retention information represented by the objects, and performs a process of generating predetermined information signals; and an output device 14 for outputting an output corresponding to the information signals generated by the processing device 13.

Description

Information presentation system and information presentation method
The present invention relates to an information presentation system and an information presentation method for presenting information on a user's surrounding environment.
As one example of the prior art of devices that present information such as obstacles existing in a user's surrounding environment, the object discrimination notification apparatus shown in Patent Document 1 below is known. This object discrimination notification device has the wearer wear right-eye and left-eye television cameras facing the direction of the face, extracts the outline of an object based on the right-eye and left-eye input video data of the television cameras, and generates an audio information signal representing the position and name of the object based on the right-eye and left-eye outline data.
JP 2004-290618 A
In the prior-art object discrimination notification device disclosed in Patent Document 1 described above, the presence of not only obstacles but also objects useful for walking safely, such as traffic signals, can be notified to the user; however, there is the problem that information such as the state of the object or the contents written on the object cannot be conveyed to the user. For example, the prior-art object discrimination notification device can notify the user of the presence of a traffic signal or signboard on the route, but cannot convey information such as the color the signal is displaying or the characters written on the signboard. Therefore, even if the user uses the prior-art object discrimination device, information on the user's surrounding environment is not sufficiently transmitted, and there is a concern that information satisfactory to the user is not necessarily being presented.
The present invention has been made in view of such circumstances of the prior art, and an object of the present invention is to provide an information presentation system and an information presentation method capable of sufficiently transmitting information on the user's surrounding environment.
In order to achieve the above object, the information presentation system of the present invention is an information presentation system for presenting information on a user's surrounding environment, characterized by comprising: an imaging device that images the surrounding environment; a recording device that records predetermined possession information held by objects included in the surrounding environment; a processing device that, based on the captured image captured by the imaging device and the possession information recorded in the recording device, recognizes a preset object in the captured image and performs processing of extracting the possession information represented by the object to generate a predetermined information signal; and an output device that performs an output corresponding to the information signal generated by the processing device.
The information presentation method of the present invention is an information presentation method used in an information presentation system comprising an imager that captures information about a user's surrounding environment, a recording device that records predetermined possessed information held by objects included in the surrounding environment, a processing device connected to the imager and the recording device to process the information about the surrounding environment, and an output device connected to the processing device. The method comprises: a step in which the processing device acquires the captured image from the imager; a step in which the processing device acquires the predetermined possessed information from the recording device; a step in which the processing device, based on the captured image and the predetermined possessed information acquired in these steps, recognizes a preset object in the captured image, extracts the possessed information represented by that object, and generates a predetermined information signal; and a step in which the output device produces output corresponding to the information signal generated by the processing device.
According to the information presentation system and information presentation method of the present invention, information about the user's surrounding environment can be sufficiently conveyed. Problems, configurations, and effects other than those described above will become apparent from the following description of the embodiments.
FIG. 1 is an overall view schematically showing an information presentation device as an information presentation system according to a first embodiment of the present invention, in which (A) shows a user wearing an example of the device in which a camera is arranged near the user's face, and (B) shows a user wearing another example in which a camera is arranged near the user's chest.
FIG. 2 is a diagram showing the configuration of the information presentation system according to the first embodiment of the present invention.
FIG. 3 is a functional block diagram showing the main functions of the processing device shown in FIG. 2.
FIG. 4 is a diagram showing an example of the communication relationship between the information presentation device shown in FIG. 1 and external devices.
FIG. 5 is a flowchart showing the flow of operation of the information presentation device according to the first embodiment of the present invention.
FIG. 6 is a diagram showing an example of an image captured by the camera shown in FIG. 3.
FIG. 7 is a diagram showing an example of an image obtained by performing edge detection on the captured image shown in FIG. 6.
FIG. 8 is a diagram showing a lookup table given as an example of the possessed information recorded in the recording device shown in FIG. 2.
FIG. 9 is a flowchart showing the flow of the object possessed information extraction processing of FIG. 5 performed by the object possession information recognition unit.
FIG. 10 is a diagram showing a lookup table given as another example of the possessed information recorded in the recording device shown in FIG. 2.
FIG. 11 is a functional block diagram showing the main functions of the processing device of the information presentation device according to a second embodiment of the present invention.
FIG. 12 is a functional block diagram showing the main functions of the processing device of the information presentation device according to a third embodiment of the present invention.
FIG. 13 is a functional block diagram showing the main functions of the processing device of the information presentation device according to a fourth embodiment of the present invention.
FIG. 14 is a diagram explaining the direction determination processing by the direction determination unit shown in FIG. 13, showing an example of an image captured by the camera when the user's face is not facing the direction of travel.
FIG. 15 is a diagram explaining another example of the direction determination processing by the direction determination unit shown in FIG. 13, showing an example of the time transition of images captured by the camera.
FIG. 16 is a diagram explaining another example of the direction determination processing by the direction determination unit shown in FIG. 13, showing another example of the time transition of images captured by the camera.
FIG. 17 is an overall view schematically showing the information presentation device according to a fifth embodiment of the present invention.
FIG. 18 is an overall view schematically showing another example of the information presentation device according to the fifth embodiment of the present invention.
FIG. 19 is an overall view schematically showing a head-mounted display given as another example of the information presentation device according to the fifth embodiment of the present invention.
FIG. 20 is a functional block diagram showing the main functions of the processing device of an external output device according to a sixth embodiment of the present invention.
FIG. 21 is an overall view showing a white cane given as an example of the external output device shown in FIG. 20.
FIG. 22 is a functional block diagram showing the main functions of the processing device of the information presentation device according to a seventh embodiment of the present invention.
Hereinafter, embodiments of the information presentation system and information presentation method according to the present invention will be described with reference to the drawings. The following description is intended to explain the embodiments of the present invention and does not limit the scope of the present invention. Accordingly, those skilled in the art can adopt embodiments in which any or all of these elements are replaced with equivalents, and such embodiments are also included in the scope of the present invention. The information presentation system and information presentation method of the present invention can provide visual assistance not only to visually impaired persons but also to a variety of people, such as persons with myopic, hyperopic, or astigmatic visual characteristics, persons who have difficulty distinguishing colors, and persons with standard visual characteristics.
[First Embodiment]
FIGS. 1(A) and 1(B) show the entire information presentation device 1 serving as the information presentation system 100 according to the first embodiment of the present invention and an example of a user 2 wearing the information presentation device 1, and FIG. 2 is a diagram showing the configuration of the information presentation system 100 according to the first embodiment of the present invention.
As shown in FIGS. 1(A) and 1(B), the information presentation system 100 according to the first embodiment of the present invention comprises, for example, an information presentation device 1 that is worn by a user 2 and performs processing to present information about the surrounding environment of the user 2.
As shown in FIG. 2, the information presentation device 1 includes, for example, a camera 11A, a recording device 12, a processing device 13, an audio output device 14A, a communication device 15, an I/F (interface) 16, and a bus 17.
The camera 11A functions as an imager 11 that captures images of the surrounding environment of the user 2. In the information presentation device 1 according to the first embodiment of the present invention, the camera 11A is mounted on the information presentation device 1 so that, for example, when the user 2 wears the device, stands upright, and looks straight ahead, it can capture an image in the direction the face of the user 2 is facing. The camera 11A may be arranged near the face of the user 2 as shown in FIG. 1(A), or attached directly to the collar of the clothes the user 2 is wearing as shown in FIG. 1(B). Furthermore, as the camera 11A, for example, a color camera configured to capture visible-light images can be used. With this configuration, a full-color image of the surrounding environment of the user 2 can be acquired.
As shown in FIG. 2, the recording device 12 records the various kinds of information necessary for the processing device 13 to process information about the surrounding environment of the user 2, such as predetermined possessed information held by objects included in the surrounding environment. The recording device 12 includes hardware such as an HDD 12A, a nonvolatile storage medium capable of reading and writing information, and stores information referred to during processing by the object recognition unit 132 and the object possession information recognition unit 133 of the processing device 13 (both described later), various control and processing programs such as an OS (Operating System) and an information presentation program, and application programs. The predetermined possessed information recorded in the recording device 12 will be described in detail later.
Based on the image captured by the camera 11A and the possessed information recorded in the recording device 12, the processing device 13 processes information about the surrounding environment of the user 2, including processing to recognize a preset object in the captured image, extract the possessed information represented by that object, and generate a predetermined information signal. In the information presentation device 1 according to the first embodiment of the present invention, as the predetermined information signal to be presented to the user 2, for example, an audio signal obtained by synthesizing speech to be presented to the user 2 is used. The generation of the audio signal by the processing device 13 will be described in detail later.
The processing device 13 is composed of hardware including a CPU (Central Processing Unit) 13A that performs various computations for processing information about the surrounding environment of the user 2, a ROM (Read Only Memory) 13B that stores programs for the computations executed by the CPU 13A, and a RAM (Random Access Memory) 13C that serves as a work area when the CPU 13A executes the programs.
The output device 14 produces output corresponding to the information signal generated by the processing device 13. In the information presentation device 1 according to the first embodiment of the present invention, the output device 14 is composed of, for example, an audio output device 14A that outputs sound corresponding to the audio signal generated by the processing device 13. As the audio output device 14A, it is possible to use, for example, headphones (first headphones) that transmit sound by air vibration as shown in FIGS. 1(A) and 1(B), earphones (first earphones, not shown) that transmit sound by air vibration, or a speaker that transmits sound by air vibration. The user 2 can thereby easily recognize the surrounding environment information presented by the information presentation device 1 by perceiving the air vibration from the headphones 14A, earphones, or speaker.
In addition, as the audio output device 14A, it is possible to use, for example, headphones (second headphones) that transmit sound by bone conduction, earphones (second earphones) that output sound by bone conduction, and the like. The user 2 can then easily recognize the surrounding environment information presented by the information presentation device 1 by perceiving the bone vibration from the bone-conduction headphones or earphones. In particular, by using bone-conduction headphones or earphones, which leave the ears uncovered, as the audio output device 14A, the user 2 can also easily hear sounds other than the surrounding environment information presented by the information presentation device 1, which improves convenience for the user 2. Note that the audio output device 14A may be configured using the above-described headphones, earphones, or speaker alone, or by combining them.
The communication device 15 functions as an information-presentation-device-side communication device that communicates with external devices, including the information processing device 3 (see FIG. 4) and the external command device 5 (see FIG. 4) described later. The CPU 13A, ROM 13B, RAM 13C, and HDD 12A described above are connected to the I/F 16 via the bus 17, and the camera 11A, the audio output device 14A, and the communication device 15 are also connected to the I/F 16. Accordingly, various signals and information are exchanged inside the information presentation device 1 among the camera 11A, the recording device 12, the processing device 13, the audio output device 14A, and the communication device 15.
In this configuration of the information presentation device 1, the information presentation program and the like stored in the ROM 13B, the HDD 12A, or a recording medium such as an optical disk (not shown) are read into the RAM 13C and operate under the control of the CPU 13A, whereby software such as the information presentation program and the hardware cooperate to constitute functional blocks that realize the functions of the processing device 13 included in the information presentation device 1.
Next, the functional configuration of the processing of information about the surrounding environment of the user 2 by the processing device 13 will be described in detail with reference to FIG. 3.
FIG. 3 is a functional block diagram showing the main functions of the processing device 13.
As shown in FIG. 3, the processing device 13 includes a control unit 131, an object recognition unit 132, an object possession information recognition unit 133, a speech synthesis unit 134, and a communication processing unit 135.
The control unit 131 controls the operation of each component of the information presentation device 1, namely the camera 11A, the recording device 12, the processing device 13, and the audio output device 14A, and transmits various signals and information between the components.
The object recognition unit 132 receives from the control unit 131 the captured image that the control unit 131 acquired from the camera 11A, and recognizes the objects shown in the captured image, that is, the targets. The object recognition unit 132 then aggregates the names of the recognized objects into a set of object names and transmits this set of object names to the control unit 131.
The object possession information recognition unit 133 receives from the control unit 131 the captured image acquired from the camera 11A, the set of object names received from the object recognition unit 132, and the possessed information recorded in the recording device 12, and recognizes and extracts the possessed information held by the objects recognized by the object recognition unit 132. Based on the extracted possessed information, the object possession information recognition unit 133 creates, for each object recognized by the object recognition unit 132, an object possession information character string for presenting in words the possessed information represented by that object, aggregates these character strings into a set of object possession information character strings, and transmits this set to the control unit 131.
The speech synthesis unit 134 functions as a signal generation unit that generates the information signal to be presented to the user 2. Specifically, the speech synthesis unit 134 generates an audio signal synthesizing the speech to be presented to the user 2, based on the set of object possession information character strings that the control unit 131 received from the object possession information recognition unit 133. The speech synthesis unit 134 then transmits the generated audio signal to the control unit 131. The audio signal is thereby transmitted from the control unit 131 to the audio output device 14A, and sound is output from the audio output device 14A, so that by listening to this sound, the user 2 can grasp the information presented by the information presentation device 1, that is, information such as the states of objects existing in the surrounding environment of the user 2 and the content written on those objects.
The communication processing unit 135 performs communication processing to transmit and receive various signals and information to and from external devices via the communication device 15 in accordance with communication commands from the control unit 131. The external devices include, for example, the external command device 5 (see FIG. 4), which transmits commands to the information presentation device 1, and the information processing device 3 (see FIG. 4), which processes various kinds of information including information about the surrounding environment of the user 2.
FIG. 4 is a diagram showing an example of the communication relationship between the information presentation device 1 worn by the user 2 and the external devices 3 and 5.
The communication device 15 of the information presentation device 1 is communicatively connected to the external information processing device 3 via, for example, the Internet 401. Accordingly, the information presentation device 1 periodically receives from the information processing device 3 the information necessary for processing information about the surrounding environment of the user 2 and updates the information in the recording device 12 to the latest information, which can increase the accuracy of the processing of the surrounding environment information by the processing device 13.
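As a minimal sketch of such a periodic update, assuming a hypothetical HTTP endpoint on the information processing device 3 that serves the latest lookup table as JSON (the URL, file path, and refresh interval below are illustrative, not part of the embodiment):

    import json
    import time
    import requests  # third-party HTTP client

    UPDATE_URL = "http://example.com/lookup_table"  # hypothetical endpoint on device 3
    LOCAL_PATH = "lookup_table.json"                # local copy in recording device 12
    UPDATE_INTERVAL_S = 3600                        # refresh once per hour (illustrative)

    def refresh_lookup_table():
        """Fetch the latest lookup table and overwrite the local copy."""
        response = requests.get(UPDATE_URL, timeout=10)
        response.raise_for_status()
        with open(LOCAL_PATH, "w", encoding="utf-8") as f:
            json.dump(response.json(), f, ensure_ascii=False)

    while True:
        try:
            refresh_lookup_table()
        except requests.RequestException:
            pass  # keep using the previous table if the network is unavailable
        time.sleep(UPDATE_INTERVAL_S)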
Next, the operation of the information presentation device 1 according to the first embodiment of the present invention will be described in detail based on the flowchart of FIG. 5, together with the information presentation method according to the first embodiment of the present invention.
FIG. 5 is a flowchart showing the flow of operation of the information presentation device 1 according to the first embodiment of the present invention.
In the information presentation device 1 according to the first embodiment of the present invention, first, upon receiving from the control unit 131 an imaging command requesting imaging of the surrounding environment of the user 2, the camera 11A starts imaging (step (hereinafter denoted S) 501).
FIG. 6 is a diagram showing an example of an image captured by the camera 11A.
As shown in FIG. 6, the captured image 110 taken by the camera 11A shows a vehicle traffic light 111, a pedestrian traffic light 112, a store 113, a signboard 114, a sidewalk 115, a roadway 116, and a road center line 117. The camera 11A transmits such a captured image of the surrounding environment to the control unit 131.
Next, the control unit 131 transmits the captured image received from the camera 11A to the object recognition unit 132. Upon receiving the captured image from the control unit 131, the object recognition unit 132 recognizes the objects shown in the captured image, for example, as follows (S502).
First, the object recognition unit 132 performs edge detection on the image captured by the camera 11A. To do so, the object recognition unit 132 takes the RGB values of each pixel constituting the captured image as (r, g, b) and calculates, for each pixel, the value of a function f(r, g, b) with (r, g, b) as arguments. Here, f(r, g, b) is a function that computes a feature value of each pixel from (r, g, b).
For example, if f(r, g, b) = 0.2989r + 0.5866g + 0.1144b, the feature is the Y stimulus value in standard RGB (sRGB), which corresponds to brightness. The object recognition unit 132 calculates the feature value of each pixel and identifies locations where the feature value changes sharply. As a method of identifying such locations, for example, for each pixel, the maximum of the differences between the feature value of that pixel and the feature values of the pixels adjacent to it above, below, left, and right can be defined as the maximum difference amount of that pixel, and a line connecting local maxima of this maximum difference amount can be taken as an edge.
In the information presentation device 1 according to the first embodiment of the present invention, the case has been described in which the object recognition unit 132 performs edge detection by obtaining the value of one feature f(r, g, b) for each pixel of the captured image, but the present invention is not limited to this case. For example, the object recognition unit 132 may detect edges from a plurality of features, such as obtaining the values of features f1(r, g, b), f2(r, g, b), and f3(r, g, b) for each pixel and detecting edges from the three features. This can improve the accuracy of edge detection by the object recognition unit 132. For example, by setting f1(r, g, b) = r, f2(r, g, b) = g, and f3(r, g, b) = b, the accuracy of detecting red, green, and blue edges can be further improved.
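As a minimal sketch of this edge-detection scheme, assuming the captured image is held as a NumPy array of shape (height, width, 3); the function names and the threshold are illustrative, and a simple threshold stands in for connecting local maxima:

    import numpy as np

    def luma_feature(img):
        """f(r, g, b) = 0.2989r + 0.5866g + 0.1144b, the brightness feature."""
        r, g, b = img[..., 0], img[..., 1], img[..., 2]
        return 0.2989 * r + 0.5866 * g + 0.1144 * b

    def max_difference(feature):
        """Maximum absolute difference between each pixel and its 4 neighbors."""
        padded = np.pad(feature, 1, mode="edge")
        diffs = [
            np.abs(feature - padded[:-2, 1:-1]),  # neighbor above
            np.abs(feature - padded[2:, 1:-1]),   # neighbor below
            np.abs(feature - padded[1:-1, :-2]),  # neighbor to the left
            np.abs(feature - padded[1:-1, 2:]),   # neighbor to the right
        ]
        return np.max(diffs, axis=0)

    def detect_edges(img, threshold=30.0):
        """Mark pixels where any feature changes sharply as edge pixels.

        Combines the brightness feature with the per-channel features
        f1 = r, f2 = g, f3 = b by OR-ing the per-feature edge masks.
        """
        img = img.astype(np.float64)
        mask = max_difference(luma_feature(img)) > threshold
        for channel_feature in (img[..., 0], img[..., 1], img[..., 2]):
            mask |= max_difference(channel_feature) > threshold
        return mask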
FIG. 7 is a diagram showing an example of an image 110a obtained by performing edge detection on the captured image 110 of the camera 11A shown in FIG. 6.
When the object recognition unit 132 performs edge detection on the captured image 110 of the camera 11A shown in FIG. 6, the image 110a shown in FIG. 7 is obtained as the result. In this image 110a, an edge 111a of the vehicle traffic light, an edge 112a of the pedestrian traffic light, an edge 113a of the store, an edge 114a of the signboard, an edge 115a of the sidewalk, an edge 116a of the roadway, and an edge 117a of the road center line have been detected.
Next, the object recognition unit 132 recognizes the objects included in the captured image based on the result of the edge detection. To do so, the object recognition unit 132 divides the edges constituting the edge-detection result image into groups. For example, a cluster of edges that is concentrated in one part of the image and has few points of contact with other edges is treated as one group. This makes it possible to detect the edges of each object included in the image captured by the camera 11A.
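One simple way to realize this grouping, sketched below under the assumption that the edge pixels form a boolean mask, is connected-component labeling, where each connected cluster of edge pixels becomes one candidate object; SciPy's ndimage.label is used here for brevity as one possible implementation, not the one prescribed by the embodiment:

    import numpy as np
    from scipy import ndimage  # connected-component labeling

    def group_edges(edge_mask, min_pixels=50):
        """Split an edge mask into per-object edge groups.

        Returns a list of boolean masks, one per edge cluster, discarding
        clusters too small to be a meaningful object (min_pixels is an
        illustrative cutoff).
        """
        # 8-connectivity so diagonally touching edge pixels join one group
        labels, n_groups = ndimage.label(edge_mask, structure=np.ones((3, 3)))
        groups = []
        for label_id in range(1, n_groups + 1):
            group = labels == label_id
            if group.sum() >= min_pixels:
                groups.append(group)
        return groups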
On the other hand, if the object recognition unit 132 does not detect the edge of even one object in the captured image, it transmits a zero-object detection signal indicating this fact to the control unit 131. In the following, to explain the content of the present invention in an easy-to-understand manner, the description assumes that the object recognition unit 132 has detected the edges of one or more objects in the captured image.
The object recognition unit 132 acquires, for example, the lookup table 120 (see FIG. 8) as the above-mentioned predetermined possessed information from the recording device 12 via the control unit 131. To do so, the object recognition unit 132 transmits to the control unit 131 a lookup table request signal requesting the lookup table 120. In response to this lookup table request signal, the control unit 131 receives the lookup table 120 from the recording device 12 and transmits it to the object recognition unit 132.
FIG. 8 is a diagram showing the lookup table 120 given as an example of the possessed information recorded in the recording device 12.
As shown in FIG. 8, the lookup table 120 is composed of, for example, an edge pattern item indicating the edge of an object, an object name item indicating the name of the object, an object possession information type item indicating the type of information the object holds, a color acquisition mode item indicating, when the object possession information type is color, the mode that specifies which region of the object the color is acquired from, an object information holding region item indicating the region of the object that holds information, and a character string template item indicating the character string template used when creating the object possession information character string; these items are associated with one another.
For the object possession information type of the lookup table 120, for example, the color of the object and the characters shown on the object can be used. For the color acquisition mode, it is possible to use, for example, a specific region color acquisition mode, which acquires the color of a specific region of the object, and a bright region color acquisition mode, which acquires the color of the brightest region of the object. In the lookup table 120 shown in FIG. 8, when the object possession information type is not color, there is no need to specify a corresponding color acquisition mode, so "-" is shown.
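A minimal sketch of how one entry of such a lookup table might be held in memory follows; the field names and the single example row are illustrative, not the table of FIG. 8 itself:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class LookupEntry:
        edge_pattern: object          # reference shape used for matching
        object_name: str              # e.g. "pedestrian traffic light"
        info_type: str                # "text" or "color"
        color_mode: Optional[str]     # "bright_region", "specific_region", or None for text
        info_regions: dict            # named sub-regions holding the information
        template: str                 # e.g. "The pedestrian traffic light is <color>."

    # One illustrative row, loosely corresponding to the pedestrian traffic
    # light of FIG. 8; regions are (y0, x0, y1, x1) fractions of the object box
    PEDESTRIAN_LIGHT = LookupEntry(
        edge_pattern=None,  # placeholder; a real entry stores a contour model
        object_name="pedestrian traffic light",
        info_type="color",
        color_mode="bright_region",
        info_regions={"red": (0.0, 0.0, 0.5, 1.0), "blue": (0.5, 0.0, 1.0, 1.0)},
        template="The pedestrian traffic light is <color>.",
    )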
Upon acquiring the lookup table 120 from the recording device 12, the object recognition unit 132 searches, for each object edge detected in the captured image, for the most similar edge pattern among the edge patterns in the lookup table 120. Next, the object recognition unit 132 arranges the object names corresponding to the edge patterns retrieved for the respective object edges, that is, the most similar edge patterns, in ascending order of the distance between the center of the captured image and the center of the object, aggregates them into a set of object names, and transmits this set of object names to the control unit 131. In the following, the object names included in the set are called the first object name, the second object name, and so on, in order of increasing distance.
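Sketched below is this matching and ordering step, assuming a similarity score between a detected edge group and a stored edge pattern is available; edge_similarity is a hypothetical helper, and any shape-matching score could fill that role:

    import numpy as np

    def recognize_objects(edge_groups, lookup_table, image_shape):
        """Match each edge group to its most similar table entry and sort
        the resulting object names by distance from the image center."""
        img_center = np.array([image_shape[0] / 2, image_shape[1] / 2])
        named = []
        for group in edge_groups:
            # Most similar edge pattern in the table (edge_similarity assumed)
            best = max(lookup_table,
                       key=lambda e: edge_similarity(group, e.edge_pattern))
            ys, xs = np.nonzero(group)
            obj_center = np.array([ys.mean(), xs.mean()])
            dist = np.linalg.norm(obj_center - img_center)
            named.append((dist, best.object_name))
        named.sort(key=lambda pair: pair[0])  # first object name = closest to center
        return [name for _, name in named]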
Next, the control unit 131 determines whether it has received a zero-object detection signal from the object recognition unit 132 (S503). If the control unit 131 determines that it has received a zero-object detection signal (S503/Yes), the operation of the information presentation device 1 according to the first embodiment of the present invention ends.
On the other hand, if in S503 the control unit 131 determines that it has not received a zero-object detection signal, that is, that it has received a set of object names (S503/No), it transmits to the object possession information recognition unit 133 the captured image received from the camera 11A and the set of object names received from the object recognition unit 132. Upon receiving the captured image and the set of object names from the control unit 131, the object possession information recognition unit 133 recognizes and extracts the possessed information represented by the objects in the captured image, for example, as follows (S504).
First, the object possession information recognition unit 133 acquires the lookup table 120 from the recording device 12 via the control unit 131. To do so, the object possession information recognition unit 133 transmits a lookup table request signal to the control unit 131. In response to the lookup table request signal, the control unit 131 receives the lookup table 120 from the recording device 12 and transmits it to the object possession information recognition unit 133.
Next, upon receiving the lookup table 120 from the control unit 131, the object possession information recognition unit 133 searches the lookup table 120 for the first object name included in the set of object names. The object possession information recognition unit 133 acquires the object possession information type, color acquisition mode, object information holding region, and character string template corresponding to the retrieved first object name. Then, in accordance with the acquired object possession information type, color acquisition mode, object information holding region, and character string template, the object possession information recognition unit 133 recognizes and extracts the possessed information represented by the object from the captured image.
Here, the extraction of an object's possessed information by the object possession information recognition unit 133 will be described in detail based on the flowchart of FIG. 9.
FIG. 9 is a flowchart showing the flow of extraction of an object's possessed information by the object possession information recognition unit 133.
As shown in FIG. 9, the object possession information recognition unit 133 first determines whether the acquired object possession information type is text (S901). If the object possession information recognition unit 133 determines that the object possession information type is text (S901/Yes), it recognizes, in accordance with the acquired object information holding region, the characters present in the specific region of the object corresponding to the first object name shown in the captured image (S902). The object possession information recognition unit 133 then creates an object possession information character string based on the acquired character string template (S903).
Hereinafter, as a concrete example of the creation of the object possession information character string in S903, the case where the object name is a signboard will be described.
In the lookup table 120 shown in FIG. 8, text is set as the object possession information type corresponding to the object name "signboard", and the surface 121 of the signboard is set as the object information holding region. The object possession information recognition unit 133 therefore recognizes the characters on the surface 121 of the signboard shown in the captured image.
At this time, if the object possession information recognition unit 133 recognizes, for example, "florist" as the characters on the surface 121 of the signboard, it refers to the character string template "There is <text>." and creates the character string "There is a florist." by replacing <text> in the template with the recognized "florist".
Next, as another example of the creation of the object possession information character string, the case where the object name is a sign board attached to a vehicle traffic light or a pedestrian traffic light will be described.
If the object possession information recognition unit 133 recognizes, for example, "push-button type" or "press the push button" as the characters on the sign board attached to a vehicle or pedestrian traffic light, it creates the character string "There is a push-button signal." by replacing <text> in the character string template with "push-button signal", which corresponds to the recognized characters.
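A minimal sketch of this template filling for the text type follows; the OCR step is abstracted into hypothetical crop and recognize_characters helpers, and the normalization table mapping recognized phrases such as "press the push button" to "push-button signal" is illustrative:

    # Illustrative normalization of recognized sign text to a template value
    TEXT_NORMALIZATION = {
        "push-button type": "push-button signal",
        "press the push button": "push-button signal",
    }

    def make_text_string(entry, captured_image):
        """Create the object possession information string for a text-type entry."""
        region = crop(captured_image, entry.info_regions["surface"])  # hypothetical crop
        raw = recognize_characters(region)        # hypothetical OCR step
        value = TEXT_NORMALIZATION.get(raw, raw)  # fall back to the raw text
        return entry.template.replace("<text>", value)

    # e.g. make_text_string(signboard_entry, image) -> "There is a florist."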
Returning to FIG. 9, if in S901 the object possession information recognition unit 133 determines that the object possession information type is not text, that is, that it is color (S901/No), it determines whether the acquired color acquisition mode is the bright region color acquisition mode (S904). If the object possession information recognition unit 133 determines that the color acquisition mode is the bright region color acquisition mode (S904/Yes), it acquires, in accordance with the acquired object information holding region, the colors of a plurality of predetermined regions of the object corresponding to the first object name shown in the captured image, together with the brightness of those regions (S905).
As the brightness index of a region acquired in S905, for example, the luminance of the region can be used. As another brightness index, for example, after discriminating a plurality of predetermined colors within the object, a value obtained by multiplying the luminance of the region by a constant predetermined for each color can be used. This can improve the accuracy of the extraction of the object's possessed information by the object possession information recognition unit 133.
Next, the object possession information recognition unit 133 compares the brightness of the acquired regions and determines the color of the brightest of these regions (S906). The object possession information recognition unit 133 then creates an object possession information character string based on the acquired character string template (S907).
Hereinafter, as a concrete example of the creation of the object possession information character string in S907, the case where the object name is a pedestrian traffic light will be described.
In the lookup table 120 shown in FIG. 8, color is set as the object possession information type corresponding to the object name "pedestrian traffic light", and the bright region color acquisition mode is set as the color acquisition mode. In the object information holding region, a red lighting region 122 indicating that crossing is prohibited and a blue lighting region 123 indicating that crossing is permitted are set for the pedestrian traffic light. The object possession information recognition unit 133 therefore acquires the color and brightness of the red lighting region 122 and the color and brightness of the blue lighting region 123 of the pedestrian traffic light shown in the captured image.
Next, after recognizing red and blue as the colors of the red lighting region 122 and the blue lighting region 123 of the pedestrian traffic light, respectively, the object possession information recognition unit 133 determines whether the brightness of the red lighting region 122 is greater than the brightness of the blue lighting region 123. If it determines that the red lighting region 122 is brighter than the blue lighting region 123, it refers to the character string template "The pedestrian traffic light is <color>." and creates the character string "The pedestrian traffic light is red." by replacing <color> in the template with "red", the brighter of the recognized colors.
On the other hand, if the object possession information recognition unit 133 determines that the red lighting region 122 is not brighter, that is, that it is darker, than the blue lighting region 123, it refers to the character string template "The pedestrian traffic light is <color>." and creates the character string "The pedestrian traffic light is blue." by replacing <color> with "blue", the brighter of the recognized colors.
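Sketched below is this bright-region branch (S905 to S907), assuming each named region of the lookup entry maps to a pixel area of the image and using plain luminance as the brightness index; the per-color weighting mentioned above would enter through the weights dictionary, and all names here are illustrative:

    import numpy as np

    # Optional per-color constants for the weighted brightness index (illustrative)
    COLOR_WEIGHTS = {"red": 1.0, "blue": 1.0}

    def region_luminance(img, region):
        """Mean luminance of a region given as (y0, x0, y1, x1) fractions."""
        h, w = img.shape[:2]
        y0, x0, y1, x1 = region
        patch = img[int(y0 * h):int(y1 * h), int(x0 * w):int(x1 * w)]
        r, g, b = patch[..., 0], patch[..., 1], patch[..., 2]
        return float(np.mean(0.2989 * r + 0.5866 * g + 0.1144 * b))

    def make_bright_region_string(entry, img):
        """Pick the brightest named region and fill the template with its color."""
        brightness = {
            color: COLOR_WEIGHTS.get(color, 1.0) * region_luminance(img, region)
            for color, region in entry.info_regions.items()
        }
        brightest_color = max(brightness, key=brightness.get)
        return entry.template.replace("<color>", brightest_color)

    # e.g. make_bright_region_string(PEDESTRIAN_LIGHT, image)
    #   -> "The pedestrian traffic light is red." when the red region is lit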
Returning to FIG. 9, if in S904 the object possession information recognition unit 133 determines that the color acquisition mode is not the bright region color acquisition mode, that is, that it is the specific region color acquisition mode (S904/No), it acquires, in accordance with the acquired object information holding region, the color of the specific region of the object corresponding to the first object name shown in the captured image (S908). The object possession information recognition unit 133 then creates an object possession information character string based on the acquired character string template (S909).
Hereinafter, as a concrete example of the creation of the object possession information character string in S909, the case where the object name is the center line of a road will be described.
Although not shown in the lookup table 120 of FIG. 8, for the object possession information type corresponding to the object name "road center line", for example, color is set, and the specific region color acquisition mode is set as the color acquisition mode. The entire region of the road center line is set as the object information holding region.
At this time, if the object possession information recognition unit 133 recognizes, for example, white as the color of the road center line, it refers to the character string template "The road center line is <color>." and creates the character string "The road center line is white." by replacing <color> in the template with the recognized "white".
When the object possession information recognition unit 133 has created an object possession information character string in S903, S907, or S909 in this way, it determines whether the creation of object possession information character strings has been completed for all the objects corresponding to the first object name, the second object name, and so on shown in the captured image (S910).
If the object possession information recognition unit 133 determines that the creation of object possession information character strings has not been completed for all the objects (S910/No), the process returns to S901, and the processing of S901 to S909 described above is repeated for the objects corresponding to the remaining object names. On the other hand, if in S910 the object possession information recognition unit 133 determines that the creation of object possession information character strings has been completed for all the objects (S910/Yes), the extraction of the objects' possessed information by the object possession information recognition unit 133 ends.
Next, returning to FIG. 5, when the processing of S504 is completed, the object possession information recognition unit 133 aggregates the object possession information character strings created for each object name included in the set of object names into a set of object possession information character strings, and transmits this set to the control unit 131. The control unit 131 transmits the set of object possession information character strings received from the object possession information recognition unit 133 to the speech synthesis unit 134.
Next, the speech synthesis unit 134 synthesizes speech that reads aloud each object possession information character string included in the set received from the control unit 131, and generates audio signals (S505). The speech synthesis unit 134 then aggregates the audio signals generated for the respective object possession information character strings into a set of synthesized speech and transmits this set to the control unit 131. The control unit 131 transmits the set of synthesized speech received from the speech synthesis unit 134 to the audio output device 14A.
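As a minimal sketch of this step, using the off-the-shelf pyttsx3 text-to-speech library as one possible stand-in for the speech synthesis unit 134; the embodiment does not prescribe any particular synthesis engine:

    import pyttsx3  # offline text-to-speech engine

    def speak_strings(info_strings):
        """Read out each object possession information string in order."""
        engine = pyttsx3.init()
        for text in info_strings:  # the first object name's string comes first
            engine.say(text)
        engine.runAndWait()

    speak_strings(["The pedestrian traffic light is red.", "There is a florist."])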
The audio output device 14A outputs the synthesized speech corresponding to each audio signal included in the set, in order starting from the synthesized speech corresponding to the first object name (S506). The user 2 thereby recognizes the surrounding environment information presented by the information presentation device 1 by listening to the speech output from the audio output device 14A.
In the operation of the information presentation device 1 according to the first embodiment of the present invention, the processing of S501 to S506 described above is performed repeatedly at predetermined time intervals. Since the information presentation device 1 can thereby present surrounding environment information to the user 2 at every predetermined time interval, convenience for the user 2 can be enhanced.
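Putting the above steps together, the overall cycle can be sketched as a simple timed loop; detect_edges, group_edges, recognize_objects, and speak_strings are the illustrative sketches from the preceding sections, make_string stands in hypothetically for the S504 branching of FIG. 9, and camera with its capture() method and the interval value are assumptions:

    import time

    CYCLE_INTERVAL_S = 2.0  # assumed repetition interval for S501-S506

    def presentation_cycle(camera, lookup_table):
        image = camera.capture()                                 # S501
        groups = group_edges(detect_edges(image))                # S502
        names = recognize_objects(groups, lookup_table, image.shape)
        if not names:                                            # S503
            return
        strings = [make_string(n, lookup_table, image) for n in names]  # S504
        speak_strings(strings)                                   # S505-S506

    while True:
        presentation_cycle(camera, lookup_table)  # camera and table assumed set up
        time.sleep(CYCLE_INTERVAL_S)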
According to the information presentation system 100 and information presentation method of the first embodiment of the present invention configured in this way, information such as the states of the objects recognized by the object recognition unit 132 and the content written on those objects can easily be conveyed to the user 2 via the audio output device 14A, based on the image captured by the camera 11A and the lookup table 120 recorded in the recording device 12. Since information about the surrounding environment of the user 2 can thereby be sufficiently conveyed, excellent information presentation can be provided to the user 2.
In the operation of the information presentation device 1 according to the first embodiment of the present invention described above, the case was described in which, in S502, the object recognition unit 132 recognizes objects by searching, for each object edge detected in the captured image, for the most similar edge pattern among the edge patterns in the lookup table 120, but the present invention is not limited to this case. A concrete example is described in detail below with reference to FIG. 10.
FIG. 10 is a diagram showing a lookup table 120A given as another example of the possessed information recorded in the recording device 12. The lookup table 120A is configured by replacing the edge pattern item of the lookup table 120 shown in FIG. 8 with an edge-and-color pattern item indicating the edge and color of an object.
The object recognition unit 132 can also recognize objects by detecting, in addition to the edge of each object, the colors within the edge, and searching, for each detected object edge, for the most similar edge-and-color pattern among the edge-and-color patterns in the lookup table 120A shown in FIG. 10.
In this case, after detecting the edges of each object, the object recognition unit 132 recognizes the colors within the edges constituting each object in the captured image. The object recognition unit 132 then acquires the lookup table 120A from the recording device 12 via the control unit 131. To do so, the object recognition unit 132 transmits a lookup table request signal to the control unit 131. In response to this lookup table request signal, the control unit 131 receives the lookup table 120A from the recording device 12 and transmits it to the object recognition unit 132.
 物体認識部132は、ルックアップテーブル120Aを受信すると、撮像画像の中から検出した各物体のエッジ毎に、ルックアップテーブル120A内のエッジ及び色のパターンの中から最も類似したエッジ及び色のパターンを検索する。次に、物体認識部132は、各物体のエッジ毎に検索したエッジ及び色のパターン、すなわち最も類似したエッジ及び色のパターンに対応する物体名称を物体名称の組として集約し、この物体名称の組を制御部131に送信する。このように、物体認識部132による物体の認識において、エッジパターンのみではなく、エッジ内の色も利用することにより、物体の認識精度を向上することができる。 When the object recognition unit 132 receives the lookup table 120A, for each edge of each object detected from the captured image, the most similar edge and color pattern from among the edge and color patterns in the lookup table 120A. Search for. Next, the object recognition unit 132 aggregates the edge names and color patterns searched for each edge of each object, that is, the object names corresponding to the most similar edge and color patterns as a set of object names. The set is transmitted to the control unit 131. As described above, the object recognition accuracy by the object recognition unit 132 can be improved by using not only the edge pattern but also the color in the edge.
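The following is a minimal Python sketch of how such edge-and-color matching against the lookup table 120A might be realized; the similarity measures, the table layout, and the nearest-match strategy are assumptions for illustration and are not specified by this embodiment.

```python
# Hypothetical sketch of edge-and-color matching against lookup table 120A.
# The similarity measures and table layout are illustrative assumptions.

def edge_similarity(edge_a, edge_b):
    """Toy edge similarity: fraction of matching points (placeholder metric)."""
    matches = sum(1 for p, q in zip(edge_a, edge_b) if p == q)
    return matches / max(len(edge_a), len(edge_b), 1)

def color_similarity(rgb_a, rgb_b):
    """Toy color similarity based on Euclidean distance in RGB space."""
    dist = sum((a - b) ** 2 for a, b in zip(rgb_a, rgb_b)) ** 0.5
    return 1.0 - min(dist / 441.7, 1.0)  # 441.7 ~ max possible RGB distance

def recognize_objects(detected, lookup_table_120a):
    """For each detected (edge, color) pair, return the object name of the
    most similar edge-and-color pattern in the lookup table."""
    object_names = set()
    for edge, color in detected:
        best = max(
            lookup_table_120a,
            key=lambda row: edge_similarity(edge, row["edge_pattern"])
                          + color_similarity(color, row["color"]),
        )
        object_names.add(best["object_name"])
    return object_names  # the "set of object names" sent to control unit 131
```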
In the operation of the information presentation device 1 according to the first embodiment of the present invention, in S903, S907, and S909 shown in FIG. 9, the object possession information recognition unit 133 may also recognize the position of an object based on the captured image, and may then create an object possession information character string that takes the position of the object into account.
For example, as shown in FIG. 6, when the pedestrian traffic light 112 is located above the center of the captured image 110, the object possession information recognition unit 133 recognizes from the captured image 110 that the pedestrian traffic light 112 is ahead of the user 2, and creates the character string "The pedestrian traffic light ahead is blue." by replacing <position> in the character string template "The pedestrian traffic light at <position> is <color>." with the recognized "ahead".
Similarly, as shown in FIG. 6, when the signboard 114 is located to the left of the center of the captured image 110, the object possession information recognition unit 133 recognizes from the captured image 110 that the signboard 114 is on the user's left-hand side, and creates the character string "There is a florist on the left-hand side." by replacing <position> in the character string template "There is <text> at <position>." with the recognized "left-hand side". The information presentation device 1 can thereby present more detailed information held by an object to the user 2.
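As a rough illustration, this position-aware template filling could look like the following Python sketch; the image-coordinate thresholds dividing the captured image into regions are assumptions made purely for illustration.

```python
# Hypothetical sketch of position-aware template filling.
# The thresholds dividing the captured image into regions are illustrative.

def position_label(cx, cy, width, height):
    """Map an object's center (cx, cy) in the captured image to a coarse
    position label relative to the user."""
    if cy < height / 2:
        return "ahead"
    return "left-hand side" if cx < width / 2 else "right-hand side"

def fill_template(template, **fields):
    """Replace <field> markers in a character string template."""
    for name, value in fields.items():
        template = template.replace(f"<{name}>", value)
    return template

pos = position_label(cx=120, cy=480, width=640, height=480)
print(fill_template("There is <text> at <position>.", text="a florist", position=pos))
# -> "There is a florist at left-hand side."  (wording smoothing omitted)
```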
In the operation of the information presentation device 1 according to the first embodiment of the present invention, the case was described where the processing of S501 to S506 is repeated at a predetermined time interval. In this case, the same information on the surrounding environment may be presented to the user 2 multiple times. To avoid the annoyance this would cause, the object possession information recognition unit 133 may, for example, have a temporary storage area that temporarily stores exactly one set of object possession information character strings, and may transmit a set of object possession information character strings to the control unit 131 as follows.
Specifically, after creating a set of object possession information character strings (hereinafter referred to as the first set of object possession information character strings for convenience), the object possession information recognition unit 133 determines whether a set of object possession information character strings from a predetermined time earlier (hereinafter referred to as the second set of object possession information character strings for convenience) is stored in the temporary storage area.
If the object possession information recognition unit 133 determines that the second set of object possession information character strings is stored in the temporary storage area, it determines whether the first set of object possession information character strings differs from the second set. Next, if the object possession information recognition unit 133 determines that the two sets differ, it transmits to the control unit 131 a set consisting of the object possession information character strings that are included in the first set but not in the second set, that is, a set collecting the character strings of the first set that are not shared with the second set. The object possession information recognition unit 133 then updates the temporary storage area to reflect those character strings.
On the other hand, if the object possession information recognition unit 133 determines that the first set and the second set are identical, it transmits to the control unit 131 a zero object possession information detection signal indicating that no object holds any retained information. When the control unit 131 receives the zero object possession information detection signal from the object possession information recognition unit 133, it ends the operation of the information presentation device 1 according to the first embodiment of the present invention.
If the object possession information recognition unit 133 determines that the second set of object possession information character strings is not stored in the temporary storage area, it transmits the first set of object possession information character strings to the control unit 131. Next, the object possession information recognition unit 133 stores the first set in the temporary storage area as the second set of object possession information character strings. When the control unit 131 receives a set of object possession information character strings from the object possession information recognition unit 133, it transmits the received set to the speech synthesis unit 134. This avoids the annoyance of the information presentation device 1 presenting the same surrounding environment information to the user 2 multiple times, and thus improves the efficiency of the presentation of surrounding environment information by the information presentation device 1.
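A minimal Python sketch of this single-slot deduplication logic follows; the class and method names are hypothetical, and only the set-difference behavior described above is modeled.

```python
# Hypothetical sketch of the single-slot deduplication described above.

class PossessionInfoDeduplicator:
    def __init__(self):
        self.previous = None  # temporary storage area: holds one set only

    def filter_new(self, first_set):
        """Return only strings not presented in the previous cycle,
        or None when nothing new should be sent (zero-detection case)."""
        if self.previous is None:
            self.previous = set(first_set)
            return set(first_set)           # nothing stored yet: send all
        if set(first_set) == self.previous:
            return None                     # identical: zero object possession info
        new_strings = set(first_set) - self.previous
        self.previous = set(first_set)      # update temporary storage area
        return new_strings

dedup = PossessionInfoDeduplicator()
print(dedup.filter_new({"The pedestrian traffic light ahead is blue."}))
print(dedup.filter_new({"The pedestrian traffic light ahead is blue."}))  # -> None
```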
In the operation of the information presentation device 1 according to the first embodiment of the present invention, the case was described where, in S502, the object recognition unit 132 recognizes objects for all edges detected in the captured image, and where, for all of those objects, the extraction of object possession information by the object possession information recognition unit 133 in S504, the generation of an audio signal by the speech synthesis unit 134 in S505, and the output of synthesized speech by the audio output device 14A in S506 are performed; however, the present invention is not limited to this case.
For example, in S502 the object recognition unit 132 may recognize objects for all edges detected in the captured image, while the extraction of object possession information by the object possession information recognition unit 133 in S504, the generation of an audio signal by the speech synthesis unit 134 in S505, and the output of synthesized speech by the audio output device 14A in S506 are performed for only one or a few of the recognized objects.
Preferably, the information presentation device 1 ranks the plurality of objects by importance using their object names, and performs, in descending order of importance, the extraction of object possession information by the object possession information recognition unit 133 in S504, the generation of an audio signal by the speech synthesis unit 134 in S505, and the output of synthesized speech by the audio output device 14A in S506. This reduces the processing of S504 to S506 performed by the information presentation device 1 and allows the information presentation device 1 to present only carefully selected information to the user 2.
Alternatively, for example, in S502 the object recognition unit 132 may recognize objects for all edges detected in the captured image, the object possession information recognition unit 133 may extract the possession information of all of those objects in S504, and then the generation of an audio signal by the speech synthesis unit 134 in S505 and the output of synthesized speech by the audio output device 14A in S506 may be performed for only one or a few of the recognized objects.
Preferably, the information presentation device 1 ranks the plurality of objects by importance using both their object names and their possession information, and performs, in descending order of importance, the generation of an audio signal by the speech synthesis unit 134 in S505 and the output of synthesized speech by the audio output device 14A in S506. By using the recognition results of the objects' possession information when selecting the objects to be processed in S505 and S506, the accuracy of the importance ranking by the information presentation device 1 can be improved.
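The importance ranking itself is not specified by the embodiment; the following Python sketch assumes a hypothetical per-name priority table and an optional bonus for the presence of possession information, purely for illustration.

```python
# Hypothetical importance ranking; the priority table is an assumption.

NAME_PRIORITY = {"vehicle traffic light": 3, "pedestrian traffic light": 3,
                 "signboard": 1}  # illustrative values only

def rank_objects(objects, top_n=2):
    """objects: list of (object_name, possession_info or None) pairs.
    Returns the top_n objects in descending order of importance."""
    def score(obj):
        name, info = obj
        bonus = 1 if info else 0      # possession info raises importance
        return NAME_PRIORITY.get(name, 0) + bonus
    return sorted(objects, key=score, reverse=True)[:top_n]

candidates = [("signboard", "florist"), ("pedestrian traffic light", "blue")]
print(rank_objects(candidates))  # traffic light first, then signboard
```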
In the operation of the information presentation device 1 according to the first embodiment of the present invention, the case was described where the lookup table 120 is recorded in the recording device 12, and where the control unit 131, in response to lookup table request signals from the object recognition unit 132 and the object possession information recognition unit 133, acquires the lookup table 120 from the recording device 12 and transmits it to the object recognition unit 132 and the object possession information recognition unit 133; however, the present invention is not limited to this case.
For example, as shown in FIG. 4, the lookup table 120 may be recorded in an information processing device 3 separate from the information presentation device 1, and the control unit 131 may acquire the lookup table 120 from the information processing device 3 via the Internet 401 in response to lookup table request signals from the object recognition unit 132 and the object possession information recognition unit 133. If the lookup table 120 recorded in the information processing device 3 is updated regularly, the information presentation device 1 can then use the latest lookup table 120 without updating the contents of the recording device 12.
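As a rough sketch of this remote acquisition, the lookup table could be fetched over HTTP and a local copy used as a fallback; the URL, the JSON encoding, and the use of the `requests` library are assumptions for illustration, not part of the embodiment.

```python
# Hypothetical sketch: fetching the lookup table from information
# processing device 3 over the Internet. URL and format are assumptions.
import requests

LOOKUP_TABLE_URL = "http://example.com/lookup_table_120.json"  # hypothetical

def fetch_lookup_table(cached=None):
    """Try the remote (regularly updated) table first; fall back to a
    locally cached copy, mimicking the recording device 12."""
    try:
        resp = requests.get(LOOKUP_TABLE_URL, timeout=2.0)
        resp.raise_for_status()
        return resp.json()
    except requests.RequestException:
        return cached  # fall back if the network is unavailable
```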
In the information presentation device 1 according to the first embodiment of the present invention, the case where a color camera is used as the camera 11A was described, but the present invention is not limited to this case. For example, a monochrome camera can also be used as the camera 11A. Even then, the information presentation device 1 can still recognize character strings held by objects, such as the text written on the signboard 114 shown in FIG. 6. Moreover, compared with using a color camera as the camera 11A, the processing load on the information presentation device 1 can be reduced, so the power consumption of the information presentation device 1 decreases and the life of the battery (not shown) that drives the information presentation device 1 can be extended.
The camera 11A may also be configured to additionally capture a heat distribution image. For example, a thermographic camera can be used as the camera 11A. Alternatively, a color or monochrome camera can be used in combination with a thermographic camera as the camera 11A. In this case, when the object possession information recognition unit 133 receives a thermographic image and a set of object names from the control unit 131, it acquires, based on the thermographic image, temperature information indicating the temperature of each object corresponding to the set of object names and associates it with the possession information.
That is, the object possession information recognition unit 133 creates an object possession information character string based on the acquired temperature of the object. The information presentation device 1 can thereby recognize the temperature of an object and present it to the user 2 as well. The object possession information recognition unit 133 may also determine whether the recognized temperature of an object satisfies a predetermined temperature condition, and create an object possession information character string only when it determines that the predetermined temperature condition is satisfied. This allows the information presentation device 1 to be even more selective about the surrounding environment information it presents to the user 2.
As the predetermined temperature condition described above, for example, if a preset temperature is defined as TD and the temperature recognized by the object possession information recognition unit 133 is defined as T, the condition can be that either T ≥ TD or T ≤ TD holds. With the temperature condition set in this way, the information presentation device 1 presents surrounding environment information to the user 2 only for objects at or above the temperature TD, or only for objects at or below the temperature TD.
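A minimal sketch of this temperature gating might look as follows; the threshold value and the choice to gate on "hot" objects (T ≥ TD) are illustrative assumptions.

```python
# Hypothetical temperature gating: create a possession-info string only
# when the predetermined condition (here T >= TD) is satisfied.

TD = 60.0  # preset temperature in degrees Celsius (illustrative value)

def temperature_string(object_name, t, condition=lambda t: t >= TD):
    """Return an object possession information character string for objects
    meeting the temperature condition, or None otherwise."""
    if condition(t):
        return f"The {object_name} is at {t:.0f} degrees."
    return None

print(temperature_string("kettle", 85.0))   # presented to the user
print(temperature_string("kettle", 25.0))   # None: filtered out
```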
In the information presentation device 1 according to the first embodiment of the present invention, the case was described where the camera 11A is mounted on the information presentation device 1 so that it captures images in the direction the face of the user 2 is facing when the user 2 wears the information presentation device 1, stands upright, and looks straight ahead; however, the present invention is not limited to this case. For example, the camera 11A may be configured to capture images in all directions around the user 2. In this case, the information presentation device 1 preferably ranks the objects by importance using their object names, or their object names and possession information, and performs the generation of an audio signal by the speech synthesis unit 134 in S505 and the output of synthesized speech by the audio output device 14A in S506 in descending order of importance. The information presentation device 1 can thereby present to the user 2 objects in directions the user 2 is not facing, such as a car approaching the user 2 from the direction opposite to the user 2's direction of travel, together with the possession information of those objects.
In the information presentation device 1 according to the first embodiment of the present invention, the case was described where the color of an object and the characters shown on an object are used as the object possession information types of the lookup table 120 recorded in the recording device 12, but the present invention is not limited to this case. For example, a figure shown on an object can also be used as an object possession information type of the lookup table 120. In this case, if the object name is a pedestrian traffic light, the object possession information recognition unit 133 can extract highly accurate possession information by recognizing the shape of the person displayed on the pedestrian traffic light.
[Second Embodiment]
The second embodiment of the present invention differs from the first embodiment described above in the following respect. The information presentation system 100 according to the first embodiment consists of the information presentation device 1, which performs the processing for presenting information on the surrounding environment of the user 2, and this information presentation device 1 performs the object recognition processing by the object recognition unit 132, the extraction of object possession information by the object possession information recognition unit 133, and the generation of an audio signal by the speech synthesis unit 134. In contrast, the information presentation system 100 according to the second embodiment consists of the information presentation device 1 and the information processing device 3 described above, and the information processing device 3 is configured to perform at least one of the object recognition processing by the object recognition unit 132, the extraction of object possession information by the object possession information recognition unit 133, and the generation of an audio signal by the speech synthesis unit 134.
The configuration and operation of the information presentation system 100 according to the second embodiment of the present invention are described in detail below with reference to FIGS. 11 and 12. In the following description, parts identical or corresponding to the configuration of the first embodiment described above are given the same reference numerals, and duplicate descriptions are omitted.
FIG. 11 is a functional block diagram showing the main functions of the processing device 13 of the information presentation device 1 according to the second embodiment of the present invention.
As shown in FIG. 11, the information presentation device 1 according to the second embodiment of the present invention is basically the same as the configuration of the information presentation device 1 according to the first embodiment, but does not include the recording device 12, or the object recognition unit 132, object possession information recognition unit 133, and speech synthesis unit 134 of the processing device 13.
In the information presentation device 1 according to the second embodiment of the present invention, the control unit 131 transmits the captured image received from the camera 11A to the information processing device 3 via the communication device 15. To do so, the control unit 131 transmits the captured image and a communication command for the information processing device 3 to the communication processing unit 135. In accordance with the communication command received from the control unit 131, the communication processing unit 135 transmits the captured image to the information processing device 3 via the communication device 15 and the Internet 401.
FIG. 12 is a functional block diagram showing the main functions of the processing device 33 of the information processing device 3 according to the second embodiment of the present invention.
As shown in FIG. 12, the information processing device 3 according to the second embodiment of the present invention includes the recording device 12, a processing device 33, and a communication device 35, and the processing device 33 includes a control unit 331, the object recognition unit 132, the object possession information recognition unit 133, the speech synthesis unit 134, and a communication processing unit 335.
In the information processing device 3 according to the second embodiment of the present invention, the communication processing unit 335 receives the captured image from the information presentation device 1 via the Internet 401 and the communication device 35 in accordance with a communication command from the control unit 331. The communication processing unit 335 transmits the captured image received from the information presentation device 1 to the control unit 331. The control unit 331 transmits the captured image received from the communication processing unit 335 to the object recognition unit 132.
When the control unit 331 receives a set of synthesized speech from the speech synthesis unit 134, it transmits a communication command for the information presentation device 1 and the set of synthesized speech to the communication processing unit 335. In accordance with the communication command from the control unit 331, the communication processing unit 335 transmits the set of synthesized speech to the information presentation device 1 via the communication device 35 and the Internet 401.
In FIG. 11, the communication processing unit 135 of the information presentation device 1 receives the set of synthesized speech from the information processing device 3 in accordance with a communication command from the control unit 131 and transmits it to the control unit 131. The control unit 131 transmits the set of synthesized speech received from the communication processing unit 135 to the audio output device 14A.
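A highly simplified sketch of this image-for-speech round trip over HTTP is given below; the endpoint, the payload formats, and the use of the `requests` library are illustrative assumptions, since the embodiment does not fix a particular protocol.

```python
# Hypothetical sketch of the second embodiment's round trip: the
# information presentation device 1 uploads a captured image and receives
# synthesized speech. Endpoint and formats are illustrative assumptions.
import requests

SERVER = "http://example.com/recognize"  # hypothetical information processing device 3

def present_surroundings(jpeg_bytes, play_audio):
    """Send the captured image; play back the returned synthesized speech."""
    resp = requests.post(SERVER, data=jpeg_bytes,
                         headers={"Content-Type": "image/jpeg"}, timeout=5.0)
    resp.raise_for_status()
    play_audio(resp.content)  # audio output device 14A plays the waveform
```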
According to the information presentation system 100 of the second embodiment of the present invention configured as described above, the same effects as in the first embodiment described above are obtained. In addition, since the external information processing device 3 connected to the information presentation device 1 via the Internet 401 performs the object recognition processing by the object recognition unit 132, the extraction of object possession information by the object possession information recognition unit 133, and the generation of an audio signal by the speech synthesis unit 134, the processing performed by the information presentation device 1 can be reduced. As the processing performed by the information presentation device 1 is reduced, its power consumption decreases, and the life of the battery that drives the information presentation device 1 can be extended.
In the information presentation system 100 according to the second embodiment of the present invention, the case was described where the information processing device 3 performs all of the object recognition processing by the object recognition unit 132, the extraction of object possession information by the object possession information recognition unit 133, and the generation of an audio signal by the speech synthesis unit 134; however, the present invention is not limited to this case. For example, the information processing device 3 may perform at least one of these processes, and the information presentation device 1 may perform the rest. Even with this configuration, the same effects as described above can be obtained.
Alternatively, the control unit 131 may dynamically determine which of the object recognition processing by the object recognition unit 132, the extraction of object possession information by the object possession information recognition unit 133, and the generation of an audio signal by the speech synthesis unit 134 are to be executed by the information presentation device 1 and by the information processing device 3. For example, when the remaining charge of the battery mounted in the information presentation device 1 becomes low, the control unit 131 can have the information processing device 3 execute the high-load processing, thereby extending the operating time of the information presentation device 1.
[Third Embodiment]
The third embodiment of the present invention differs from the first embodiment described above in that, whereas the camera 11A of the information presentation device 1 according to the first embodiment is configured to capture visible-light images, the camera 11A of the information presentation device 1 according to the third embodiment is configured to additionally capture distance images. The processing device 13 of the information presentation device 1 according to the third embodiment of the present invention is configured to acquire, based on the distance image captured by the camera 11A, distance information indicating the distance between a recognized object and the user 2, and to associate it with the possession information of the object.
The configuration and operation of the information presentation device 1 according to the third embodiment of the present invention are described in detail below. In the following description, parts identical or corresponding to the configuration of the first embodiment described above are given the same reference numerals, and duplicate descriptions are omitted.
Upon receiving an imaging command from the control unit 131, the camera 11A of the information presentation device 1 according to the third embodiment of the present invention starts imaging the surroundings of the user 2. As in the first embodiment described above, the camera 11A is mounted on the information presentation device 1 so that it can capture images in the direction the face of the user 2 is facing when the user 2 wears the information presentation device 1. The camera 11A consists, for example, of a camera capable of capturing full-color images and distance images.
Specifically, as the camera 11A according to the third embodiment of the present invention, it is possible to use, for example, a stereo camera consisting of two color cameras arranged at a predetermined interval. The camera 11A creates a full-color image and a distance image from the images captured by the two color cameras of the stereo camera. For example, the image captured by either one of the two color cameras can be used as the full-color image, and a distance image can be created from the binocular parallax between the images captured by the two color cameras. Instead of a stereo camera, a camera comprising one color camera and a Time Of Flight distance image sensor can also be used as the camera 11A. In that case, the full-color image and the distance image are captured by the color camera and the distance image sensor, respectively.
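For reference, the standard relation between binocular parallax and distance used by such a stereo arrangement is depth = focal length × baseline / disparity; the following Python sketch applies it per pixel, with the calibration values as illustrative assumptions.

```python
# Minimal sketch: distance image from binocular disparity.
# depth = f * B / d, where f is the focal length in pixels, B the baseline
# between the two color cameras, and d the per-pixel disparity.
# Calibration values below are illustrative assumptions.

FOCAL_LENGTH_PX = 700.0   # focal length in pixels (assumed)
BASELINE_M = 0.06         # spacing between the two cameras in meters (assumed)

def disparity_to_depth(disparity_px):
    """Convert one disparity value (pixels) to a distance in meters."""
    if disparity_px <= 0:
        return float("inf")  # no parallax: effectively at infinity
    return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

def distance_image(disparity_map):
    """disparity_map: 2D list of disparities -> 2D list of distances."""
    return [[disparity_to_depth(d) for d in row] for row in disparity_map]

print(disparity_to_depth(4.2))  # ~10 m under the assumed calibration
```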
In the information presentation device 1 according to the third embodiment of the present invention, the camera 11A transmits the full-color image and the distance image captured with the stereo camera to the control unit 131. The control unit 131 transmits the full-color image and the distance image received from the camera 11A to the object recognition unit 132.
The object recognition unit 132 recognizes objects using the full-color image. For each recognized object, the object recognition unit 132 uses the distance image received from the control unit 131 to acquire the distance from the user 2 to that object. The object recognition unit 132 collects the object name and distance of each recognized object into a set of object name and distance pairs, and transmits this set to the control unit 131. The control unit 131 transmits the full-color image received from the camera 11A and the set of object name and distance pairs received from the object recognition unit 132 to the object possession information recognition unit 133.
The object possession information recognition unit 133 acquires the lookup table 120 from the recording device 12, recognizes the possession information held by the objects recognized by the object recognition unit 132, and then creates object possession information character strings according to the character string templates of the lookup table 120. In the information presentation device 1 according to the third embodiment of the present invention, the character string templates of the lookup table 120 recorded in the recording device 12 are, for example, "The vehicle traffic light <distance> ahead is <color>.", "The pedestrian traffic light <distance> ahead is <color>.", and "There is <text> <distance> ahead." instead of "The vehicle traffic light is <color>.", "The pedestrian traffic light is <color>.", and "There is <text>." shown in FIG. 8.
As a specific example of creating an object possession information character string, the case where the object name is a pedestrian traffic light and the distance is 10 m is described below. It is also assumed that the object possession information recognition unit 133 has determined that the red lighting region 122 is brighter than the blue lighting region 123.
Following the character string template "The pedestrian traffic light <distance> ahead is <color>.", the object possession information recognition unit 133 creates the character string "The pedestrian traffic light 10 m ahead is red." by replacing <distance> and <color> in the template with the acquired distance "10 m" and "red", respectively.
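Extending the earlier template-filling sketch, the distance-aware variant of the third embodiment might look like this; rounding the measured distance to whole meters is an assumption made for readability.

```python
# Hypothetical sketch of the distance-aware template of the third
# embodiment; rounding the measured distance is an illustrative choice.

def fill_template(template, **fields):
    for name, value in fields.items():
        template = template.replace(f"<{name}>", value)
    return template

def signal_string(distance_m, color):
    return fill_template(
        "The pedestrian traffic light <distance> ahead is <color>.",
        distance=f"{round(distance_m)} m", color=color)

print(signal_string(10.3, "red"))
# -> "The pedestrian traffic light 10 m ahead is red."
```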
According to the information presentation system 100 of the third embodiment of the present invention, the same effects as in the first embodiment described above are obtained. In addition, since the information presentation device 1 associates the distance information obtained from the distance image of the camera 11A with the possession information of an object, the user 2 can easily grasp the distance between the object and the user 2 in addition to the object name and possession information of the object. In this way, the information presentation system 100 according to the third embodiment of the present invention can present more detailed information about objects to the user 2.
In the information presentation device 1 according to the third embodiment of the present invention described above, the case was described where the object recognition unit 132 recognizes objects using only the full-color image, but the present invention is not limited to this case. For example, the object recognition unit 132 can also recognize objects using the distance image, or using both the full-color image and the distance image. This improves the object recognition accuracy of the object recognition unit 132 when the camera 11A captures multiple objects at different distances from the user 2.
[Fourth Embodiment]
In the information presentation device 1 according to the fourth embodiment of the present invention, in addition to the configuration of the information presentation device 1 according to the first embodiment described above, the object recognition unit 132 according to the fourth embodiment is configured to recognize, among the objects in the image captured by the camera 11A, those existing in a predetermined direction from the user 2. Accordingly, in the information presentation device 1 according to the fourth embodiment of the present invention, the extraction of object possession information by the object possession information recognition unit 133, the generation of an audio signal by the speech synthesis unit 134, and the output of synthesized speech by the audio output device 14A are performed only for objects existing in the predetermined direction from the user 2.
The camera 11A is mounted on the information presentation device 1 so that, when the user 2 wears the information presentation device 1, it captures images in the direction the face of the user 2 is facing; it therefore does not necessarily capture the predetermined direction in which the user 2 needs information. For example, when the user 2 is moving, such as when walking or running, the predetermined direction in which the user 2 needs information is the direction of travel of the user 2, but since the face of the user 2 is not necessarily turned toward the direction of travel, the camera 11A may be capturing a direction other than the direction of travel of the user 2.
The configuration and operation of the information presentation device 1 according to the fourth embodiment of the present invention are described in detail below with reference to FIGS. 13 to 16, taking as an example the case where the user 2 is walking. The direction of travel of the user 2, for example, is set as the predetermined direction in which the user 2 needs information. In the following description, parts identical or corresponding to the configuration of the first embodiment described above are given the same reference numerals, and duplicate descriptions are omitted.
FIG. 13 is a functional block diagram showing the main functions of the processing device 13 of the information presentation device 1 according to the fourth embodiment of the present invention.
The processing device 13 of the information presentation device 1 according to the fourth embodiment of the present invention includes a direction determination unit 136 in addition to the configuration of the processing device 13 according to the first embodiment.
In the information presentation device 1 according to the fourth embodiment of the present invention, when the control unit 131 receives a set of object names from the object recognition unit 132 (hereinafter referred to as the first set of object names for convenience), it transmits the first set of object names and the captured image received from the camera 11A to the direction determination unit 136. Based on the captured image received from the control unit 131, the direction determination unit 136 determines whether each object corresponding to the first set of object names exists in the direction of travel of the user 2.
When the direction determination unit 136 determines that objects exist in the direction of travel of the user 2, it collects the object names of the objects existing in the direction of travel of the user 2 into a new set of object names (hereinafter referred to as the second set of object names for convenience) and transmits this second set of object names to the control unit 131. The control unit 131 transmits the second set of object names received from the direction determination unit 136 and the captured image received from the camera 11A to the object possession information recognition unit 133.
The direction determination unit 136 can determine whether each object corresponding to the first set of object names received from the control unit 131 exists in the direction of travel of the user 2, for example, as follows. Specifically, the direction determination unit 136 determines whether the first set of object names includes at least one of a sidewalk and a roadway, for example, both a sidewalk and a roadway.
When the direction determination unit 136 determines that both a sidewalk and a roadway are included in the first set of object names, it determines whether the angle θ formed by the boundary line between the sidewalk and the roadway and the lower edge of the captured image is at least a predetermined angle, for example 45 degrees. When the direction determination unit 136 determines that the angle θ is 45 degrees or more, it can determine that the face of the user 2 is turned toward the direction of travel and that the objects corresponding to the first set of object names are objects existing in the direction of travel of the user 2. On the other hand, when the direction determination unit 136 determines that the angle θ is less than 45 degrees, it can determine that the face of the user 2 is not turned toward the direction of travel and that the objects corresponding to the first set of object names are not objects existing in the direction of travel of the user 2.
As a specific example, both the sidewalk 115 and the roadway 116 appear in the captured image 110 of the camera 11A shown in FIG. 6. Since the angle θ1 formed by the boundary line 118 between the sidewalk 115 and the roadway 116 and the lower edge 119 of the captured image 110 is 45 degrees or more, the direction determination unit 136 determines that the face of the user 2 is turned toward the direction of travel.
FIG. 14 shows a captured image 110A of the camera 11A given as a comparative example to the captured image 110 shown in FIG. 6.
In contrast, as shown in FIG. 14, since the angle θ2 formed by the boundary line 118 between the sidewalk 115 and the roadway 116 and the lower edge 119 of the captured image 110A is less than 45 degrees, the direction determination unit 136 determines that the face of the user 2 is not turned toward the direction of travel.
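A minimal sketch of this boundary-angle test follows; the 45-degree threshold comes from the text, while representing the detected boundary line by two image points and everything else is an illustrative assumption.

```python
# Hypothetical sketch of the boundary-angle test. The boundary between
# sidewalk and roadway is assumed to be given as two image points.
import math

ANGLE_THRESHOLD_DEG = 45.0  # predetermined angle from the embodiment

def facing_travel_direction(p1, p2):
    """p1, p2: (x, y) endpoints of the sidewalk/roadway boundary line.
    Returns True when the angle to the image's lower edge is >= 45 deg."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    theta = math.degrees(math.atan2(abs(dy), abs(dx)))  # angle vs. horizontal edge
    return theta >= ANGLE_THRESHOLD_DEG

print(facing_travel_direction((100, 480), (300, 80)))   # steep boundary -> True
print(facing_travel_direction((0, 400), (640, 320)))    # shallow boundary -> False
```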
According to the information presentation system 100 of the fourth embodiment of the present invention configured as described above, the same effects as in the first embodiment described above are obtained. In addition, since the extraction of object possession information by the object possession information recognition unit 133, the generation of an audio signal by the speech synthesis unit 134, and the output of synthesized speech by the audio output device 14A are performed only for objects existing in the predetermined direction from the user 2, the information presentation device 1 can present only carefully selected information to the user 2. Moreover, since the processing load on the information presentation device 1 can be reduced, the power consumption of the information presentation device 1 decreases, and the life of the battery that drives the information presentation device 1 can be extended.
In the information presentation device 1 according to the fourth embodiment of the present invention described above, the case was described where the direction determination unit 136 determines, based on the captured image received from the control unit 131, whether each object corresponding to the first set of object names exists in the direction of travel of the user 2; however, the present invention is not limited to this case.
For example, the information presentation device 1 may include an angular velocity sensor (not shown) that measures the angular velocity in the direction in which the user 2 turns the head left or right from a state of standing upright and looking straight ahead (hereinafter referred to as the rotation direction for convenience). In this case, while the user 2 is walking, the user 2 faces the direction of travel most of the time and does not rotate the head in the rotation direction, so no rotation of the head in the rotation direction is detected from the measurement results of the angular velocity sensor. On the other hand, when the user 2 rotates the head in the rotation direction, the rotation of the head in the rotation direction is detected from the measurement results of the angular velocity sensor.
Accordingly, the direction determination unit 136 can determine that the face of the user 2 is not turned toward the direction of travel when rotation of the head of the user 2 in the rotation direction is detected from the measurement results of the angular velocity sensor, and can determine that the face of the user 2 is turned toward the direction of travel when no rotation of the head of the user 2 in the rotation direction has been detected from the measurement results of the angular velocity sensor for a predetermined period (for example, 5 seconds).
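This head-rotation test could be sketched as follows; the angular-velocity threshold used to call a sample "rotation" is an assumption, while the 5-second quiet period comes from the text.

```python
# Hypothetical sketch of the angular-velocity-based direction test.
# ROTATION_THRESHOLD is an assumed dead band; the 5 s window is from the text.
import time

ROTATION_THRESHOLD = 0.3  # rad/s; below this, treat the head as still (assumed)
QUIET_PERIOD_S = 5.0      # predetermined time from the embodiment

class HeadDirectionEstimator:
    def __init__(self):
        self.last_rotation_time = time.monotonic()

    def update(self, angular_velocity):
        """Feed one gyro sample (rad/s, left-right head rotation axis)."""
        if abs(angular_velocity) >= ROTATION_THRESHOLD:
            self.last_rotation_time = time.monotonic()

    def facing_travel_direction(self):
        """True once no rotation has been detected for QUIET_PERIOD_S."""
        return time.monotonic() - self.last_rotation_time >= QUIET_PERIOD_S
```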
As another specific example of the direction determination processing by the direction determination unit 136, the direction determination unit 136 may determine whether each object corresponding to the first set of object names exists in the direction of travel of the user 2 as follows.
The direction determination unit 136 includes a temporary storage area that temporarily stores one captured image of the camera 11A and one first set of object names. The information presentation device 1 performs imaging with the camera 11A at a predetermined time interval ΔT, and the set of object names (hereinafter referred to as the third set of object names for convenience) and the captured image (hereinafter referred to as the second captured image for convenience) stored in the temporary storage area at the time t at which the direction determination unit 136 receives the first set of object names and a captured image (hereinafter referred to as the first captured image for convenience) from the control unit 131 are the set of object names and the captured image that the direction determination unit 136 received from the control unit 131 at time t - ΔT.
The direction determination unit 136 determines whether the second captured image and the third set of object names are stored in the temporary storage area. When the direction determination unit 136 determines that the second captured image and the third set of object names are stored in the temporary storage area, it obtains the positions at which the objects common to the first set of object names and the third set of object names are imaged within the first captured image and within the second captured image.
Then, when, for example, at least half of the objects common to the first set of object names and the third set of object names can be regarded as having moved, in the first captured image compared with their positions in the second captured image, in directions concentrically away from the vicinity of the center of the captured image, the direction determination unit 136 can determine that the face of the user 2 is turned toward the direction of travel and that the objects corresponding to the first set of object names are objects existing in the direction of travel of the user 2.
On the other hand, when at least half of the objects common to the first set of object names and the third set of object names cannot be regarded as having moved, in the first captured image compared with their positions in the second captured image, in directions concentrically away from the vicinity of the center of the captured image, the direction determination unit 136 can determine that the face of the user 2 is not turned toward the direction of travel and that the objects corresponding to the first set of object names are not objects existing in the direction of travel of the user 2.
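This expansion-versus-sideways-flow test could be sketched as below; matching objects by name and the one-half majority rule follow the text, while the simple dot-product test for "moving away from the image center" is an illustrative assumption.

```python
# Hypothetical sketch of the outward-motion test between two frames.
# prev/curr map object names to (x, y) centers; the dot-product test for
# "moving concentrically away from the image center" is an assumption.

def facing_travel_direction(prev, curr, center=(320, 240)):
    common = prev.keys() & curr.keys()
    if not common:
        return False
    outward = 0
    for name in common:
        px, py = prev[name]
        cx, cy = curr[name]
        # radial direction from image center, and observed displacement
        rx, ry = px - center[0], py - center[1]
        dx, dy = cx - px, cy - py
        if rx * dx + ry * dy > 0:   # displacement points away from center
            outward += 1
    return outward >= len(common) / 2  # at least half move outward

prev = {"sign": (400, 200), "pole": (200, 300)}
curr = {"sign": (440, 180), "pole": (160, 330)}
print(facing_travel_direction(prev, curr))  # True: both move outward
```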
FIG. 15 shows an example of the change over time of the images captured by the camera 11A; the left part shows the second captured image 110B and the right part shows the first captured image 110C.
As shown in FIG. 15, objects 110B1 and 110B2 appear in the second captured image 110B of the camera 11A, and objects 110C1 and 110C2 appear in the first captured image 110C. The object 110B1 in the second captured image 110B and the object 110C1 in the first captured image 110C are the same object, as are the object 110B2 and the object 110C2. Because the positions of the objects 110C1 and 110C2 in the first captured image 110C have moved concentrically away from the vicinity of the center of the image relative to the positions of the objects 110B1 and 110B2 in the second captured image 110B, the direction determination unit 136 determines that the face of the user 2 is facing the direction of travel.
FIG. 16 shows another example of the change over time of the images captured by the camera 11A; the left part shows the second captured image 110D and the right part shows the first captured image 110E.
As shown in FIG. 16, objects 110D1 and 110D2 appear in the second captured image 110D of the camera 11A, and objects 110E1 and 110E2 appear in the first captured image 110E. The object 110D1 in the second captured image 110D and the object 110E1 in the first captured image 110E are the same object, as are the object 110D2 and the object 110E2. Because the positions of the objects 110E1 and 110E2 in the first captured image 110E have moved uniformly toward the right side of the image relative to the positions of the objects 110D1 and 110D2 in the second captured image 110D, the direction determination unit 136 determines that the face of the user 2 is not facing the direction of travel.
In the example shown in FIG. 15, because the face of the user 2 is facing the direction of travel, the scales of the objects 110B1 and 110B2 in the second captured image 110B differ greatly from those of the objects 110C1 and 110C2 in the first captured image 110C. In the example shown in FIG. 16, because the face of the user 2 is not facing the direction of travel, the scales of the objects 110D1 and 110D2 in the second captured image 110D and the objects 110E1 and 110E2 in the first captured image 110E differ less than in the example of FIG. 15. The direction determination unit 136 may therefore determine whether the face of the user 2 is facing the direction of travel based on the relative scales of objects in the first and second captured images.
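This scale-based variant can likewise be sketched as a simple area-ratio test on bounding boxes; the box representation and the threshold value are assumptions, not values from the patent.

```python
def grew_markedly(bbox_old, bbox_new, ratio_threshold=1.5):
    # bbox = (x, y, width, height); a large area ratio between the older
    # and newer frames suggests the user is moving toward the object,
    # i.e. facing the direction of travel (cf. Fig. 15 versus Fig. 16)
    area_old = bbox_old[2] * bbox_old[3]
    area_new = bbox_new[2] * bbox_new[3]
    return area_new / area_old >= ratio_threshold
```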
In the information presentation device 1 according to the fourth embodiment of the present invention, the case where the user 2 is walking has been described as an example, but the present invention is not limited to this case. At any time, the extraction of possession information by the object possession information recognition unit 133, the generation of an audio signal by the speech synthesis unit 134, and the output of synthesized speech by the audio output device 14A may be performed only for objects present in a predetermined direction in which the user 2 needs information. Considering that the user 2, while performing a particular activity, spends more time facing the predetermined direction in which information is needed, the direction determination unit 136 can determine whether each object corresponding to the set of object names received from the control unit 131 is present in the predetermined direction, for example, as follows.
Specifically, the direction determination unit 136 includes a temporary storage area that temporarily stores n pairs of captured images and sets of object names from the camera 11A. At the time t at which the first set of object names and its captured image are received from the control unit 131, this temporary storage area holds, among the sets of object names and captured images received from the control unit 131 at the predetermined time interval ΔT, the set of object names and the captured image received at each time (t − k × ΔT), where k is a natural number from 1 to n.
The direction determination unit 136 then determines whether each object name in the set received at time t is included in at least, for example, (n/2) of the n sets of object names stored in the temporary storage area. If an object name is included in (n/2) or more of the stored sets, the direction determination unit 136 determines that the object corresponding to that name is an object present in the predetermined direction; if it is included in fewer than (n/2) of the stored sets, it determines that the corresponding object is not an object present in the predetermined direction.
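The following sketch illustrates this temporary-storage majority test; the class name and the choice of a deque for the n stored sets are illustrative assumptions.

```python
from collections import deque

class DirectionJudge:
    """Minimal sketch of the temporary-storage test described above."""

    def __init__(self, n):
        self.n = n
        self.history = deque(maxlen=n)  # last n object-name sets, one per ΔT

    def objects_in_direction(self, object_names):
        """Return the names received at time t that also appear in at
        least n/2 of the stored sets, then store the new set."""
        judged = {
            name for name in object_names
            if 2 * sum(name in past for past in self.history) >= self.n
        }
        self.history.append(set(object_names))
        return judged

judge = DirectionJudge(n=4)
for frame in [{"sign"}, {"sign", "door"}, {"sign"}, {"sign", "car"}]:
    judge.objects_in_direction(frame)
print(judge.objects_in_direction({"sign", "car"}))  # {'sign'}
```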
As another specific example of the direction determination process, the direction determination unit 136 may include a gaze detector (not shown) that detects the direction of the gaze of the user 2, and may determine, based on the detection result, whether each object corresponding to the first set of object names received from the control unit 131 is present in the predetermined direction in which the user 2 needs information. As the gaze detector, for example, a camera that images the eyeball of the user 2 can be used.
In this case, the direction determination unit 136 detects the direction of the gaze of the user 2 from the images captured by that camera. If the direction determination unit 136 determines that an object corresponding to the first set of object names is present in the detected gaze direction of the user 2, it determines that the object is an object present in the predetermined direction; if not, it determines that the object is not an object present in the predetermined direction. Using a gaze detector in the direction determination process in this way allows the user 2 to obtain the required information on the surrounding environment quickly.
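As a rough sketch of the gaze-based variant, assuming the gaze direction has already been mapped to a point in the same image coordinates as the recognized objects (a step the patent leaves unspecified), the containment test might look like this:

```python
def in_gaze_direction(bbox, gaze_point):
    # bbox = (x, y, width, height) of a recognized object;
    # gaze_point = (gx, gy) projected into the same image coordinates
    x, y, w, h = bbox
    gx, gy = gaze_point
    return x <= gx <= x + w and y <= gy <= y + h
```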
[Fifth Embodiment]
The fifth embodiment of the present invention differs from the first embodiment described above in that, whereas the information presentation device 1 according to the first embodiment uses, as the predetermined information signal to be presented to the user 2, an audio signal obtained by synthesizing speech for the user 2, the information presentation device 1 according to the fifth embodiment is configured to use an information signal other than an audio signal.
Hereinafter, the configuration and operation of the information presentation device 1 according to the fifth embodiment of the present invention will be described in detail with reference to FIGS. 17 to 19. In the following description, parts identical or corresponding to those of the first embodiment described above are given the same reference numerals, and redundant description is omitted.
FIG. 17 is an overall view schematically showing the information presentation device 1 according to the fifth embodiment of the present invention.
In the information presentation device 1 according to the fifth embodiment of the present invention, as the output device 14, for example as shown in FIG. 17, a vibration element 14B that vibrates according to the information signal generated by the signal generation unit 134 can be used.
The signal generation unit 134 creates a vibration pattern based on the set of object possession information character strings received from the control unit 131, and transmits the created vibration pattern to the control unit 131 as the information signal. The vibration element 14B vibrates according to the vibration pattern indicated by the information signal received from the control unit 131. By perceiving the vibration of the vibration element 14B, the user 2 recognizes the information on the surrounding environment presented by the information presentation device 1.
As the vibration pattern, for example, the Morse code of an object possession information character string included in the set can be used, which allows the user 2 to recognize the character string accurately. Alternatively, a vibration pattern defined for a specific object possession information character string can be used: for example, a continuous vibration pattern that continues for a predetermined time, or a discontinuous vibration pattern that is produced intermittently at preset time intervals.
As a specific example, a continuous vibration pattern lasting one second can be assigned to the character string "The pedestrian signal is green.", and a discontinuous vibration pattern of five cycles, each cycle consisting of 0.1 seconds of vibration followed by 0.1 seconds without vibration, can be assigned to the character string "The pedestrian signal is red.". By assigning the possession information of specific objects that are urgent for the user 2, such as a red pedestrian signal, to specific vibration patterns, the user 2 can recognize such urgent information immediately, further improving the safety of the user 2 while walking. The discontinuous vibration pattern described above is desirably configured so that its periods total one second or less.
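A possible encoding of these vibration patterns, using assumed (vibrate, pause) segment lists and the English renderings of the example strings, is sketched below.

```python
# Each pattern is a list of (vibrate_seconds, pause_seconds) segments.
GREEN_SIGNAL = [(1.0, 0.0)]        # one second of continuous vibration
RED_SIGNAL   = [(0.1, 0.1)] * 5    # five cycles of 0.1 s on / 0.1 s off

# Mapping of specific urgent strings to dedicated patterns; other
# strings could fall back to a Morse-code rendering (table omitted).
PATTERNS = {
    "The pedestrian signal is green.": GREEN_SIGNAL,
    "The pedestrian signal is red.": RED_SIGNAL,
}

def total_duration(pattern):
    # the text recommends keeping the whole pattern at one second or less
    return sum(on + off for on, off in pattern)

assert total_duration(RED_SIGNAL) <= 1.0
```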
FIG. 18 is an overall view schematically showing another example of the information presentation device 1 according to the fifth embodiment of the present invention.
As another example of the output device 14 in the fifth embodiment, as shown in FIG. 18, an electronic braille device 14C that displays braille corresponding to the information signal generated by the signal generation unit 134 can be used in addition to the vibration element 14B. In this case, the information presentation device 1 presents the possession information represented by an object to the user 2 using the electronic braille device 14C. The electronic braille device 14C displays braille by, for example, changing the arrangement of protrusions and depressions on its surface according to the information signal. The information presentation device 1 also uses the vibration element 14B to notify the user 2 that it is presenting the possession information of an object via the electronic braille device 14C.
The signal generation unit 134 creates the vibration pattern for the vibration element 14B and the electronic braille character string for the electronic braille device 14C based on the object possession information character string received from the control unit 131. As the vibration pattern, for example, the one-second continuous vibration pattern described above can be used. The signal generation unit 134 transmits the created vibration pattern and electronic braille character string to the control unit 131 as information signals.
The vibration element 14B vibrates according to the vibration pattern indicated by the information signal received from the control unit 131, and the electronic braille device 14C displays braille based on the electronic braille character string indicated by that information signal. By perceiving the vibration of the vibration element 14B, the user 2 recognizes that the electronic braille device 14C is displaying the braille of the character string, and by reading the braille on the electronic braille device 14C, recognizes the information on the surrounding environment presented by the information presentation device 1.
FIG. 19 is an overall view schematically showing another example of the information presentation device 1 according to the fifth embodiment of the present invention.
As yet another example of the output device 14 in the fifth embodiment, as shown in FIG. 19, a video display element 14D that displays video corresponding to the information signal generated by the signal generation unit 134 can be used instead of the vibration element 14B. Specifically, the information presentation device 1 is configured, for example, as a transmissive head-mounted display that is worn on the head of the user 2 and includes the video display element 14D.
In this case, the signal generation unit 134 creates the video for the video display element 14D based on the object possession information character string received from the control unit 131, and transmits the created video to the control unit 131 as the information signal. The video display element 14D displays the video according to the information signal received from the control unit 131. By perceiving the video displayed by the video display element 14D, the user 2 recognizes the information on the surrounding environment presented by the information presentation device 1.
According to the information presentation system 100 of the fifth embodiment of the present invention configured in this way, the same effects as those of the first embodiment described above are obtained. In addition, because the information presentation device 1 presents information on the surrounding environment to the user 2 without using sound, the user 2 can recognize the possession information represented by an object through touch, sight, or other senses besides hearing, and can do so while still listening to the surrounding sounds.
In the information presentation device 1 according to the fifth embodiment described above, the case was described in which the device includes the vibration element 14B and the electronic braille device 14C as another example of the output device 14, presents the possession information represented by an object to the user 2 via the electronic braille device 14C, and uses the vibration element 14B to notify the user 2 that the electronic braille device 14C is presenting that information. However, the present invention is not limited to this case. For example, in addition to presenting the possession information via the electronic braille device 14C, the information presentation device 1 may present the possession information represented by an object to the user by vibrating the vibration element 14B according to a vibration pattern defined for a specific object possession information character string.
[Sixth Embodiment]
The sixth embodiment of the present invention differs from the fifth embodiment described above in that, whereas the information presentation system 100 according to the fifth embodiment consists of the information presentation device 1 that performs the processing for presenting information on the surrounding environment of the user 2, the information presentation system 100 according to the sixth embodiment consists of the information presentation device 1 and an external output device 4 (see FIG. 20) that is provided outside the information presentation device 1 and outputs the information processed by the information presentation device 1.
Hereinafter, the configuration and operation of the information presentation system 100 according to the sixth embodiment of the present invention will be described in detail with reference to FIGS. 20 and 21. In the following description, parts identical or corresponding to those of the fifth embodiment described above are given the same reference numerals, and redundant description is omitted.
The information presentation device 1 according to the sixth embodiment of the present invention is basically the same in configuration as the information presentation device 1 according to the fifth embodiment, except that it does not include the output device 14 shown in FIG. 3. The communication device 15 according to the sixth embodiment functions as an information-presentation-device-side communication device that communicates with the external output device 4, and is connected to the external output device 4 via, for example, a wireless communication line.
FIG. 20 is a functional block diagram showing the main functions of the processing device 43 of the external output device 4 according to the sixth embodiment of the present invention.
As shown in FIG. 20, the external output device 4 according to the sixth embodiment of the present invention includes a processing device 43, the output device 14, and a communication device 45; the processing device 43 includes a control unit 431 and a communication processing unit 435. The communication device 45 functions as an external-output-device-side communication device that communicates with the information presentation device 1.
In the information presentation device 1 according to the sixth embodiment, when the control unit 131 receives the information signal from the signal generation unit 134, it transmits this information signal to the external output device 4. To this end, the control unit 131 sends a communication command for the external output device 4 and the information signal to the communication processing unit 135. In accordance with the communication command received from the control unit 131, the communication processing unit 135 transmits the information signal received from the control unit 131 to the external output device 4 via the communication device 15.
The communication processing unit 435 of the external output device 4 receives the information signal from the information presentation device 1 via the communication device 45 in accordance with a communication command from the control unit 431. The control unit 431 transmits the information signal received from the communication processing unit 435 to the output device 14, which produces the output corresponding to that information signal. By perceiving the output from the output device 14, the user 2 recognizes the information on the surrounding environment presented by the information presentation device 1.
FIG. 21 is an overall view showing a white cane 4A given as an example of the external output device 4 according to the sixth embodiment of the present invention.
In the information presentation system 100 according to the sixth embodiment of the present invention, as the external output device 4, for example as shown in FIG. 21, a white cane 4A held by the user 2 can be used. As the output device 14, this white cane 4A includes a vibration element 14B that vibrates according to the information signal generated by the signal generation unit 134, as in the fifth embodiment described above. The vibration element 14B is formed in the grip portion that the user 2 holds in the hand.
According to the information presentation system 100 of the sixth embodiment of the present invention configured in this way, the same effects as those of the fifth embodiment described above are obtained. In addition, by having the information presentation device 1 cooperate with the external output device 4 to present information on the surrounding environment to the user 2, diverse forms of presentation of that information can be realized. Furthermore, by using as the external output device 4 an item the user 2 already carries, such as the white cane 4A, the number of items the user 2 must wear can be reduced, providing excellent convenience to the user.
In the information presentation system 100 according to the sixth embodiment described above, the case where the white cane 4A is used as the external output device 4 has been described, but the present invention is not limited to this case. For example, devices such as a mobile terminal carried by the user 2 (e.g. a smartphone), a watch, or a ring-type device may be used, with the vibration element 14B described above provided in each of these devices.
Also, in the information presentation system 100 according to the sixth embodiment, the case where the white cane 4A includes the vibration element 14B that vibrates according to the information signal generated by the signal generation unit 134 has been described, but the present invention is not limited to this case. For example, the white cane 4A may include an electrical stimulation element (not shown) that applies electrical stimulation according to the information signal. With this arrangement, when possession information of a specific object that is urgent for the user 2 is presented, such as a red pedestrian signal, the user 2 can recognize that urgent information immediately through the electrical stimulation element.
[Seventh Embodiment]
In addition to the configuration of the information presentation device 1 according to the first embodiment, the information presentation device 1 according to the seventh embodiment of the present invention further includes an input device 18 (see FIG. 22) for inputting the destination that the user 2 intends to reach. In processing the information on the surrounding environment of the user 2, the processing device 13 of the information presentation device 1 according to the seventh embodiment recognizes, in the image captured by the camera 11A, the destination entered via the input device 18 or, among the objects in the captured image, an object that the user 2 uses when traveling to that destination, and generates from the extracted possession information an information signal indicating predetermined guidance information to the destination of the user 2. In other words, the information presentation device 1 according to the seventh embodiment has a navigation function for guiding the user 2 to the destination.
Hereinafter, the configuration and operation of the information presentation device 1 according to the seventh embodiment of the present invention will be described in detail with reference to FIG. 22. In the following description, parts identical or corresponding to those of the first embodiment described above are given the same reference numerals, and redundant description is omitted.
FIG. 22 is a functional block diagram showing the main functions of the processing device 13 of the information presentation device 1 according to the seventh embodiment of the present invention.
In addition to the configuration of the processing device 13 according to the first embodiment, the processing device 13 of the information presentation device 1 according to the seventh embodiment further includes a navigation processing unit 137.
The information presentation device 1 according to the seventh embodiment requests the user 2 to input a destination. To this end, the control unit 131 acquires from the recording device 12 a destination input request character string for verbally prompting the input of a destination. As this character string, for example, "Please enter a destination." can be used.
Next, the control unit 131 transmits the destination input request character string acquired from the recording device 12 to the speech synthesis unit 134. The speech synthesis unit 134 generates an audio signal synthesizing speech from the character string and transmits the generated audio signal to the control unit 131. When the control unit 131 transmits the audio signal received from the speech synthesis unit 134 to the audio output device 14A, the audio output device 14A outputs the synthesized speech corresponding to that signal.
The input device 18 receives the input of the destination from the user 2 and recognizes the destination. For example, the input device 18 may include a microphone that receives the speech of the destination uttered by the user 2, and the destination can be recognized by applying speech recognition to the electrical signal converted from the received speech. The input device 18 then transmits the information of the recognized destination to the control unit 131.
The control unit 131 transmits the destination information received from the input device 18 to the navigation processing unit 137. When the control unit 131 receives the set of object possession information character strings from the object possession information recognition unit 133, it also transmits this set and the captured image received from the camera 11A to the navigation processing unit 137.
Based on the destination information and the set of object possession information character strings received from the control unit 131, the navigation processing unit 137 creates a navigation character string for verbally guiding the user 2 to the destination, and transmits the created string to the control unit 131. The control unit 131 transmits the navigation character string received from the navigation processing unit 137 to the speech synthesis unit 134.
The speech synthesis unit 134 generates an audio signal synthesizing speech from the navigation character string received from the control unit 131 and transmits the generated audio signal to the control unit 131. The control unit 131 transmits that audio signal to the audio output device 14A, which outputs the corresponding synthesized speech.
Hereinafter, the navigation processing performed by the navigation processing unit 137 will be described in detail by taking as an example the case where the destination of the user 2 is A Station, the current location of the user 2 is B Station, the object recognition unit 132 recognizes an electronic bulletin board showing train departure guidance as an object the user 2 uses when traveling to the destination, and the object possession information recognition unit 133 recognizes the train departure guidance information shown on that board.
The navigation processing unit 137 acquires the boarding-train information database recorded in the recording device 12 and, referring to this database, searches for boarding-train information for trains that reach A Station from B Station. The navigation processing unit 137 also searches the set of object possession information character strings for the train departure guidance information shown on the electronic bulletin board. This departure guidance information includes the destination of each train departing from B Station, its departure time, and the platform number from which it departs.
From the train departure guidance information, the navigation processing unit 137 searches for optimal boarding-train information, that is, information on the train that is best to board at B Station in order to reach A Station. For example, if the optimal train is bound for C Station, departs at 12:34, and leaves from platform 5, the navigation processing unit 137 generates the navigation character string "To get to A Station, board the train for C Station departing from platform 5 at 12:34." The navigation processing unit 137 then transmits the information of the generated navigation character string to the control unit 131.
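The selection of the optimal boarding train and the phrasing of the navigation character string might be sketched as follows; the field names and the earliest-departure selection rule are assumptions, since the patent does not specify how the optimal train is chosen.

```python
def build_navigation_string(departures, reaches_destination, destination):
    """departures: list of dicts with 'destination', 'time', 'platform'
    read off the recognized departure board; reaches_destination stands
    in for the lookup against the boarding-train database."""
    candidates = [d for d in departures if reaches_destination(d)]
    if not candidates:
        return "No suitable train for " + destination + " was found."
    best = min(candidates, key=lambda d: d["time"])  # earliest departure
    return ("To get to {}, board the train for {} departing from "
            "platform {} at {}.".format(destination, best["destination"],
                                        best["platform"], best["time"]))

# example matching the text: reach A Station from B Station
board = [{"destination": "C Station", "time": "12:34", "platform": "5"}]
print(build_navigation_string(board, lambda d: True, "A Station"))
```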
Here, the navigation processing unit 137 can also append to the navigation character string a string that supplements it. For example, it can append information supplementing the departure time contained in the navigation character string. Specifically, the information presentation device 1 may include, for example, a watch carried by the user 2, and the navigation processing unit 137 may calculate the time difference between the current time and the departure time and display the calculated difference on the watch. As another example, the navigation processing unit 137 may append, based on the recognition results of the object recognition unit 132 and the object possession information recognition unit 133, information guiding the user along the route from the current position to the platform from which the train departs.
Next, another example of the navigation processing performed by the navigation processing unit 137 will be described in detail by taking the case where the destination of the user 2 is a flower shop, the current location of the user 2 is near the destination, and the destination appears in the image captured by the camera 11A. In the following description, it is assumed that the object recognition unit 132 recognizes the destination or its signboard, and that the object possession information recognition unit 133 recognizes the name of the destination. As in the first embodiment, a color camera is used as the camera 11A.
The navigation processing unit 137 detects the destination in the set of object possession information character strings and further detects the position of the destination within the captured image. Based on this position, the navigation processing unit 137 estimates the positional relationship between the user 2 and the destination and creates a navigation character string accordingly. As a specific example, if the navigation processing unit 137 refers to the position of the destination flower shop in the captured image and estimates that the shop is ahead on the left of the user 2, it creates the navigation character string "The destination flower shop is ahead on your left." The navigation processing unit 137 then transmits the information of the created navigation character string to the control unit 131.
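A sketch of this positional estimate is shown below; mapping the horizontal position of the destination's bounding box to thirds of the frame is an assumed heuristic, not a rule from the patent.

```python
def direction_phrase(x_center, image_width):
    # split the frame into thirds; the thresholds are an assumption
    if x_center < image_width / 3:
        return "ahead on your left"
    if x_center > 2 * image_width / 3:
        return "ahead on your right"
    return "straight ahead"

def destination_guidance(name, bbox, image_width):
    x_center = bbox[0] + bbox[2] / 2  # bbox = (x, y, width, height)
    return "The destination {} is {}.".format(
        name, direction_phrase(x_center, image_width))

print(destination_guidance("flower shop", (40, 120, 60, 80), 640))
# The destination flower shop is ahead on your left.
```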
According to the information presentation device 1 of the seventh embodiment of the present invention configured in this way, the same effects as those of the first embodiment described above are obtained. In addition, because the information presentation device 1 has a navigation function that guides the user 2 to the destination entered via the input device 18, it can give the user 2 guidance for reaching the destination without requiring the user 2 to infer the surrounding situation or current location from the information on the surrounding environment. This allows the user 2 to reach the destination smoothly, shortening the time required to get there.
In the information presentation device 1 according to the seventh embodiment of the present invention, the case where a color camera is used as the camera 11A, as in the first embodiment, has been described, but the present invention is not limited to this case. As the camera 11A, a camera capable of capturing a full-color image and a distance image may be used, as in the third embodiment. This allows the navigation processing unit 137 to append, as a string supplementing the navigation character string, a string indicating the distance between the user 2 and the destination.
As a specific example, if the navigation processing unit 137 estimates, based on the full-color image and the distance image from the camera 11A, that the flower shop is 10 m ahead on the left of the user 2, it creates the navigation character string "The destination flower shop is 10 m ahead on your left." and transmits the information of the created string to the control unit 131. The information presentation device 1 can thereby provide detailed navigation that includes information on the distance to the destination of the user 2.
[Eighth Embodiment]
The eighth embodiment of the present invention differs from the first embodiment described above in that, whereas the information presentation system 100 according to the first embodiment consists of the information presentation device 1 that performs the processing for presenting information on the surrounding environment of the user 2, the information presentation system 100 according to the eighth embodiment consists, as shown in FIG. 4, of the information presentation device 1 and an external command device 5 that is provided outside the information presentation device 1 and transmits commands to the information presentation device 1.
Hereinafter, the configuration and operation of the information presentation system 100 according to the eighth embodiment of the present invention will be described in detail. In the following description, parts identical or corresponding to those of the first embodiment described above are given the same reference numerals, and redundant description is omitted.
The information presentation device 1 according to the eighth embodiment of the present invention is basically the same in configuration as the information presentation device 1 according to the first embodiment. The communication device 15 according to the eighth embodiment functions as an information-presentation-device-side communication device that communicates with the external command device 5 and is communicatively connected to the external command device 5 via the Internet 401, in the same way as with the information processing device 3.
In the information presentation device 1 according to the eighth embodiment, the communication processing unit 135, in accordance with a communication command from the control unit 131, receives from the external command device 5 via the communication device 15 a possession information recognition request command requesting recognition of the possession information of a specific object (hereinafter referred to as the recognition target object for convenience), and transmits this command to the control unit 131. The control unit 131 transmits the captured image received from the camera 11A, the set of object names received from the object recognition unit 132, and the possession information recognition request command received from the communication processing unit 135 to the object possession information recognition unit 133.
In accordance with the possession information recognition request command received from the control unit 131, the object possession information recognition unit 133 searches the object names contained in the set to determine whether the name of the recognition target object is included in the set. If it determines that the name is included, the object possession information recognition unit 133 recognizes and extracts the possession information represented by the recognition target object, creates an object possession information character string, and transmits the information of the created string to the control unit 131. If it determines that the name is not included, the object possession information recognition unit 133 transmits a zero-object possession information detection signal to the control unit 131.
As the external command device 5, for example, a smartphone having a navigation function can be used. In this case, when the user 2 approaches, for example, an intersection equipped with a pedestrian signal serving as the recognition target object, the external command device 5 transmits a possession information request command for the pedestrian signal to the information presentation device 1 via the Internet 401. In accordance with this command, the object possession information recognition unit 133 of the information presentation device 1 recognizes and extracts the currently lit color of the pedestrian signal and creates, for example, the object possession information character string "The pedestrian signal ahead is green."
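The handling of a possession information recognition request command can be sketched as follows; the function signature and the callback standing in for the recognition step are assumptions.

```python
def handle_recognition_request(target_name, recognized_names, extract_info):
    """target_name comes from the external command device; recognized_names
    is the set produced by the object recognition unit; extract_info stands
    in for the possession-information recognition step."""
    if target_name in recognized_names:
        return extract_info(target_name)
    return None  # corresponds to the zero-object possession information signal

result = handle_recognition_request(
    "pedestrian signal",
    {"pedestrian signal", "crosswalk"},
    lambda name: "The pedestrian signal ahead is green.",
)
print(result)
```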
According to the information presentation system 100 of the eighth embodiment of the present invention configured in this way, the same effects as those of the first embodiment described above are obtained. In addition, by having the information presentation device 1 cooperate with the external command device 5 to present information on the surrounding environment to the user 2, the user 2 can efficiently obtain the possession information represented by an object designated via the external command device 5. An information presentation system 100 that is convenient for the user 2 can thus be provided.
The embodiments described above have been explained in detail in order to describe the present invention clearly, and the invention is not necessarily limited to embodiments having all of the described configurations. Part of the configuration of one embodiment can be replaced with the configuration of another embodiment, and the configuration of one embodiment can be added to the configuration of another. Thus, for example, the information presentation device 1 and the information processing device 3 according to the second embodiment may be combined with the external command device 5 according to the eighth embodiment.
In this case, the information presentation device 1 can receive an imaging command from the external command device 5 via the communication device 15 and transmit the image captured by the camera 11A to the information processing device 3 in accordance with that command. The information presentation device 1 can also receive, via the communication device 15, an audio output request command from the external command device 5 and execute audio output by the audio output device 14A in accordance with that command. This makes it possible to present information on the surrounding environment in accordance with the intention of the user 2.
DESCRIPTION OF SYMBOLS
1... Information presentation device, 2... User, 3... Information processing device, 4... External output device, 4A... White cane, 5... External command device, 11... Imager, 11A... Camera, 12... Recording device, 13, 33, 43... Processing device, 14... Audio output device (output device), 14A... Headphones (audio output device), 14B... Vibration element (output device), 14C... Electronic braille device (output device), 14D... Video display element (output device), 15... Communication device (information-presentation-device-side communication device), 18... Input device, 35... Communication device (information-processing-device-side communication device), 45... Communication device (external-output-device-side communication device)
100... Information presentation system, 120... Lookup table, 131, 331, 431... Control unit, 132... Object recognition unit, 133... Object possession information recognition unit, 134... Speech synthesis unit (signal generation unit), 135, 335, 435... Communication processing unit, 136... Direction determination unit, 137... Navigation processing unit

Claims (18)

1. An information presentation system for presenting information on the surrounding environment of a user, the system comprising:
an imager that images the surrounding environment;
a recording device that records predetermined possession information held by objects included in the surrounding environment;
a processing device that performs processing to recognize a preset object in the captured image, based on the captured image taken by the imager and the possession information recorded in the recording device, and to extract the possession information represented by that object and generate a predetermined information signal; and
an output device that produces output corresponding to the information signal generated by the processing device.
2. The information presentation system according to claim 1, comprising:
an information presentation device that performs the processing for presenting the information on the surrounding environment to the user; and
an external command device that is provided outside the information presentation device and transmits commands to the information presentation device,
wherein the information presentation device includes, in addition to the imager, the recording device, the processing device, and the output device, an information-presentation-device-side communication device that communicates with the external command device, and executes the processing by the processing device in accordance with a command received from the external command device via the information-presentation-device-side communication device.
3. The information presentation system according to claim 1, comprising:
an information presentation device that performs the processing for presenting the information on the surrounding environment to the user; and
an information processing device that is provided outside the information presentation device and processes the information on the surrounding environment,
wherein the information presentation device includes, in addition to the imager and the output device, an information-presentation-device-side communication device that communicates with external devices, and transmits the captured image to the external device via the information-presentation-device-side communication device, and
the information processing device includes, in addition to the recording device and the processing device, an information-processing-device-side communication device that communicates with the information presentation device, and, after receiving the captured image from the information presentation device via the information-processing-device-side communication device, transmits the information signal generated by the processing device to the information presentation device via the information-processing-device-side communication device.
4.  The information presentation system according to claim 3, comprising an external command device that is provided outside the information presentation device and transmits commands to the information presentation device,
    wherein the information presentation device transmits the captured image in accordance with a command received from the external command device via the information presentation device side communication device, and further executes output by the output device in accordance with a command received from the external command device via the information presentation device side communication device.
5.  The information presentation system according to claim 1, wherein the output device comprises at least one of: an audio output device that outputs sound corresponding to the information signal; a vibration element that vibrates according to the information signal; an electrical stimulation element that applies electrical stimulation according to the information signal; and an electronic braille device that displays braille corresponding to the information signal.
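One hedged reading of claim 5 is a fan-out from the single information signal to whichever output elements are fitted; the outputs mapping, its keys, and the per-element mappings below are illustrative stand-ins for real device drivers.

```python
from typing import Callable, Dict

def dispatch(signal: bytes, outputs: Dict[str, Callable]) -> None:
    """Fan the information signal out to whichever output elements exist."""
    text = signal.decode("utf-8")
    if "speech" in outputs:
        outputs["speech"](text)            # e.g. a text-to-speech engine
    if "vibration" in outputs:
        outputs["vibration"](len(text))    # crude mapping: more info, longer buzz
    if "stimulation" in outputs:
        outputs["stimulation"](1)          # a single pulse per presentation
    if "braille" in outputs:
        outputs["braille"](text)           # electronic braille display
```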
6.  The information presentation system according to claim 1, wherein the audio output device comprises at least one of: a first headphone that transmits the sound by air vibration; a first earphone that transmits the sound by air vibration; a speaker that transmits the sound by air vibration; a second headphone that transmits the sound by bone conduction; and a second earphone that transmits the sound by bone conduction.
7.  The information presentation system according to claim 1, comprising:
    an information presentation device that performs processing to present the information on the surrounding environment to the user; and
    an external output device that is provided outside the information presentation device and outputs information processed by the information presentation device,
    wherein the information presentation device includes, in addition to the imager, the recording device, and the processing device, an information presentation device side communication device that communicates with the external output device, and transmits the information signal generated by the processing device to the external output device via the information presentation device side communication device, and
    the external output device includes, in addition to the output device, an external output device side communication device that communicates with the information presentation device, and receives the information signal from the information presentation device via the external output device side communication device.
8.  The information presentation system according to claim 7, wherein the external output device comprises at least one of a portable terminal carried by the user and a white cane held by the user, and the white cane includes at least one of a vibration element that vibrates according to the information signal and an electrical stimulation element that applies electrical stimulation according to the information signal.
9.  The information presentation system according to claim 1, wherein the processing device generates, as the information signal, either a continuous-pattern signal that is generated continuously for a predetermined time or a discontinuous-pattern signal that is generated intermittently at preset time intervals.
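The two signal patterns of claim 9 can be made concrete as sampled on/off envelopes; the 1 kHz sample rate below is chosen only for illustration.

```python
import numpy as np

def continuous_pattern(duration_s: float, rate_hz: int = 1000) -> np.ndarray:
    """An envelope that stays on for the whole predetermined time."""
    return np.ones(int(duration_s * rate_hz))

def discontinuous_pattern(duration_s: float, interval_s: float,
                          pulse_s: float, rate_hz: int = 1000) -> np.ndarray:
    """An envelope that turns on for pulse_s seconds at every interval_s."""
    t = np.arange(0.0, duration_s, 1.0 / rate_hz)
    return ((t % interval_s) < pulse_s).astype(float)
```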
10.  The information presentation system according to claim 1, wherein the imager is configured to capture an image of visible light, and the recording device records, as the predetermined possessed information, at least one of: a color of the object; a color of a region of the object where the brightness is highest; characters represented on the object; and a figure shown on the object.
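For the "color of the region where brightness is highest" in claim 10, one plausible implementation (among many) blurs the luminance first so a single specular pixel does not win; the kernel size k is an arbitrary odd value, and OpenCV is assumed.

```python
import cv2
import numpy as np

def brightest_region_color(bgr: np.ndarray, k: int = 15):
    """Return the BGR color at the brightest region of a visible-light image.
    Blurring first keeps one specular pixel from dominating (k must be odd)."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    smoothed = cv2.GaussianBlur(gray, (k, k), 0)
    _, _, _, (x, y) = cv2.minMaxLoc(smoothed)
    b, g, r = bgr[y, x]
    return int(b), int(g), int(r)
```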
11.  The information presentation system according to claim 10, wherein the imager is further configured to capture a distance image, and the processing device acquires, based on the distance image captured by the imager, distance information indicating the distance between the recognized object and the user and associates it with the possessed information.
12.  The information presentation system according to claim 10, wherein the imager is further configured to capture a heat distribution image, and the processing device acquires, based on the heat distribution image captured by the imager, temperature information indicating the temperature of the recognized object and associates it with the possessed information.
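Claims 11 and 12 both attach a scalar read from a second image to the recognized object's possessed information. A sketch, assuming the distance and heat-distribution images are co-registered with the visible image (the claims do not say how registration is done); median depth and peak temperature are design choices, not claimed steps.

```python
from typing import Optional, Tuple

import numpy as np

def annotate_detection(info: str,
                       bbox: Tuple[int, int, int, int],
                       depth_m: Optional[np.ndarray] = None,
                       thermal_c: Optional[np.ndarray] = None) -> dict:
    """Associate distance and/or temperature, read from co-registered depth
    and heat-distribution images, with a recognized object's information."""
    x, y, w, h = bbox
    annotated = {"info": info}
    if depth_m is not None:
        annotated["distance_m"] = float(np.median(depth_m[y:y + h, x:x + w]))
    if thermal_c is not None:
        annotated["temperature_c"] = float(np.max(thermal_c[y:y + h, x:x + w]))
    return annotated
```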
13.  The information presentation system according to claim 1, wherein the processing device recognizes, in the captured image, the object existing in a predetermined direction from the user.
14.  The information presentation system according to claim 13, wherein the predetermined direction is the direction of travel when the user is moving.
15.  The information presentation system according to claim 13, wherein the predetermined direction is the direction of the user's line of sight or the frontal direction of the user's face.
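For claims 13 to 15, one way to test whether a recognized object lies in the predetermined direction is to convert its image position to a horizontal bearing under a pinhole-camera assumption, taking the camera's optical axis as a proxy for the travel or gaze direction; the field of view and tolerance below are illustrative.

```python
import math

def bearing_from_center(cx_px: float, image_width_px: int,
                        horizontal_fov_deg: float) -> float:
    """Horizontal angle (degrees) of a detection from the optical axis,
    under a pinhole-camera model."""
    f_px = (image_width_px / 2) / math.tan(math.radians(horizontal_fov_deg / 2))
    return math.degrees(math.atan((cx_px - image_width_px / 2) / f_px))

def in_predetermined_direction(cx_px: float, image_width_px: int,
                               horizontal_fov_deg: float,
                               tolerance_deg: float = 10.0) -> bool:
    """True if the object lies within tolerance of the assumed heading."""
    return abs(bearing_from_center(cx_px, image_width_px,
                                   horizontal_fov_deg)) <= tolerance_deg
```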
16.  The information presentation system according to claim 1, comprising an input device for inputting a destination that the user plans to reach,
    wherein, in processing the information on the surrounding environment, the processing device recognizes in the captured image the destination input via the input device, or an object, among the objects, that the user uses when moving to the destination, and generates, from the extracted possessed information, the information signal indicating predetermined guidance information to the destination.
17.  The information presentation system according to claim 1, comprising an information presentation device that performs processing to present the information on the surrounding environment to the user,
    wherein the information presentation device is worn on the user's head and comprises a head-mounted display including the imager, the recording device, the processing device, and the output device.
18.  An information presentation method used in an information presentation system comprising an imager that captures information on the surrounding environment of a user, a recording device that records predetermined possessed information held by an object included in the surrounding environment, a processing device connected to the imager and the recording device to process the information on the surrounding environment, and an output device connected to the processing device, the method comprising the steps of:
    the processing device acquiring a captured image from the imager;
    the processing device acquiring the predetermined possessed information from the recording device;
    the processing device, based on the captured image and the predetermined possessed information acquired in these steps, recognizing the preset object in the captured image and extracting the possessed information represented by the object to generate a predetermined information signal; and
    the output device performing output corresponding to the information signal generated by the processing device.
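The four steps of the method in claim 18 can be read as a simple acquisition-processing-output loop; camera, recording_device, processor, and output_device below are hypothetical stand-ins for the claimed hardware, and the polling period is arbitrary.

```python
import time

def presentation_loop(camera, recording_device, processor, output_device,
                      period_s: float = 1.0) -> None:
    """Repeat the four claimed steps: acquire the captured image, load the
    possessed information, recognize/extract/generate, then output."""
    while True:
        image = camera.capture()                                   # step 1
        records = recording_device.load_records()                  # step 2
        signal = processor.recognize_and_extract(image, records)   # step 3
        if signal:
            output_device.emit(signal)                             # step 4
        time.sleep(period_s)
```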
PCT/JP2015/066756 2015-06-10 2015-06-10 Information presentation system and information presentation method WO2016199248A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2015/066756 WO2016199248A1 (en) 2015-06-10 2015-06-10 Information presentation system and information presentation method


Publications (1)

Publication Number Publication Date
WO2016199248A1

Family

ID=57503777

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/066756 WO2016199248A1 (en) 2015-06-10 2015-06-10 Information presentation system and information presentation method

Country Status (1)

Country Link
WO (1) WO2016199248A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002238942A (en) * 2001-02-21 2002-08-27 Fuji Heavy Ind Ltd Man-machine interface and vision aid
JP2006251596A (en) * 2005-03-14 2006-09-21 Kyoto Institute Of Technology Visually impaired person support device
JP2014508596A (en) * 2011-02-24 2014-04-10 アイシス イノベーション リミテッド Optical device for individuals with visual impairment
JP2014157389A (en) * 2013-02-14 2014-08-28 Nec Commun Syst Ltd Stick, transmitter, portable terminal, guide system and guidance method
JP2014236903A (en) * 2013-06-10 2014-12-18 株式会社Cijネクスト Stick navigation system

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107007437A (en) * 2017-03-31 2017-08-04 北京邮电大学 Interactive blind person's householder method and equipment
JPWO2019093105A1 (en) * 2017-11-07 2020-12-17 株式会社 資生堂 Client devices, servers, programs
JP7390891B2 (en) 2017-11-07 2023-12-04 株式会社 資生堂 Client device, server, program, and information processing method
WO2019093105A1 (en) * 2017-11-07 2019-05-16 株式会社 資生堂 Client device, server, and program
JP2019107414A (en) * 2017-12-20 2019-07-04 穂積 正男 Walking support device
JP2019184820A (en) * 2018-04-10 2019-10-24 穂積 正男 Walking support device
WO2020063614A1 (en) * 2018-09-26 2020-04-02 上海肇观电子科技有限公司 Smart glasses tracking method and apparatus, and smart glasses and storage medium
US10860165B2 (en) 2018-09-26 2020-12-08 NextVPU (Shanghai) Co., Ltd. Tracking method and apparatus for smart glasses, smart glasses and storage medium
JP2020081684A (en) * 2018-11-30 2020-06-04 愛子 木村 Handcart for walking aid
JP7101602B2 (en) 2018-11-30 2022-07-15 直也 度▲会▼ Wheelbarrow for the visually impaired
CN109413278A (en) * 2018-11-30 2019-03-01 深圳龙图腾创新设计有限公司 A kind of cell phone system and corresponding mobile phone for knowing road for blind person
US20220187906A1 (en) * 2020-12-16 2022-06-16 Starkey Laboratories, Inc. Object avoidance using ear-worn devices and image sensors
WO2023233526A1 (en) * 2022-05-31 2023-12-07 Loovic株式会社 Guidance system, guidance method, and guidance program
JP7617694B2 (en) 2022-05-31 2025-01-20 Loovic株式会社 Guidance system, guidance method and guidance program

Similar Documents

Publication Publication Date Title
WO2016199248A1 (en) Information presentation system and information presentation method
CN105278670B (en) Eyeglasses-type terminal and method of controlling the same
US10169923B2 (en) Wearable display system that displays a workout guide
US20220129942A1 (en) Content output system, terminal device, content output method, and recording medium
JP6476643B2 (en) Head-mounted display device, information system, head-mounted display device control method, and computer program
JP6770536B2 (en) Techniques for displaying text more efficiently in virtual image generation systems
US20210350628A1 (en) Program, information processing method, and information processing terminal
Chanana et al. Assistive technology solutions for aiding travel of pedestrians with visual impairment
JP6705124B2 (en) Head-mounted display device, information system, head-mounted display device control method, and computer program
JP2017529521A (en) Wearable earpieces that provide social and environmental awareness
TW201725462A (en) Work assistance device, work learning device, and work assistance system
JP2011128220A (en) Information presenting device, information presenting method, and program
CN108293171A (en) Information processing equipment, information processing method and program
JP7405083B2 (en) Information processing device, information processing method, and program
JP2005037181A (en) Navigation device, server, navigation system, and navigation method
US11670157B2 (en) Augmented reality system
EP4124073A1 (en) Augmented reality device performing audio recognition and control method therefor
WO2020012955A1 (en) Information processing device, information processing method, and program
EP3640840B1 (en) Tracking method and apparatus for smart glasses, smart glasses and storage medium
JP6500139B1 (en) Visual support device
JP2021081372A (en) Display image generator and display image generation method
WO2019054086A1 (en) Information processing device, information processing method, and program
JP6582403B2 (en) Head-mounted display device, method for controlling head-mounted display device, computer program
KR102263695B1 (en) Apparatus and control method for mobile device using multiple cameras
JP7611774B2 (en) Remote support device, remote support method, remote support program, and remote support system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 15894933; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
NENP Non-entry into the national phase (Ref country code: JP)
122 Ep: pct application non-entry in european phase (Ref document number: 15894933; Country of ref document: EP; Kind code of ref document: A1)