
WO2018198314A1 - Sound icon distribution system for wearable terminal, and method and program - Google Patents

Sound icon distribution system for wearable terminal, and method and program

Info

Publication number
WO2018198314A1
Authority
WO
WIPO (PCT)
Prior art keywords
voice
wearable terminal
icon
uttered
information
Prior art date
Application number
PCT/JP2017/016936
Other languages
French (fr)
Japanese (ja)
Inventor
菅谷 俊二 (Shunji Sugaya)
Original Assignee
株式会社オプティム (OPTiM Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社オプティム (OPTiM Corporation)
Priority to PCT/JP2017/016936
Publication of WO2018198314A1

Links

Images

Classifications

    • A: HUMAN NECESSITIES
    • A01: AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01G: HORTICULTURE; CULTIVATION OF VEGETABLES, FLOWERS, RICE, FRUIT, VINES, HOPS OR SEAWEED; FORESTRY; WATERING
    • A01G7/00: Botany in general
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16: Sound input; Sound output
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00: Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/02: Agriculture; Fishing; Forestry; Mining

Definitions

  • The present invention relates to a voice icon placement system, method, and program for wearable terminals.
  • Conventionally, a technique for converting the contents of recorded voice into text has been proposed (see Patent Document 1).
  • The present invention has been made in view of such a demand, and its purpose is to provide a system that, by recording a work situation or the like by voice, allows a user to grasp the recorded content more intuitively together with positional information.
  • To that end, the present invention provides the following solutions.
  • The invention according to the first feature provides a voice icon arrangement system for wearable terminals, comprising:
  • voice acquisition means for acquiring voice uttered by a user of the wearable terminal;
  • position acquisition means for acquiring the position where the voice is uttered;
  • voice recognition means for recognizing the voice;
  • classification means for classifying the voice into a predetermined category according to the recognized content; and
  • display means for arranging and displaying, on a map shown on the display unit of the wearable terminal, icons corresponding to the classified categories at the positions acquired by the position acquisition means.
  • According to the invention of the first feature, the voice acquisition unit acquires the voice uttered by the user of the wearable terminal, the voice recognition unit recognizes the voice, and the classification unit classifies the voice into a predetermined category.
  • The display means then arranges and displays icons corresponding to the classified categories on a map on the display unit of the wearable terminal, according to the positions acquired by the position acquisition means.
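The four-step flow of the first feature (acquire the voice and its position, recognize it, classify it, place an icon on the map) can be sketched as follows. All class names, word lists, and icon shapes here are illustrative assumptions for demonstration, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class VoiceNote:
    text: str           # speech-recognized content
    lat: float          # latitude where the voice was uttered
    lon: float          # longitude where the voice was uttered
    category: str = ""  # category assigned by the classification means

def classify(text: str) -> str:
    """Toy stand-in for the classification means (positive/negative)."""
    negatives = ("pest", "withered", "locust")
    return "negative" if any(w in text.lower() for w in negatives) else "positive"

def place_icon(note: VoiceNote) -> dict:
    """Return an icon record that the display means could draw on the map."""
    note.category = classify(note.text)
    icon = {"positive": "white_circle", "negative": "dotted_box"}[note.category]
    return {"icon": icon, "lat": note.lat, "lon": note.lon}

note = VoiceNote("There was pest A here.", 35.8672, 139.7978)
print(place_icon(note))  # {'icon': 'dotted_box', 'lat': 35.8672, 'lon': 139.7978}
```

In a real system the recognition step would come from a speech-recognition engine; here the recognized text is simply given.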
  • The invention according to the second feature is the invention according to the first feature, wherein the voice acquisition means acquires the uttered voice from a microphone of the wearable terminal.
  • According to this, the user can cause the voice acquisition unit to acquire the user's voice without holding the wearable terminal by hand. This makes the system even more convenient for users whose hands tend to be occupied with work tools, such as in farm work.
  • The invention according to the third feature is the invention according to the first or second feature, wherein the position acquisition means acquires the position where the voice is uttered from the position information of the wearable terminal.
  • According to this, the position where the voice is uttered can be acquired by the position acquisition means even if the user does not hold the wearable terminal or explicitly state the position. This makes the system even more convenient for users whose hands tend to be occupied with work tools, such as in farm work.
  • The invention according to the fourth feature is the invention according to any one of the first to third features, wherein the classification means classifies the voice according to whether the recognized content is positive or negative.
  • Since the classification means classifies the recognized content as positive or negative, the display means can distinguish icons indicating positive content from icons indicating negative content on the map. The user can therefore grasp, more intuitively and via the wearable terminal, which positions indicate a positive situation and which indicate a negative one.
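As a deliberately simplified illustration of this positive/negative classification, one could count matches against small polarity word lists. The word lists below are assumptions for demonstration only; the patent does not specify a lexicon.

```python
# Illustrative polarity lexicons (assumed, not from the patent).
POSITIVE_WORDS = {"good", "clear", "grew"}
NEGATIVE_WORDS = {"pest", "withered", "locust"}

def polarity(recognized_text: str) -> str:
    """Classify recognized content as positive or negative by word counts."""
    words = recognized_text.lower().replace(".", " ").replace(",", " ").split()
    pos = sum(w in POSITIVE_WORDS for w in words)
    neg = sum(w in NEGATIVE_WORDS for w in words)
    # Ties and unknown content default to positive here; a real system
    # would need an explicit policy for ambiguous utterances.
    return "negative" if neg > pos else "positive"

print(polarity("The soil is good."))       # positive
print(polarity("There was pest A here."))  # negative
```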
  • The invention according to the fifth feature is the invention according to any one of the first to fourth features, wherein the classification means classifies the voice according to whether or not a specific keyword is included in the recognized content.
  • Since the classification unit classifies according to whether a specific keyword is included, the display unit can distinguish icons indicating that the specific keyword is present from icons indicating that it is not, and display them on the map. The user can therefore grasp more intuitively, via the wearable terminal, which positions involve the specific keyword and which do not.
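The keyword check of the fifth feature amounts to testing the recognized text against a pre-registered keyword set. The keyword set below is an illustrative assumption:

```python
# Pre-registered specific keywords (assumed for demonstration).
SPECIFIC_KEYWORDS = {"locust", "pest a"}

def has_specific_keyword(recognized_text: str) -> bool:
    """True if any pre-registered keyword appears in the recognized content."""
    text = recognized_text.lower()
    return any(kw in text for kw in SPECIFIC_KEYWORDS)

print(has_specific_keyword("A large number of locusts have occurred."))  # True
print(has_specific_keyword("The soil is good."))                         # False
```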
  • The invention according to the sixth feature is the invention according to any one of the first to fifth features, further comprising switching means for switching the displayed icons ON or OFF under a predetermined condition.
  • The size of the display means of a wearable terminal is limited, and if too much information is displayed at once, the display becomes hard for the user to read.
  • Since icons to be shown can be switched ON and icons to be hidden can be switched OFF, the user can use the system without difficulty even on the size-limited display means of the wearable terminal.
  • According to the present invention, it is possible to provide a system in which content recorded by voice can be grasped intuitively through the wearable terminal.
  • Moreover, since a wearable terminal is used, the user does not need to carry a terminal by hand.
  • FIG. 1 is a block diagram showing a hardware configuration and software functions of a wearable terminal voice icon arrangement system 1 according to the present embodiment.
  • FIG. 2 is a flowchart showing a voice icon arrangement method according to this embodiment.
  • FIG. 3 is an example for explaining the contents of the voice acquisition module 11.
  • FIG. 4 is an example following FIG. 3.
  • FIG. 5 is an example following FIG. 4.
  • FIG. 6 is an example of the voice database 31 in the present embodiment.
  • FIG. 7 is an example of the dictionary database 32 in the present embodiment.
  • FIG. 8 is an example of the Web content database 33 in the present embodiment.
  • FIG. 9 is an example of the classification database 34 in the present embodiment.
  • FIG. 10 is an example when all icons are displayed in the image display unit 70 of the present embodiment.
  • FIG. 11 is an example when some icons are displayed on the image display unit 70 of the present embodiment.
  • FIG. 1 is a block diagram for explaining the hardware configuration and software functions of a wearable terminal voice icon arrangement system 1 according to this embodiment.
  • The voice icon arrangement system 1 includes a control unit 10 that controls data, a communication unit 20 that communicates with other devices, a storage unit 30 that stores data, an input unit 40 that receives user operations, a sound collection unit 50 that collects the user's voice, a position detection unit 60 that detects the terminal's position, and an image display unit 70 that displays images.
  • The voice icon arrangement system 1 is a wearable terminal such as smart glasses or a smart watch. Since a user such as a farmer does not need to carry the terminal by hand, the voice icon arrangement system 1 is highly convenient for users whose hands tend to be occupied with work tools.
  • The voice icon placement system 1 may also be a smartphone. In this case, it is essential that the smartphone is attached to the body so that both hands remain free.
  • The voice icon arrangement system 1 may be a stand-alone system provided integrally with the wearable terminal, or a cloud-type system comprising the wearable terminal and a server connected to it via a network. For simplicity, this embodiment describes the voice icon arrangement system 1 as a stand-alone system.
  • the control unit 10 includes a CPU (Central Processing Unit), a RAM (Random Access Memory), a ROM (Read Only Memory), and the like.
  • the communication unit 20 includes a device for enabling communication with other devices, for example, a Wi-Fi (Wireless Fidelity) compatible device compliant with IEEE 802.11.
  • The control unit 10 reads a predetermined program and, cooperating with the communication unit 20 as necessary, realizes a voice acquisition module 11, a position acquisition module 12, a voice recognition module 13, an identification module 14, a classification module 15, a display module 16, and a switching module 17.
  • the storage unit 30 is a device that stores data and files, and includes a data storage unit such as a hard disk, a semiconductor memory, a recording medium, and a memory card.
  • the storage unit 30 stores an audio database 31, a dictionary database 32, a web content database 33, a classification database 34, and a map database 35, which will be described later.
  • the storage unit 30 also stores image data to be displayed on the image display unit 70.
  • the type of the input unit 40 is not particularly limited. Examples of the input unit 40 include a keyboard, a mouse, and a touch panel.
  • the type of the sound collecting unit 50 is not particularly limited. Examples of the sound collecting unit 50 include a microphone.
  • the position detection unit 60 is not particularly limited as long as it is a device that can detect the latitude and longitude where the voice icon arrangement system 1 is located. Examples of the position detection unit 60 include a GPS (Global Positioning System).
  • the type of the image display unit 70 is not particularly limited. Examples of the image display unit 70 include a monitor and a touch panel.
  • FIG. 2 is a flowchart showing a voice icon placement method using the voice icon placement system 1. The processing executed by each hardware and the software module described above will be described.
  • Step S10: Acquisition of voice
  • First, the control unit 10 of the voice icon arrangement system 1 executes the voice acquisition module 11 and acquires the voice uttered by the user (step S10).
  • Step S11: Acquisition of the position where the voice is uttered
  • Next, the control unit 10 executes the position acquisition module 12 and acquires the position where the voice was uttered (step S11).
  • the control unit 10 refers to a calendar (not shown) stored in the storage unit 30 and further acquires the date on which the voice was uttered.
  • FIG. 3, FIG. 4, and FIG. 5 are examples for explaining the processing of step S10 and step S11.
  • Suppose a farmer who operates Yamada Farm is observing the state of the long leek field cultivated at Yamada Farm A.
  • The farmer utters: "It was supposed to rain in the weather forecast, but it is clear. The stem grew to 30 cm. The soil is good. It seems to be about a week before harvesting."
  • The sound collection unit 50 of the voice icon arrangement system 1 collects this voice. The control unit 10 then A/D-converts the collected sound and sets the A/D-converted information in a predetermined area of the storage unit 30.
  • the position detector 60 of the voice icon placement system 1 detects the latitude and longitude where the voice icon placement system 1 is located.
  • Here, the position detection unit 60 detects latitude 35°52′7″ N, longitude 139°46′56″ E.
  • the information regarding the position is also set in a predetermined area of the storage unit 30 together with the A / D converted information.
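The detected coordinates are reported in degrees, minutes, and seconds; to place icons on a map, these are typically converted to decimal degrees. A minimal conversion helper (an illustration, not part of the patent):

```python
def dms_to_decimal(degrees: int, minutes: int, seconds: float) -> float:
    """Convert a degrees/minutes/seconds coordinate to decimal degrees."""
    return degrees + minutes / 60 + seconds / 3600

# Latitude 35°52′7″ N and longitude 139°46′56″ E from the example above:
print(round(dms_to_decimal(35, 52, 7), 4))    # 35.8686
print(round(dms_to_decimal(139, 46, 56), 4))  # 139.7822
```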
  • Next (FIG. 4), the farmer moves to a point at latitude 35°52′2″ N, longitude 139°47′52″ E and utters, "There was pest A here."
  • The sound collection unit 50 again collects the voice, the control unit 10 A/D-converts it, and the A/D-converted information is set, together with the position information detected by the position detection unit 60, in a predetermined area of the storage unit 30.
  • Then (FIG. 5), the farmer moves to a point at latitude 35°51′57″ N, longitude 139°47′1″ E and utters, "A large number of locusts have occurred."
  • As before, the sound collection unit 50 collects the voice, the control unit 10 A/D-converts it, and the A/D-converted information and the detected position information are set in a predetermined area of the storage unit 30.
  • Step S12: Speech recognition
  • In step S12, the control unit 10 transcribes the voice collected by the sound collection unit 50 from the waveform of the sound wave contained in the A/D-converted information.
  • The information A/D-converted at each of the stages shown in FIG. 3, FIG. 4, and FIG. 5 is first transcribed as a phonetic reading; for example, the information A/D-converted at the stage shown in FIG. 5 is transcribed as "Inagoga Taiyo Hassei".
  • Next, the control unit 10 refers to the dictionary database 32 shown in FIG. 7, replaces the transcribed readings with words, and converts them into sentences.
  • As a result, the information A/D-converted at the stage shown in FIG. 3 becomes "It was supposed to rain in the weather forecast, but it is clear. The stem grew to 30 cm. The soil is good. It seems to be about a week before harvesting."
  • The information A/D-converted at the stage shown in FIG. 4 becomes "There was pest A here.", and the information A/D-converted at the stage shown in FIG. 5 becomes "A large number of locusts have occurred."
  • All of the sentence information is set in a predetermined area of the storage unit 30 in association with the A/D-converted information and the position information.
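The two-stage recognition described above (first transcribe a phonetic reading, then replace readings with words via the dictionary database 32) can be sketched as follows. The dictionary entries and function names are illustrative assumptions:

```python
# Toy stand-in for the dictionary database 32: phonetic reading -> word.
DICTIONARY = {
    "inago": "locust",
    "tairyou": "a large number",
    "hassei": "occurred",
}

def to_sentence(phonetic_words):
    """Replace each transcribed reading with a dictionary word if one exists."""
    return " ".join(DICTIONARY.get(w, w) for w in phonetic_words)

print(to_sentence(["inago", "tairyou", "hassei"]))  # locust a large number occurred
```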
  • Step S13: Identification of Web content
  • the control unit 10 refers to the Web content database 33.
  • FIG. 8 is an example of the Web content database 33.
  • In the Web content database 33, information on each field and the range of the field is stored in advance in association with an identification number.
  • For example, the area bounded by latitude 35°51′55″ N to 35°52′10″ N and longitude 139°46′55″ E to 139°47′5″ E is the area of Yamada Farm A.
  • the area of Yamada Farm A is associated with the identification number “1”.
  • The area bounded by latitude 35°52′10″ N to 35°52′20″ N and longitude 139°46′55″ E to 139°47′5″ E is the area of Yamada Farm B.
  • the area of Yamada Farm B is associated with the identification number “2”.
  • The position information set in the predetermined area of the storage unit 30 through the steps of FIG. 3 to FIG. 5 is (1) latitude 35°52′7″ N, longitude 139°46′56″ E; (2) latitude 35°52′2″ N, longitude 139°47′52″ E; and (3) latitude 35°51′57″ N, longitude 139°47′1″ E.
  • Accordingly, the control unit 10 can specify that the Web content associated with the position information acquired in the process of step S11 is the Web content of Yamada Farm A, identification number "1".
  • In this way, the control unit 10 determines whether the position acquired in the process of step S11 falls inside a specific range defined in the Web content database 33, and specifies the Web content associated with that range. For occupations such as agriculture that involve working across a wide area, recording the position of each utterance too precisely would make the amount of data too large and the system hard to use. Because Web content is managed in association with a range, the amount of data is kept from becoming too large and complicated.
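The range check described above (does the acquired position fall inside a field's registered range?) can be sketched with the Yamada Farm A/B boundaries given in the text, converted to decimal degrees. The data structure and function names are illustrative:

```python
# Field ranges from the Web content database example, in decimal degrees.
FIELDS = {
    1: {"name": "Yamada Farm A",
        "lat": (35 + 51/60 + 55/3600, 35 + 52/60 + 10/3600),
        "lon": (139 + 46/60 + 55/3600, 139 + 47/60 + 5/3600)},
    2: {"name": "Yamada Farm B",
        "lat": (35 + 52/60 + 10/3600, 35 + 52/60 + 20/3600),
        "lon": (139 + 46/60 + 55/3600, 139 + 47/60 + 5/3600)},
}

def find_field(lat, lon):
    """Return the identification number of the field containing the point."""
    for ident, f in FIELDS.items():
        (lat0, lat1), (lon0, lon1) = f["lat"], f["lon"]
        if lat0 <= lat <= lat1 and lon0 <= lon <= lon1:
            return ident
    return None

# Latitude 35°52′7″ N, longitude 139°46′56″ E falls inside Yamada Farm A.
print(find_field(35 + 52/60 + 7/3600, 139 + 46/60 + 56/3600))  # 1
```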
  • The control unit 10 reads a calendar (not shown) stored in the storage unit 30, so that today's date, "February 14", is recorded in advance in the "date" field of the Web content database 33. The control unit 10 also reads weather information from an external weather forecast website via the communication unit 20, so that the current weather, "sunny", is recorded in advance in the "weather" field.
  • Similarly, using past information, the control unit 10 records information such as "Yamada Farm A" and "long leek" in advance.
  • That is, in the processes of steps S10 and S11, the control unit 10 acquires the voice, the position where the voice was uttered, and the date on which it was uttered; in the process of step S13, the control unit 10 identifies the Web content associated with that position and date. Because the date is associated with the Web content, the voice icon arrangement system 1 becomes even more convenient for the user.
  • Step S14: Classification of voice into predetermined categories
  • The control unit 10 of the voice icon arrangement system 1 executes the classification module 15, and classifies and records the content recognized in the process of step S12 into predetermined categories in the Web content specified in the process of step S13 (step S14).
  • First, the control unit 10 reads out the content recognized in the process of step S12. The predetermined area of the storage unit 30 holds, in order, "It was supposed to rain in the weather forecast, but it is clear. The stem grew to 30 cm. The soil is good. It seems to be about a week before harvesting.", "There was pest A here.", and "A large number of locusts have occurred." The control unit 10 reads out these pieces of information.
  • the control unit 10 refers to the classification database 34.
  • FIG. 9 is an example of the classification database 34.
  • In the classification database 34, the relationship between words that may appear in the converted sentences, the items listed in the Web content database 33, whether each word is positive or negative, and a flag identifying whether the word is a specific keyword is recorded in advance.
  • For example, the Web content database 33 (FIG. 8) lists items such as "date", "weather", "field", "crop", "stem", "soil", "harvest", "pest", and "withered".
  • In the classification database 34, word groups related to these items are recorded.
  • The control unit 10 refers to the classification database 34 and associates "30 cm" in this information with the item "stem", "good" with the item "soil", and "one week" with the item "harvest". Accordingly, under identification number "1", "2. Crop growth state", date "February 14, 2017" in the Web content database 33 (FIG. 8), the control unit 10 sets "30 cm" in the item "stem", "good" in the item "soil", and "about one week" in the item "harvest".
  • The control unit 10 next associates "pest" in "There was pest A here." with the item "pest". Accordingly, under identification number "1", "2. Crop growth state", date "February 14, 2017" in the Web content database 33 (FIG. 8), the control unit 10 sets in the item "pest" the position information at the time of the utterance, latitude 35°52′2″ N, longitude 139°47′52″ E, together with the pest type "pest A".
  • Likewise, the control unit 10 associates "locust" in "A large number of locusts have occurred." with the item "pest", and sets in that item the position information at the time of the utterance, latitude 35°51′57″ N, longitude 139°47′1″ E.
  • "Locust" corresponds to a specific word set in advance. Therefore, a flag indicating a specific word is set for the information "A large number of locusts have occurred."
  • When information already exists in the Web content specified in the process of step S13, the control unit 10 overwrites it with the newly recognized content and records it. This makes it possible to manage the work records of farm work in time series.
  • In this way, based on the content recognized in the process of step S12, the control unit 10 records the related recognized content in specific items (for example, date, weather, field, crop, stem, soil, harvest, pest, and withered) of the Web content specified in the process of step S13.
  • When one piece of information contains more than one kind of flag, the type of flag contained most often in that piece of information may be set.
  • Alternatively, the flags given to words may be weighted, and the flag with the highest weight may be set.
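The two flag-selection rules just described (most frequent flag, or highest-weighted flag) can be sketched as follows. The flag names and weights are illustrative assumptions; the patent does not specify values:

```python
from collections import Counter

# Illustrative weights (assumed): specific words outrank polarity flags.
FLAG_WEIGHTS = {"specific_word": 3, "negative": 2, "positive": 1}

def pick_by_count(flags):
    """Rule 1: set the flag type that occurs most often in one piece of information."""
    return Counter(flags).most_common(1)[0][0]

def pick_by_weight(flags):
    """Rule 2: set the flag whose assigned weight is highest."""
    return max(flags, key=lambda f: FLAG_WEIGHTS.get(f, 0))

flags = ["positive", "negative", "specific_word", "negative"]
print(pick_by_count(flags))   # negative
print(pick_by_weight(flags))  # specific_word
```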
  • Step S15: Web content image display
  • FIG. 10 shows a display example of the image display unit 70 of the wearable terminal at that time.
  • The information recorded in the Web content database 33 is displayed on the image display unit 70 of the wearable terminal. Specifically, today's date, "2017/2/14", is displayed at the upper right, and today's weather, "sunny", next to it.
  • The control unit 10 refers to the map database 35 and causes the image display unit 70 of the wearable terminal to display a map of the area corresponding to identification number "1" in the Web content database 33. The control unit 10 then arranges and displays on the map, at the positions detected by the position detection unit 60 in the process of step S11, icons corresponding to the flags classified in the process of step S14.
  • An index of the marks used on the map is displayed at the right of the image display unit 70 of the wearable terminal.
  • White circles on the map indicate positions that contain positive information.
  • Boxes with halftone dots on the map indicate positions that contain negative information.
  • A hatched box on the map indicates a position containing information with a specific word, here relating to "locust".
  • In FIG. 10, all the indexes are "ON", indicating that every type of icon is displayed on the map.
  • Step S16: Switching icons
  • FIG. 11 shows a display example of the image display unit 70 of the wearable terminal at that time.
  • the size of the image display unit 70 of the wearable terminal is limited, and if too much information is displayed on the image display unit 70 at one time, it is difficult for the user to understand.
  • By executing the switching module 17, the control unit 10 switches the displayed icons ON or OFF under a predetermined condition, so the icon arrangement system can be used without difficulty even on the size-limited image display unit 70 of the wearable terminal.
  • As described above, when the control unit 10 acquires the voice uttered by the user by executing the voice acquisition module 11, the voice is recognized by executing the voice recognition module 13, classified into a predetermined category (a predetermined flag) by executing the classification module 15, and an icon corresponding to the classified category (flag) is arranged and displayed on a map on the image display unit 70 of the wearable terminal by executing the display module 16, at the position acquired by executing the position acquisition module 12.
  • It is therefore possible to provide a voice icon arrangement system 1 in which the recorded content can be grasped intuitively through the wearable terminal.
  • Moreover, since a wearable terminal is used, the user does not need to carry the terminal by hand. As a result, the voice icon arrangement system 1 is particularly convenient for users whose hands tend to be occupied with work tools, such as in farm work.
  • The control unit 10 acquires the voice uttered by the user from the sound collection unit 50 of the wearable terminal by executing the voice acquisition module 11, and can acquire the position where the voice was uttered from the position information of the wearable terminal by executing the position acquisition module 12. Even if the user does not hold the wearable terminal or explicitly state the position, the content of the voice and the position where it was uttered can both be acquired. This makes the voice icon arrangement system 1 even more convenient for users whose hands tend to be occupied with work tools, such as in farm work.
  • The control unit 10 can also execute the classification module 15 to classify the recognized content according to whether it is positive or negative. On the image display unit 70 of the wearable terminal, icons indicating positive content and icons indicating negative content can then be displayed on the map so as to be distinguishable, allowing positions showing a positive situation and positions showing a negative situation to be grasped more intuitively via the wearable terminal.
  • The control unit 10 can further classify according to whether a specific keyword is included in the recognized content, so that icons indicating the presence of the specific keyword and icons indicating its absence are displayed distinguishably on the map. Positions involving the specific keyword can thus be grasped more intuitively through the wearable terminal.
  • In addition, the displayed icons can be switched ON or OFF under a predetermined condition. The size of the image display unit 70 of the wearable terminal is limited, and displaying too much information at once makes the display hard to read. Since icons to be shown can be switched ON and icons to be hidden can be switched OFF, the user can use the voice icon arrangement system 1 without difficulty even on the limited image display unit 70.
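The ON/OFF switching described above amounts to filtering the icon list by a per-category switch before drawing. A minimal sketch with illustrative names:

```python
def visible_icons(icons, switches):
    """Keep only the icons whose category switch is ON (True)."""
    return [ic for ic in icons if switches.get(ic["category"], False)]

icons = [
    {"category": "positive", "lat": 35.8686, "lon": 139.7822},
    {"category": "specific_word", "lat": 35.8658, "lon": 139.7836},
]
# Turn OFF positive icons; keep only the specific-word icon displayed.
shown = visible_icons(icons, {"positive": False, "specific_word": True})
print(len(shown))  # 1
```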
  • When the control unit 10 acquires voice in the process of step S10, it recognizes the voice in the process of step S12, and in the process of step S13 it specifies the Web content associated with the position where the voice was acquired. The control unit 10 then records the recognized content in the specified Web content.
  • This makes it possible to provide a voice icon arrangement system 1 that links the recorded voice content with the Web content associated with the position where the voice was uttered.
  • The Web content displayed on the image display unit 70 includes a map covering the position where the voice was acquired, and the control unit 10 superimposes the content recognized in the process of step S12 on the map of the Web content.
  • That is, by acquiring voice in the process of step S10, the control unit 10 records the voice content in the Web content in association with the position where the voice was uttered, and the recognized content is displayed superimposed on the Web content map. This makes the voice icon arrangement system 1 even more convenient for the user.
  • The means and functions described above are realized by a computer (including a CPU, an information processing apparatus, and various terminals) reading and executing a predetermined program.
  • The program is provided, for example, in a form recorded on a computer-readable recording medium such as a flexible disk, a CD (CD-ROM or the like), or a DVD (DVD-ROM, DVD-RAM, or the like).
  • In this case, the computer reads the program from the recording medium, transfers it to an internal or external storage device, stores it, and executes it.
  • Alternatively, the program may be recorded in advance in a storage device (recording medium) such as a magnetic disk, an optical disk, or a magneto-optical disk, and provided from the storage device to the computer via a communication line.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Animal Husbandry (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Human Resources & Organizations (AREA)
  • Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Mining & Mineral Resources (AREA)
  • Marine Sciences & Fisheries (AREA)
  • Marketing (AREA)
  • Agronomy & Crop Science (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Botany (AREA)
  • Ecology (AREA)
  • Forests & Forestry (AREA)
  • Environmental Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

[Problem] To provide a system which, by creating an audio recording of work conditions, etc., connects the recorded content with position information to enable the user to more intuitively understand said recorded content. [Solution] In this sound icon distribution system 1, a control unit 10 executes a sound acquisition module 11; when a sound uttered by the user of the wearable terminal is acquired, then that sound is recognized by executing a sound recognition module 13, the sound is classified into prescribed categories (for each prescribed flag) by executing a classification module 15 and, by executing a display module 16, an icon corresponding to the classified category (flag) is displayed on an image display unit 70 of the wearable terminal, arranged on a map and corresponding to a position acquired through execution of a position acquisition module 12.

Description

Voice icon placement system, method, and program for wearable terminal
The present invention relates to a voice icon placement system, method, and program for wearable terminals.
Techniques for converting the content of recorded voice into text have been proposed (see Patent Document 1).
JP 2014-202848 A
For farmers, taking notes while working in the field is cumbersome, so a system that records the work situation as text simply by recording speech is very useful. However, for occupations such as agriculture that involve working over a wide area, convenience would increase even further if the recorded work status could be displayed in a manner the user can grasp more intuitively.
The present invention has been made in view of this demand, and aims to provide a system in which, simply by recording the work situation or the like, the recorded content is linked to position information so that the user can grasp it more intuitively.
The present invention provides the following solutions.
The invention according to a first feature is a voice icon display system for a wearable terminal that displays an icon corresponding to voice content on a display unit of the wearable terminal, and provides a voice icon placement system for a wearable terminal comprising:
voice acquisition means for acquiring voice uttered by a user of the wearable terminal;
position acquisition means for acquiring a position where the voice was uttered;
voice recognition means for recognizing the voice;
classification means for classifying the voice into a predetermined category according to the recognized content; and
display means for arranging and displaying, on the display unit of the wearable terminal, an icon corresponding to the classified category on a map according to the position.
According to the first feature, when the voice acquisition means acquires voice uttered by the user of the wearable terminal, the voice recognition means recognizes the voice, the classification means classifies the voice into a predetermined category, and the display means arranges and displays an icon corresponding to the classified category on a map on the display unit of the wearable terminal, according to the position acquired by the position acquisition means. Thus, simply by having the acquisition means capture voice, a system is provided in which the content recorded by voice can be grasped intuitively through the wearable terminal.
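As a rough illustration of this flow (voice acquisition → recognition → classification → icon placement on a map), here is a minimal Python sketch. The class name, category rules, and icon symbols are all hypothetical stand-ins, not part of the claimed system; it only mimics the roles of the classification means and the display means.

```python
from dataclasses import dataclass

@dataclass
class VoiceNote:
    text: str    # speech-recognized content
    lat: float   # latitude where the voice was uttered
    lon: float   # longitude where the voice was uttered

# Hypothetical stand-in for rules of the classification database 34.
NEGATIVE_WORDS = ("pest", "locust", "withered")

def classify(note: VoiceNote) -> str:
    """Assign a predetermined category to the recognized content."""
    return "negative" if any(w in note.text.lower() for w in NEGATIVE_WORDS) else "positive"

ICONS = {"positive": "O", "negative": "X"}  # one icon per category

def icons_for_map(notes):
    """Return (icon, lat, lon) tuples for the display means to draw on the map."""
    return [(ICONS[classify(n)], n.lat, n.lon) for n in notes]

notes = [VoiceNote("The stems have grown to 30 cm. The soil is good.", 35.8686, 139.7822),
         VoiceNote("Pest A was here.", 35.8672, 139.7978)]
print(icons_for_map(notes))  # → [('O', 35.8686, 139.7822), ('X', 35.8672, 139.7978)]
```

A real implementation would replace the keyword rules with lookups in the classification database 34 and hand the tuples to the image display unit 70.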
In particular, because a wearable terminal is used, there is no need to carry a terminal by hand. As a result, the system is especially convenient for users, such as farm workers, whose hands tend to be occupied with work tools.
The invention according to a second feature is the invention according to the first feature, wherein the voice acquisition means acquires the uttered voice from a microphone of the wearable terminal.
According to the second feature, the user can have the voice acquisition means capture the user's voice without holding the wearable terminal by hand. This makes the system even more convenient for users, such as farm workers, whose hands tend to be occupied with work tools.
The invention according to a third feature is the invention according to the first or second feature, wherein the position acquisition means acquires the position where the voice was uttered from position information of the wearable terminal.
According to the third feature, the position acquisition means can acquire the position where the voice was uttered without the user holding the wearable terminal by hand and without the user having to describe the position. This makes the system even more convenient for users, such as farm workers, whose hands tend to be occupied with work tools.
The invention according to a fourth feature is the invention according to any one of the first to third features, wherein the classification means classifies the voice according to whether the recognized content is positive or negative.
According to the fourth feature, because the classification means classifies the recognized content as positive or negative, the display means can distinguish icons indicating positive content from icons indicating negative content and arrange them on the map. Positions indicating positive situations and positions indicating negative situations can therefore be grasped even more intuitively through the wearable terminal.
The invention according to a fifth feature is the invention according to any one of the first to fourth features, wherein the classification means classifies the voice according to whether a specific keyword is included in the recognized content.
According to the fifth feature, because the classification means classifies the voice by whether a specific keyword is included, the display means can distinguish icons indicating that the keyword was present from icons indicating that it was not, and arrange them on the map. Positions where the keyword's situation applies, and positions where it does not, can therefore be grasped even more intuitively through the wearable terminal.
In addition, for farm workers who must walk around very wide fields, simply uttering speech containing a specific keyword records the position where that keyword's situation applies, without holding the wearable terminal by hand. This makes the system even more convenient for users whose hands tend to be occupied with work tools.
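The keyword check itself can be sketched in a few lines. The keyword list below is a hypothetical example standing in for entries of the classification database 34:

```python
# Hypothetical specific keywords standing in for entries in the classification database 34.
SPECIFIC_KEYWORDS = ("Pest A", "locust")

def keyword_flag(recognized_text: str) -> bool:
    """True if the recognized content contains any of the specific keywords."""
    lowered = recognized_text.lower()
    return any(k.lower() in lowered for k in SPECIFIC_KEYWORDS)

# Only utterances containing a keyword receive the keyword icon on the map.
print(keyword_flag("Pest A was here."))   # → True
print(keyword_flag("The soil is good."))  # → False
```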
The invention according to a sixth feature is the invention according to any one of the first to fifth features, further comprising switching means for switching the displayed icons ON and OFF under a predetermined condition.
The size of the display means of a wearable terminal is limited, and displaying too much information at once makes the display harder for the user to understand. According to the sixth feature, icons the user wants displayed can be turned on and icons the user wants hidden can be turned off, so even on the limited display means of a wearable terminal, the system remains usable without confusion.
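A minimal sketch of this ON/OFF switching — filtering which category icons are actually drawn — might look as follows; the categories, coordinates, and switch values are purely illustrative:

```python
# Placed icons as (category, lat, lon); values are illustrative only.
placed = [("pest", 35.8672, 139.7978),
          ("harvest", 35.8686, 139.7822),
          ("withered", 35.8658, 139.7836)]

# Per-category visibility switches; the predetermined condition could flip these.
visible = {"pest": True, "harvest": False, "withered": True}

def icons_to_draw(placed, visible):
    """Keep only icons whose category switch is ON (default ON if unlisted)."""
    return [p for p in placed if visible.get(p[0], True)]

print(icons_to_draw(placed, visible))  # the "harvest" icon is suppressed
```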
According to the present invention, simply by having the acquisition means capture voice, a system is provided in which the content recorded by voice can be grasped intuitively through the wearable terminal. In particular, because a wearable terminal is used, there is no need to carry a terminal by hand. As a result, the system is especially convenient for users, such as farm workers, whose hands tend to be occupied with work tools.
FIG. 1 is a block diagram showing the hardware configuration and software functions of a voice icon placement system 1 for a wearable terminal according to the present embodiment.
FIG. 2 is a flowchart showing a voice icon placement method according to the present embodiment.
FIG. 3 is an example for explaining the contents of the voice acquisition module 11.
FIG. 4 is an example following FIG. 3.
FIG. 5 is an example following FIG. 4.
FIG. 6 is an example of the voice database 31 in the present embodiment.
FIG. 7 is an example of the dictionary database 32 in the present embodiment.
FIG. 8 is an example of the Web content database 33 in the present embodiment.
FIG. 9 is an example of the classification database 34 in the present embodiment.
FIG. 10 is an example in which all icons are displayed on the image display unit 70 of the present embodiment.
FIG. 11 is an example in which some icons are displayed on the image display unit 70 of the present embodiment.
Hereinafter, modes for carrying out the present invention will be described with reference to the drawings. These are merely examples, and the technical scope of the present invention is not limited to them.
<Configuration of Voice Icon Placement System 1 for Wearable Terminal>
FIG. 1 is a block diagram for explaining the hardware configuration and software functions of the voice icon placement system 1 for a wearable terminal according to the present embodiment.
The voice icon placement system 1 includes a control unit 10 that controls data, a communication unit 20 that communicates with other devices, a storage unit 30 that stores data, an input unit 40 that accepts user operations, a sound collection unit 50 that collects the user's voice, a position detection unit 60 that detects the position of the voice icon placement system 1, and an image display unit 70 that outputs and displays data and images controlled by the control unit 10.
The voice icon placement system 1 is a wearable terminal such as smart glasses, an earable (ear-worn) terminal, or a smartwatch. Because users such as farmers do not need to carry a terminal, the system is highly convenient for users whose hands tend to be occupied with work tools.
The voice icon placement system 1 may also be a smartphone; in that case, it is essential that the smartphone be attached to the body so that both hands remain free.
The voice icon placement system 1 may be a stand-alone system provided integrally with the wearable terminal, or a cloud system comprising the wearable terminal and a server connected to it via a network. In the present embodiment, for simplicity, the voice icon placement system 1 is described as a stand-alone system.
The control unit 10 includes a CPU (Central Processing Unit), a RAM (Random Access Memory), a ROM (Read Only Memory), and the like.
The communication unit 20 includes a device for communicating with other equipment, for example, a Wi-Fi (Wireless Fidelity) device compliant with IEEE 802.11.
The control unit 10 reads a predetermined program and, in cooperation with the communication unit 20 as necessary, realizes a voice acquisition module 11, a position acquisition module 12, a voice recognition module 13, a specification module 14, a classification module 15, a display module 16, and a switching module 17.
The storage unit 30 is a device that stores data and files, and includes a data storage section such as a hard disk, semiconductor memory, recording medium, or memory card. The storage unit 30 stores a voice database 31, a dictionary database 32, a Web content database 33, a classification database 34, and a map database 35, which are described later. The storage unit 30 also stores data of images to be displayed on the image display unit 70.
The type of the input unit 40 is not particularly limited. Examples include a keyboard, a mouse, and a touch panel.
The type of the sound collection unit 50 is not particularly limited. An example is a microphone.
The position detection unit 60 is not particularly limited as long as it is a device that can detect the latitude and longitude at which the voice icon placement system 1 is located. An example is a GPS (Global Positioning System) receiver.
The type of the image display unit 70 is not particularly limited. Examples include a monitor and a touch panel.
<Flowchart of Voice Icon Placement Method Using Voice Icon Placement System 1>
FIG. 2 is a flowchart showing a voice icon placement method using the voice icon placement system 1. The processing executed by the hardware and software modules described above will now be explained.
[Step S10: Acquisition of voice]
First, the control unit 10 of the voice icon placement system 1 executes the voice acquisition module 11 and acquires voice uttered by the user of the wearable terminal (step S10).
[Step S11: Acquisition of the position where the voice was uttered]
Next, the control unit 10 executes the position acquisition module 12 and acquires the position where the voice was uttered in step S10 (step S11). Although not essential, the control unit 10 preferably also refers to a calendar (not shown) stored in the storage unit 30 and acquires the date on which the voice was uttered.
FIGS. 3 to 5 are examples for explaining the processing of steps S10 and S11. Here, a farmer who runs Yamada Farm is observing a field of long green onions cultivated at Yamada Farm A. As shown in FIG. 3, at latitude 35°52′7″N, longitude 139°46′56″E, the farmer says: "The weather forecast said rain, but it cleared up. The stems have grown to 30 cm. The soil is good. Probably about a week until harvest."
The sound collection unit 50 of the voice icon placement system 1 collects this voice. The control unit 10 then A/D-converts the collected sound and sets the converted information in a predetermined area of the storage unit 30.
At the same time, the position detection unit 60 of the voice icon placement system 1 detects the latitude and longitude at which the system is located. Here, the position detection unit 60 detects latitude 35°52′7″N and longitude 139°46′56″E. The position information is set in the predetermined area of the storage unit 30 together with the A/D-converted information.
Next, as shown in FIG. 4, the farmer moves to latitude 35°52′2″N, longitude 139°47′52″E and says: "Pest A was here."
The sound collection unit 50 collects this voice, and the control unit 10 A/D-converts it and sets the converted information in a predetermined area of the storage unit 30. The position detection unit 60 detects the latitude and longitude of the voice icon placement system 1, and this position information is likewise set in the storage unit 30 together with the A/D-converted information.
Next, as shown in FIG. 5, the farmer moves to latitude 35°51′57″N, longitude 139°47′1″E and says: "A swarm of locusts has appeared."
The sound collection unit 50 collects this voice, and the control unit 10 A/D-converts it and sets the converted information in a predetermined area of the storage unit 30. The position detection unit 60 detects the latitude and longitude of the voice icon placement system 1, and this position information is likewise set in the storage unit 30 together with the A/D-converted information.
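The positions in these examples are stated in degrees, minutes, and seconds, while a position detector typically reports decimal degrees. A small, purely illustrative conversion helper (not part of the patent) clarifies the correspondence:

```python
def dms_to_decimal(deg: int, minute: int, sec: float) -> float:
    """Convert degrees/minutes/seconds to decimal degrees."""
    return deg + minute / 60 + sec / 3600

# Latitude 35°52'7"N, longitude 139°46'56"E from FIG. 3:
lat = dms_to_decimal(35, 52, 7)
lon = dms_to_decimal(139, 46, 56)
print(round(lat, 6), round(lon, 6))  # → 35.868611 139.782222
```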
[Step S12: Voice recognition]
Returning to FIG. 2: the control unit 10 of the voice icon placement system 1 executes the voice recognition module 13 and recognizes the voice acquired in step S10 (step S12).
The control unit 10 refers to the voice database 31 shown in FIG. 6 and transcribes the sound collected by the sound collection unit 50 from the sound-wave waveform contained in the A/D-converted information. Through this processing, the information A/D-converted at the stage of FIG. 3 becomes the phonetic string "tenki yohou dewa ame datta ga hareta / kuki ga sanjussenchi ni seichou shita / dojou wa ryoukou / shuukaku made isshuukan kana"; the information from FIG. 4 becomes "koko ni gaichuu ee ga ita"; and the information from FIG. 5 becomes "inago ga tairyou hassei".
Next, the control unit 10 refers to the dictionary database 32 shown in FIG. 7, converts the transcribed phonetic information into words, and forms sentences. Through this processing, the information from FIG. 3 becomes "The weather forecast said rain, but it cleared up. The stems have grown to 30 cm. The soil is good. Probably about a week until harvest."; the information from FIG. 4 becomes "Pest A was here."; and the information from FIG. 5 becomes "A swarm of locusts has appeared."
Each piece of sentence-formed information is set in a predetermined area of the storage unit 30 in association with the A/D-converted information and the position information.
[Step S13: Specifying Web content]
Returning to FIG. 2: the control unit 10 of the voice icon placement system 1 executes the specification module 14 and specifies the Web content associated with the position information acquired in step S11.
The control unit 10 refers to the Web content database 33. FIG. 8 is an example of the Web content database 33, in which each field and information on its geographic range are stored in advance in association with an identification number.
For example, the area bounded by latitudes 35°51′55″N to 35°52′10″N and longitudes 139°46′55″E to 139°47′5″E is Yamada Farm A, which is associated with identification number "1".
Similarly, the area bounded by latitudes 35°52′10″N to 35°52′20″N and longitudes 139°46′55″E to 139°47′5″E is Yamada Farm B, which is associated with identification number "2".
The position information set in the storage unit 30 through the stages of FIGS. 3 to 5 is: (1) latitude 35°52′7″N, longitude 139°46′56″E; (2) latitude 35°52′2″N, longitude 139°47′52″E; and (3) latitude 35°51′57″N, longitude 139°47′1″E. Referring to the Web content database 33, each of these positions corresponds to the inside of the range specified for Yamada Farm A with identification number "1". The control unit 10 can therefore specify that the Web content associated with the position information acquired in step S10 is the Web content of Yamada Farm A with identification number "1".
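The point-in-range check performed in step S13 can be sketched as a simple bounding-box lookup. The decimal-degree values below are converted from the degree/minute/second ranges of FIG. 8, and the function and dictionary names are illustrative assumptions:

```python
# Field ranges from the Web content database 33, converted to decimal degrees:
# id -> (name, (lat_min, lat_max), (lon_min, lon_max))
FIELDS = {
    1: ("Yamada Farm A", (35.865278, 35.869444), (139.781944, 139.784722)),
    2: ("Yamada Farm B", (35.869444, 35.872222), (139.781944, 139.784722)),
}

def find_field(lat: float, lon: float):
    """Return the identification number of the field whose range contains the point."""
    for ident, (name, (lat0, lat1), (lon0, lon1)) in FIELDS.items():
        if lat0 <= lat <= lat1 and lon0 <= lon <= lon1:
            return ident
    return None  # position lies outside every registered field

# The utterance of FIG. 3 (35°52'7"N, 139°46'56"E) falls inside Yamada Farm A:
print(find_field(35.868611, 139.782222))  # → 1
```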
In the present embodiment, the control unit 10 determines whether the position acquired in step S11 lies inside a specific range defined in the Web content database 33, and specifies the Web content associated with that range. For occupations such as agriculture that involve working over a wide area, recording the exact position of every utterance would generate too much data and could make the system harder to use. Because the present embodiment manages Web content in association with a specific range, it prevents the data volume from becoming excessive and unwieldy.
The Web content database 33 also records information on the growth state of crops, listing items such as "date", "weather", "field", "crop", "stem", "soil", "harvest", "pest", and "withering".
For example, the situations of February 1 and February 7, 2017 have already been recorded, and today is February 14. By reading the calendar (not shown) stored in the storage unit 30, the control unit 10 records today's date, "February 14", in advance in the "date" item of the Web content database 33. The control unit 10 also reads weather information from an external weather-forecast website via the communication unit 20, so today's weather, "sunny", is recorded in advance in the "weather" item.
In the "field" and "crop" items of the Web content database 33, the control unit 10 records "Yamada Farm A" and "long green onion" in advance, drawing on past information.
According to the present embodiment, in steps S10 and S11 the control unit 10 acquires the voice, the position where the voice was uttered, and the date on which it was uttered, and in step S13 it specifies the Web content associated with that position and date. Because a date is thus linked to the Web content, the voice icon placement system 1 becomes even more convenient for the user.
[Step S14: Classifying the voice into predetermined categories]
Returning to FIG. 2: the control unit 10 of the voice icon placement system 1 executes the classification module 15, classifies the content recognized in step S12 into predetermined categories, and records it in the Web content specified in step S13 (step S14).
The control unit 10 reads out the content voice-recognized in the processing of step S11. In a predetermined area of the storage unit 30, the following pieces of information are stored in order: "The weather forecast called for rain, but it cleared up. The stems have grown to 30 cm. The soil is good. Probably about one week until harvest."; "Pest A was here."; and "This spot has withered." The control unit 10 reads these pieces of information from the predetermined area of the storage unit 30.
Next, the control unit 10 refers to the classification database 34. FIG. 9 shows an example of the classification database 34. The classification database records in advance the relationships among the words contained in the transcribed content, the items listed in the Web content database 33, and flags identifying whether a given word is positive, negative, or a specific keyword. In the present embodiment, the Web content database 33 (FIG. 8) lists items such as "date," "weather," "field," "crop," "stem," "soil," "harvest," "pest," and "withering." The classification database 34 records the word groups related to these items.
Consider one of the recognized utterances: "The weather forecast called for rain, but it cleared up. The stems have grown to 30 cm. The soil is good. Probably about one week until harvest." The control unit 10 refers to the classification database 34 and associates the "30 cm" contained in this utterance with the item "stem." Likewise, it associates "good" with the item "soil" and "one week" with the item "harvest." The control unit 10 therefore sets, under identification number "1" and date "February 14, 2017" of "2. Crop growth state" in the Web content database 33 (FIG. 8), the information "30 cm" for the item "stem," "good" for the item "soil," and "about one week" for the item "harvest."
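The word-to-item association performed here can be sketched as a small rule table. This is a hedged illustration only: the regular expressions, the item names, and the `classify_utterance` function are assumptions for the example, not the contents of the actual classification database 34.

```python
import re

# Assumed classification rules: a pattern found in the recognized text
# is associated with an item of the Web content database.
CLASSIFICATION_RULES = [
    (r"(\d+\s*cm)", "stem"),
    (r"\b(good)\b", "soil"),
    (r"(\d+\s*weeks?)", "harvest"),
]

def classify_utterance(text):
    """Associate substrings of a recognized utterance with item names."""
    record = {}
    for pattern, item in CLASSIFICATION_RULES:
        match = re.search(pattern, text)
        if match:
            record[item] = match.group(1)
    return record

record = classify_utterance(
    "The stems have grown to 30 cm. The soil is good. About 1 week until harvest."
)
```

Each matched substring lands in the record under its associated item, mirroring how "30 cm" is filed under "stem" in the example above.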
Incidentally, the word "good" in the above utterance carries a positive meaning. Accordingly, a flag indicating "positive" is set for the utterance "The weather forecast called for rain, but it cleared up. The stems have grown to 30 cm. The soil is good. Probably about one week until harvest."
Next, consider the utterance "Pest A was here." The control unit 10 refers to the classification database 34 and associates the phrase "pest ... here" contained in this utterance with the item "pest." The control unit 10 then sets, under identification number "1" and date "February 14, 2017" of "2. Crop growth state" in the Web content database 33 (FIG. 8), the position at which the utterance was made, "latitude: 35°52′2″ N, longitude: 139°47′52″ E," together with the pest type, "Pest A," for the item "pest."
Here, the phrase "pest ... here" in the above utterance carries a negative meaning. Accordingly, a flag indicating "negative" is set for the utterance "Pest A was here."
Next, consider the utterance "A locust outbreak." The control unit 10 refers to the classification database 34 and associates the word "locust" contained in this utterance with the item "pest." The control unit 10 then sets, under identification number "1" and date "February 14, 2017" of "2. Crop growth state" in the Web content database 33 (FIG. 8), the position at which the utterance was made, "latitude: 35°51′57″ N, longitude: 139°47′1″ E," for the item "pest."
Here, the word "locust" in the above utterance corresponds to a preset specific word. Accordingly, a flag indicating a specific word is set for the utterance "A locust outbreak."
In the present embodiment, when information already exists in the Web content identified in the processing of step S12, the control unit 10 records the content recognized in the processing of step S11 by overwriting it. This makes it possible to manage records such as farm work logs in time series.
The control unit 10 also records, in specific items of the Web content identified in the processing of step S12 (for example, date, weather, field, crop, stem, soil, harvest, pest, and withering), only the related, specifically recognized content, based on what was recognized in the processing of step S11.
As a result, not all of the recognized information (here, "The weather forecast called for rain, but it cleared up. The stems have grown to 30 cm. The soil is good. Probably about one week until harvest.", "Pest A was here.", and "A locust outbreak.") is recorded in the Web content, and unnecessary content can be discarded. The voice icon placement system 1 thus becomes even more convenient for the user.
When a single piece of information contains multiple kinds of flags, the kind of flag occurring most often in that information may be set. Alternatively, the flags given to the words may be weighted, and the flag with the highest weight may be set.
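The two tie-breaking rules just described, a majority vote over flag occurrences or a weighted maximum, can be sketched as follows. The word lexicon and the weights are illustrative assumptions, not values disclosed in the embodiment.

```python
from collections import Counter

# Assumed per-word flag lexicon and illustrative weights.
WORD_FLAGS = {"clear": "positive", "good": "positive",
              "pest": "negative", "locust": "specific"}
FLAG_WEIGHTS = {"specific": 3, "negative": 2, "positive": 1}

def flag_by_majority(words):
    """Set the kind of flag that occurs most often in the information."""
    counts = Counter(WORD_FLAGS[w] for w in words if w in WORD_FLAGS)
    return counts.most_common(1)[0][0] if counts else None

def flag_by_weight(words):
    """Set the flag with the highest weight among those present."""
    present = {WORD_FLAGS[w] for w in words if w in WORD_FLAGS}
    return max(present, key=FLAG_WEIGHTS.get, default=None)

majority = flag_by_majority(["clear", "good", "pest"])  # 2 positive vs. 1 negative
weighted = flag_by_weight(["good", "locust"])           # "specific" outweighs "positive"
```

The weighted rule lets a rare but important flag (such as the specific word) win even when outnumbered, which matches the intent of the alternative described above.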
[Step S15: Display the Web content image]
Returning to FIG. 2. Next, the control unit 10 of the voice icon placement system 1 displays the Web content recorded in the processing of step S13 on the image display unit 70 of the wearable terminal.
FIG. 10 shows an example of what the image display unit 70 of the wearable terminal displays at this point.
The image display unit 70 of the wearable terminal displays the information recorded in the Web content database 33. Specifically, today's date, "2017/2/14," is displayed at the upper right, and today's weather, "sunny," is displayed next to it.
At the upper left of the image display unit 70 of the wearable terminal, "Yamada Farm A" is displayed as the field.
The control unit 10 also refers to the map database 35 and causes the image display unit 70 of the wearable terminal to display a map of the area corresponding to identification number "1" of the Web content database 33. The control unit 10 then places the icons corresponding to the flags classified in the processing of step S14 on the map, according to the positions detected by the position detection unit 60 in the processing of step S11, and displays them on the image display unit 70 of the wearable terminal.
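Placing one icon per flag at its detected position amounts to a simple lookup, mirroring the legend of FIG. 10 (a white circle for positive, a dotted box for negative, a hatched box for the specific word). The glyphs and the record layout below are assumptions for illustration only.

```python
# Assumed flag-to-icon mapping, echoing the legend of FIG. 10.
FLAG_ICONS = {"positive": "○", "negative": "▒", "specific": "▨"}

def place_icons(records):
    """Build (latitude, longitude, icon) tuples for display on the map."""
    return [(r["lat"], r["lon"], FLAG_ICONS[r["flag"]]) for r in records]

icons = place_icons([
    {"lat": 35.8672, "lon": 139.7978, "flag": "negative"},  # pest A sighting
    {"lat": 35.8658, "lon": 139.7836, "flag": "specific"},  # locust outbreak
])
```

A rendering layer would then convert each latitude/longitude pair into screen coordinates on the field map; that projection step is outside this sketch.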
To the right of the image display unit 70 of the wearable terminal, an index of the marks on the map is displayed. A white circle on the map indicates a position containing positive information. A dotted box on the map indicates a position containing negative information. A hatched (shaded) box on the map indicates a position containing information with a specific word, such as "locust."
All of these index entries are set to "ON," which indicates that all of them are shown on the map.
[Step S16: Switch icons]
Returning to FIG. 2. Next, the control unit 10 of the voice icon placement system 1 executes the switching module 17 and switches the icons displayed in the processing of step S15 ON or OFF under a predetermined condition (for example, by a user's selection operation).
FIG. 11 shows an example of what the image display unit 70 of the wearable terminal displays at this point.
In the "index" shown on the right side of the image display unit 70, the positive and negative entries have been switched from "ON" to "OFF," while the entry for information containing the specific word remains "ON." On the map shown on the left side of the image display unit 70, only the hatched (shaded) boxes are displayed, and the word "locust" is displayed prominently to their right.
The image display unit 70 of a wearable terminal is limited in size, and displaying too much information at once makes the display harder for the user to understand. As shown in this embodiment, by turning ON the icons to be displayed on the image display unit 70 and turning OFF the icons to be hidden, the system provides an icon placement system 1 that the user can operate without confusion even on the size-constrained image display unit 70 of a wearable terminal.
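The ON/OFF filtering of step S16 amounts to a predicate over the icon list, driven by the index state shown in FIG. 11. A minimal sketch, with the state dictionary and the `visible_icons` helper as assumptions:

```python
# Index state from FIG. 11: positive and negative OFF, specific word ON.
index_state = {"positive": False, "negative": False, "specific": True}

def visible_icons(icons, state):
    """Keep only the icons whose flag is toggled ON in the index."""
    return [icon for icon in icons if state.get(icon["flag"], False)]

shown = visible_icons(
    [{"flag": "positive"}, {"flag": "negative"}, {"flag": "specific"}],
    index_state,
)
```

Flipping a single boolean in the state dictionary is all that a user's selection operation needs to do; the next redraw then shows only the surviving icons.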
According to the invention described in this embodiment, when the control unit 10 executes the voice acquisition module 11 and acquires the voice uttered by the user of the wearable terminal, it recognizes the voice by executing the voice recognition module 13, classifies the voice into a predetermined category (per predetermined flag) by executing the classification module 15, and, by executing the display module 16, places the icon corresponding to the classified category (flag) on the map according to the position acquired by executing the position acquisition module 12 and displays it on the image display unit 70 of the wearable terminal. Thus, simply by having the control unit 10 execute the voice acquisition module 11 and acquire a voice, the system provides a voice icon placement system 1 through which voice-based records can be grasped intuitively via the wearable terminal.
In particular, because a wearable terminal is used, the user need not carry a terminal in hand. As a result, the voice icon placement system 1 is especially convenient for users whose hands tend to be occupied with tools, as in farm work.
Also, in the invention described in this embodiment, the control unit 10 acquires the voice uttered by the user from the sound collection unit 50 of the wearable terminal by executing the voice acquisition module 11. In addition, by executing the position acquisition module 12, the control unit 10 can acquire the position where the voice was uttered from the position information of the wearable terminal. The user can therefore have the system capture both the content of the voice and the position where it was uttered without holding the wearable terminal by hand and without explicitly describing the position. The voice icon placement system 1 is thus all the more convenient for users whose hands tend to be occupied with tools, as in farm work.
Also, according to the invention described in this embodiment, by executing the classification module 15, the control unit 10 can classify the recognized content according to whether it is positive or negative. The image display unit 70 of the wearable terminal can then display icons indicating positive content and icons indicating negative content, distinguished from each other and placed on the map. Positions in a positive state and positions in a negative state can thus be grasped all the more intuitively via the wearable terminal.
In addition, by executing the classification module 15, the control unit 10 can classify the recognized content according to whether it contains a specific keyword. The image display unit 70 of the wearable terminal can then display icons indicating the presence of the specific keyword and icons indicating its absence, distinguished from each other and placed on the map. Positions in the state described by the specific keyword, and positions that are not, can thus be grasped all the more intuitively via the wearable terminal.
Above all, a farmer who must walk around very wide fields can record a position in the state described by a specific keyword merely by uttering that keyword, without holding the wearable terminal in hand. The voice icon placement system 1 is therefore all the more convenient for users whose hands tend to be occupied with tools, as in farm work.
Also, according to the invention described in this embodiment, the displayed icons can be switched ON or OFF under a predetermined condition. The image display unit 70 of a wearable terminal is limited in size, and displaying too much information at once makes the display harder for the user to understand. According to the invention described in this embodiment, icons to be displayed on the image display unit 70 can be turned ON and icons to be hidden can be turned OFF, so that the voice icon placement system 1 can be used without confusion even on the size-constrained image display unit 70 of a wearable terminal.
Also, according to the invention described in this embodiment, when the control unit 10 acquires a voice in the processing of step S10, it recognizes the voice in the processing of step S12 and, in the processing of step S13, identifies the Web content tied to the position where the voice was acquired. Then, in the processing of step S14, the control unit 10 records the recognized content in the identified Web content. The system thus records the content of the voice acquired in step S10 in the Web content, linked to the position where the voice was uttered.
The Web content displayed on the image display unit 70 includes a map containing position information, including the position where the voice was acquired, and the control unit 10 superimposes the content recognized in the processing of step S12 on the map of the Web content. According to the invention described in this embodiment, by having a voice acquired in the processing of step S10, the content of the acquired voice is recorded in the Web content linked to the position where it was uttered, and the recognized content is then superimposed on the Web content map on the image display unit 70. The voice icon placement system 1 is therefore all the more convenient for the user.
The means and functions described above are realized by a computer (including a CPU, an information processing device, and various terminals) reading and executing a predetermined program. The program is provided, for example, in a form recorded on a computer-readable recording medium such as a flexible disk, a CD (CD-ROM, etc.), or a DVD (DVD-ROM, DVD-RAM, etc.). In this case, the computer reads the program from the recording medium, transfers it to an internal or external storage device, stores it, and executes it. Alternatively, the program may be recorded in advance in a storage device (recording medium) such as a magnetic disk, an optical disk, or a magneto-optical disk, and provided from that storage device to the computer via a communication line.
Although embodiments of the present invention have been described above, the present invention is not limited to these embodiments. Moreover, the effects described in the embodiments of the present invention are merely a list of the most favorable effects arising from the present invention, and the effects of the present invention are not limited to those described in the embodiments.
1 Voice content recording system
10 Control unit
11 Voice acquisition module
12 Position acquisition module
13 Voice recognition module
14 Identification module
15 Classification module
16 Display module
17 Switching module
20 Communication unit
30 Storage unit
31 Voice database
32 Dictionary database
33 Web content database
34 Classification database
35 Map database
40 Input unit
50 Sound collection unit
60 Position detection unit
70 Image display unit

Claims (8)

1. A voice icon display system for a wearable terminal that displays, on a display unit of the wearable terminal, icons corresponding to the content of a voice, the system comprising:
voice acquisition means for acquiring a voice uttered by a user of the wearable terminal;
position acquisition means for acquiring a position where the voice was uttered;
voice recognition means for recognizing the voice;
classification means for classifying the voice into a predetermined category according to the recognized content; and
display means for placing the icon corresponding to the classified category on a map according to the position and displaying it on the display unit of the wearable terminal.
2. The voice icon placement system for a wearable terminal according to claim 1, wherein the voice acquisition means acquires the uttered voice from a microphone of the wearable terminal.
3. The voice icon placement system for a wearable terminal according to claim 1 or 2, wherein the position acquisition means acquires the position where the voice was uttered from position information of the wearable terminal.
4. The voice icon placement system for a wearable terminal according to any one of claims 1 to 3, wherein the classification means classifies the voice according to whether the recognized content is positive or negative.
5. The voice icon placement system for a wearable terminal according to any one of claims 1 to 4, wherein the classification means classifies the voice according to whether the recognized content contains a specific keyword.
6. The voice icon placement system for a wearable terminal according to any one of claims 1 to 5, further comprising switching means for switching the displayed icons ON or OFF under a predetermined condition.
7. A voice icon display method for a wearable terminal that displays, on a display unit of the wearable terminal, icons corresponding to the content of a voice, the method comprising the steps of:
acquiring a voice uttered by a user of the wearable terminal;
acquiring a position where the voice was uttered;
recognizing the voice;
classifying the voice into a predetermined category according to the recognized content; and
placing the icon corresponding to the classified category on a map according to the position and displaying it on the display unit of the wearable terminal.
8. A program for causing a voice icon display system for a wearable terminal, which displays on a display unit of the wearable terminal icons corresponding to the content of a voice, to execute the steps of:
acquiring a voice uttered by a user of the wearable terminal;
acquiring a position where the voice was uttered;
recognizing the voice;
classifying the voice into a predetermined category according to the recognized content; and
placing the icon corresponding to the classified category on a map according to the position and displaying it on the display unit of the wearable terminal.
PCT/JP2017/016936 2017-04-28 2017-04-28 Sound icon distribution system for wearable terminal, and method and program WO2018198314A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2017/016936 WO2018198314A1 (en) 2017-04-28 2017-04-28 Sound icon distribution system for wearable terminal, and method and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2017/016936 WO2018198314A1 (en) 2017-04-28 2017-04-28 Sound icon distribution system for wearable terminal, and method and program

Publications (1)

Publication Number Publication Date
WO2018198314A1 true WO2018198314A1 (en) 2018-11-01

Family

ID=63920356

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/016936 WO2018198314A1 (en) 2017-04-28 2017-04-28 Sound icon distribution system for wearable terminal, and method and program

Country Status (1)

Country Link
WO (1) WO2018198314A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004220149A (en) * 2003-01-10 2004-08-05 National Agriculture & Bio-Oriented Research Organization Confirmation system of farm field cultivation state
JP2012216135A (en) * 2011-04-01 2012-11-08 Olympus Corp Image generation system, program, and information storage medium
JP2013254356A (en) * 2012-06-07 2013-12-19 Topcon Corp Farming support system
JP2015084226A (en) * 2014-10-24 2015-04-30 パイオニア株式会社 Terminal device, display method, display program, system, and server
WO2015059764A1 (en) * 2013-10-22 2015-04-30 三菱電機株式会社 Server for navigation, navigation system, and navigation method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHIN'YA HIRUTA ET AL.: "Detection and Visualization of Place-triggered Geotagged Tweets", INFORMATION PROCESSING SOCIETY OF JAPAN, vol. 54, no. 2, 15 February 2013 (2013-02-15), pages 710 - 720, XP055526748 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022179440A1 (en) * 2021-02-28 2022-09-01 International Business Machines Corporation Recording a separated sound from a sound stream mixture on a personal device
GB2619229A (en) * 2021-02-28 2023-11-29 Ibm Recording a separated sound from a sound stream mixture on a personal device

Similar Documents

Publication Publication Date Title
US20210312930A1 (en) Computer system, speech recognition method, and program
US20190250882A1 (en) Systems, methods, and apparatuses for agricultural data collection, analysis, and management via a mobile device
EP3591577A1 (en) Information processing apparatus, information processing method, and program
RU2653283C2 (en) Method for dialogue between machine, such as humanoid robot, and human interlocutor, computer program product and humanoid robot for implementing such method
CN109360550A (en) Test method, device, equipment and the storage medium of voice interactive system
CN109885810A Man-machine interrogation method, apparatus, equipment and storage medium based on semantic parsing
KR102284750B1 (en) User terminal device and method for recognizing object thereof
KR20120038000A (en) Method and system for determining the topic of a conversation and obtaining and presenting related content
CN109902158A (en) Voice interactive method, device, computer equipment and storage medium
WO2020253064A1 (en) Speech recognition method and apparatus, and computer device and storage medium
US11881209B2 (en) Electronic device and control method
WO2022262586A1 (en) Method for plant identification, computer system and computer-readable storage medium
WO2021147528A1 (en) Computer-executable method relating to weeds and computer system
WO2019133638A1 (en) Voice tagging of video while recording
US10235456B2 (en) Audio augmented reality system
EP4385009A1 (en) Conversational artificial intelligence system in a virtual reality space
JP2015104078A (en) Imaging apparatus, imaging system, server, imaging method and imaging program
WO2018022301A1 (en) Systems, methods, and apparatuses for agricultural data collection, analysis, and management via a mobile device
WO2018198314A1 (en) Sound icon distribution system for wearable terminal, and method and program
US20220172047A1 (en) Information processing system and information processing method
JP2022053520A (en) Cutting time determination program
CN110265005A (en) Export content control device, output contents controlling method and storage medium
KR20170086233A (en) Method for incremental training of acoustic and language model using life speech and image logs
JP6845446B2 (en) Audio content recording system, method and program
Ortenzi et al. Italian speech commands for forestry applications

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17907613

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17907613

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP