
WO2020202347A1 - Information provision system and information terminal - Google Patents



Publication number
WO2020202347A1
WO2020202347A1 (application PCT/JP2019/014254)
Authority
WO
WIPO (PCT)
Prior art keywords
article
display
information
information terminal
user
Prior art date
Application number
PCT/JP2019/014254
Other languages
French (fr)
Japanese (ja)
Inventor
山岡 大祐
田中 一彦
祐 瀧口
瞳 濱村
Original Assignee
本田技研工業株式会社 (Honda Motor Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 本田技研工業株式会社 (Honda Motor Co., Ltd.)
Priority to JP2021511722A priority Critical patent/JP7237149B2/en
Priority to PCT/JP2019/014254 priority patent/WO2020202347A1/en
Priority to CN201980090738.2A priority patent/CN113383363A/en
Publication of WO2020202347A1 publication Critical patent/WO2020202347A1/en

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06Q — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 — Commerce
    • G06Q 30/06 — Buying, selling or leasing transactions
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 — Manipulating 3D models or images for computer graphics

Definitions

  • the present invention relates to an information providing system and an information terminal that provide information to a user.
  • In Patent Document 1, an application that provides services by connecting to a network via a mobile terminal such as a smartphone has been devised.
  • Patent Document 1 discloses that a captured image is displayed on the display of a portable wireless communication terminal (smartphone), that guidance (the name) of a component included in the captured image is superimposed on the display, and that the operation manual of the component is displayed on the display when the superimposed guidance is pressed.
  • For example, when the user is considering purchasing an in-vehicle part, the user may wish to know whether the part can be mounted or attached to the vehicle; when considering purchasing a large item, the user may wish to know whether the item can be loaded into the vehicle. It is therefore desirable that the user be able to easily grasp whether the article under consideration is suitable for the vehicle (that is, the suitability of the article for the vehicle). In addition, the user may want to easily obtain information on articles that fit the structure of the vehicle.
  • an object of the present invention is to provide the user with information desired by the user easily and intuitively.
  • The information providing system as one aspect of the present invention is an information providing system that uses an information terminal having a camera and a display to provide a user with information on the compatibility between a designated article and a target location where the designated article is to be mounted. The information terminal has display control means for displaying the captured image obtained by the camera on the display, acquisition means for acquiring image data of the designated article, and generation means for generating an augmented reality image of the designated article based on the image data acquired by the acquisition means. The display control means superimposes the augmented reality image of the designated article generated by the generation means, as the above information, on the target location in the captured image and displays it on the display.
  • the information desired by the user can be provided to the user through augmented reality, so that the user can easily and intuitively grasp the information.
  • Block diagram showing the configuration of the information provision system
  • Flowchart showing the process of accepting the designation of goods
  • Figure showing a display example of article information
  • Diagram showing the inside of a car being photographed with an information terminal
  • Figures showing display examples of the augmented reality image of the designated article
  • FIG. 1 is a block diagram showing a configuration of the information providing system 100 of the present embodiment.
  • the information providing system 100 of the present embodiment includes, for example, an information terminal 10 and a server device 20 that are communicably connected to each other via a network NTW, and is a system for providing a user with information on the compatibility between an article specified by the user and a target location of the vehicle on which the article is to be mounted.
  • in the following, the article designated by the user may be referred to as a "designated article", and the target location where the designated article is to be mounted may be referred to as a "planned mounting location".
  • a four-wheeled vehicle will be illustrated as a vehicle.
  • the information terminal 10 may include, for example, a processing unit 11, a storage unit 12, a camera 13, a display 14, a position detection sensor 15, a posture detection sensor 16, and a communication unit 17. Each part of the information terminal 10 is connected to each other so as to be able to communicate with each other via the system bus 18. Examples of the information terminal 10 include a smartphone and a tablet terminal. In the present embodiment, an example in which a smartphone is used as the information terminal 10 will be described. Smartphones and tablet terminals are mobile terminals having various functions other than the call function, but the dimensions of the displays are different from each other. In general, tablet terminals have larger display dimensions than smartphones.
  • the processing unit 11 includes a processor typified by a CPU, a storage device such as a semiconductor memory, an interface with an external device, and the like.
  • the storage unit 12 stores programs executed by the processor and data used by the processor for processing; the processing unit 11 can read a program stored in the storage unit 12 into a storage device such as a memory and execute it.
  • the storage unit 12 stores an application program (information providing program) for providing the user with information on the compatibility between the designated article and the planned mounting location of the vehicle, and the processing unit 11 can read the information providing program stored in the storage unit 12 into a storage device such as a memory and execute it.
  • the camera 13 has a lens and an image sensor, and captures a subject to acquire a captured image.
  • the camera 13 may be provided, for example, on an outer surface opposite to the outer surface on which the display 14 is provided.
  • the display 14 notifies the user of information by displaying an image.
  • the display 14 can display the captured image acquired by the camera 13 in real time.
  • the display 14 of the present embodiment includes, for example, a touch panel type LCD (Liquid Crystal Display), and has a function of receiving information input from a user in addition to a function of displaying an image.
  • the present invention is not limited to this, and the display 14 may have only the function of displaying an image, and an input unit (for example, a keyboard, a mouse, etc.) may be provided independently of the display 14.
  • the position detection sensor 15 detects the position and orientation of the information terminal 10.
  • as the position detection sensor 15, for example, a GPS sensor that receives signals from GPS satellites to acquire the current position of the information terminal 10, or an orientation sensor that detects, based on geomagnetism or the like, the direction in which the camera 13 of the information terminal 10 is pointed, can be used.
  • the posture detection sensor 16 detects the posture of the information terminal 10.
  • as the posture detection sensor 16, an acceleration sensor, a gyro sensor, or the like can be used.
  • the communication unit 17 is communicably connected to the server device 20 via the network NTW. Specifically, the communication unit 17 functions as a transmission unit that transmits information to the server device 20 via the network NTW and as a reception unit that receives information from the server device 20 via the network NTW.
  • a first acquisition unit 11a acquires the data of the captured image acquired by the camera 13.
  • the second acquisition unit 11b acquires the article information and the like stored in the server device 20 in association with the article from the server device 20.
  • the identification unit 11c analyzes the captured image by performing image processing such as a pattern matching method, and identifies a planned mounting location in the captured image displayed on the display 14.
  • the generation unit 11d generates an augmented reality (AR) image of the designated article based on the image data of the appearance of the article.
  • the reception unit 11e executes a process of accepting the designation of the article by the user.
  • the display control unit 11f displays the captured image acquired by the first acquisition unit 11a on the display 14. Further, when the augmented reality image of the designated article is generated by the generation unit 11d, the display control unit 11f superimposes the augmented reality image of the designated article on the captured image and displays it on the display 14, based on the position and orientation of the information terminal 10 detected by the position detection sensor 15 and the posture detection sensor 16.
  • the server device 20 may include a processing unit 21, a storage unit 22, and a communication unit 23.
  • the processing unit 21 includes a processor typified by a CPU, a storage device such as a semiconductor memory, an interface with an external device, and the like.
  • the storage unit 22 stores programs executed by the processor and data used by the processor for processing; the processing unit 21 can read a program stored in the storage unit 22 into a storage device such as a memory and execute it.
  • the communication unit 23 is communicably connected to the information terminal 10 via the network NTW. Specifically, the communication unit 23 functions as a receiving unit that receives information from the information terminal 10 via the network NTW and as a transmitting unit that transmits information to the information terminal 10 via the network NTW.
  • the server device 20 stores article information for each of a plurality of types of articles.
  • the article may be, for example, an in-vehicle part attached to the vehicle, or a transported object (including a packaging container) loaded in the luggage compartment of the vehicle for transportation by the vehicle.
  • the article information includes, for example, information indicating the model and dimensions of the article, image data of the appearance of the article, facility where the article is sold and its location information, information posted by a user of the article, and the like.
  • the posted information is information posted by users of each article and may include, for example, the model of vehicle on which the article can be loaded, the orientation of the article when it is loaded in the luggage compartment of the vehicle, and how to flatten the corrugated-cardboard packaging when the article is loaded in the vehicle.
  • the posted information may include information that is not described in the operation manual of the article, such as a method of using the article more comfortably and the ease of use of the article.
  • FIG. 2A is a flowchart showing a process of accepting the designation of the article by the user
  • FIG. 2B is a flowchart showing the process of providing the user with information on the suitability of the designated article with respect to the vehicle.
  • first, the process of accepting the designation of an article by the user will be described using the flowchart shown in FIG. 2A.
  • the processing of the flowchart shown in FIG. 2A can be performed by the reception unit 11e of the processing unit 11.
  • 3 to 7 are diagrams showing images displayed on the display 14 of the information terminal 10, and can be used to explain the processing performed by the processing unit 11 of the information terminal 10.
  • FIG. 3 shows a state in which the initial screen is displayed on the display 14 of the information terminal 10.
  • the initial screen is provided with an input field 31a, in which the user inputs the category (classification, type) of the article, and a search button 31b for receiving the user's instruction to start the search for the article.
  • "vehicle-mounted parts" is input by the user in the input field 31a as the category of the article.
  • in S12, the processing unit 11 determines whether or not the search button 31b has been touched on the display 14 by the user. If the search button 31b has been touched, the process proceeds to S13; otherwise, S12 is repeated. In S13, the processing unit 11 determines whether or not the category of the article has been input to the input field 31a. If the search button 31b was touched while no article category was entered in the input field 31a, the process proceeds to S14, and selection processes (S14 to S15) for allowing the user to select the article category are executed. On the other hand, if the search button 31b was touched while an article category was entered in the input field 31a, the selection processes of S14 to S15 are omitted and the process proceeds to S16.
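As an illustration, the branch at S12 to S16 can be sketched as follows; the function name and the returned step labels are illustrative assumptions, not part of the patent.

```python
def handle_search_tap(category_input):
    """Decide the next step after the search button (31b) is tapped.

    Mirrors the branch at S13: an empty input field 31a routes the user
    through the category-selection steps S14 to S15, while a filled field
    skips straight to the candidate list of S16.  Labels are illustrative.
    """
    if category_input:
        # category already entered in field 31a -> skip selection steps
        return "S16_show_candidates"
    # no category entered -> let the user pick a facility and category
    return "S14_select_facility"
```

A call such as `handle_search_tap("drive recorder")` thus skips straight to the candidate list.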
  • the processing unit 11 displays a list of facilities such as shops existing around the information terminal 10 on the display 14.
  • the storage unit 22 of the server device 20 stores map information indicating the locations of a plurality of facilities.
  • the processing unit 11 transmits the current position of the information terminal 10 detected by the position detection sensor 15 to the server device 20 via the communication unit 17.
  • the server device 20 that has received the current position of the information terminal 10 searches, based on the map information stored in the storage unit 22, for facilities existing within a predetermined range from the current position of the information terminal 10, and transmits the list of facilities obtained as the search result to the information terminal 10 via the communication unit 23.
  • the information terminal 10 displays an image showing the received list of facilities on the display 14. In the illustrated example, the area where the name of each facility is displayed serves as a selection button 32 for the user to select the facility, and the user can select a facility by touching one of the selection buttons 32 on the display 14. When a facility is selected by the user, the process proceeds to S15.
  • the predetermined range can be set in advance, and can also be set arbitrarily by the user. The predetermined range may be a range of distance from the information terminal 10 or a range of travel time until arrival at the facility.
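As an illustration, the facility search within a predetermined distance range can be sketched as a filter over stored facility locations; the great-circle helper and the record layout below are assumptions for illustration, not the patent's implementation.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    to_rad = math.radians
    dlat = to_rad(lat2 - lat1)
    dlon = to_rad(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(to_rad(lat1)) * math.cos(to_rad(lat2))
         * math.sin(dlon / 2) ** 2)
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def facilities_in_range(terminal_pos, facilities, max_km):
    """Return the facilities within max_km of the terminal's position.

    terminal_pos is a (lat, lon) pair from the position detection sensor;
    each facility record carries 'lat'/'lon' keys (illustrative layout).
    """
    lat, lon = terminal_pos
    return [f for f in facilities
            if haversine_km(lat, lon, f["lat"], f["lon"]) <= max_km]
```

A time-based range would swap the distance helper for a travel-time estimate from a routing service.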
  • the processing unit 11 displays, on the display 14, a list of the categories of articles provided (sold) at the facility selected in S14. Specifically, the storage unit 22 of the server device 20 stores, for each facility, the article information that the facility can provide, organized by article category. By acquiring from the server device 20 (storage unit 22) the article information associated with the facility selected in S14, the processing unit 11 can display a list of article categories on the display 14 as shown in FIG. 5. In the example shown in FIG. 5, the area where the category name of the article is displayed serves as a selection button 33 for the user to select the category of the article, and the user can select a category by touching one of the selection buttons 33 on the display 14.
  • the processing unit 11 displays an article candidate list on the display 14 for the article category specified by the user.
  • the article category specified by the user is an article category input by the user in the input field 31a of the initial screen, or an article category selected by the user in S15.
  • the processing unit 11 acquires, from the server device 20, the article information stored in the storage unit 22 of the server device 20 in association with the category of the article specified by the user, and can thereby display the candidate list of articles on the display 14.
  • in the illustrated example, "drive recorder" is specified by the user as the category of the article, and the area where the model of each article is displayed serves as a selection button 34 for the user to select the article. The user can specify (select) an article by touching one of the selection buttons 34 on the display 14.
  • the processing unit 11 can display a candidate list of articles provided (sold) at the selected facility on the display 14.
  • the processing unit 11 determines whether or not the article has been designated by the user. If the article is specified by the user, the process proceeds to S18, and if the article is not specified by the user, S17 is repeated.
  • the processing unit 11 (second acquisition unit 11b) acquires the article information regarding the article (designated article) designated in S17 from the server device 20, and displays the acquired article information on the display 14.
  • the article information includes information indicating the model and dimensions of the article, image data of the appearance of the article, information posted by users of the article, and the like, and this information and data are displayed on the display 14.
  • FIG. 7 shows an example of a display screen of article information regarding a drive recorder as a designated article.
  • the article information display screen is provided with a model display column 35a for the designated article, an appearance display column 35b, an article dimension display column 35c, and a posted information display column 35d. Further, an OK button 35e and a cancel button 35f are provided on the article information display screen. When the OK button 35e is touched by the user, the process proceeds to S21 in FIG. 2B; when the cancel button 35f is touched, the process ends.
  • FIG. 8 to 10B are diagrams showing images displayed on the display 14 of the information terminal 10, and can be used to explain the processing performed by the processing unit 11 of the information terminal 10.
  • the processing unit 11 determines, based on the article information of the designated article, the target location of the vehicle (for example, the position of a structure of the vehicle) on which the designated article is to be mounted.
  • the target location where the designated article is to be mounted may be referred to as the "planned mounting location".
  • the article information may include information on whether or not the article is an in-vehicle component, and if the article is an in-vehicle component, information indicating a planned mounting location of the article. Therefore, the processing unit 11 can determine whether or not the designated article is an in-vehicle component and determine the planned mounting location of the designated article based on the article information of the designated article.
  • the processing unit 11 can determine, based on the article information, that the planned mounting location of the drive recorder is the windshield near the rearview mirror.
  • when the processing unit 11 determines, based on the article information of the designated article, that the designated article is not an in-vehicle part, the processing unit 11 can set the planned mounting location of the designated article to the luggage compartment (loading section) of the vehicle.
  • the processing unit 11 requests the user to take a picture of the planned mounting location determined in S21. For example, when the designated article is a drive recorder, the processing unit 11 displays a comment requesting a picture of the windshield near the rearview mirror, which is the planned mounting location, on the display 14. If it is determined that the designated article is not an in-vehicle part, a comment requesting a photograph of the luggage compartment of the vehicle, which is the planned mounting location, is displayed on the display 14.
  • the processing unit 11 (first acquisition unit 11a) causes the camera 13 to start shooting and acquires a shot image from the camera 13.
  • the processing unit 11 (display control unit 11f) sequentially displays the captured images acquired from the camera 13 on the display 14.
  • FIG. 8 shows a state in which the inside of the vehicle is photographed by the information terminal 10 (camera 13) so as to include the rearview mirror and the center console.
  • in S25, the processing unit 11 identifies, in the captured image displayed on the display 14, the planned mounting location determined in S21. For example, the processing unit 11 can determine which part of the vehicle the image captured by the camera 13 shows by performing known image processing. As an example of known image processing, there is a method of detecting, in the captured image, portions (feature points) having feature amounts such as corners, curvatures, changes in brightness, or changes in color, and recognizing the photographed vehicle portion from feature information indicating the feature amounts and positional relationships of the detected feature points. By such a method, the processing unit 11 can identify the planned mounting location in the captured image.
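As an illustration, the brightness-change feature detection described above can be sketched on a tiny greyscale grid; this toy detector merely stands in for the known image processing the text refers to and is not the patent's actual method.

```python
def detect_feature_points(image, threshold):
    """Find pixels whose brightness differs sharply from a neighbour.

    `image` is a 2-D list of grey levels.  A pixel counts as a feature
    point when the brightness change to its right or lower neighbour
    exceeds `threshold` -- a toy stand-in for corner/brightness-change
    detection.  Returns (x, y) coordinates in scan order.
    """
    points = []
    for y in range(len(image) - 1):
        for x in range(len(image[0]) - 1):
            if (abs(image[y][x] - image[y][x + 1]) > threshold
                    or abs(image[y][x] - image[y + 1][x]) > threshold):
                points.append((x, y))
    return points
```

A real implementation would use an established detector (e.g. a corner detector from an image-processing library) rather than this raw difference test.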
  • the processing unit 11 determines whether or not the planned mounting location is specified in the captured image displayed on the display 14. If the planned mounting location is specified in the captured image, the process proceeds to S27, and if the planned mounting location is not specified in the captured image, the process returns to S25.
  • the processing unit 11 acquires the dimensional information of the planned mounting location specified in S25 to S26.
  • specifically, the storage unit 12 of the information terminal 10 stores feature information for each of a plurality of types of vehicles, and the processing unit 11 can identify the vehicle model whose stored feature information has a high degree of agreement with the feature information obtained in S25 to S26 (that is, a degree of agreement exceeding a predetermined value).
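As an illustration, the degree-of-agreement test for identifying the vehicle model can be sketched as follows; the feature-set representation and the scoring rule are assumptions for illustration, not the patent's matching algorithm.

```python
def identify_vehicle_model(observed, stored_models, min_score=0.8):
    """Pick the vehicle model whose stored feature set best matches.

    `observed` is a set of feature descriptors from the captured image;
    `stored_models` maps model name -> stored feature set.  The degree of
    agreement is taken as the fraction of a model's stored features found
    in the image, and a model is accepted only when that fraction exceeds
    `min_score`, mirroring the 'exceeds a predetermined value' test.
    """
    best_model, best_score = None, 0.0
    for model, features in stored_models.items():
        if not features:
            continue
        score = len(observed & features) / len(features)
        if score > best_score:
            best_model, best_score = model, score
    # reject weak matches rather than guessing a model
    return best_model if best_score > min_score else None
```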
  • in the storage unit 22 of the server device 20, dimensional information of each part of the vehicle is stored for each vehicle model in association with that model.
  • the processing unit 11 of the information terminal 10 transmits information on the identified vehicle model to the server device 20, and receives from the server device 20 the dimensional information of each part stored in association with that model. As a result, the processing unit 11 of the information terminal 10 can acquire the dimensional information of the planned mounting location specified in S25 to S26.
  • in S28, the processing unit 11 (generation unit 11d) generates an augmented reality image 40 of the designated article based on the image data of the appearance of the designated article. Then, the processing unit 11 (display control unit 11f) superimposes the generated augmented reality image 40 of the designated article on the planned mounting location in the captured image obtained by the camera 13 and displays it on the display 14. At this time, based on the position and orientation information of the information terminal 10 detected by the position detection sensor 15 and the posture detection sensor 16, the processing unit 11 aligns the augmented reality image 40 of the designated article with the image captured by the camera 13 so that the augmented reality image 40 moves in accordance with the movement of the information terminal 10.
  • that is, based on the information on the position and orientation of the information terminal 10, the processing unit 11 displays the augmented reality image 40 of the designated article on the display 14 so as to match the position of the planned mounting location in the captured image. Further, based on the dimensional information of the designated article and the dimensional information of the planned mounting location acquired in S27, the processing unit 11 displays the augmented reality image 40 of the designated article on the display 14 so that the actual dimensional relationship between the designated article and the planned mounting location is reflected.
  • the image data of the appearance of the designated article and the dimensional information of the designated article are the information included in the article information acquired from the server device 20 in S18.
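As an illustration, reflecting the actual dimensional relationship on screen amounts to converting the article's real-world dimensions into pixels using the known size of the identified mounting location; the helper below is a sketch under that assumption, with illustrative parameter names.

```python
def overlay_size_px(article_mm, location_mm, location_px):
    """Scale the AR overlay so article and mounting location keep their
    real dimensional relationship on screen.

    article_mm and location_mm are (width, height) in millimetres, taken
    from the article information and the vehicle dimension data;
    location_px is the on-screen pixel size of the identified mounting
    location.  Returns the overlay size in pixels.
    """
    px_per_mm_x = location_px[0] / location_mm[0]
    px_per_mm_y = location_px[1] / location_mm[1]
    return (round(article_mm[0] * px_per_mm_x),
            round(article_mm[1] * px_per_mm_y))
```

For example, if a 1000 mm wide mounting location occupies 500 px, a 100 mm wide article is drawn 50 px wide, so an oversized article visibly overflows its target.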
  • FIG. 9 shows an example in which, when the designated article is a drive recorder, the augmented reality image 40a of the drive recorder is superimposed on the planned mounting location (the windshield near the rearview mirror) in the captured image and displayed on the display 14.
  • a captured image of the inside of the vehicle including the rearview mirror 41 and the center console 42 is displayed on the display 14, and the augmented reality image 40a of the drive recorder is displayed on the windshield near the rearview mirror, which has been identified as the planned mounting location.
  • the user can easily and intuitively grasp the suitability of the drive recorder for the vehicle, such as the positional relationship and the dimensional relationship between the vehicle and the drive recorder, before purchasing the drive recorder.
  • as shown in FIGS. 10A and 10B, when the designated article is a transported object (for example, a desk or a bed, including its packaging container), the augmented reality image 40b of the transported object is superimposed on the planned mounting location in the captured image of the vehicle 43.
  • a captured image of the rear part (luggage compartment) of the vehicle 43 with the back door open is displayed on the display 14, and the augmented reality image 40b of the transported object is displayed on the luggage compartment identified as the planned mounting location.
  • FIG. 10A shows the case where the size of the transported object is smaller than the size of the luggage compartment of the vehicle 43, and FIG. 10B shows the case where the size of the transported object is larger than the size of the luggage compartment of the vehicle 43.
  • as a result, before purchasing the transported object, the user can easily and intuitively grasp the suitability of the transported object for the vehicle, such as whether or not the transported object as the designated article can be loaded onto the loading platform of the vehicle 43.
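As an illustration, the fit judgement conveyed by FIGS. 10A and 10B can be sketched as a per-axis dimension comparison; rotations and irregular shapes are deliberately ignored in this sketch.

```python
def fits_in_compartment(article_mm, compartment_mm):
    """Rough fit check between a transported object and the luggage
    compartment: every article dimension (width, depth, height, in mm)
    must be no larger than the corresponding compartment dimension.
    Rotations and irregular shapes are ignored in this simplified check.
    """
    return all(a <= c for a, c in zip(article_mm, compartment_mm))
```

The FIG. 10A case (object smaller than the compartment) returns True; the FIG. 10B case (object larger) returns False.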
  • the augmented reality image of the designated article designated by the user is included in the captured image obtained by the camera 13 as information indicating the suitability of the designated article with respect to the planned mounting location. It is superimposed on the planned mounting location in the above and displayed on the display 14.
  • FIG. 11 is a diagram showing a state in which the information terminal 10 (camera 13) is photographing the vicinity of the key insertion portion 51 of the motorcycle 50.
  • the augmented reality image 40c of the cover of the key insertion portion 51 is superimposed on the key insertion portion 51 in the captured image and displayed on the display 14.
  • the user can easily determine the suitability of the cover of the key insertion portion 51 with respect to the motorcycle, such as the positional relationship and the dimensional relationship between the key insertion portion 51 of the motorcycle 50 owned by the user and the cover, before purchasing the cover. It can be grasped intuitively.
  • FIG. 12 is a diagram showing a state in which the cultivator 60 is photographed by the information terminal 10 (camera 13).
  • the augmented reality image 40d of the wheel is superimposed on the claw portion 61 in the captured image and displayed on the display 14.
  • the model of the cultivator 60 (vehicle) is specified in order to acquire the dimensional information of the planned mounting location.
  • when the processing unit 11 determines, based on the article information of the wheel, that the wheel does not fit (that is, cannot be attached to) the identified model of the cultivator 60, a comment indicating that the wheel does not fit, or a mark such as a cross mark over the wheel, may be displayed on the display 14.
  • a third embodiment according to the present invention will be described.
  • This embodiment basically inherits the first to second embodiments, and the terms, definitions, and the like are as described in the first and second embodiments.
  • as a process performed by the information terminal 10 when the information providing program is executed, a process of identifying a structure included in the captured image obtained by the camera 13 and providing the user with information on articles conforming to the identified structure will be described.
  • first, the information terminal 10 displays, on the display 14, a screen for allowing the user to select either the first mode or the second mode, and executes processing according to the mode selected by the user.
  • the first mode is a mode for acquiring article information about an article designated by the user, and when the first mode is selected, the processes described in the first to second embodiments are executed.
  • the second mode is a mode for identifying a structure in a captured image and acquiring article information about an article conforming to the specified structure, and the process described below is executed.
  • FIG. 13 is a flowchart showing processing performed by the processing unit 11 of the information terminal 10 when the second mode is selected by the user.
  • the processing unit 11 first acquisition unit 11a
  • the processing unit 11 causes the camera 13 to start shooting and acquires a shot image from the camera 13.
  • The processing unit 11 (display control unit 11f) displays the captured image obtained by the camera 13 on the display 14.
  • Next, the processing unit 11 identifies the structure included in the captured image displayed on the display 14. For example, the processing unit 11 first recognizes the structure contained in the captured image by performing known image processing. One example of such known image processing, as described for S25 of the flowchart of FIG. 2B, is to detect feature points in the captured image and recognize the structure from feature information indicating the feature amounts and positional relationships of the detected feature points. The storage unit 22 of the server device 20 stores feature information about each of a plurality of types of structures in association with the model of each structure, and by collating the feature information of the recognized structure against this stored information, the processing unit 11 can specify the model of the structure included in the captured image. As an example, as shown in FIG. 8, when the user photographs the inside of the vehicle with the information terminal 10 (camera 13) so as to include in-vehicle parts such as the rearview mirror and the center console, the processing unit 11 can identify the rearview mirror as a structure.
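The model identification step described above (feature detection followed by collation against per-model feature information stored on the server side) could be approximated as follows. The binary descriptors and the matching rule are simplified stand-ins for real image features, not the patent's actual method:

```python
# Illustrative sketch: identify a structure's model by comparing feature
# descriptors detected in the captured image with feature information
# stored per model (here, small integers standing in for binary descriptors).
def hamming(a, b):
    """Hamming distance between two integer-coded descriptors."""
    return bin(a ^ b).count("1")

def identify_model(detected, stored, max_dist=2):
    """Return the model whose stored descriptors match the most detected ones."""
    best_model, best_score = None, 0
    for model, descriptors in stored.items():
        score = sum(1 for d in detected
                    if any(hamming(d, s) <= max_dist for s in descriptors))
        if score > best_score:
            best_model, best_score = model, score
    return best_model

stored = {"rearview_mirror_A": [0b1010, 0b1100], "center_console_B": [0b0001]}
print(identify_model([0b1010, 0b1101], stored))  # → rearview_mirror_A
```

A production system would use a real feature detector and descriptor matcher (for example, ORB features with a Hamming-distance matcher), but the collation logic is the same shape.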
  • the processing unit 11 determines whether or not the structure has been specified in the captured image displayed on the display 14. If the structure is specified in the captured image, the process proceeds to S35, and if the structure is not specified in the captured image, the process returns to S33.
  • In S35, the processing unit 11 acquires a list of articles that can be provided at facilities such as shops existing around the information terminal 10. For example, the processing unit 11 may acquire a list of articles recommended by a facility (a recommended-article list) as the list of articles that can be provided by facilities such as shops existing around the information terminal 10.
  • the storage unit 22 of the server device 20 stores map information indicating the locations of a plurality of facilities.
  • the processing unit 11 transmits the current position of the information terminal 10 detected by the position detection sensor 15 to the server device 20 via the communication unit 17.
  • The server device 20, having received the current position of the information terminal 10, searches for facilities existing within a predetermined range from the current position of the information terminal 10 based on the map information stored in the storage unit 22, and transmits a list of the articles (article list) that can be provided at the facilities obtained as the search result to the information terminal 10 via the communication unit 23.
  • the information terminal 10 can acquire a list of articles that can be provided at the surrounding facilities.
  • the predetermined range can be set in advance and arbitrarily by the user.
  • the predetermined range may be the range of the distance from the information terminal 10 or the range of the time until arrival at the facility.
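The surrounding-facility search performed on the server device 20 side can be sketched as a distance filter over stored facility positions. The coordinates, facility data, and range value below are illustrative assumptions:

```python
# Hypothetical sketch of the server-side search: return facilities whose
# great-circle distance from the terminal's current position is within the
# user-configured predetermined range.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two latitude/longitude points, in km."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def facilities_in_range(current, facilities, range_km):
    return [f for f in facilities
            if haversine_km(current[0], current[1], f["lat"], f["lon"]) <= range_km]

facilities = [
    {"name": "Car Accessory Store", "lat": 35.68, "lon": 139.77},
    {"name": "Distant Shop", "lat": 34.70, "lon": 135.50},  # roughly 400 km away
]
near = facilities_in_range((35.69, 139.70), facilities, range_km=10.0)
print([f["name"] for f in near])  # → ['Car Accessory Store']
```

A time-based predetermined range would replace the distance predicate with an estimated travel time to each facility.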
  • the processing unit 11 determines whether or not there is an article conforming to the structure specified in S33 (hereinafter, may be referred to as a conforming article) in the article list acquired in S35. If there is a conforming article, the process proceeds to S37, and if there is no conforming article, the process returns to S33. Further, in S37, the processing unit 11 (second acquisition unit 11b) acquires the article information stored in the server device 20 (storage unit 22) associated with the conforming article from the server device 20. As described above, the article information may include information indicating the model and dimensions of the article, image data of the appearance of the article, facilities where the article is sold, and location information thereof.
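The conformity check over the acquired article list can be sketched as a simple filter. The `compatible_with` field is a hypothetical representation of the compatibility data in the article information:

```python
# Illustrative sketch: from the article list obtained from nearby facilities,
# keep the articles whose compatibility data lists the identified structure's
# model (the conforming articles).
def conforming_articles(article_list, structure_model):
    return [a for a in article_list if structure_model in a["compatible_with"]]

articles = [
    {"name": "Drive recorder X", "compatible_with": ["rearview_mirror_A"]},
    {"name": "Seat cover Y", "compatible_with": ["seat_C"]},
]
print([a["name"] for a in conforming_articles(articles, "rearview_mirror_A")])
# → ['Drive recorder X']
```

If the filtered list is empty, the flow returns to structure identification, as described above.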
  • The processing unit 11 (generation unit 11d) generates an augmented reality image of the conforming article based on the acquired article information (image data), and the processing unit 11 superimposes the augmented reality image of the conforming article on the structure specified in S33 in the captured image obtained by the camera 13 and displays it on the display 14.
  • At this time, the processing unit 11 moves the augmented reality image of the conforming article in accordance with the movement of the information terminal 10, based on the position and attitude information of the information terminal 10 detected by the position detection sensor 15 and the attitude detection sensor 16, so that the image captured by the camera 13 and the augmented reality image of the conforming article remain aligned.
  • Specifically, the processing unit 11 displays the augmented reality image of the conforming article on the display 14 so that it matches the position of the structure in the captured image displayed on the display 14, based on the information on the position and attitude of the information terminal 10. Further, based on the dimensional information of the conforming article and the dimensional information of the structure specified in S33, the processing unit 11 may display the augmented reality image 40 of the conforming article on the display 14 so that the actual dimensional relationship between the conforming article and the structure is reflected.
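The dimension-reflecting display can be sketched as a scale computation: once the structure's real size and on-screen size are known, the conforming article's on-screen size follows. The numbers below are illustrative:

```python
# Hypothetical sketch of the dimensional alignment: scale the AR image so
# that the on-screen size of the conforming article reflects its real
# dimensions relative to the structure it is superimposed on.
def ar_pixel_size(article_mm, structure_mm, structure_px):
    """Pixels per millimetre are derived from the structure's on-screen size."""
    px_per_mm = structure_px / structure_mm
    return round(article_mm * px_per_mm)

# A 300 mm wide rearview mirror occupies 600 px on the display, so a 90 mm
# wide drive recorder should be drawn 180 px wide.
print(ar_pixel_size(article_mm=90, structure_mm=300, structure_px=600))  # → 180
```

The same ratio would be applied to each axis of the AR image, and recomputed as the terminal moves and the structure's on-screen size changes.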
  • the processing unit 11 may display information on the facility where the conforming article is provided on the display 14.
  • the facility information may include, for example, information such as the homepage and telephone number of the facility, and information on the route to the facility.
  • the information of such a facility may be displayed on the display 14 when the user touches the augmented reality image of the conforming article on the display 14.
  • Suppose, for example, that the user photographs the inside of the vehicle with the information terminal 10 (camera 13) so as to include in-vehicle parts such as the rearview mirror and the center console, and that in S33 the rearview mirror, or the windshield in the vicinity of the rearview mirror, is identified as a structure. In this case, if the processing unit 11 has acquired in S35 a list of articles that can be provided at a car accessory store (facility) existing within a predetermined range from the information terminal 10, then in S37 it can acquire article information for a drive recorder compatible with the rearview mirror or the windshield near the rearview mirror.
  • Then, the augmented reality image 40a of the drive recorder generated based on the acquired article information (image data) is superimposed on the structure in the captured image (for example, the windshield near the rearview mirror) and displayed on the display 14.
  • As described above, in this embodiment, the structure in the photographed image is identified, and article information regarding an article (conforming article) conforming to the identified structure is acquired.
  • Then, an augmented reality image of the conforming article is generated based on the article information (image data), and the augmented reality image of the conforming article is superimposed on the structure in the captured image and displayed on the display.
  • The information providing system of the above embodiment is an information providing system (for example, 100) that uses an information terminal (for example, 10) having a camera (for example, 13) and a display (for example, 14) to provide a user with information on the compatibility between a designated article and a target location where the designated article is to be mounted.
  • The information terminal has a display control means (for example, 11f) for displaying a captured image obtained by the camera on the display, an acquisition means (for example, 11b) for acquiring image data of the designated article, and a generation means (for example, 11d) for generating an augmented reality image of the designated article based on the image data acquired by the acquisition means.
  • The display control means superimposes the augmented reality image of the designated article generated by the generation means, as the information, on the target location in the captured image and displays it on the display.
  • According to this configuration, if the user designates an article on the information terminal and photographs with the camera of the information terminal the target location where the designated article is to be mounted, the user can easily and intuitively grasp the suitability of the designated article for the target location.
  • The display control means displays the augmented reality image of the designated article on the display so that the actual dimensional relationship between the designated article and the target location is reflected. According to this configuration, the user can more easily grasp the dimensional relationship between the designated article and the target location, and can therefore grasp the suitability of the designated article for the target location more easily and intuitively.
  • the information terminal further includes a specific means (for example, 11c) for identifying the target location in the captured image.
  • the display control means superimposes and displays an augmented reality image of the designated article on the target location in the captured image specified by the specific means.
  • According to this configuration, the target location is identified on the information terminal side without the user indicating the position of the target location in the captured image, so the convenience of the user in grasping the suitability of the designated article for the target location can be improved.
  • the information terminal further has a receiving means (for example, 11e) for displaying a candidate list of articles on the display and accepting designation of articles by a user.
  • The designated article is an article whose designation by the user has been accepted by the reception means. According to this configuration, the convenience of the user when designating an article can be improved.
  • the reception means displays on the display as the candidate list the articles that can be provided at the facility existing within a predetermined range from the current position of the information terminal.
  • According to this configuration, information on articles provided (sold) at nearby facilities can be presented to the user, for example while traveling, so the user can, based on that information, drop in at a facility to obtain a necessary article, which improves the convenience of the user.
  • the reception means displays a list of articles belonging to the category input by the user on the display as the candidate list. According to this configuration, the user can search for the necessary articles by category, so that the convenience of the user can be improved.
  • The information providing system of the above embodiment is an information providing system (for example, 100) that provides information to a user using an information terminal (for example, 10) having a camera (for example, 13) and a display (for example, 14).
  • The information terminal has a display control means (for example, 11f) for displaying a captured image obtained by the camera on the display, a specific means (for example, 11c) for identifying a structure contained in the captured image displayed on the display, an acquisition means (for example, 11b) for acquiring image data of an article conforming to the structure identified by the specific means among a plurality of articles that can be provided at a facility existing within a predetermined range from the current position of the information terminal, and a generation means (for example, 11d) for generating an augmented reality image of the article based on the image data acquired by the acquisition means.
  • The display control means superimposes the augmented reality image of the article generated by the generation means on the structure in the captured image and displays it on the display. According to this configuration, the user can easily and intuitively obtain information on an article that matches a structure in the captured image obtained by the camera of the information terminal, without designating the article on the information terminal.


Abstract

An information provision system uses an information terminal having a camera and a display to provide a user with information pertaining to the compatibility of a designated article with the applicable site to which the designated article is to be mounted. The information terminal has a display control means which displays images captured by the camera on the display, an acquiring means which acquires image data for the designated article, and a generating means which generates an augmented reality image of the designated article on the basis of the image data acquired by the acquiring means. The display control means displays on the display, as the information, the augmented reality image of the designated article generated by the generating means, overlaid on the applicable site within the captured image.

Description

Information provision system and information terminal
 The present invention relates to an information providing system and an information terminal that provide information to a user.
 Applications that provide services by connecting to a network via a mobile terminal such as a smartphone have been devised. For example, Patent Document 1 discloses displaying a captured image on the display of a portable wireless communication terminal (smartphone) while superimposing guidance (names) for components included in the captured image on the display, and displaying the operation manual of a component on the display when the superimposed guidance for that component is pressed.
Japanese Unexamined Patent Publication No. 2014-215845
 For example, when considering the purchase of an in-vehicle part, a user may want to know whether the part can be mounted on or attached to the vehicle; when considering the purchase of a large article, the user may want to know whether the article can be loaded into the vehicle. In other words, the user may wish to easily grasp, before purchase, whether an article under consideration is compatible with the vehicle (that is, the suitability of the article for the vehicle). The user may also wish to easily grasp information on articles that fit a structure of the vehicle.
 Therefore, an object of the present invention is to provide information desired by a user to that user easily and intuitively.
 An information providing system as one aspect of the present invention is an information providing system that uses an information terminal having a camera and a display to provide a user with information on the compatibility between a designated article and a target location where the designated article is to be mounted. The information terminal has a display control means for displaying a captured image obtained by the camera on the display, an acquisition means for acquiring image data of the designated article, and a generation means for generating an augmented reality image of the designated article based on the image data acquired by the acquisition means, and the display control means superimposes the augmented reality image of the designated article generated by the generation means, as the information, on the target location in the captured image and displays it on the display.
 According to the present invention, information desired by a user can be provided to the user through augmented reality, so that the user can grasp the information easily and intuitively.
 The accompanying drawings are included in the specification, form a part thereof, show embodiments of the present invention, and are used together with the description to explain the principles of the present invention. The drawings are as follows:
 Block diagram showing the configuration of the information provision system
 Flowchart showing the process of accepting the designation of an article
 Flowchart showing the process of providing a user with information on the suitability of a designated article
 Diagram showing a display example of the initial screen
 Diagram showing a display example of the facility list
 Diagram showing a display example of a list of article categories
 Diagram showing a display example of a candidate list of articles
 Diagram showing a display example of article information
 Diagram showing the inside of a vehicle being photographed with an information terminal
 Diagram showing a display example of an augmented reality image of a designated article
 Diagram showing a display example of an augmented reality image of a designated article
 Diagram showing a display example of an augmented reality image of a designated article
 Diagram showing an example in which the information provision system is applied to a motorcycle
 Diagram showing an example in which the information provision system is applied to a cultivator
 Flowchart showing the process of providing information on conforming articles to the user
 Hereinafter, embodiments will be described in detail with reference to the accompanying drawings. The following embodiments do not limit the invention according to the claims, and not all combinations of the features described in the embodiments are essential to the invention. Two or more of the features described in the embodiments may be combined arbitrarily. The same or similar configurations are denoted by the same reference numerals, and duplicate descriptions are omitted.
 <First Embodiment>
 The first embodiment according to the present invention will be described. FIG. 1 is a block diagram showing the configuration of the information providing system 100 of the present embodiment. The information providing system 100 of the present embodiment includes, for example, an information terminal 10 and a server device 20 that are communicably connected to each other via a network NTW, and is a system for providing a user with information on the compatibility between an article designated by the user and a target location of the vehicle on which the article is to be mounted. In the following, the article designated by the user may be referred to as a "designated article", and the target location where the designated article is to be mounted may be referred to as a "planned mounting location". Further, in the present embodiment, a four-wheeled vehicle is used as an example of the vehicle.
 First, the configuration of the information terminal 10 will be described. The information terminal 10 may include, for example, a processing unit 11, a storage unit 12, a camera 13, a display 14, a position detection sensor 15, an attitude detection sensor 16, and a communication unit 17. The units of the information terminal 10 are communicably connected to one another via a system bus 18. Examples of the information terminal 10 include a smartphone and a tablet terminal; in the present embodiment, an example in which a smartphone is used as the information terminal 10 will be described. Smartphones and tablet terminals are mobile terminals having various functions other than the call function, but their display dimensions differ; in general, a tablet terminal has a larger display than a smartphone.
 The processing unit 11 includes a processor typified by a CPU, a storage device such as a semiconductor memory, an interface with external devices, and the like. The storage unit 12 stores programs executed by the processor, data used by the processor for processing, and the like, and the processing unit 11 can read a program stored in the storage unit 12 into a storage device such as a memory and execute it. In the present embodiment, the storage unit 12 stores an application program (information providing program) for providing a user with information on the compatibility between a designated article and a planned mounting location of the vehicle, and the processing unit 11 can read the information providing program stored in the storage unit 12 into a storage device such as a memory and execute it.
 The camera 13 has a lens and an image sensor, and captures an image of a subject to acquire a captured image. The camera 13 may be provided, for example, on the outer surface opposite to the outer surface on which the display 14 is provided. The display 14 notifies the user of information by displaying images; in the present embodiment, the display 14 can display the captured image acquired by the camera 13 in real time. The display 14 of the present embodiment includes, for example, a touch-panel LCD (Liquid Crystal Display) and has a function of receiving information input from the user in addition to the function of displaying images. However, the configuration is not limited to this: the display 14 may have only the image display function, and an input unit (for example, a keyboard or a mouse) may be provided independently of the display 14.
 The position detection sensor 15 detects the position and orientation of the information terminal 10. As the position detection sensor 15, for example, a GPS sensor that receives signals from GPS satellites to acquire the current position of the information terminal 10, or an orientation sensor that detects the direction in which the camera 13 of the information terminal 10 is pointed based on geomagnetism or the like, can be used. In the present embodiment, the "position of the information terminal 10" includes the orientation of the information terminal 10 in addition to its current position. The attitude detection sensor 16 detects the attitude of the information terminal 10; as the attitude detection sensor 16, for example, an acceleration sensor or a gyro sensor can be used.
 The communication unit 17 is communicably connected to the server device 20 via the network NTW. Specifically, the communication unit 17 functions as a transmission unit that transmits information to the server device 20 via the network NTW and as a reception unit that receives information from the server device 20 via the network NTW.
 As a specific configuration of the processing unit 11, for example, a first acquisition unit 11a, a second acquisition unit 11b, a specific unit 11c, a generation unit 11d, a reception unit 11e, and a display control unit 11f may be provided. The first acquisition unit 11a acquires the data of the captured image obtained by the camera 13. The second acquisition unit 11b acquires, from the server device 20, the article information and the like that is stored in the server device 20 in association with an article. The specific unit 11c analyzes the captured image by performing image processing such as a pattern matching method, and identifies the planned mounting location in the captured image displayed on the display 14. The generation unit 11d generates an augmented reality (AR) image of the designated article based on image data of the appearance of the article. The reception unit 11e executes a process of accepting the designation of an article by the user. The display control unit 11f displays the captured image acquired by the first acquisition unit 11a on the display 14. Further, when the generation unit 11d generates an augmented reality image of the designated article, the display control unit 11f superimposes the augmented reality image of the designated article on the captured image and displays it on the display 14, based on the position and attitude of the information terminal 10 detected by the position detection sensor 15 and the attitude detection sensor 16, respectively.
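As a schematic sketch (not the actual implementation), the cooperation of these units for one frame might be wired as follows; all collaborator objects are hypothetical stand-ins for the camera, server, and display:

```python
# Schematic sketch of how the units of the processing unit 11 could cooperate:
# acquire a frame (11a), fetch article information (11b), identify the mounting
# location (11c), generate the AR image (11d), and composite for display (11f).
class ProcessingUnit:
    def __init__(self, camera, server, locator, renderer, display):
        self.camera, self.server = camera, server
        self.locator, self.renderer, self.display = locator, renderer, display

    def provide_information(self, article_id):
        frame = self.camera()                    # first acquisition unit 11a
        info = self.server(article_id)           # second acquisition unit 11b
        location = self.locator(frame)           # specific unit 11c
        ar_image = self.renderer(info)           # generation unit 11d
        self.display(frame, ar_image, location)  # display control unit 11f
        return location

shown = []
unit = ProcessingUnit(
    camera=lambda: "frame",
    server=lambda aid: {"id": aid, "image": "wheel.png"},
    locator=lambda frame: (120, 80),             # mounting location in pixels
    renderer=lambda info: f"AR<{info['image']}>",
    display=lambda *args: shown.append(args),
)
print(unit.provide_information("wheel-01"))  # → (120, 80)
```

In the real terminal, the locator would run pattern matching on the captured image and the display step would re-run per frame using the sensor-derived position and attitude.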
 Next, the configuration of the server device 20 will be described. The server device 20 may include a processing unit 21, a storage unit 22, and a communication unit 23. The processing unit 21 includes a processor typified by a CPU, a storage device such as a semiconductor memory, an interface with external devices, and the like. The storage unit 22 stores programs executed by the processor, data used by the processor for processing, and the like, and the processing unit 21 can read a program stored in the storage unit 22 into a storage device such as a memory and execute it. The communication unit 23 is communicably connected to the information terminal 10 via the network NTW. Specifically, the communication unit 23 functions as a reception unit that receives information from the information terminal 10 via the network NTW and as a transmission unit that transmits information to the information terminal 10 via the network NTW.
 In the present embodiment, the server device 20 (storage unit 22) stores article information for each of a plurality of types of articles. An article may be, for example, an in-vehicle part attached to a vehicle, or an object (including its packaging container) loaded into the luggage compartment of a vehicle for transportation. The article information may include, for example, information indicating the model and dimensions of the article, image data of the appearance of the article, the facilities where the article is sold and their location information, and information posted by users of the article. The posted information is information contributed by users of each article and may include, for example, the vehicle models on which the article can be mounted, the orientation of the article when loading it into the luggage compartment of a vehicle, and how to fold a cardboard box (as an article) when loading it onto a vehicle. The posted information may also include information not described in the operation manual of the article, such as how to use the article more comfortably and its ease of use.
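As an illustration, the article information described above could be modeled as follows. The field names are assumptions based on the items just listed, not the patent's actual schema:

```python
# Illustrative data model for the article information stored in the storage
# unit 22 of the server device 20.
from dataclasses import dataclass, field

@dataclass
class ArticleInfo:
    model: str                   # model/type designation of the article
    dimensions_mm: tuple         # (width, depth, height)
    appearance_image: str        # path or URL of the appearance image data
    sold_at: list = field(default_factory=list)     # facilities and locations
    user_posts: list = field(default_factory=list)  # tips posted by users

info = ArticleInfo(
    model="DR-100",
    dimensions_mm=(90, 40, 50),
    appearance_image="dr100.png",
    sold_at=[{"facility": "Car Accessory Store", "lat": 35.68, "lon": 139.77}],
    user_posts=["Fits vehicle model ZZZ; mount behind the rearview mirror."],
)
print(info.model)  # → DR-100
```

The dimensions feed the fit check and AR scaling, the appearance image feeds AR-image generation, and the facility entries feed the surrounding-facility search.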
 Next, the processing performed by the information terminal 10 when the information providing program is executed will be described. FIGS. 2A and 2B are flowcharts showing the processing performed by the processing unit 11 of the information terminal 10. FIG. 2A is a flowchart showing the process of accepting the designation of an article by the user, and FIG. 2B is a flowchart showing the process of providing the user with information on the suitability of the designated article for the vehicle.
 First, the process of accepting the designation of an article by the user will be described with reference to the flowchart shown in FIG. 2A. The processing of this flowchart can be performed by the reception unit 11e of the processing unit 11. FIGS. 3 to 7 show images displayed on the display 14 of the information terminal 10 and are used to explain the processing performed by the processing unit 11.
 In S11, the processing unit 11 displays an initial screen on the display 14. FIG. 3 shows the initial screen displayed on the display 14 of the information terminal 10. As shown in FIG. 3, the initial screen may include, for example, an input field 31a into which the user enters an article category (classification or type) and a search button 31b for receiving the user's instruction to start searching for articles. In the example shown in FIG. 3, "in-vehicle component" has been entered in the input field 31a as the article category.
 In S12, the processing unit 11 determines whether the user has touched the search button 31b on the display 14. If the search button 31b has been touched, the process proceeds to S13; otherwise, S12 is repeated. In S13, the processing unit 11 determines whether the user has entered an article category in the input field 31a. If the search button 31b was touched with no category entered, the process proceeds to S14 and executes a selection process (S14 to S15) that lets the user select an article category. On the other hand, if the search button 31b was touched with a category already entered, the selection process of S14 to S15 is skipped and the process proceeds to S16.
 In S14, the processing unit 11 displays on the display 14 a list of facilities, such as shops, located around the information terminal 10. Specifically, the storage unit 22 of the server device 20 stores map information indicating the locations of a plurality of facilities. The processing unit 11 transmits the current position of the information terminal 10, detected by the position detection sensor 15, to the server device 20 via the communication unit 17. On receiving the current position of the information terminal 10, the server device 20 searches the map information stored in the storage unit 22 for facilities within a predetermined range of that position and transmits the resulting list of facilities to the information terminal 10 via the communication unit 23. The information terminal 10 then displays an image showing the received list of facilities on the display 14, as shown in FIG. 4. In the example shown in FIG. 4, the area displaying each facility's name serves as a selection button 32, and the user can select a facility by touching one of the selection buttons 32 on the display 14. When a facility has been selected by the user, the process proceeds to S15. The predetermined range can be set in advance and arbitrarily by the user; it may be a range of distance from the information terminal 10 or a range of travel time to the facility.
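The facility search in S14 can be pictured as a radius filter over the stored map information. Below is a minimal sketch under stated assumptions: hypothetical `Facility` records with latitude/longitude and a great-circle (haversine) distance; the patent does not specify the distance computation or the data layout.

```python
import math
from dataclasses import dataclass

@dataclass
class Facility:
    name: str
    lat: float  # degrees
    lon: float  # degrees

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def facilities_in_range(facilities, cur_lat, cur_lon, range_km):
    """Return the facilities within the user-set predetermined range
    of the terminal's current position (the S14 search)."""
    return [f for f in facilities
            if haversine_km(cur_lat, cur_lon, f.lat, f.lon) <= range_km]
```

A time-based predetermined range, also mentioned in the text, would replace the distance predicate with a travel-time estimate from a routing service.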
 In S15, the processing unit 11 displays on the display 14 a list of categories of articles offered (sold) at the facility selected in S14. Specifically, the storage unit 22 of the server device 20 stores, for each of the plurality of facilities, the article information of the articles that facility can offer, organized by article category and associated with the facility. By acquiring the article information associated with the facility selected in S14 from the server device 20 (storage unit 22), the processing unit 11 can display a list of article categories on the display 14, as shown in FIG. 5. In the example shown in FIG. 5, the area displaying each category name serves as a selection button 33, and the user can select an article category by touching one of the selection buttons 33 on the display 14.
 In S16, the processing unit 11 displays on the display 14 a candidate list of articles for the article category designated by the user. The designated category is either the category entered by the user in the input field 31a of the initial screen or the category selected by the user in S15. For example, by acquiring from the server device 20 the article information stored in the storage unit 22 in association with the designated category, the processing unit 11 can display a candidate list of articles on the display 14, as shown in FIG. 6. The example shown in FIG. 6 assumes that "drive recorder" has been designated as the article category; the area displaying each article's model and the like serves as a selection button 34, and the user can designate (select) an article by touching one of the selection buttons 34 on the display 14. If a facility was selected in S14, the processing unit 11 may display a candidate list of the articles offered (sold) at that facility.
 In S17, the processing unit 11 determines whether the user has designated an article. If an article has been designated, the process proceeds to S18; otherwise, S17 is repeated. In S18, the processing unit 11 (second acquisition unit 11b) acquires from the server device 20 the article information of the article designated in S17 (the designated article) and displays it on the display 14. As described above, the article information includes, for example, information indicating the model and dimensions of the article, image data of the article's appearance, and information posted by users of the article, and this information and data are displayed on the display 14. FIG. 7 shows an example of the article-information screen for a drive recorder as the designated article. As shown in FIG. 7, the screen includes a display field 35a for the model of the designated article, a display field 35b for its appearance, a display field 35c for its dimensions, and a display field 35d for posted information. The screen also includes an OK button 35e and a cancel button 35f. If the user touches the OK button 35e, the process proceeds to S21 in FIG. 2B; if the user touches the cancel button 35f, the process ends.
 Next, the process of providing the user with information on the suitability of the designated article for a vehicle will be described with reference to the flowchart shown in FIG. 2B. FIGS. 8 to 10B show images displayed on the display 14 of the information terminal 10 and are used to explain the processing performed by the processing unit 11 of the information terminal 10.
 In S21, the processing unit 11 determines, based on the article information of the designated article, the target location on the vehicle (for example, a structure or a position on the vehicle) where the designated article is to be mounted. Hereinafter, this target location may be referred to as the "planned mounting location". For example, the article information may include information indicating whether the article is an in-vehicle component and, if it is, information indicating its planned mounting location. The processing unit 11 can therefore determine from the article information of the designated article whether it is an in-vehicle component and determine its planned mounting location. Specifically, if the designated article is a drive recorder, which is an in-vehicle component, the processing unit 11 can determine from the article information that its planned mounting location is the windshield near the rearview mirror. On the other hand, if the processing unit 11 determines from the article information that the designated article is not an in-vehicle component, it can set the planned mounting location to the luggage compartment (loading section) of the vehicle.
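The branch in S21 reduces to a small lookup on the article information: an in-vehicle component carries its own mounting location, and anything else defaults to the luggage compartment. A minimal sketch follows; the dictionary field names are illustrative assumptions, not taken from the patent.

```python
def planned_mounting_location(article_info):
    """Decide where the designated article is to be mounted (the S21 branch).

    `article_info` is a hypothetical record derived from the stored
    article information; field names are illustrative only.
    """
    if article_info.get("is_vehicle_part"):
        # In-vehicle components name their own mounting location,
        # e.g. a drive recorder -> windshield near the rearview mirror.
        return article_info["mounting_location"]
    # Anything else is treated as cargo and goes to the luggage compartment.
    return "luggage_compartment"
```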
 In S22, the processing unit 11 requests the user to photograph the planned mounting location determined in S21. For example, if the designated article is a drive recorder, the processing unit 11 displays on the display 14 a comment requesting a photograph of the windshield near the rearview mirror, which is the planned mounting location. If the designated article has been determined not to be an in-vehicle component, the processing unit 11 displays on the display 14 a comment requesting a photograph of the vehicle's luggage compartment, which is the planned mounting location.
 In S23, the processing unit 11 (first acquisition unit 11a) causes the camera 13 to start shooting and acquires captured images from the camera 13. In S24, the processing unit 11 (display control unit 11f) sequentially displays the captured images acquired from the camera 13 on the display 14. FIG. 8 shows the interior of a vehicle being photographed with the information terminal 10 (camera 13) so that the rearview mirror and the center console are included.
 In S25, the processing unit 11 (identification unit 11c) identifies, in the captured image displayed on the display 14, the planned mounting location determined in S21. For example, by performing known image processing, the processing unit 11 can determine which part of the vehicle the camera 13 has captured to obtain the image. One example of such known image processing is to detect, in the captured image, portions (feature points) having feature quantities such as corners, curvature, changes in brightness, or changes in color, and to recognize the photographed part of the vehicle from feature information indicating the feature quantities, positional relationships, and the like of the detected feature points. By such a method, the processing unit 11 can identify the planned mounting location in the captured image.
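As a toy illustration of the feature-point idea in S25 (not the patent's algorithm, which is described only as "known image processing"), the sketch below marks pixels of a grayscale grid whose local intensity change exceeds a threshold, a crude stand-in for the corner and brightness-change detectors named in the text.

```python
def feature_points(img, threshold):
    """Return (row, col) positions where the local intensity change is large.

    `img` is a 2D list of grayscale values. The score is the maximum
    absolute difference to the 4-connected neighbours - a deliberately
    crude proxy for corner/brightness-change feature detection.
    """
    h, w = len(img), len(img[0])
    pts = []
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            score = max(abs(img[r][c] - img[r + dr][c + dc])
                        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)))
            if score > threshold:
                pts.append((r, c))
    return pts
```

A production implementation would instead use an established detector (e.g. a corner or keypoint detector from an image-processing library) and describe each point with a local descriptor for matching.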
 In S26, the processing unit 11 determines whether the planned mounting location has been identified in the captured image displayed on the display 14. If the planned mounting location has been identified in the captured image, the process proceeds to S27; otherwise, the process returns to S25.
 In S27, the processing unit 11 acquires dimensional information of the planned mounting location identified in S25 to S26. For example, the storage unit 12 of the information terminal 10 stores feature information for each of a plurality of vehicle models, and the processing unit 11 can identify the vehicle model whose feature information has a high degree of agreement (that is, a degree of agreement exceeding a predetermined value) with the feature information identified in S25 to S26. In addition, the storage unit 22 of the server device 20 stores, for each vehicle model, dimensional information of each part of the vehicle in association with that model. The processing unit 11 of the information terminal 10 transmits the identified vehicle model to the server device 20 and receives from the server device 20 the dimensional information of each part stored in association with that model. In this way, the processing unit 11 of the information terminal 10 can acquire the dimensional information of the planned mounting location identified in S25 to S26.
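The model lookup in S27 amounts to scoring the stored per-model feature information against the observed features and accepting a model only when the agreement exceeds the predetermined value. A minimal sketch, assuming feature information is summarized as a numeric vector and using cosine similarity as the "degree of agreement" (the patent leaves both choices open):

```python
import math

def agreement(v1, v2):
    """Cosine similarity as an illustrative 'degree of agreement' in [0, 1]."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def identify_model(observed, model_features, threshold):
    """Return the vehicle model whose stored features best agree with the
    observed features, provided the agreement exceeds `threshold`;
    return None when no model qualifies (so no dimensions are fetched)."""
    best_model, best_score = None, threshold
    for model, feats in model_features.items():
        score = agreement(observed, feats)
        if score > best_score:
            best_model, best_score = model, score
    return best_model
```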
 In S28, the processing unit 11 (generation unit 11d) generates an augmented reality image 40 of the designated article based on the image data of its appearance. The processing unit 11 (display control unit 11f) then superimposes the generated augmented reality image 40 of the designated article on the planned mounting location in the image captured by the camera 13 and displays it on the display 14. At this time, based on the position and orientation of the information terminal 10 detected by the position detection sensor 15 and the orientation detection sensor 16, the processing unit 11 aligns the captured image from the camera 13 with the augmented reality image 40 of the designated article so that the augmented reality image 40 moves in accordance with the movement of the information terminal 10. That is, based on the position and orientation of the information terminal 10, the processing unit 11 displays the augmented reality image 40 of the designated article on the display 14 so that it matches the position of the planned mounting location in the displayed captured image. Further, based on the dimensional information of the designated article and the dimensional information of the planned mounting location acquired in S27, the processing unit 11 displays the augmented reality image 40 of the designated article on the display 14 so that the actual dimensional relationship between the designated article and the planned mounting location is reflected. The image data of the designated article's appearance and its dimensional information are included in the article information acquired from the server device 20 in S18.
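The size-matching step in S28 can be reduced to a single ratio: once the identified mounting location provides a real-world-to-pixel scale, the article's on-screen size follows from its real dimensions. A minimal sketch; the function name and the flat pixels-per-millimetre scaling are illustrative assumptions, not the patent's method (a full implementation would account for perspective).

```python
def ar_image_size_px(article_w_mm, article_h_mm, location_w_mm, location_w_px):
    """Scale the article's AR image so that the overlay and the captured
    image share one pixels-per-millimetre factor derived from the
    identified mounting location's known real width."""
    px_per_mm = location_w_px / location_w_mm
    return round(article_w_mm * px_per_mm), round(article_h_mm * px_per_mm)
```

For example, if a 1000 mm wide windshield section spans 500 px on screen, a 100 mm x 50 mm drive recorder is drawn at 50 px x 25 px, preserving the actual dimensional relationship described in the text.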
 For example, FIG. 9 shows an example in which, with a drive recorder as the designated article, an augmented reality image 40a of the drive recorder is superimposed on the planned mounting location (the windshield near the rearview mirror) in the captured image and displayed on the display 14. In the example shown in FIG. 9, a captured image of the vehicle interior including the rearview mirror 41 and the center console 42 is displayed on the display 14, and the augmented reality image 40a of the drive recorder is displayed on the windshield near the rearview mirror, which has been identified as the planned mounting location. This allows the user to grasp easily and intuitively, before purchasing the drive recorder, its suitability for the vehicle, such as the positional and dimensional relationships between the vehicle and the drive recorder.
 FIGS. 10A and 10B show an example in which, with a transported object (for example, the packaging container of a desk or bed) as the designated article, an augmented reality image 40b of the transported object is superimposed on the planned mounting location (the luggage compartment of the vehicle 43) in the captured image and displayed on the display 14. In the examples shown in FIGS. 10A and 10B, a captured image of the rear (luggage compartment) of the vehicle 43 with the back door open is displayed on the display 14, and the augmented reality image 40b of the transported object is displayed on the luggage compartment identified as the planned mounting location. FIG. 10A shows a case in which the dimensions of the transported object are smaller than those of the luggage compartment of the vehicle 43, and FIG. 10B shows a case in which they are larger. This allows the user to grasp easily and intuitively, before purchasing the transported object, its suitability for the vehicle, such as whether it can be loaded into the luggage compartment of the vehicle 43.
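The comparison behind FIGS. 10A and 10B, whether the transported object fits the luggage compartment, can be sketched as a dimension check that also tries the article's axis-aligned rotations. The patent only shows the visual overlay; the rotation (permutation) check is an added assumption.

```python
from itertools import permutations

def fits_in_compartment(article_dims, compartment_dims):
    """True if an article with dimensions (w, d, h) fits the compartment
    in at least one axis-aligned orientation."""
    return any(all(a <= c for a, c in zip(orient, compartment_dims))
               for orient in permutations(article_dims))
```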
 As described above, in the information providing system 100 of this embodiment, the augmented reality image of the article designated by the user is superimposed on the planned mounting location in the image captured by the camera 13 and displayed on the display 14 as information indicating the suitability of the designated article for the planned mounting location. Thus, by designating an article on the information terminal 10 and photographing the planned mounting location of the vehicle with the camera 13 of the information terminal 10, the user can grasp easily and intuitively the suitability of the designated article for that location.
 <Second Embodiment>
 A second embodiment of the present invention will be described. The first embodiment described above applies the information providing system 100 to a four-wheeled vehicle, but the system is not limited to this. For example, the information providing system 100 can also be applied to saddle-ride type vehicles (motorcycles, three-wheeled vehicles) and to other machines and devices such as ships, aircraft, cultivators, lawnmowers, generators, and snowplows. In this embodiment, examples in which the above information providing system is applied to a motorcycle and to a cultivator will be described.
 First, an example of applying the information providing system 100 to a motorcycle will be described. In this example, by executing the flowcharts shown in FIGS. 2A and 2B, a phosphorescent cover to be mounted on the key insertion portion 51 of a motorcycle 50 has been designated as the article, and the key insertion portion 51 has been determined as the planned mounting location. FIG. 11 shows the vicinity of the key insertion portion 51 of the motorcycle 50 being photographed with the information terminal 10 (camera 13). In the example shown in FIG. 11, an augmented reality image 40c of the cover for the key insertion portion 51 is superimposed on the key insertion portion 51 in the captured image and displayed on the display 14. This allows the user to grasp easily and intuitively, before purchasing the cover, its suitability for the motorcycle 50 the user owns, such as the positional and dimensional relationships between the key insertion portion 51 and the cover.
 Next, an example of applying the information providing system 100 to a cultivator will be described. In this example, by executing the flowcharts shown in FIGS. 2A and 2B, moving wheels mounted as optional equipment on the left and right of the claw portion 61 of a cultivator 60 have been designated as the article, and the claw portion 61 has been determined as the planned mounting location. FIG. 12 shows the cultivator 60 being photographed with the information terminal 10 (camera 13). In the example shown in FIG. 12, an augmented reality image 40d of the wheels is superimposed on the claw portion 61 in the captured image and displayed on the display 14. This allows the user to grasp easily and intuitively the suitability of the wheels for the cultivator 60 the user owns before purchasing them. In S27 of the flowchart shown in FIG. 2B, the model of the cultivator 60 (vehicle) is identified in order to acquire the dimensional information of the planned mounting location. In this case, if the processing unit 11 determines from the article information of the wheels that they do not fit (that is, cannot be attached to) the identified model of the cultivator 60, it may display on the display 14 a comment to that effect or a mark such as a cross.
 <Third Embodiment>
 A third embodiment of the present invention will be described. This embodiment basically inherits the first and second embodiments, and the terms, definitions, and the like are as described in those embodiments. This embodiment describes, as processing performed by the information terminal 10 when the information provision program is executed, a process of identifying a structure included in the image captured by the camera 13 and providing the user with information on articles that conform to the identified structure.
 For example, when the information provision program is started, the information terminal 10 (processing unit 11) displays on the display 14 a screen for letting the user select either a first mode or a second mode, and executes the selected mode. The first mode is a mode for acquiring article information on an article designated by the user; when the first mode is selected, the processing described in the first and second embodiments is executed. The second mode is a mode for identifying a structure in the captured image and acquiring article information on articles that conform to the identified structure; when the second mode is selected, the processing described below is executed.
 FIG. 13 is a flowchart showing the processing performed by the processing unit 11 of the information terminal 10 when the user selects the second mode. In S31, the processing unit 11 (first acquisition unit 11a) causes the camera 13 to start shooting and acquires captured images from the camera 13. In S32, the processing unit 11 (display control unit 11f) sequentially displays the captured images acquired from the camera 13 on the display 14.
 In S33, the processing unit 11 (identification unit 11c) identifies a structure included in the captured image displayed on the display 14. For example, the processing unit 11 first recognizes a structure in the captured image by performing known image processing. One example of such processing, as described for S25 in the flowchart of FIG. 2B, is to detect feature points in the captured image and recognize the structure from feature information indicating the feature quantities, positional relationships, and the like of the detected feature points. The storage unit 22 of the server device 20 stores feature information for each of a plurality of types of structures in association with the model of each structure, and the processing unit 11 of the information terminal 10 checks (determines) whether there is a structure whose stored feature information has a high degree of agreement with the recognized structure (that is, a degree of agreement exceeding a predetermined value). In this way, the processing unit 11 can identify the model of the structure included in the captured image. As an example, as shown in FIG. 8, when the user photographs the vehicle interior with the information terminal 10 (camera 13) so that in-vehicle components such as the rearview mirror and the center console are included, the processing unit 11 can identify the rearview mirror as a structure.
 In S34, the processing unit 11 determines whether a structure has been identified in the captured image displayed on the display 14. If a structure has been identified in the captured image, the process proceeds to S35; otherwise, the process returns to S33.
 In S35, the processing unit 11 acquires a list of articles that can be offered at facilities, such as shops, located around the information terminal 10. For example, the processing unit 11 may acquire, as such a list, a list of articles recommended by those facilities (a list of recommended articles).
 Specifically, the storage unit 22 of the server device 20 stores map information indicating the locations of a plurality of facilities. The processing unit 11 transmits the current position of the information terminal 10, detected by the position detection sensor 15, to the server device 20 via the communication unit 17. On receiving the current position of the information terminal 10, the server device 20 searches the map information stored in the storage unit 22 for facilities within a predetermined range of that position and transmits a list of the articles that can be offered at the retrieved facilities (the article list) to the information terminal 10 via the communication unit 23. In this way, the information terminal 10 can acquire the list of articles that can be offered at nearby facilities. The predetermined range can be set in advance and arbitrarily by the user; it may be a range of distance from the information terminal 10 or a range of travel time to the facility.
 In S36, the processing unit 11 determines whether the article list acquired in S35 contains an article that fits the structure identified in S33 (hereinafter sometimes called a conforming article). If a conforming article exists, the process proceeds to S37; otherwise, it returns to S33. In S37, the processing unit 11 (second acquisition unit 11b) acquires from the server device 20 the article information stored in the server device 20 (storage unit 22) in association with the conforming article. As described above, the article information may include information indicating the model and dimensions of the article, image data of its appearance, and the facility where the article is sold along with its location.
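The check in S36 amounts to filtering the acquired article list by the identified structure. In this sketch the compatibility table is a hypothetical stand-in for the association stored in the server device 20; the embodiment does not specify how fitness is recorded.

```python
# Hypothetical compatibility table: structure type -> article models that fit it.
COMPATIBILITY = {
    "rearview mirror": {"drive recorder X", "mirror cover Z"},
    "center console": {"cup holder W"},
}

def find_conforming(structure, article_list):
    """Return the articles in article_list that fit the identified structure (S36)."""
    fits = COMPATIBILITY.get(structure, set())
    return [a for a in article_list if a in fits]

print(find_conforming("rearview mirror", ["drive recorder X", "roof box Y"]))
```

If the returned list is empty, the flow corresponds to the "no conforming article" branch back to S33; otherwise S37 would fetch the stored article information for each hit.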
 In S38, the processing unit 11 (generation unit 11d) generates an augmented reality image 40 of the conforming article based on the image data of its appearance. The processing unit 11 (display control unit 11f) then superimposes this augmented reality image on the structure identified in S33 within the captured image obtained by the camera 13, and displays the result on the display 14. At this time, based on the position and orientation of the information terminal 10 detected by the position detection sensor 15 and the orientation detection sensor 16, the processing unit 11 aligns the augmented reality image of the conforming article with the image captured by the camera 13 so that the augmented reality image moves together with the information terminal 10. That is, based on the position and orientation of the information terminal 10, the processing unit 11 displays the augmented reality image of the conforming article on the display 14 so that it matches the position of the structure in the displayed captured image. In addition, based on the dimensional information of the conforming article and that of the structure identified in S33, the processing unit 11 preferably displays the augmented reality image 40 on the display 14 so that the actual dimensional relationship between the conforming article and the structure is reflected.
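The dimensional alignment in S38 can be sketched as a simple pixels-per-millimeter scaling derived from the structure's known real size and its size on screen. The helper and its inputs are illustrative assumptions; a real AR pipeline would use the full camera pose rather than a flat scale factor.

```python
def overlay_size_px(article_mm, structure_mm, structure_px):
    """Scale the article's real dimensions into on-screen pixels so that the
    article/structure dimensional relationship is preserved (S38).
    article_mm, structure_mm: real (width, height) in millimeters.
    structure_px: on-screen (width, height) of the identified structure."""
    px_per_mm = structure_px[0] / structure_mm[0]  # scale inferred from the structure
    return (round(article_mm[0] * px_per_mm), round(article_mm[1] * px_per_mm))

# A 90 mm wide drive recorder over a 300 mm wide mirror region shown 600 px wide:
print(overlay_size_px((90, 50), (300, 80), (600, 160)))
```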
 In S38, the processing unit 11 may also display, on the display 14, information about the facility where the conforming article is provided. The facility information may include, for example, the facility's website and telephone number, and a route to the facility. Such facility information may be displayed on the display 14 when the user touches the augmented reality image of the conforming article on the display 14.
 As an example, suppose that, as shown in FIG. 8, the user photographs the vehicle interior with the information terminal 10 (camera 13) so that in-vehicle parts such as the rearview mirror and the center console are included, and that in S33 the rearview mirror, or the windshield near it, is identified as the structure. In this case, if the processing unit 11 acquires in S35 a list of articles available at a car accessory store (facility) within the predetermined range of the information terminal 10, then in S37 the article information of a drive recorder that fits the rearview mirror, or the windshield near it, can be acquired. Then, as shown in FIG. 9, an augmented reality image 40a of the drive recorder, generated from the acquired article information (image data), is superimposed on the structure in the captured image (for example, the windshield near the rearview mirror) and displayed on the display 14.
 In this way, in the second mode, a structure in the captured image is identified, and article information about an article that fits the identified structure (a conforming article) is acquired. An augmented reality image of the conforming article is then generated from the article information (image data), superimposed on the structure in the captured image, and shown on the display. As a result, the user can easily and intuitively obtain information about articles that fit a structure in the image captured by the camera 13 of the information terminal 10, without having to designate an article on the information terminal 10.
 <Summary of Embodiment>
 1. The information providing system of the above embodiment (for example, 100) uses an information terminal (for example, 10) having a camera (for example, 13) and a display (for example, 14) to provide a user with information on the compatibility between a designated article and a target location where the designated article is to be mounted, wherein
 the information terminal has:
  display control means (for example, 11f) for displaying a captured image obtained by the camera on the display;
  acquisition means (for example, 11b) for acquiring image data of the designated article; and
  generation means (for example, 11d) for generating an augmented reality image of the designated article based on the image data acquired by the acquisition means,
 and the display control means superimposes, as the information, the augmented reality image of the designated article generated by the generation means on the target location in the captured image and displays it on the display.
 With this configuration, if the user designates an article on the information terminal and photographs, with the camera of the information terminal, the target location where the designated article is to be mounted, the user can easily and intuitively grasp the compatibility of the designated article with that target location.
 2. In the information providing system of the above embodiment, the display control means displays the augmented reality image of the designated article on the display so that the actual dimensional relationship between the designated article and the target location is reflected.
 With this configuration, the user can more easily grasp the dimensional relationship between the designated article and the target location, and can therefore grasp the compatibility of the designated article with the target location more easily and intuitively.
 3. In the information providing system of the above embodiment, the information terminal further includes specifying means (for example, 11c) for identifying the target location in the captured image, and the display control means superimposes the augmented reality image of the designated article on the target location identified in the captured image by the specifying means.
 With this configuration, the target location is identified on the information terminal side without the user having to indicate its position in the captured image, which improves the user's convenience when checking the compatibility of the designated article with the target location.
 4. In the information providing system of the above embodiment, the information terminal further has receiving means (for example, 11e) for displaying a candidate list of articles on the display and accepting the user's designation of an article, and the designated article is the article whose designation was accepted by the receiving means.
 With this configuration, the user's convenience when designating an article can be improved.
 5. In the information providing system of the above embodiment, the receiving means displays on the display, as the candidate list, articles that can be provided at facilities within a predetermined range of the current position of the information terminal.
 With this configuration, information on articles provided (sold) at nearby facilities can be presented to the user, for example while traveling, so that the user can stop by a facility when a needed article is available there, improving the user's convenience.
 6. In the information providing system of the above embodiment, the receiving means displays on the display, as the candidate list, a list of articles belonging to a category entered by the user.
 With this configuration, the user can search for needed articles by category, improving the user's convenience.
 7. The information providing system of the above embodiment (for example, 100) provides information to a user using an information terminal (for example, 10) having a camera (for example, 13) and a display (for example, 14), wherein
 the information terminal has:
  display control means (for example, 11f) for displaying a captured image obtained by the camera on the display;
  specifying means (for example, 11c) for identifying a structure contained in the captured image displayed on the display;
  acquisition means (for example, 11b) for acquiring image data of an article that fits the structure identified by the specifying means, from among a plurality of articles that can be provided at facilities within a predetermined range of the current position of the information terminal; and
  generation means (for example, 11d) for generating an augmented reality image of the article based on the image data acquired by the acquisition means,
 and the display control means superimposes the augmented reality image of the article generated by the generation means on the structure in the captured image and displays it on the display.
 With this configuration, the user can easily and intuitively obtain information about an article that fits a structure in the image captured by the camera of the information terminal, without designating an article on the information terminal.
 The present invention is not limited to the above embodiments, and various changes and modifications can be made without departing from the spirit and scope of the present invention. Therefore, to make the scope of the present invention public, the following claims are attached.
10: information terminal, 11: processing unit, 12: storage unit, 13: camera, 14: display, 15: position detection sensor, 16: orientation detection sensor, 17: communication unit, 20: server device, 21: processing unit, 22: storage unit, 23: communication unit

Claims (9)

  1.  An information providing system that uses an information terminal having a camera and a display to provide a user with information on the compatibility between a designated article and a target location where the designated article is to be mounted, wherein
     the information terminal comprises:
      display control means for displaying a captured image obtained by the camera on the display;
      acquisition means for acquiring image data of the designated article; and
      generation means for generating an augmented reality image of the designated article based on the image data acquired by the acquisition means,
     and the display control means superimposes, as the information, the augmented reality image of the designated article generated by the generation means on the target location in the captured image and displays it on the display.
  2.  The information providing system according to claim 1, wherein the display control means displays the augmented reality image of the designated article on the display so that the actual dimensional relationship between the designated article and the target location is reflected.
  3.  The information providing system according to claim 1 or 2, wherein the information terminal further includes specifying means for identifying the target location in the captured image, and
     the display control means superimposes the augmented reality image of the designated article on the target location identified in the captured image by the specifying means.
  4.  The information providing system according to any one of claims 1 to 3, wherein the information terminal further has receiving means for displaying a candidate list of articles on the display and accepting the user's designation of an article, and
     the designated article is the article whose designation was accepted by the receiving means.
  5.  The information providing system according to claim 4, wherein the receiving means displays on the display, as the candidate list, articles that can be provided at facilities within a predetermined range of the current position of the information terminal.
  6.  The information providing system according to claim 4, wherein the receiving means displays on the display, as the candidate list, a list of articles belonging to a category entered by the user.
  7.  An information providing system that provides information to a user using an information terminal having a camera and a display, wherein
     the information terminal comprises:
      display control means for displaying a captured image obtained by the camera on the display;
      specifying means for identifying a structure contained in the captured image displayed on the display;
      acquisition means for acquiring image data of an article that fits the structure identified by the specifying means, from among a plurality of articles that can be provided at facilities within a predetermined range of the current position of the information terminal; and
      generation means for generating an augmented reality image of the article based on the image data acquired by the acquisition means,
     and the display control means superimposes the augmented reality image of the article generated by the generation means on the structure in the captured image and displays it on the display.
  8.  An information terminal that has a camera and a display and provides a user with information on the compatibility between an article designated by the user and a target location where the designated article is to be mounted, comprising:
      display control means for displaying a captured image obtained by the camera on the display;
      acquisition means for acquiring image data of the designated article; and
      generation means for generating an augmented reality image of the designated article based on the image data acquired by the acquisition means,
     wherein the display control means superimposes, as the information, the augmented reality image of the designated article generated by the generation means on the target location in the captured image and displays it on the display.
  9.  An information terminal that has a camera and a display and provides information to a user, comprising:
      display control means for displaying a captured image obtained by the camera on the display;
      specifying means for identifying a structure contained in the captured image displayed on the display;
      acquisition means for acquiring image data of an article that fits the structure identified by the specifying means, from among a plurality of articles that can be provided at facilities within a predetermined range of the current position of the information terminal; and
      generation means for generating an augmented reality image of the article based on the image data acquired by the acquisition means,
     wherein the display control means superimposes the augmented reality image of the article generated by the generation means on the structure in the captured image and displays it on the display.
PCT/JP2019/014254 2019-03-29 2019-03-29 Information provision system and information terminal WO2020202347A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2021511722A JP7237149B2 (en) 2019-03-29 2019-03-29 Information provision system and information terminal
PCT/JP2019/014254 WO2020202347A1 (en) 2019-03-29 2019-03-29 Information provision system and information terminal
CN201980090738.2A CN113383363A (en) 2019-03-29 2019-03-29 Information providing system and information terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2019/014254 WO2020202347A1 (en) 2019-03-29 2019-03-29 Information provision system and information terminal

Publications (1)

Publication Number Publication Date
WO2020202347A1 2020-10-08

Family

ID=72666511

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/014254 WO2020202347A1 (en) 2019-03-29 2019-03-29 Information provision system and information terminal

Country Status (3)

Country Link
JP (1) JP7237149B2 (en)
CN (1) CN113383363A (en)
WO (1) WO2020202347A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003044704A (en) * 2001-07-31 2003-02-14 Honda Motor Co Ltd Method for providing service
JP2003331075A (en) * 2002-05-09 2003-11-21 Honda Motor Co Ltd Service providing system
JP2011222000A (en) * 2010-03-25 2011-11-04 Choushin Inc Image combination service system
US9928544B1 (en) * 2015-03-10 2018-03-27 Amazon Technologies, Inc. Vehicle component installation preview image generation

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9449342B2 (en) * 2011-10-27 2016-09-20 Ebay Inc. System and method for visualization of items in an environment using augmented reality
CN108255304B (en) * 2018-01-26 2022-10-04 腾讯科技(深圳)有限公司 Video data processing method and device based on augmented reality and storage medium

Also Published As

Publication number Publication date
JPWO2020202347A1 (en) 2020-10-08
JP7237149B2 (en) 2023-03-10
CN113383363A (en) 2021-09-10

Similar Documents

Publication Publication Date Title
JP5280590B2 (en) Information processing system, information processing method, and program
EP2418621B1 (en) Apparatus and method for providing augmented reality information
CN108885452A (en) multi-axis controller
JP6177998B2 (en) Information display method and information display terminal
CN111400610B (en) Vehicle-mounted social method and device and computer storage medium
CN111243200A (en) Shopping method, wearable device and medium
JP2007080060A (en) Object specification device
CN111742281A (en) Electronic device for providing second content according to movement of external object for first content displayed on display and operating method thereof
CN114758100A (en) Display method, display device, electronic equipment and computer-readable storage medium
US20170186073A1 (en) Shopping cart display
KR101928456B1 (en) Field support system for providing electronic document
WO2020202347A1 (en) Information provision system and information terminal
JP6817643B2 (en) Information processing device
WO2021029043A1 (en) Information provision system, information terminal, and information provision method
JP2014215327A (en) Information display apparatus, on-vehicle device and information display system
JP2019090828A (en) Display control device and program
JP7117454B2 (en) Support method and support system
WO2020202346A1 (en) Information provision system and information terminal
US20210224926A1 (en) Server apparatus, control apparatus, medium, mobile shop, and operation method for information processing system
US11556976B2 (en) Server apparatus, mobile shop, and information processing system
JP2021086355A (en) Information processing method, program, and information processing device
JP6833472B2 (en) In-vehicle device and information processing system
WO2018179312A1 (en) Image generating device and image generating method
CN115690194B (en) Vehicle-mounted XR equipment positioning method, device, equipment and storage medium
JP2014164407A (en) Object identification system and object identification method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19922237

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021511722

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19922237

Country of ref document: EP

Kind code of ref document: A1