
WO2024166412A1 - Information processing device, information processing method, information processing program, and storage medium


Info

Publication number
WO2024166412A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
information
guided
information processing
matching
Prior art date
Application number
PCT/JP2023/023186
Other languages
French (fr)
Japanese (ja)
Inventor
廣人 根岸
Original Assignee
Pioneer Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pioneer Corporation
Publication of WO2024166412A1

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 - Route searching; Route guidance
    • G01C21/36 - Input/output arrangements for on-board computers
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04M - TELEPHONIC COMMUNICATION
    • H04M11/00 - Telephonic communication systems specially adapted for combination with other electrical systems
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04M - TELEPHONIC COMMUNICATION
    • H04M3/00 - Automatic or semi-automatic exchanges
    • H04M3/42 - Systems providing special services or facilities to subscribers
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 - Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02 - Services making use of location information
    • H04W4/024 - Guidance services
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 - Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02 - Services making use of location information
    • H04W4/029 - Location-based management or tracking services
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W88/00 - Devices specially adapted for wireless communication networks, e.g. terminals, base stations or access point devices
    • H04W88/18 - Service support devices; Network management devices

Definitions

  • the present invention relates to an information processing device, an information processing method, an information processing program, and a storage medium, and, for example, to an information processing device, an information processing method, an information processing program, and a storage medium that processes information regarding a call between a person inside a mobile body and a person outside the mobile body.
  • Patent Document 1 discloses a vehicle information utilization system that references vehicle information transmitted from a mobile terminal mounted on the vehicle, searches for vehicles that are within a specified range from the user's current location, and displays information about the vehicles on the display of the user's terminal.
  • it is desirable that the combination of the driver of the mobile body and the user of the terminal outside the mobile body be one in which both parties are satisfied with the content of the conversation.
  • however, simply searching for vehicles traveling within a specified range from the perspective of the user of the terminal outside the mobile body may not result in a good combination, depending on the purposes of the driver and the user; this is one of the issues.
  • the present invention has been made in consideration of the above points, and one of its objectives is to provide an information processing device, an information processing method, an information processing program, and a storage medium that enable matching of a suitable partner when a driver of a mobile body and a user outside the mobile body make a call.
  • the invention described in claim 1 is an information processing device that processes information related to voice communication between a first device that moves with a mobile body and a second device that is a terminal outside the mobile body, and is characterized by having a desired information acquisition unit that acquires multiple desired information including user type information indicating whether the user is a user of the first device or a user of the second device, and a desired type indicating whether the user is a person who wishes to be guided or a guide who can provide guidance, and including a desired guide location where the user wishes to be guided if the desired type is the person who is guided, and a guideable location where the guide can provide guidance if the desired type is the guide, a matching unit that matches users of the first device with users of the second device based on the desired information, and a communication unit that performs the voice communication between the first device and the second device after the matching is performed.
  • the invention described in claim 11 is an information processing device that processes information regarding voice communication between a first device that moves with a mobile body and a second device that is a terminal outside the mobile body, and is characterized in having a desired information acquisition unit that acquires guided information including a guided location from a user of the second device, an extraction unit that extracts one or more of the first devices based on the guided information and the current location of the first device or a route set in the mobile body, and a communication unit that performs the voice communication between one of the extracted one or more first devices and the second device.
  • the invention described in claim 12 is an information processing device that processes information regarding voice communication between a first device that moves with a mobile body and a second device that is a terminal outside the mobile body, and is characterized in having a desired information acquisition unit that acquires guided desired information including a guided desired place from a user of the first device, a guideable information acquisition unit that acquires guideable information including whether or not the user of the second device is guided and, if guided, a guideable place that is a place where guidance is possible, an extraction unit that extracts one or more of the second devices based on the guided desired information and the guideable information, and a communication unit that performs the voice communication between the first device and one of the extracted one or more second devices.
  • the invention described in claim 13 is an information processing method executed by an information processing device that processes information regarding voice communication between a first device that moves with a mobile body and a second device that is a terminal outside the mobile body, and is characterized in that it includes a preference information acquisition step of acquiring multiple preference information including user type information indicating whether the user is a user of the first device or a user of the second device, and a preference type indicating whether the user is a person who wishes to be guided or a guide who can provide guidance, and including a preferred guidance location where the user wishes to be guided if the preference type is the person who is guided, and a guideable location where the guide can provide guidance if the preference type is the guide, a matching step of matching users of the first device and users of the second device based on the preference information, and a communication step of performing the voice communication between the first device and the second device after the matching is performed.
  • the invention described in claim 14 is an information processing program executed by an information processing device having a computer and processing information regarding voice communication between a first device moving with a mobile body and a second device which is a terminal outside the mobile body, the information processing program causing the computer to execute the following steps: a preference information acquisition step of acquiring a plurality of preference information including user type information indicating whether the user is a user of the first device or a user of the second device, and a preference type indicating whether the user is a person who wishes to be guided or a guide who can provide guidance, and including a preferred guidance location where the user wishes to be guided if the preference type is the person who is guided, and a guideable location where the guide can provide guidance if the preference type is the guide; a matching step of matching a user of the first device with a user of the second device based on the preference information; and a communication step of performing the voice communication between the first device and the second device after the matching.
  • the invention described in claim 15 is a computer-readable storage medium that stores an information processing program for causing an information processing device that includes a computer and processes information regarding voice communication between a first device that moves with a mobile body and a second device that is a terminal outside the mobile body to execute the following steps: a preference information acquisition step that acquires multiple preference information including user type information indicating whether the user is a user of the first device or a user of the second device, and a preference type indicating whether the user is a person who wishes to be guided or a guide who can provide guidance, and including a preferred guide location where the user wishes to be guided if the preference type is the person who is guided, and a guideable location where the guide can provide guidance if the preference type is the guide; a matching step that matches the user of the first device with the user of the second device based on the preference information; and a communication step that performs the voice communication between the first device and the second device after the matching is performed.
  • FIG. 1 is a schematic diagram illustrating an overview of an information processing system according to an embodiment of the present invention.
  • FIG. 2 is a diagram showing the configuration of the front seat portion of an automobile according to the embodiment.
  • FIG. 3 is a block diagram showing an example of the configuration of an in-vehicle device according to the embodiment.
  • FIG. 4 is a block diagram showing an example of the configuration of an external device according to the embodiment.
  • FIG. 5 is a block diagram showing an example of the configuration of a server device according to the embodiment.
  • FIG. 6 is a diagram showing an example of information about a driver (a user of the in-vehicle device) held by the server device according to the embodiment.
  • FIG. 7 is a diagram showing an example of information about a virtual fellow passenger (a user of the external device) held by the server device according to the embodiment.
  • FIG. 8 is a diagram illustrating an example of request information received by the server device according to the embodiment.
  • FIG. 9 is a schematic diagram showing an example of matching between a guideable location and a desired guide location when matching is performed in the information processing system according to the embodiment.
  • FIG. 10 is a sequence diagram illustrating an example of information processing executed by the information processing system according to the embodiment.
  • FIG. 11 is a sequence diagram illustrating another example of information processing executed by the information processing system according to the embodiment.
  • FIG. 12 is a flowchart illustrating an example of a routine executed by the server device according to the embodiment.
  • FIG. 13 is a flowchart illustrating an example of a subroutine executed by the server device according to the embodiment.
  • FIG. 14 is a flowchart illustrating another example of a subroutine executed by the server device according to the embodiment.
  • FIG. 1 shows an overview of the configuration of an information processing system 100.
  • the information processing system 100 includes an in-vehicle device 10, an external device 30, and a server 50 as an information processing device.
  • FIG. 1 shows a case in which the in-vehicle device 10 is mounted on an automobile M as an example of a moving body.
  • FIG. 1 shows a smartphone as an example of the external device 30.
  • the in-vehicle device 10, the external device 30, and the server 50 can transmit and receive data to and from each other via the network NW using communication protocols such as TCP/IP and UDP/IP.
  • the network NW can be constructed, for example, by a mobile communication network, wireless communication such as Wi-Fi (registered trademark), and Internet communication including wired communication.
  • audio-visual communication is possible in which video and audio captured by the in-vehicle device 10 in the automobile M are transmitted to the external device 30, and audio captured by the external device 30 is transmitted to the in-vehicle device 10.
  • a voice call is established between the in-car device 10 and the external device 30, and then the video captured in the automobile M is distributed from the in-car device 10 to the external device 30.
  • this form of communication is hereinafter referred to as video communication, which is one form of audio-video communication.
  • the user of the external device 30 watching the video transmitted from the in-vehicle device 10 can feel as if he or she is riding in the vehicle M with the driver of the vehicle M.
  • video communication can realize a virtual ride with the user of the external device 30 in the vehicle M.
  • a system such as the information processing system 100 of this embodiment that can realize such video communication is also called a virtual ride-along system.
  • the in-car device 10 and the external device 30 perform video communication via the server 50.
  • the server 50 establishes voice communication between the in-car device 10 and the external device 30, and is capable of receiving from the in-car device 10 images captured in the automobile M and acquired by the in-car device 10, and transmitting the images to the external device 30.
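  • As an illustration of this relay role, the following minimal sketch (in Python, with hypothetical names such as MediaRelay and forward_once that are not part of the disclosed embodiment) forwards camera frames one way and voice-call audio both ways once a session exists between the two devices.

```python
import queue

class MediaRelay:
    """Illustrative sketch only: the server 50 forwarding media between the
    in-vehicle device 10 and the external device 30 during video communication."""

    def __init__(self):
        # Queues stand in for the network links to each device.
        self.from_vehicle_video = queue.Queue()   # images from cameras 13 / 14
        self.from_vehicle_audio = queue.Queue()   # audio picked up by microphone 19
        self.from_external_audio = queue.Queue()  # audio picked up by microphone 35
        self.to_external = queue.Queue()
        self.to_vehicle = queue.Queue()

    def forward_once(self):
        # Video captured in the automobile M is delivered only to the external device 30.
        while not self.from_vehicle_video.empty():
            self.to_external.put(("video", self.from_vehicle_video.get()))
        # Voice-call audio flows in both directions.
        while not self.from_vehicle_audio.empty():
            self.to_external.put(("audio", self.from_vehicle_audio.get()))
        while not self.from_external_audio.empty():
            self.to_vehicle.put(("audio", self.from_external_audio.get()))

relay = MediaRelay()
relay.from_vehicle_video.put(b"frame-0001")
relay.from_external_audio.put(b"hello-from-passenger")
relay.forward_once()
print(relay.to_external.get())  # ('video', b'frame-0001')
print(relay.to_vehicle.get())   # ('audio', b'hello-from-passenger')
```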
  • if the driver of automobile M, who is the user of in-vehicle device 10, is familiar with a location, and the user of external device 30 is not familiar with that location and wishes to be guided, for example, because the user is planning a trip to the location, both parties can achieve their goals.
  • the user of external device 30 can receive guidance while watching actual driving footage through video communication.
  • the user of the external device 30 can also provide guidance.
  • the user of the external device 30 who is familiar with the place can provide guidance about the place via video communication by providing tourist information in the vicinity, information about the area such as places where caution is required when driving, and places where there are always traffic jams.
  • the server 50 performs matching to determine a pairing of a user of the in-vehicle device 10 and a user of the external device 30, and can establish video communication between the determined parties.
  • the occupants of the automobile M can be users of the in-vehicle device 10.
  • the explanation will be centered on the case where the user of the in-vehicle device 10 is the driver of the automobile M.
  • the in-vehicle device 10 is a car navigation device.
  • the in-vehicle device 10 is a terminal device of a so-called cloud-type car navigation device that receives a destination to which the user wishes to be guided from the user, transmits the destination to the server 50, and the server 50 generates a route to the destination.
  • the external device 30 is described as a smartphone.
  • the touch panel 31 is, for example, a touch panel monitor that combines a display, such as a liquid crystal display capable of displaying images, with a touch pad.
  • the touch panel 31 is capable of generating a signal that represents an input operation to the touch panel 31 received from the user.
  • the touch panel 31 displays the video delivered from the in-vehicle device 10 during video communication.
  • the user of the external device 30 can talk to the driver of the automobile M while watching the video displayed on the touch panel 31.
  • information related to video communication is displayed on the touch panel 31.
  • the touch panel 31 displays a registration screen for the user of the external device 30 to register information necessary for video communication in the information processing system 100, a confirmation screen for whether or not to consent to video communication with the driver of the automobile M, and other screens.
  • the user of the external device 30 can input information related to video communication by performing an input operation on the touch panel 31.
  • the speaker 33 is capable of emitting sounds such as music and voice. During video communication, the speaker 33 emits the sound from the in-car device 10 during a voice call.
  • the microphone 35 is a microphone device that receives sounds emitted toward the external device 30. During video communication, the sound picked up by the microphone 35 is transmitted to the in-car device 10 via the server 50 as the sound of the voice call.
  • FIG. 2 is a perspective view showing the vicinity of the front seat of an automobile M equipped with the in-vehicle device 10.
  • FIG. 2 shows a case where the in-vehicle device 10 is installed in the dashboard DB of the front seat of the automobile M.
  • the GPS receiver 11 is a device that receives signals (GPS signals) from GPS (Global Positioning System) satellites.
  • the GPS receiver 11 is arranged, for example, on a dashboard DB.
  • the GPS receiver 11 may be arranged anywhere as long as it can receive GPS signals.
  • the GPS receiver 11 is capable of transmitting the received GPS signals to the in-vehicle device 10.
  • the in-vehicle device 10 obtains current position information of the automobile M using the GPS signals.
  • the exterior camera 13 is an imaging device that captures images of the area in front of the automobile M.
  • the exterior camera 13 is arranged on the dashboard DB so that the imaging direction is the front.
  • the exterior camera 13 can capture images of the area in front of the automobile M through the windshield.
  • the exterior camera 13 may be provided near the rearview mirror RM or attached to the inside of the windshield FG. During video communication, the image captured by the exterior camera 13 is distributed to the external device 30.
  • the in-vehicle camera 14 is an imaging device that captures images of the interior of the automobile M.
  • the in-vehicle camera 14 is installed at the top end of the windshield FG or on the ceiling near the top end.
  • the in-vehicle camera 14 is capable of capturing images of the driver of the automobile M.
  • the in-vehicle camera 14 may be installed so as to be capable of capturing images of the interior of the automobile M, including the passenger seat and rear seats of the automobile M.
  • the image captured by the in-vehicle camera 14 is distributed to the external device 30 together with the image captured by the exterior camera 13.
  • the image from the exterior camera 13 and the image from the in-vehicle camera 14 may be displayed in a switched manner at the user's choice, or may be displayed side by side.
  • the touch panel 15 is, for example, a touch panel monitor that combines a display, such as a liquid crystal display capable of displaying images, with a touch pad.
  • the touch panel 15 is disposed, for example, in the center console of the dashboard DB.
  • the touch panel 15 may be disposed in a location that is visible to the driver and within the driver's reach.
  • the touch panel 15 may be attached to the dashboard DB.
  • the touch panel 15 is capable of displaying a screen based on the control of the in-vehicle device 10.
  • the touch panel 15 is also capable of transmitting a signal representing an input operation to the touch panel 15 received from a user to the in-vehicle device 10.
  • the touch panel 15 may display car navigation guidance.
  • it may be possible to perform operations related to the car navigation function, such as setting a destination, via the touch panel 15.
  • the touch panel 15 may display information regarding video communication, and may display a screen such as an operation reception screen for requesting the server 50 to match with a user of the external device 30, or a screen asking whether or not video communication with the user of the external device 30 with whom a match has been made is possible.
  • the user of the in-vehicle device 10 can input information required for requesting matching with a user of the external device 30 and select whether or not to allow video communication in response to a question about whether or not video communication is possible, by performing an input operation on the touch panel 15.
  • the speaker 17 is provided, for example, on the interior side of the A-pillar AP.
  • the speaker 17 is capable of emitting sounds such as music and voices under the control of the in-vehicle device 10. During video communication, the speaker 17 emits sound from the external device 30 during a voice call.
  • a voice such as an inquiry as to whether or not video communication with the user of the external device 30 determined by matching is possible is issued from the speaker 17.
  • a pre-recorded message may be issued in the voice of the user of the external device 30.
  • the microphone 19 is a microphone device that picks up sounds inside the vehicle, and is located, for example, on the dashboard DB.
  • the microphone 19 may be located anywhere, such as on the rearview mirror RM or the steering wheel, as long as it is capable of picking up sounds inside the vehicle.
  • the sound picked up by the microphone 19 is transmitted to the external device 30 as the sound of the voice call.
  • operation inputs for requesting matching with a user of the external device 30 or selecting whether or not to accept an inquiry about whether or not video communication is possible may be made by voice via the microphone 19.
  • FIG. 3 is a block diagram showing the configuration of the in-vehicle device 10.
  • the in-vehicle device 10 is a device in which a storage unit 23, a control unit 25, and a communication unit 27 work together via a system bus.
  • the automobile M is also equipped with an acceleration sensor 21.
  • the acceleration sensor 21 is capable of measuring the acceleration of the automobile M and outputting a signal indicating the measured acceleration.
  • the acceleration sensor 21 is a sensor capable of detecting the acceleration in the traveling direction of the automobile M, i.e., the forward/rearward direction, as viewed from above the automobile M.
  • the acceleration sensor is also capable of detecting, for example, acceleration in the lateral direction (width direction) perpendicular to the traveling direction of the automobile M.
  • the in-vehicle device 10 may obtain current position information of the automobile M based on, for example, the acceleration indicated by the sensor signal of the acceleration sensor 21 in addition to the GPS signal from the GPS receiver 11.
  • the storage unit 23 is a storage device configured, for example, by a hard disk drive, a solid state drive (SSD), a flash memory, etc.
  • the storage unit 23 stores various programs executed in the in-vehicle device 10, such as an operating system and software for the terminal.
  • the various programs may be obtained, for example, from another server device or the like via a network, or may be recorded on a recording medium and read via various drive devices.
  • the various programs stored in the storage unit 23 can be transmitted via a network, and can also be recorded on a computer-readable recording medium and transferred.
  • the storage unit 23 also stores map information including road maps. This map information is used, for example, for displaying directions in car navigation.
  • the control unit 25 is composed of a CPU (Central Processing Unit), ROM (Read Only Memory), RAM (Random Access Memory), etc., and functions as a computer.
  • the control unit 25 realizes various functions by the CPU reading and executing various programs stored in the ROM and the storage unit 23.
  • the control unit 25 performs functions such as voice calls and video distribution functions during video communication, as well as car navigation functions.
  • the control unit 25 is connected to be able to communicate with various devices provided in the automobile M, namely, the GPS receiver 11, the exterior camera 13, the interior camera 14, the touch panel 15, the speaker 17, the microphone 19, and the acceleration sensor 21.
  • the control unit 25 acquires data from the various devices provided in the automobile M.
  • the control unit 25 also supplies data to the various devices provided in the automobile M.
  • control unit 25 sequentially acquires GPS signals from the GPS receiver 11.
  • the control unit 25 also sequentially acquires a signal indicating the acceleration measured by the acceleration sensor 21.
  • the control unit 25 acquires current position information of the automobile M based on, for example, the GPS signal and the signal from the acceleration sensor.
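  • As a rough, hypothetical sketch of combining the GPS signal with the acceleration sensor, the following Python snippet uses the GPS fix when one is available and otherwise dead-reckons by integrating acceleration; the local x/y coordinate frame and the update interval are illustrative assumptions, not the patent's method.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class PositionEstimator:
    """Sketch: fuse GPS fixes with acceleration from the acceleration sensor 21."""
    position: Tuple[float, float] = (0.0, 0.0)   # x, y in metres (local frame)
    velocity: Tuple[float, float] = (0.0, 0.0)   # vx, vy in m/s

    def update(self, dt: float,
               gps_fix: Optional[Tuple[float, float]],
               accel: Tuple[float, float]) -> Tuple[float, float]:
        if gps_fix is not None:
            # A GPS fix overrides the dead-reckoned estimate.
            self.position = gps_fix
        else:
            # No fix: integrate acceleration into velocity, then into position.
            ax, ay = accel
            vx, vy = self.velocity
            vx, vy = vx + ax * dt, vy + ay * dt
            x, y = self.position
            self.position = (x + vx * dt, y + vy * dt)
            self.velocity = (vx, vy)
        return self.position

est = PositionEstimator()
print(est.update(1.0, gps_fix=(10.0, 5.0), accel=(0.0, 0.0)))  # (10.0, 5.0)
print(est.update(1.0, gps_fix=None, accel=(1.0, 0.0)))         # (11.0, 5.0)
```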
  • the control unit 25 sequentially acquires images captured by the exterior camera 13 and the interior camera 14.
  • the control unit 25 acquires signals representing input operations performed on the touch panel 15.
  • the control unit 25 also supplies image data to be displayed on the touch panel 15.
  • the control unit 25 also supplies audio data to the speaker 17.
  • the control unit 25 also sequentially acquires audio in the automobile M picked up by the microphone 19.
  • the communication unit 27 is a communication device that transmits and receives data to and from external devices in accordance with instructions from the control unit 25.
  • the communication unit 27 is, for example, a NIC (Network Interface Card) for connecting to the network NW.
  • the communication unit 27 is connected to the network NW described above, and transmits and receives various data to and from the server 50.
  • the communication unit 27 also transmits and receives various data to and from the external device 30 via the server 50.
  • control unit 25 can transmit information including a destination input by a user of the in-vehicle device 10 to the server 50 via the communication unit 27, and receive route information or navigation information to the destination from the server 50.
  • control unit 25 sequentially transmits images captured by the exterior camera 13 or the interior camera 14 to the server 50 via the communication unit 27.
  • the control unit 25 also receives audio data from the external device 30 via the server 50 via the communication unit 27, and sequentially transmits audio from inside the automobile M picked up by the microphone 19 to the server 50.
  • control unit 25 sequentially transmits current location information of the automobile M obtained based on the GPS signal and the signal from the acceleration sensor to the server 50 via the communication unit 27.
  • the control unit 25 transmits information about the video communication input by the occupant to the server 50, for example, via the communication unit 27.
  • the control unit 25 transmits information to the server 50, such as information input by the driver via the touch panel 15 or microphone 19 indicating a request for matching with a user of the external device 30.
  • the control unit 25 receives information related to video communication from the server 50, for example, via the communication unit 27.
  • the control unit 25 receives an image for displaying a screen such as an inquiry screen about whether video communication is possible with the user of the external device 30 determined by matching, and supplies the received image to the touch panel 15.
  • the control unit 25 receives audio data such as an inquiry about whether video communication is possible with the user of the external device 30 with whom matching has been made, and supplies the received audio data to the speaker 17.
  • FIG. 4 is a block diagram showing an example of the configuration of the external device 30.
  • the external device 30 is a device in which a storage unit 37, a control unit 39, and a communication unit 41 work together via a system bus (not shown).
  • the storage unit 37 is composed of, for example, a hard disk drive, a solid state drive (SSD), flash memory, etc., and stores various programs such as an operating system and software for the external device 30.
  • the various programs may be obtained, for example, from another server device or the like via a network, or may be recorded on a recording medium and read via various drive devices.
  • the various programs stored in the storage unit 37 can be transmitted via a network, and can also be recorded on a computer-readable recording medium and transferred.
  • the control unit 39 is composed of a CPU (Central Processing Unit), ROM (Read Only Memory), RAM (Random Access Memory), etc., and functions as a computer.
  • the CPU reads and executes various programs stored in the ROM and the storage unit 37 to realize various functions.
  • the control unit 39 is communicatively connected to the above-mentioned touch panel 31, speaker 33, and microphone 35.
  • the control unit 39 is capable of receiving signals indicating input operations to the touch panel 31 and audio input signals from the microphone 35.
  • the control unit 39 is also capable of transmitting video or image signals to the touch panel 31 to cause it to display, and transmitting audio signals to the speaker 33 to cause it to output sound.
  • control unit 39 causes the image during the video communication to be displayed on the touch panel 31, and causes the audio during the video communication picked up in the automobile M to be output to the speaker 33.
  • the control unit 39 can also accept input operations related to video communication made by the user via the touch panel 31 or microphone 35.
  • the control unit 39 can accept input operations via the touch panel 31 and microphone 35, such as information indicating a request for matching with the user of the in-vehicle device 10, or information indicating a choice as to whether or not to allow video communication with the user of the in-vehicle device 10 with whom a match has been made.
  • control unit 39 causes the touch panel 31 to display a screen for accepting operations to request matching with the user of the in-vehicle device 10, a screen for inquiring whether video communication with the user of the in-vehicle device 10 with whom the match has been made is possible, and other such screens.
  • control unit 39 may cause the speaker 33 to output a sound such as an inquiry as to whether video communication with the user of the in-vehicle device 10 with whom the match has been made is possible.
  • the communication unit 41 is connected to the network NW described above, and transmits and receives various data to and from the server 50.
  • the communication unit 41 also transmits and receives various data, such as voice and route information transmitted from the in-vehicle device 10, and voice acquired by the external device 30, to and from the in-vehicle device 10 via the server 50.
  • the control unit 39 of the external device 30 can receive information via the communication unit 41 to display a screen such as a screen asking whether or not video communication is possible with the user of the in-vehicle device 10 that has been matched as described above.
  • control unit 39 sequentially receives images from the exterior camera 13 and the interior camera 14 transmitted from the in-vehicle device 10 via the server 50 via the communication unit 41.
  • the control unit 39 can also transmit audio data of the voice picked up by the microphone 35 to the in-vehicle device 10 via the communication unit 41 for voice calls in video communication.
  • the control unit 39 can also receive audio data transmitted from the in-vehicle device 10 via the communication unit 41 for voice calls in video communication.
  • control unit 39 of the external device 30 can receive current location information of the automobile M transmitted from the in-vehicle device 10 via the server 50 through the communication unit 41. Also, for example, the control unit 39 can receive route information or navigation information of the automobile M from the server 50 through the communication unit 41. For example, navigation information of the automobile M may also be displayed on the external device 30 during video communication.
  • FIG. 5 is a block diagram showing the configuration of the server 50.
  • the server 50 is a device in which a storage unit 51, a communication unit 53, and a control unit 55 work together via a system bus.
  • the server 50 sequentially receives from the in-vehicle device 10 images captured by the exterior camera 13 or the interior camera 14 and acquired by the in-vehicle device 10. During video communication, the server 50 sequentially transmits the images received from the in-vehicle device 10 to the external device 30.
  • the server 50 also has a function similar to that of a SIP server, which establishes a voice call between the in-car device 10 and the external device 30 during video communication and transfers the data of the voice call.
  • the server 50 also has the function of receiving current location information of the automobile M from the in-vehicle device 10 and information on a destination set by a user who is an occupant of the automobile M, and generating a route to the destination based on the current information and the destination information.
  • the server 50 has a function of matching the user of the in-vehicle device 10 with the user of the external device 30, and carrying out video communication between the matched user of the in-vehicle device 10 and the user of the external device 30.
  • the user of the in-vehicle device 10 or the user of the external device 30 who wishes to find a partner for video communication will be referred to simply as a requester without distinction.
  • for example, if the requester is a user of the in-vehicle device 10, the server 50 matches the requester against the multiple users of external devices 30 who have registered with the server 50 in advance.
  • the storage unit 51 is composed of, for example, a hard disk device and an SSD (solid state drive), and stores various programs such as the operating system and software for the server 50.
  • the various programs may be obtained, for example, from another server device or the like via a network, or may be recorded on a recording medium and read via various drive devices.
  • the various programs stored in the storage unit 51 can be transmitted via a network, and can also be recorded on a computer-readable recording medium and transferred.
  • the storage unit 51 stores, for example, an information processing program that enables the server 50 to match a user of the in-vehicle device 10 with a user of the external device 30 and to perform video communication between the matched user of the in-vehicle device 10 and the user of the external device 30.
  • the storage unit 51 also has a registration information storage unit 51A that stores registration information, which is information registered in the server 50 by users of multiple in-vehicle devices 10 and multiple external devices 30.
  • the registration information is stored separately for the user of the in-vehicle device 10 and the user of the external device 30, but the registration information may also be stored together in a single database.
  • FIG. 6 is a diagram showing data LD1, which is an example of registration information of a user of an in-vehicle device 10, among the registration information.
  • the registration information of a user of an in-vehicle device 10 is stored, for example, for each pair of an in-vehicle device ID that can identify each in-vehicle device 10 and a user ID that can identify each user of the in-vehicle device 10.
  • the registration information includes user type information, which is information indicating whether the user corresponding to each user ID is a user of the in-vehicle device 10 or a user of the external device 30.
  • the registration information also includes a desired type, which is information indicating whether the user corresponding to each user ID is a "guided person" who wishes to be guided or a "guide" who can provide guidance.
  • if the user is a user of the in-vehicle device 10, the user type is set to "driver," and if the user is a user of the external device 30, the user type is set to "passenger."
  • if the desired type is a guided person, the registration information includes a desired guide location, which is information indicating a location to which guidance is desired. Furthermore, if the desired type is a guide, the registration information includes a guideable location, which is information indicating a location where guidance by the guide is possible.
  • the locations where guidance is available and the desired locations for guidance may be, for example, information indicating a location on a map or an address, or may be information indicating an area such as a city name, municipality, or "near XX.” For example, multiple locations may be registered as locations where guidance is available or desired locations for guidance.
  • places that the user of the in-vehicle device 10 is familiar with are registered in advance as guideable places.
  • places such as the user's hometown, place of work, or living area are registered as guideable places.
  • the current location of the in-vehicle device 10 may be set as a guideable location.
  • the server 50 may obtain current location information of the automobile M from the in-vehicle device 10 and set it as a guideable location for the user of the in-vehicle device 10.
  • major waypoints on the route may also be registered as guideable locations.
  • major waypoints, such as intersections, buildings, and other locations that can serve as landmarks when providing guidance, may be identified by the server 50, and a predetermined area including such a waypoint may be set as a guideable location.
  • for example, a predetermined area including "X Temple" may be identified as a major route point.
  • the server 50 may receive route information for the automobile M from the in-vehicle device 10, identify major route points based on the received route information, and set them as guideable locations.
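  • One hedged way to realize the waypoint identification described above is to test whether any point of the received route falls inside a predetermined landmark area, as in the sketch below; the landmark list, coordinates, and radii are illustrative assumptions.

```python
from math import hypot

# Hypothetical landmark areas: name -> (center_x, center_y, radius) in metres,
# expressed in the same local coordinate frame as the route points.
LANDMARK_AREAS = {
    "X Temple": (1200.0, 800.0, 500.0),
    "Station front intersection": (300.0, 150.0, 200.0),
}

def major_waypoints_on_route(route_points):
    """Return landmark areas that the route passes through; such areas could then
    be set as guideable locations for the driver, as described above."""
    hits = []
    for name, (cx, cy, r) in LANDMARK_AREAS.items():
        if any(hypot(x - cx, y - cy) <= r for x, y in route_points):
            hits.append(name)
    return hits

route = [(0.0, 0.0), (600.0, 350.0), (1000.0, 700.0), (1500.0, 900.0)]
print(major_waypoints_on_route(route))  # ['X Temple']
```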
  • the desired location to be guided to is set, for example, when the server 50 receives information indicating that the desired type is a person to be guided and the user of the in-vehicle device 10 requests matching with the user of the external device 30.
  • information input into the in-vehicle device 10 by the user of the in-vehicle device 10, or a major waypoint or destination on the route set by the user of the in-vehicle device 10 is set as the desired location to be guided to by the user of the in-vehicle device 10 who is the person to be guided to.
  • the desired location to be guided to is registered by the user of the in-vehicle device 10, for example, when the user, whose desired category is a guided person, wishes to receive guidance in response to a request from the user of the external device 30, whose desired category is a guide.
  • the desired location may also include the current location or a major intermediate location, just like the locations that can be guided.
  • the registration information may further include, for example, information indicating the timing when video communication is possible.
  • the registration information includes information indicating the standby state of the in-vehicle device 10 as information indicating the timing when video communication is possible.
  • the standby state is "ON" when the in-vehicle device 10 is powered on.
  • a planned standby time indicating until when video communication is possible may be registered.
  • the registration information of the user of the in-vehicle device 10 may include, for example, if the desired type is a person to be guided, information indicating that the communication partner is required to have high guidance skills for the desired location, such as information indicating whether or not a local person is desired. Also, if the desired type is a guide, information indicating guidance skills for the possible locations, such as information indicating whether or not the person is from the user's hometown, and information indicating driving skills such as driving history may be included.
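  • The registration information of FIG. 6 can be pictured as one record per pair of device ID and user ID. The dataclass below is only an illustrative sketch of such a record; the field names and values are assumptions, not the patent's data format.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Registration:
    """Sketch of one entry of the registration information (cf. data LD1 / LD2)."""
    device_id: str                       # in-vehicle device ID or external device ID
    user_id: str
    user_type: str                       # "driver" (in-vehicle device) or "passenger" (external device)
    desired_type: str                    # "guided" (wishes to be guided) or "guide"
    guideable_locations: List[str] = field(default_factory=list)      # used if desired_type == "guide"
    desired_guide_locations: List[str] = field(default_factory=list)  # used if desired_type == "guided"
    standby: bool = False                # whether video communication is currently possible
    standby_until: Optional[str] = None  # planned standby time, if registered

driver_a = Registration(
    device_id="IVD-001", user_id="USER-A", user_type="driver",
    desired_type="guide", guideable_locations=["Kyoto City"], standby=True)
print(driver_a)
```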
  • FIG. 7 is a diagram showing data LD2, which is an example of registration information of users of external devices 30, among the registration information.
  • the registration information of users of external devices 30 is stored, for example, for each pair of an external device ID capable of identifying each external device 30 and a user ID capable of identifying each user of the external device 30.
  • the registration information of the user of the external device 30 also includes user type information and desired type. Similar to the registration information of the user of the in-vehicle device 10 described above, the registration information of the user of the external device 30 also includes a desired guided location indicating a location to which guidance is desired if the desired type is a guidee, and includes a guideable location indicating a location to which guidance by the guide is possible if the desired type is a guide.
  • places that the user of the external device 30 is familiar with are registered in advance as locations to which the user can provide guidance.
  • places such as the user's hometown, place of work, or living area are registered as locations to which the user can provide guidance.
  • the desired guide location is registered, for example, when a user of the external device 30 whose desired type is a person to be guided transmits information to the server 50 indicating a request for matching with a user of the in-vehicle device 10.
  • the desired guide location may also be registered in advance by the user of the external device 30, for example, when the user wishes to receive guidance about that location in response to a request from a user of the in-vehicle device 10 whose desired type is a guide.
  • the registration information may further include, for example, information indicating the timing when video communication is possible.
  • the registration information includes information indicating the standby state of the external device 30 as information indicating the timing when the user of the external device 30 is able to perform video communication.
  • the standby state is set to "ON” by a setting on an application for video communication used in the external device 30. For example, even if the standby state is OFF, it may be possible to receive a notification on the application that a match has been made with the user of the in-vehicle device 10. Also, for example, when the standby state is "ON", a planned standby time indicating how long video communication is possible may be registered.
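  • Where a standby state and a planned standby time are registered, the server can screen registrants for availability before matching. The sketch below assumes ISO 8601 timestamps purely for illustration.

```python
from datetime import datetime
from typing import Optional

def available_for_video_communication(standby: bool,
                                      standby_until_iso: Optional[str],
                                      now: datetime) -> bool:
    """Sketch: a registrant remains a matching candidate while the standby state
    is ON and, if a planned standby time is registered, that time has not passed."""
    if not standby:
        return False
    if standby_until_iso is None:
        return True
    return now <= datetime.fromisoformat(standby_until_iso)

now = datetime(2023, 6, 1, 10, 0)
print(available_for_video_communication(True, "2023-06-01T12:00:00", now))  # True
print(available_for_video_communication(True, "2023-06-01T09:00:00", now))  # False
```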
  • the registration information of the user of the external device 30 may include, for example, if the desired type is a person to be guided, information indicating that the communication partner is required to have high guidance skills for the desired location, such as information indicating whether or not a local person is desired. Also, if the desired type is a guide, information indicating guidance skills for the possible locations, such as information indicating whether or not the location is the user's hometown, and information indicating driving advice skills such as driving history may be included.
  • the information stored in the registration information storage unit is not limited to the above examples, but may include other information and be used for matching.
  • the storage unit 51 also stores map information including road maps.
  • the map information is used by the server 50 when it generates a route for car navigation.
  • the map information is also used by the server 50 when it acquires major stopovers on the route of the vehicle M based on the current position of the vehicle M.
  • the communication unit 53 is connected to the network NW described above, and transmits and receives various data between the in-vehicle device 10 and the external device 30.
  • the control unit 55 is composed of a CPU (Central Processing Unit), ROM (Read Only Memory), RAM (Random Access Memory), etc., and functions as a computer.
  • the CPU realizes various functions by reading and executing various programs stored in the ROM and the storage unit 51.
  • the control unit 55 has a function of acquiring registration information of a user of the in-vehicle device 10 or registration information of a user of the external device 30.
  • the control unit 55 stores the acquired registration information in the registration information storage unit 51A.
  • the control unit 55 acquires the registration information, for example, by receiving the registration information transmitted from each of the in-vehicle devices 10 or each of the external devices 30 via the communication unit 53.
  • the registration information of the user of the in-vehicle device 10 or the registration information of the user of the external device 30 as illustrated in Figs. 6 and 7 is stored in the registration information storage unit 51A.
  • control unit 55 has a function of acquiring request information that is sent to the server 50 when a user of the in-vehicle device 10 or a user of the external device 30 as a requester requests the server 50 to search for a partner for video communication.
  • FIG. 8 shows data RD1, which is an example of request information.
  • the request information includes an external device ID or an in-vehicle device ID and a user ID, similar to the registration information described above.
  • the request information also includes information indicating a request to search for a partner for video communication.
  • the request information includes a user type indicating whether the user is the user of the in-vehicle device 10 or the user of the external device 30, and a desired type indicating whether the user is a person to be guided or a guide who can provide guidance. If the desired type is a person to be guided, it includes a desired guide location, which is a place where the user wishes to be guided, and if the desired type is a guide, it includes a guideable location, which is a place where the guide can provide guidance.
  • the request information is generated, for example, based on an input operation by the user on the in-vehicle device 10 or on the external device 30.
  • for example, a desired type is selected, and a guideable location or a desired guide location is input.
  • the desired type and the guideable location or desired guide location may also be input by voice, for example, "Look for a communication partner who can provide guidance within XX city."
  • for example, if the desired type of the user of the in-vehicle device 10 is a person to be guided, the desired type is set to a person to be guided and the desired guide location is set to Kyoto city by a voice command such as "Find a communication partner who can guide you around Kyoto city."
  • route information may be sent to the server 50 instead of the desired location for guidance.
  • the server 50 may set a major stopover or a destination on the route as the desired location for guidance based on the route information.
  • for example, if the desired type of the user of the external device 30 is a guide, places where the user frequently drives by car, places such as stores that the user frequently visits, and places such as the user's living area are input as guideable locations.
  • the user of the external device 30 who is a guide may set the time when he/she is looking for people who want to be guided, that is, the planned time when video communication is possible, as the planned waiting time. In that case, for example, the planned waiting time is added to the request information and transmitted to the server 50.
  • the user sending the request information is a user of the external device 30, and the data RD1 includes the external device ID.
  • the flag in the "Request” item when the flag in the "Request" item is ON, it indicates a request to find a partner for video communication.
  • since the user in this example is a user of the external device 30, the user type is "passenger." Note that, if the user sending the request information is a user of the in-vehicle device 10, the request information includes the in-vehicle device ID instead of the external device ID, and the user type is "driver."
  • the desired type is "person to be guided” and includes the desired location to which the requester, who is the person to be guided, wishes to be guided. Note that if the desired type is "guide,” the request information includes the location to which the requester can be guided.
  • both the registration information and the request information include a user type, a desired type, and, according to the desired type, a guideable location or a desired guide location.
  • the user type, the desired type, and the guideable location or desired guide location according to the desired type are collectively referred to as desired information.
  • the desired information is used to match the user of the in-vehicle device 10 with the user of the external device 30 in this embodiment.
  • the control unit 55 acquires registration information and request information to acquire desired information that can be used for matching in this embodiment, and functions as a desired information acquisition unit 57.
  • control unit 55 functions as a desired information acquisition unit 57 that executes a step of acquiring multiple pieces of desired information (desired information acquisition step) including user type information indicating whether the user is the user of the in-vehicle device 10 as the first device or the user of the external device 30 as the second device, and a desired type indicating whether the user is a person wishing to be guided or a guide who can provide guidance, and including a desired guidance location where the user wishes to be guided if the desired type is a person wishing to be guided, and including a guideable location where the guide can provide guidance if the desired type is a guide.
  • the control unit 55 has the desired information acquisition unit 57 as a functional unit.
  • when the desired information acquisition unit 57 receives request information in which the desired type is a person to be guided from a user of the external device 30, it functions as a unit that acquires guided desired information including a desired guide location, which is a location to which the user of the second device wishes to be guided.
  • likewise, when the desired information acquisition unit 57 receives request information in which the desired type is a person to be guided from a user of the in-vehicle device 10, it functions as a unit that acquires guided desired information, including a desired guide location, from the user of the first device.
  • the desired information acquisition unit 57 also functions as a guideable information acquisition unit that acquires guideable information including whether or not the user of the second device can provide guidance and, if so, a guideable location, which is a location where guidance is possible, by acquiring, for example, registration information of the user of the external device 30.
  • the desired information may further include information indicating the timing when video communication is possible, such as information indicating a standby state, as described above.
  • the desired information is used to match the user of the in-vehicle device 10 with the user of the external device 30.
  • when the control unit 55 acquires the request information, it executes processing as a matching unit 59 that matches the user of the in-vehicle device 10 with the user of the external device 30 based on the desired information included in the acquired request information and the desired information included in the registration information. In other words, the control unit 55 has a matching unit 59 as a functional unit.
  • the matching unit 59 compares the desired location to be guided to and the locations to which guidance can be provided between the requester and each of the users of the in-vehicle devices 10 or the users of the external devices 30 (also referred to as registrants) who have registered registration information.
  • the matching unit 59 matches the users of the in-vehicle devices 10 and the users of the external devices 30 based on the comparison results.
  • specifically, among the registrants whose user type differs from the user type indicated by the request information, the matching unit 59 compares the desired guide location with the guideable locations for each of the registrants whose desired type differs from the desired type indicated by the request information.
  • the desired location of guidance and the location where guidance is available are information indicating a location or information indicating an area.
  • the location of guidance desired and the location where guidance is available are matched based on criteria such as the distance between locations, the distance between areas, the distance between locations and areas, or whether or not a location or area is included within an area.
  • matching may be performed based on whether the area indicating the desired guidance location (also referred to as the desired area) is included in the area indicating the guidable location (also referred to as the possible area), and whether the desired area and the possible area are within a specified distance (adjacent or nearby). Furthermore, if there are multiple desired areas, matching may be performed based on the number of areas in which the desired area and the guiding area overlap. For example, the matching result may be calculated as the degree of match between the desired guidance location and the guiding possible location, and expressed as a numerical value.
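  • A hedged sketch of the comparison criteria described above (containment, proximity, and accumulation over multiple areas) might look as follows; representing each location as a circular area, as well as the scoring weights and the distance threshold, are arbitrary illustrative choices.

```python
from math import hypot
from typing import List, Tuple

Area = Tuple[float, float, float]  # (center_x, center_y, radius) in metres

def degree_of_match(desired_areas: List[Area],
                    guideable_areas: List[Area],
                    nearby_threshold: float = 2000.0) -> int:
    """Sketch: score how well desired guide locations overlap guideable locations.
    Containment scores higher than mere proximity; scores accumulate over areas."""
    score = 0
    for dx, dy, dr in desired_areas:
        best = 0
        for gx, gy, gr in guideable_areas:
            dist = hypot(dx - gx, dy - gy)
            if dist + dr <= gr:                        # desired area lies inside the guideable area
                best = max(best, 2)
            elif dist <= dr + gr + nearby_threshold:   # adjacent or nearby areas
                best = max(best, 1)
        score += best
    return score

# Area AR around X temple versus a driver whose guideable location covers Kyoto City.
area_ar = (1200.0, 800.0, 500.0)
kyoto_city = (1000.0, 1000.0, 5000.0)
print(degree_of_match([area_ar], [kyoto_city]))  # 2 (contained)
```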
  • FIG. 9 shows a case where an area AR including XX Temple T exists as a major stopover on a route R along which an automobile M is scheduled to travel from a point P to a point Q.
  • In this example, the driver of the automobile M (referred to as user A), who is a user of the in-vehicle device 10, plans to travel from point P, where the driver lives, to point Q, where the driver works.
  • User A's desired type is assumed to be "guide," and user A has already registered registration information with "Kyoto City" as a guidable location.
  • Based on the route R, the area AR including XX Temple is identified as a major stopover and added to the registration information.
  • When a user of the external device 30 (referred to as user B) sends request information specifying the desired type as "person to be guided" and the desired guidance location as "XX Temple in Kyoto City," the area AR including XX Temple T is identified as the desired guidance location.
  • The matching unit 59 compares the location to which user B wishes to be guided with the guidable locations of each registrant whose user type is "driver" and whose desired type is "guide," that is, each of the registrants including user A.
  • In this example, the comparison shows that the area AR, which is the location to which user B wishes to be guided, matches one of user A's guidable locations.
  • Note that familiar locations do not need to be preregistered as guidable locations; in that case, the current location or major waypoints can be used for the comparison.
  • When the matching unit 59 calculates a degree of match as the comparison result, it assigns a priority to each registrant for whom a degree of match was calculated, in order of the degree of match, and extracts a predetermined number of registrants, starting from the highest priority, as candidates for the requester's video communication partner.
  • The matching unit 59 then determines the requester's video communication partner from among the extracted candidates. For example, the matching unit 59 notifies the registrant with the highest priority among the extracted candidates, and if that registrant consents, determines that registrant as the requester's video communication partner. In this manner, a user of the in-vehicle device 10 and a user of the external device 30 are matched.
  • For example, a registrant whose familiar locations match the current location or major waypoints may be given a higher priority and thus be matched more easily. Conversely, a driver for whom no familiar area is set as a guidable location may be given a lower priority and thus be matched less easily.
  • In addition, if a registrant is in a standby state, that is, registered as currently available for video communication, that registrant may be prioritized in the matching, as in the sketch below.
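As a minimal sketch of this prioritization, assuming each candidate already carries a numeric match score and a standby flag (the dictionary keys and the rule that standby always outranks the score are assumptions for illustration):

    def prioritize_candidates(candidates, top_n=5):
        """Order candidates so that standby registrants come first and, within each group,
        higher degree-of-match scores come first; then keep the top_n candidates."""
        ordered = sorted(candidates, key=lambda c: (c["standby"], c["score"]), reverse=True)
        return ordered[:top_n]

    # Example: driver B is in a standby state and is therefore ranked first.
    candidates = [
        {"name": "driver A", "score": 0.9, "standby": False},
        {"name": "driver B", "score": 0.7, "standby": True},
        {"name": "driver C", "score": 0.4, "standby": False},
    ]
    print(prioritize_candidates(candidates, top_n=2))   # driver B, then driver A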
  • When the matching unit 59 has matched a user of the in-vehicle device 10 with a user of the external device 30, the control unit 55 establishes video communication between the matched in-vehicle device 10 and external device 30.
  • In doing so, the control unit 55 functions as a communication unit that performs audio communication between the matched in-vehicle device 10 and external device 30.
  • The following describes the communication sequences between the devices in the information processing system 100 and the control routines executed by the server 50.
  • [When a virtual passenger is the requester] FIG. 10 is a sequence diagram showing an example of a sequence executed when a user of the external device 30 becomes the requester in the information processing system 100 of this embodiment. In this sequence, it is assumed that a plurality of registrants have completed registration of the registration information as described above.
  • First, the external device 30 accepts an input operation of request information from the user (step S11), and the request information is transmitted to the server 50 (step S12).
  • When the server 50 receives the request information, it performs a matching process (step S13) to match the user of the external device 30 with one of the multiple registrants who are users of in-vehicle devices 10.
  • When the server 50 has completed the matching process, it transmits to the matched in-vehicle device 10 a ride-request notification indicating that the user of the external device 30 wishes to virtually ride in the automobile M via video communication (step S14).
  • When the in-vehicle device 10 receives the ride-request notification, it accepts the consent input operation by the driver of the automobile M (step S15) and transmits a consent notification to the server 50 (step S16).
  • When the server 50 receives the consent notification, it transmits a URL for starting video communication to the external device 30 (step S17). When the user of the external device 30 taps the URL and the external device 30 accesses it (step S18), the server 50 establishes a voice call between the in-vehicle device 10 and the external device 30 and starts video communication in which the video captured in the automobile M is distributed from the in-vehicle device 10 to the external device 30 (step S19).
  • Note that the video communication may be configured to deliver only video, without conducting a voice call, at least at the start. A schematic sketch of this exchange is given below.
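The server-side handling of steps S11 to S19 can be pictured roughly as follows. This is a schematic sketch only; the server object and every method name on it (receive_request, matching_process, send_ride_request, and so on) are hypothetical stand-ins for the processing described above, not an actual API.

    def run_virtual_passenger_sequence(server, external_device_user):
        """Schematic trace of the FIG. 10 sequence (requester = user of the external device 30)."""
        request_info = server.receive_request(external_device_user)            # S11-S12
        in_vehicle_device = server.matching_process(request_info)              # S13
        if in_vehicle_device is None:
            return False
        server.send_ride_request(in_vehicle_device, request_info)              # S14: ride-request notification
        if not server.wait_for_consent(in_vehicle_device):                     # S15-S16: driver's consent
            return False
        url = server.issue_video_session_url(in_vehicle_device, external_device_user)
        server.send_url(external_device_user, url)                             # S17
        server.wait_for_url_access(external_device_user)                       # S18
        server.establish_voice_call(in_vehicle_device, external_device_user)   # S19: voice call and
        server.start_video_distribution(in_vehicle_device, external_device_user)  # video distribution
        return True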
  • In this sequence, the desired type of the user of the external device 30 is "person to be guided," and one of the motivations for sending the request information is preliminary research for a trip.
  • For example, a user of the external device 30 who is planning a trip can receive guidance about a place to which the user wishes to be guided from a driver who is familiar with the area.
  • The user can thus do preliminary research for the trip by receiving information about that place through the voice call of the video communication or by watching the distributed video.
  • Alternatively, the motivation for sending request information may be the utilization of guidance skills or of spare time.
  • For example, when the desired type is "guide," the guidance skills of the user of the external device 30 for a particular place can be actively utilized.
  • Also, if the request information includes, as part of the desired information, information indicating the timing at which guidance can be provided, spare time can be used effectively.
  • In this way, the user of the external device 30 is also motivated to become a requester as a person who can provide guidance.
  • FIG. 11 is a sequence diagram showing an example of a sequence executed when the user of the in-vehicle device 10 becomes the requester in the information processing system 100 of this embodiment.
  • First, the in-vehicle device 10 accepts an input operation of request information from the user (step S21); for example, a voice input operation such as "Find a communication partner who can guide me around Kyoto City" is accepted. The request information is then transmitted to the server 50 (step S22).
  • When the server 50 receives the request information, it performs a matching process (step S23) to match the user of the in-vehicle device 10 with one of the multiple registrants who are users of external devices 30.
  • When the server 50 has completed the matching process, it sends a ride-request notification to the matched external device 30, requesting that the registrant virtually ride in the automobile M via video communication (step S24).
  • When the external device 30 receives the ride-request notification, it accepts the acceptance input operation by the user of the external device 30 (step S25) and sends an acceptance notification to the server 50 (step S26).
  • When the server 50 receives the acceptance notification, it transmits the acceptance notification to the in-vehicle device 10 (step S27) and transmits a URL for starting video communication to the external device 30 (step S28).
  • When the user of the external device 30 taps the URL and the external device 30 accesses it (step S29), the server 50 establishes a voice call between the in-vehicle device 10 and the external device 30 and starts video communication in which the video captured in the automobile M is distributed from the in-vehicle device 10 to the external device 30 (step S30).
  • In this sequence, the desired type of the user of the in-vehicle device 10 is "person to be guided."
  • One of the motivations for sending the request information is to relieve anxiety about driving in an unfamiliar place.
  • Other motivations may include adding value to everyday travel or relieving boredom or drowsiness.
  • [Control routine] FIG. 12 is a flowchart showing a control routine RT1, which is an example of a control routine executed by the control unit 55 of the server 50 when matching a user of the in-vehicle device 10 with a user of the external device 30 and performing video communication between the matched in-vehicle device 10 and external device 30. For example, the control unit 55 repeatedly executes the control routine RT1 while the server 50 is powered on.
  • In step S101, the control unit 55 determines whether or not request information is present.
  • In step S101, for example, the control unit 55 determines whether the desired information acquisition unit 57 has received request information that includes the requester's desired information and information indicating a request to find a video communication partner.
  • If it is determined in step S101 that no request information exists (step S101: NO), the control unit 55 ends the control routine RT1.
  • If it is determined in step S101 that request information exists (step S101: YES), the control unit 55 acquires the request information (step S102). In step S102, for example, the control unit 55 supplies the request information received by the desired information acquisition unit 57 to the matching unit 59.
  • After executing step S102, the control unit 55 causes the matching unit 59 to execute a partner candidate extraction subroutine to extract registrants who can be candidates for the requester's video communication partner (step S103).
  • The control unit 55 then determines whether one or more registrants have been extracted in step S103 (step S104).
  • If it is determined in step S104 that no registrant has been extracted (step S104: NO), the control unit 55 notifies the requester that there is no matching partner (step S105). In step S105, for example, information indicating that there is no matching partner is transmitted to the requester's in-vehicle device 10 or external device 30.
  • If it is determined in step S104 that one or more registrants have been extracted (step S104: YES), the control unit 55 executes a communication partner determination subroutine to determine one registrant as the requester's communication partner (step S106).
  • In step S106, for example, the control unit 55 causes the matching unit 59 to execute the communication partner determination subroutine.
  • In step S106, for example, notifications requesting consent to video communication with the requester are sent to the extracted registrants in descending order of the score calculated in step S103.
  • In step S107, the control unit 55 determines whether or not a communication partner has been determined. In step S107, for example, if consent to the video communication was not obtained from any of the registrants in step S106, it is determined that a communication partner has not been determined.
  • If it is determined in step S107 that a communication partner has not been determined (step S107: NO), the control unit 55 proceeds to step S105 and notifies the requester that there is no matching partner.
  • If it is determined in step S107 that a communication partner has been determined (step S107: YES), the control unit 55 starts video communication between the in-vehicle device 10 and the external device 30 of the requester who sent the request information and the determined registrant (step S108).
  • After executing step S108 or step S105, the control unit 55 ends the control routine RT1. A schematic sketch of this routine is given below.
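Control routine RT1 can be summarized, purely as an illustrative sketch under the assumption of hypothetical helper methods (get_request_info, extract_partner_candidates, determine_partner, notify_no_match, start_video_communication), as follows:

    def control_routine_rt1(server):
        """Schematic version of control routine RT1 (steps S101-S108)."""
        request_info = server.get_request_info()                       # S101/S102: request information present?
        if request_info is None:                                       # S101: NO
            return
        candidates = server.extract_partner_candidates(request_info)   # S103: subroutine RT2
        if not candidates:                                              # S104: NO
            server.notify_no_match(request_info.requester)              # S105
            return
        partner = server.determine_partner(request_info, candidates)    # S106: subroutine RT3
        if partner is None:                                              # S107: NO
            server.notify_no_match(request_info.requester)               # S105
            return
        server.start_video_communication(request_info.requester, partner)   # S108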
  • In this manner, the matching unit 59 executes a matching step of matching a user of the first device with a user of the second device based on the desired information.
  • The control unit 55 also functions as a communication unit that executes a step (communication step) of performing voice communication between the matched in-vehicle device 10 and external device 30.
  • In other words, the control unit 55 functions as a communication unit that performs voice communication between one of the extracted one or more in-vehicle devices 10 and one external device 30.
  • Likewise, the control unit 55 functions as a communication unit that performs voice communication between one of the extracted one or more external devices 30 and one in-vehicle device 10.
  • FIG. 13 is a flowchart showing a partner candidate extraction subroutine RT2, which is an example of a routine executed by the matching unit 59 in step S103 of the control routine RT1.
  • When the matching unit 59 starts the partner candidate extraction subroutine RT2, it first extracts registrants whose user type differs from the requester's user type (step S201).
  • In step S201, the matching unit 59 reads out, from the registration information holding unit 51A of the memory unit 51, the registration information of registrants whose user type differs from the user type indicated by the request information. For example, if the requester's user type is "passenger," indicating a user of an external device 30, the matching unit 59 reads out registration information whose user type is "driver," indicating a user of an in-vehicle device 10.
  • After executing step S201, the matching unit 59 extracts, from among the registrants extracted in step S201, registrants whose desired type differs from the requester's desired type (step S202).
  • In step S202, for example, if the desired type indicated by the request information is "person to be guided," registration information whose desired type is "guide" is extracted.
  • Next, the matching unit 59 calculates the degree of match between the desired guidance location and the guidable locations for the requester and each of the one or more registrants extracted in step S202 (step S203).
  • In other words, in step S203 the matching unit 59 calculates the degree of match between the desired guidance location and the guidable locations between the user of one in-vehicle device 10 and each of the users of multiple external devices 30, or between the user of one external device 30 and each of the users of multiple in-vehicle devices 10.
  • In step S203, for example, as described above, the degree of match is calculated based on the distance between the point or area indicating the desired guidance location and the point or area indicating the guidable location.
  • Next, a priority is assigned to each of the one or more registrants based on the degree of match calculated in step S203 (step S204).
  • The priority is assigned in descending order of the degree of match. If there is no difference in the degree of match, the priority may be assigned based on other parameters; for example, if the request information includes the requester's requirements regarding the level of guidance skills or driving skills, the priority may be assigned in descending order of how well the corresponding items in the registration information satisfy those requirements.
  • In steps S203 and S204, the matching unit 59 thus generates multiple combinations of the requester and each of the extracted registrants, by combining a user of one in-vehicle device 10 with each of the users of multiple external devices 30, or a user of one external device 30 with each of the users of multiple in-vehicle devices 10, and prioritizes the multiple combinations according to the result of comparing the desired guidance location with the guidable locations.
  • Next, the matching unit 59 extracts a predetermined number of registrants based on the priority assigned in step S204 (step S205).
  • In step S205, for example, about three to five registrants are extracted.
  • In step S205, for example, a threshold for the degree of match may be set, and registrants whose degree of match exceeds the threshold may be extracted.
  • After executing step S205, the matching unit 59 ends the partner candidate extraction subroutine RT2. A schematic sketch of this subroutine is given below.
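One possible reading of subroutine RT2 (steps S201 to S205) is sketched below in Python; the Participant fields, the degree_of_match callback, and the cutoff of five candidates are illustrative assumptions, not the claimed implementation.

    from dataclasses import dataclass, field

    @dataclass
    class Participant:
        """Used for both the requester and the registrants in this sketch."""
        user_type: str                    # "driver" or "passenger"
        desired_type: str                 # "guide" or "person to be guided"
        desired_areas: list = field(default_factory=list)
        guidable_areas: list = field(default_factory=list)

    def extract_partner_candidates(requester, registrants, degree_of_match, max_candidates=5):
        """Schematic version of subroutine RT2 (steps S201-S205)."""
        # S201: keep registrants whose user type differs from the requester's user type.
        by_user_type = [r for r in registrants if r.user_type != requester.user_type]
        # S202: of those, keep registrants whose desired type differs from the requester's desired type.
        by_desired_type = [r for r in by_user_type if r.desired_type != requester.desired_type]
        # S203: degree of match between the desired guidance location and the guidable locations.
        scored = []
        for r in by_desired_type:
            if requester.desired_type == "person to be guided":
                score = degree_of_match(requester.desired_areas, r.guidable_areas)
            else:
                score = degree_of_match(r.desired_areas, requester.guidable_areas)
            scored.append((score, r))
        # S204: priority in descending order of the degree of match.
        scored.sort(key=lambda pair: pair[0], reverse=True)
        # S205: extract a predetermined number (for example, three to five) of registrants.
        return [r for score, r in scored[:max_candidates] if score > 0.0]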
  • In this manner, the matching unit 59 functions as an extraction unit that extracts one or more in-vehicle devices 10 based on the desired guidance information of the user of the external device 30 and the current position of the in-vehicle device 10 or the route set in the automobile M.
  • The matching unit 59 also functions as an extraction unit that extracts one or more external devices 30 based on the desired guidance information of the user of the in-vehicle device 10 and guidance availability information including whether the user of the external device 30 can provide guidance and, if so, the guidable locations.
  • Note that the matching unit 59 may exclude certain users from extraction as video communication partner candidates. For example, a user of an in-vehicle device 10 whose current position is within a specified distance (for example, within 1 km, or within a few minutes' travel) of the location to which the user of the external device 30 wishes to be guided may be excluded from the matching targets. This makes it possible to prevent, for example, a situation in which the destination is reached immediately after video communication has begun and the communication ends with little guidance having been given.
  • Similarly, the matching unit 59 may refrain from matching a user of the in-vehicle device 10 with a user of the external device 30 if the current position of the in-vehicle device 10 and the current position of the external device 30 are within a predetermined distance (for example, within 1 km) of each other.
  • For example, the matching unit 59 may obtain the current position of each external device 30 when extracting video communication partner candidates for a user of one in-vehicle device 10 from among multiple external devices 30; if that current position is too close to, for example, the current position of the automobile M, or to a major waypoint registered as a guidable location or a desired guidance location, the user of that external device 30 may be excluded from the candidates. This can, for example, prevent the automobile M from being identified by the user of the external device 30 and thus protect the privacy of the user of the in-vehicle device 10.
  • In addition, control may be performed to end the call if the current position of the external device 30 and the current position of the automobile M come within a predetermined distance of each other during video communication.
  • The requester may also be allowed to choose whether or not registrants who meet such conditions are excluded from the matching targets. A minimal sketch of such proximity-based exclusion is given below.
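A minimal sketch of the proximity-based exclusion described above is given below; the 1 km threshold follows the example in the text, while the dictionary keys and the simple distance approximation are assumptions made for illustration.

    import math

    def approx_km(p, q):
        """Approximate distance in km between two (lat, lon) points (equirectangular)."""
        dx = (p[1] - q[1]) * 111.0 * math.cos(math.radians((p[0] + q[0]) / 2))
        dy = (p[0] - q[0]) * 111.0
        return math.hypot(dx, dy)

    def exclude_too_close(candidates, desired_location, requester_location, min_km=1.0):
        """Drop candidates whose current position is within min_km of the desired guidance
        location or of the requester's current position.

        Each candidate is a dict such as {"name": ..., "position": (lat, lon)}.
        """
        kept = []
        for c in candidates:
            near_goal = approx_km(c["position"], desired_location) < min_km
            near_requester = approx_km(c["position"], requester_location) < min_km
            if not (near_goal or near_requester):
                kept.append(c)
        return kept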
  • FIG. 14 is a flowchart showing a communication partner determination subroutine RT3, which is an example of a routine executed by the matching unit 59 in step S106 of the control routine RT1.
  • When the matching unit 59 starts the communication partner determination subroutine RT3, it first sends a video communication request notification to the registrant with the highest priority among the registrants extracted in the partner candidate extraction subroutine RT2 (step S301).
  • In step S301, for example, if the requester is a user of the external device 30, a ride-request notification is sent to the user of the in-vehicle device 10 with the highest priority; if the requester is a user of the in-vehicle device 10, a ride-request notification is sent to the user of the external device 30 with the highest priority.
  • Next, the matching unit 59 determines whether the video communication has been accepted (step S302).
  • In step S302, for example, if the server 50 receives a response indicating acceptance of the notification sent in step S301, it is determined that the video communication has been accepted.
  • Conversely, if a response rejecting the video communication is received, or if no response indicating acceptance is received within a predetermined time, it is determined that the video communication has not been accepted.
  • If it is determined in step S302 that the video communication has been accepted (step S302: YES), the matching unit 59 determines the registrant who accepted the request as the requester's communication partner (step S303). In step S303, the user of one in-vehicle device 10 or the user of one external device 30 is determined as the communication partner.
  • If it is determined in step S302 that the video communication has not been accepted (step S302: NO), the matching unit 59 determines whether there is a registrant with the next highest priority among the registrants extracted in the partner candidate extraction subroutine RT2 (step S304).
  • If it is determined in step S304 that there is no registrant with the next highest priority (step S304: NO), the matching unit 59 ends the communication partner determination subroutine RT3 without determining a communication partner.
  • If it is determined in step S304 that a registrant with the next highest priority exists (step S304: YES), the matching unit 59 sends a video communication request notification to that registrant (step S305).
  • After executing step S305, the matching unit 59 returns to step S302 and determines whether the video communication has been accepted.
  • In this way, the routine sends video communication requests in descending order of priority, and the first registrant who accepts the request is determined as the communication partner. If none of the registrants accepts the request, no communication partner is determined, and, as described above, the requester is notified in step S105 of the control routine RT1 that there is no matching partner. A schematic sketch of this loop is given below.
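The consent loop of subroutine RT3 reduces to the following small sketch; request_consent is a hypothetical callback that sends the notification and returns True only if acceptance is received within the time limit.

    def determine_communication_partner(candidates, request_consent):
        """Schematic version of subroutine RT3 (steps S301-S305).

        candidates must already be ordered by descending priority;
        request_consent(candidate) returns True if that registrant accepts.
        """
        for candidate in candidates:          # S301 / S305: notify in priority order
            if request_consent(candidate):    # S302: accepted?
                return candidate              # S303: this registrant becomes the partner
        return None                           # S304: NO for all, so no partner is determined

    # Example with a stub that accepts only registrants marked as willing:
    partner = determine_communication_partner(
        [{"name": "driver A", "willing": False}, {"name": "driver B", "willing": True}],
        request_consent=lambda c: c["willing"],
    )
    print(partner)   # {'name': 'driver B', 'willing': True}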
  • In other words, the matching unit 59 notifies the user of the in-vehicle device 10 or the user of the external device 30, in descending order of priority, of at least some of the multiple combinations generated in steps S203 and S204 of the partner candidate extraction subroutine RT2.
  • Alternatively, the matching unit 59 may simply determine, for example, the registrant with the highest priority as the requester's communication partner.
  • The requester may also be allowed to select a communication partner from the one or more registrants extracted in the partner candidate extraction subroutine RT2.
  • As described above, the information processing device of the present invention is an information processing device that processes information regarding voice communication between the in-vehicle device 10, as a first device that moves with the automobile M as a mobile body, and the external device 30, as a second device that is a terminal outside the mobile body. It has a desired information acquisition unit that acquires multiple pieces of desired information, each including user type information indicating whether the user is a user of the first device or a user of the second device and a desired type indicating whether the user is a person who wishes to be guided or a guide who can provide guidance, and further including a desired guidance location if the desired type is "person to be guided" or a guidable location if the desired type is "guide"; a matching unit that matches users of the first device with users of the second device based on the desired information; and a communication unit that performs voice communication between the matched first device and second device.
  • Because the desired information includes a desired type, a user whose desired type is "guide" can be matched, based on the desired information, with a user whose desired type is "person to be guided." Therefore, through the voice communication between the matched first device and second device, a user who wishes to be guided can receive guidance from a user who is able to provide it, and both the user of the first device and the user of the second device can achieve their goals through the voice communication.
  • It is therefore possible to provide an information processing device, an information processing method, an information processing program, and a storage medium that can match a driver of a mobile body and a user outside the mobile body with partners suited to each other when they make a call.
  • The information processing device of this embodiment also has a desired information acquisition unit that acquires, from a user of the second device, desired guidance information including a desired guidance location, that is, a location to which the user wishes to be guided; an extraction unit that extracts one or more first devices based on the desired guidance information and the current position of the first device or a route set in the mobile body; and a communication unit that performs voice communication between one of the extracted one or more first devices and the second device.
  • With this configuration, an in-vehicle device 10 mounted in an automobile M traveling along a route that includes a location to which the user of the external device 30 wishes to be guided can be extracted.
  • The user of the external device 30 can then receive guidance about the desired location, for example through voice communication with the user of one of the extracted in-vehicle devices 10.
  • Similarly, the information processing device of this embodiment has a desired information acquisition unit that acquires, from a user of the first device, desired guidance information including a desired guidance location; a guidance availability information acquisition unit that acquires guidance availability information including whether the user of the second device can provide guidance and, if so, the guidable locations; an extraction unit that extracts one or more second devices based on the desired guidance information and the guidance availability information; and a communication unit that performs voice communication between one of the extracted one or more second devices and the first device.
  • With this configuration, an external device 30 used by a user capable of providing guidance about the location to which the user of the in-vehicle device 10 wishes to be guided can be extracted.
  • The user of the in-vehicle device 10 can then receive guidance about the desired location, for example through voice communication with the user of one of the extracted external devices 30.
  • The configurations, routines, and the like of the in-vehicle device 10, the server 50, and the external device 30 in the above-described embodiment are merely examples and can be selected or changed as appropriate depending on the application.
  • For example, in the above embodiment the registration information is stored in the registration information holding unit of the server 50, but this is not a limitation.
  • The information stored in the registration information holding unit may instead be stored in an external server separate from the server 50.
  • Also, although the above embodiment has been described using video communication as an example, the present invention is not limited to this.
  • For example, the present invention is applicable to a case where only a voice call is made between the external device 30 and the in-vehicle device 10 in the information processing system 100.
  • In other words, the present invention is also applicable to communication in a form in which no video is distributed from the in-vehicle device 10.
  • Conversely, only video may be distributed at first, and the voice call may be started later.
  • In the above embodiment, the in-vehicle device 10 and the external device 30 perform video communication via the server 50, but the video communication may instead be performed directly between the in-vehicle device 10 and the external device 30.
  • Also, a voice call may be established in parallel between the in-vehicle device 10 and the external device 30 via a path separate from the information processing system 100.
  • The automobile M may also be equipped with a camera that captures the side or rear of the automobile M, or with a 360-degree camera, and video captured by these cameras may be distributed during video communication.
  • In the above embodiment, the in-vehicle device 10 is an in-vehicle navigation device, but the in-vehicle device 10 does not need to have a navigation function.
  • For example, the in-vehicle device 10 may transmit current position information of the automobile M to the server 50, and the current location may be registered as a guidable location, or major waypoints may be predicted and registered based on changes in the current position.
  • The in-vehicle device 10 may also be configured by combining a terminal device having a configuration similar to that of the in-vehicle device 10 with the exterior camera 13 and the touch panel 15.
  • For example, the in-vehicle device 10 may be a terminal device such as a smartphone, a tablet, or a camera-equipped PC on which an application that performs the same functions as the in-vehicle device 10 is installed.
  • In that case, the in-vehicle device 10 may be mounted on the dashboard DB, for example by a cradle, so that its built-in camera can capture an image of the area in front of the automobile M through the windshield.
  • The in-vehicle device 10 may also be configured not to display a screen to the driver of the automobile M.
  • For example, the in-vehicle device 10 may have a configuration similar to that of a drive recorder and may be a device integrated with the exterior camera 13.
  • For example, the in-vehicle device 10 may be a device in which hardware that performs the above-described video communication functions of the in-vehicle device 10 is built into the housing of the exterior camera 13; in this case, the in-vehicle device 10 need not perform the various display outputs described above.
  • In the above embodiment, the external device 30 is described as a smartphone, but this is not a limitation.
  • It is sufficient that the external device 30 is a terminal device that the user of the external device 30 can use for video communication, that is, a device configured to display or present messages related to video communication, accept the operational inputs necessary for video communication, transmit and receive audio data, and receive and display video.
  • For example, the external device 30 may be a terminal device such as a tablet, a PC, or a wearable device.
  • In the above embodiment, the in-vehicle device 10 is mounted on the automobile M, but the in-vehicle device 10 may also be mounted on other moving bodies such as bicycles, motorcycles, and boats.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)

Abstract

One purpose of the present invention is to provide an information processing device, an information processing method, an information processing program, and a storage medium that make it possible for suitable partners to match with each other during a phone call between a driver of a moving body and a user outside the moving body. The information processing device performs information processing relating to audio communication between a first device that moves together with a moving body, and a second device which is a terminal outside the moving body, the information processing device comprising: a matching unit that performs matching between a user of the first device and a user of the second device on the basis of a plurality of desired information, including user type information indicating whether the user is the user of the first device or the user of the second device, desired type indicating a distinction between a guided person who desires to be guided and a guiding person who can guide, and a guiding-desired location of the guided person and a guiding-possible location of the guiding person; and a communication unit that performs the audio communication between the first device and the second device that have been matched.

Description

Information processing device, information processing method, information processing program, and storage medium

The present invention relates to an information processing device, an information processing method, an information processing program, and a storage medium, and, for example, to an information processing device, an information processing method, an information processing program, and a storage medium that processes information regarding a call between a person inside a mobile body and a person outside the mobile body.

Patent Document 1 discloses a vehicle information utilization system that references vehicle information transmitted from a mobile terminal mounted on the vehicle, searches for vehicles that are within a specified range from the user's current location, and displays information about the vehicles on the display of the user's terminal.

JP 2003-208462 A

For example, consider the case of virtual hitchhiking, in which a conversation takes place between the driver of a mobile body and the user of a terminal outside the mobile body without actually riding in the vehicle. In this case, it is preferable that the combination of the driver of the mobile body and the user of the terminal outside the mobile body is one in which both parties are satisfied with the content of the conversation. However, as in the above document, simply searching for vehicles traveling within a specified range from the perspective of the user of the terminal outside the mobile body may not result in a good combination depending on the purpose of the driver or user, which is one of the issues.

The present invention has been made in consideration of the above points, and one of its objectives is to provide an information processing device, an information processing method, an information processing program, and a storage medium that enable matching of a suitable partner when a driver of a mobile body and a user outside the mobile body make a call.

The invention described in claim 1 is an information processing device that processes information related to voice communication between a first device that moves with a mobile body and a second device that is a terminal outside the mobile body, and is characterized by having a desired information acquisition unit that acquires multiple desired information including user type information indicating whether the user is a user of the first device or a user of the second device, and a desired type indicating whether the user is a person who wishes to be guided or a guide who can provide guidance, and including a desired guide location where the user wishes to be guided if the desired type is the person who is guided, and a guideable location where the guide can provide guidance if the desired type is the guide, a matching unit that matches users of the first device with users of the second device based on the desired information, and a communication unit that performs the voice communication between the first device and the second device after the matching is performed.

The invention described in claim 11 is an information processing device that processes information regarding voice communication between a first device that moves with a mobile body and a second device that is a terminal outside the mobile body, and is characterized in having a desired information acquisition unit that acquires guided information including a guided location from a user of the second device, an extraction unit that extracts one or more of the first devices based on the guided information and the current location of the first device or a route set in the mobile body, and a communication unit that performs the voice communication between one of the extracted one or more first devices and the second device.

The invention described in claim 12 is an information processing device that processes information regarding voice communication between a first device that moves with a mobile body and a second device that is a terminal outside the mobile body, and is characterized in having a desired information acquisition unit that acquires guided desired information including a guided desired place from a user of the first device, a guideable information acquisition unit that acquires guideable information including whether or not the user of the second device is guideable and, if guideable, a guideable place that is a place where guidance is possible, an extraction unit that extracts one or more of the second devices based on the guided desired information and the guideable information, and a communication unit that performs the voice communication between the first device and one of the extracted one or more second devices.

The invention described in claim 13 is an information processing method executed by an information processing device that processes information regarding voice communication between a first device that moves with a mobile body and a second device that is a terminal outside the mobile body, and is characterized in that it includes a preference information acquisition step of acquiring multiple preference information including user type information indicating whether the user is a user of the first device or a user of the second device, and a preference type indicating whether the user is a person who wishes to be guided or a guide who can provide guidance, and including a preferred guidance location where the user wishes to be guided if the preference type is the person who is guided, and a guideable location where the guide can provide guidance if the preference type is the guide, a matching step of matching users of the first device and users of the second device based on the preference information, and a communication step of performing the voice communication between the first device and the second device after the matching is performed.

The invention described in claim 14 is an information processing program executed by an information processing device having a computer and processing information regarding voice communication between a first device moving with a mobile body and a second device which is a terminal outside the mobile body, the information processing program causing the computer to execute a preference information acquisition step of acquiring a plurality of preference information including user type information indicating whether the user is a user of the first device or a user of the second device, and a preference type indicating whether the user is a person who wishes to be guided or a guide who can provide guidance, and including a preferred guidance location where the user wishes to be guided if the preference type is the person who is guided, and a guideable location where the guide can provide guidance if the preference type is the guide; a matching step of matching a user of the first device with a user of the second device based on the preference information; and a communication step of performing the voice communication between the first device and the second device after the matching.

The invention described in claim 15 is a computer-readable storage medium that stores an information processing program for causing an information processing device that includes a computer and processes information regarding voice communication between a first device that moves with a mobile body and a second device that is a terminal outside the mobile body to execute a preference information acquisition step of acquiring multiple preference information including user type information indicating whether the user is a user of the first device or a user of the second device, and a preference type indicating whether the user is a person who wishes to be guided or a guide who can provide guidance, and including a preferred guidance location where the user wishes to be guided if the preference type is the person who is guided, and a guideable location where the guide can provide guidance if the preference type is the guide; a matching step of matching the user of the first device with the user of the second device based on the preference information; and a communication step of performing the voice communication between the first device and the second device after the matching is performed.

FIG. 1 is a schematic diagram illustrating an overview of an information processing system according to an embodiment of the present invention. FIG. 2 is a diagram showing the configuration of the front seat portion of an automobile according to the embodiment. FIG. 3 is a block diagram showing an example of the configuration of an in-vehicle device according to the embodiment. FIG. 4 is a block diagram showing an example of the configuration of an external device according to the embodiment. FIG. 5 is a block diagram showing an example of the configuration of a server device according to the embodiment. FIG. 6 is a diagram showing an example of information about a driver (a user of the in-vehicle device) held by the server device according to the embodiment. FIG. 7 is a diagram showing an example of information about a virtual fellow passenger (a user of an external device) held by the server device according to the embodiment. FIG. 8 is a diagram illustrating an example of request information received by the server device according to the embodiment. FIG. 9 is a schematic diagram showing an example of matching between guidable locations and desired guidance locations when matching is performed in the information processing system according to the embodiment. FIG. 10 is a sequence diagram illustrating an example of information processing executed by the information processing system according to the embodiment. FIG. 11 is a sequence diagram illustrating an example of information processing executed by the information processing system according to the embodiment. FIG. 12 is a flowchart illustrating an example of a routine executed by the server device according to the embodiment. FIG. 13 is a flowchart illustrating an example of a subroutine executed by the server device according to the embodiment. FIG. 14 is a flowchart illustrating an example of a subroutine executed by the server device according to the embodiment.

 以下に本発明の実施例について詳細に説明する。なお、以下の説明及び添付図面においては、実質的に同一又は等価な部分には同一の参照符号を付している。 The following describes in detail an embodiment of the present invention. In the following description and accompanying drawings, the same reference symbols are used for substantially the same or equivalent parts.

 実施例に係る情報処理装置を含む情報処理システム100の構成について添付図面を参照しつつ説明する。 The configuration of an information processing system 100 including an information processing device according to an embodiment will be described with reference to the attached drawings.

 図1は、情報処理システム100の構成の概要を示している。図1に示すように、情報処理システム100は、車載装置10、外部装置30及び情報処理装置としてのサーバ50を含んで構成されている。なお、図1においては、車載装置10が移動体の一例としての自動車Mに搭載されている場合を示している。また、図1においては、外部装置30の一例として、スマートフォンを示している。 FIG. 1 shows an overview of the configuration of an information processing system 100. As shown in FIG. 1, the information processing system 100 includes an in-vehicle device 10, an external device 30, and a server 50 as an information processing device. Note that FIG. 1 shows a case in which the in-vehicle device 10 is mounted on an automobile M as an example of a moving body. Also, FIG. 1 shows a smartphone as an example of the external device 30.

 車載装置10、外部装置30及びサーバ50は、ネットワークNWを介して、例えば、TCP/IPや、UDP/IP等の通信プロトコルを用いて相互にデータの送受信が可能になっている。なお、ネットワークNWは、例えば、移動体通信網、Wi-Fi(登録商標)等の無線通信及び有線通信を含むインターネット通信により構築され得る。 The in-vehicle device 10, the external device 30, and the server 50 can transmit and receive data to and from each other via the network NW using communication protocols such as TCP/IP and UDP/IP. The network NW can be constructed, for example, by a mobile communication network, wireless communication such as Wi-Fi (registered trademark), and Internet communication including wired communication.

 情報処理システム100において、車載装置10によって取得された自動車Mにおける映像及び音声を、外部装置30に送信し、外部装置30によって取得された音声を車載装置10に送信する音声映像通信が可能である。 In the information processing system 100, audio-visual communication is possible in which video and audio captured by the in-vehicle device 10 in the automobile M are transmitted to the external device 30, and audio captured by the external device 30 is transmitted to the in-vehicle device 10.

 本実施例の情報処理システム100においては、車載装置10と外部装置30との間で音声通話が確立された上で、車載装置10から自動車Mにおいて撮影された映像が外部装置30に配信される。 In the information processing system 100 of this embodiment, a voice call is established between the in-car device 10 and the external device 30, and then the video captured in the automobile M is distributed from the in-car device 10 to the external device 30.

 以下の説明において、上記のように、車載装置10と外部装置30との間での音声通話を確立させつつ、自動車Mにおいて撮影された映像を車載装置10から外部装置30に配信する通信態様を音声映像通信の一態様としてのビデオ通信と称する。 In the following description, the communication mode in which a voice call is established between the in-car device 10 and the external device 30 while video captured in the automobile M is distributed from the in-car device 10 to the external device 30 as described above is referred to as video communication, which is one form of audio-video communication.

 このようなビデオ通信が行われることで、車載装置10から送信される映像を視聴している外部装置30のユーザは、あたかも自動車Mの運転者と自動車Mに同乗しているような感覚を得ることができる。言い換えれば、ビデオ通信によって、外部装置30のユーザの自動車Mへの仮想同乗を実現することができる。また、このようなビデオ通信を実現可能な本実施例の情報処理システム100のようなシステムを仮想同乗システムとも称する。 By performing such video communication, the user of the external device 30 watching the video transmitted from the in-vehicle device 10 can feel as if he or she is riding in the vehicle M with the driver of the vehicle M. In other words, video communication can realize a virtual ride with the user of the external device 30 in the vehicle M. In addition, a system such as the information processing system 100 of this embodiment that can realize such video communication is also called a virtual ride-along system.

 本実施例において、車載装置10と外部装置30とは、サーバ50を介してビデオ通信を行う。サーバ50は、車載装置10と外部装置30との間で音声通話を確立させ、かつ、自動車Mにおいて撮影され車載装置10によって取得された映像を車載装置10から受信し、当該映像を外部装置30に送信することが可能である。 In this embodiment, the in-car device 10 and the external device 30 perform video communication via the server 50. The server 50 establishes voice communication between the in-car device 10 and the external device 30, and is capable of receiving from the in-car device 10 images captured in the automobile M and acquired by the in-car device 10, and transmitting the images to the external device 30.

 例えば、車載装置10と外部装置30とのビデオ通信の際に、車載装置10のユーザと外部装置30のユーザとの間で、自動車Mが走行している場所についての案内がなされ得る。 For example, during video communication between the in-car device 10 and the external device 30, guidance regarding the location where the automobile M is traveling can be provided between the user of the in-car device 10 and the user of the external device 30.

 例えば、車載装置10のユーザである自動車Mの運転者は、慣れた場所を走行中に、外部装置30のユーザに、その場所の付近の観光情報その他の地域に関する情報を提供することで、その場所について案内をすることが可能である。この場合、当該外部装置30のユーザは、例えば、その場所についてよく知らず、例えばその場所への旅行を予定しているなど、案内をされることを希望する者であれば、お互いに目的を達成できる。また、外部装置30のユーザは、ビデオ通信によって、実際の走行映像を見ながら案内を受けることができる。 For example, while driving in a familiar location, the driver of automobile M, who is the user of in-vehicle device 10, can guide the user of external device 30 about the location by providing tourist information about the area and other information about the region. In this case, if the user of external device 30 is not familiar with the location and wishes to be guided, for example, by planning a trip to the location, both parties can achieve their goals. In addition, the user of external device 30 can receive guidance while watching actual driving footage through video communication.

 一方、外部装置30のユーザが案内をすることもできる。例えば、自動車Mの運転者が不慣れな場所を走行中に、その場所に詳しい外部装置30のユーザが、ビデオ通信を介して、その場所の付近の観光情報や、運転に注意が必要な場所、いつも渋滞している場所などの地域に関する情報を提供することで、その場所について案内をすることができる。 On the other hand, the user of the external device 30 can also provide guidance. For example, while the driver of the automobile M is driving in an unfamiliar place, the user of the external device 30 who is familiar with the place can provide guidance about the place via video communication by providing tourist information in the vicinity, information about the area such as places where caution is required when driving, and places where there are always traffic jams.

 本実施例では、上記のような、案内をする人と案内をされる人との組み合わせでビデオ通信が行われるように、車載装置10のユーザと、外部装置30のユーザとを組み合わせることが可能である。サーバ50は、マッチングを行って車載装置10のユーザと外部装置30のユーザとの組み合わせを決定し、決定した相手との間でビデオ通信を確立させることが可能である。 In this embodiment, it is possible to pair a user of the in-vehicle device 10 with a user of the external device 30 so that video communication can be conducted between a guide and a guided person as described above. The server 50 performs matching to determine a pairing of a user of the in-vehicle device 10 and a user of the external device 30, and can establish video communication between the determined parties.

 なお、運転者及び同乗者を含む自動車Mの乗員が車載装置10のユーザになり得る。本実施例においては、車載装置10のユーザが自動車Mの運転者である場合を中心に説明する。 Note that the occupants of the automobile M, including the driver and passengers, can be users of the in-vehicle device 10. In this embodiment, the explanation will be centered on the case where the user of the in-vehicle device 10 is the driver of the automobile M.

 以下、本実施例においては、車載装置10がカーナビゲーション装置である場合を例に説明する。また、本実施例においては、車載装置10が、ユーザが案内を希望する目的地をユーザから受け付け、当該目的地をサーバ50に送信し、サーバ50が目的地への経路を生成する、いわゆるクラウド型のカーナビゲーション装置の端末装置である場合を例に説明する。 In the following, in this embodiment, the in-vehicle device 10 is a car navigation device. In addition, in this embodiment, the in-vehicle device 10 is a terminal device of a so-called cloud-type car navigation device that receives a destination to which the user wishes to be guided from the user, transmits the destination to the server 50, and the server 50 generates a route to the destination.

 また、上記したように、本実施例において外部装置30をスマートフォンとして説明する。タッチパネル31は、例えば、映像を表示可能な液晶ディスプレイ等のディスプレイとタッチパッドとが組み合わされたタッチパネルモニターである。タッチパネル31は、ユーザから受け付けたタッチパネル31への入力操作を表す信号を生成することが可能である。 As described above, in this embodiment, the external device 30 is described as a smartphone. The touch panel 31 is, for example, a touch panel monitor that combines a display, such as a liquid crystal display capable of displaying images, with a touch pad. The touch panel 31 is capable of generating a signal that represents an input operation to the touch panel 31 received from the user.

 本実施例において、タッチパネル31には、ビデオ通信中に車載装置10から配信された映像が表示される。外部装置30のユーザは、ビデオ通信中、タッチパネル31に表示された映像を見ながら自動車Mの運転者と通話をすることが可能である。 In this embodiment, the touch panel 31 displays the video delivered from the in-vehicle device 10 during video communication. During video communication, the user of the external device 30 can talk to the driver of the automobile M while watching the video displayed on the touch panel 31.

 また、タッチパネル31には、ビデオ通信に関する情報が表示される。例えば、タッチパネル31には、外部装置30のユーザが、情報処理システム100においてビデオ通信を行うために必要な情報を登録する際の登録画面や、自動車Mの運転者との間でビデオ通信を行うことについて承諾するか否かの確認画面等の画面が表示される。外部装置30のユーザは、タッチパネル31への入力操作によって、ビデオ通信に関する情報の入力をすることが可能である。 In addition, information related to video communication is displayed on the touch panel 31. For example, the touch panel 31 displays a registration screen for the user of the external device 30 to register information necessary for video communication in the information processing system 100, a confirmation screen for whether or not to consent to video communication with the driver of the automobile M, and other screens. The user of the external device 30 can input information related to video communication by performing an input operation on the touch panel 31.

 スピーカ33は、音楽や音声等の音を発することが可能である。ビデオ通信時において、スピーカ33からは、音声通話における車載装置10からの音声が発せられる。 The speaker 33 is capable of emitting sounds such as music and voice. During video communication, the speaker 33 emits the sound from the in-car device 10 during a voice call.

 マイク35は、外部装置30に向けて発せられた音を受音するマイク装置である。ビデオ通信時において、マイク35によって収音された音声が音声通話の音声としてサーバ50を介して車載装置10に送信される。 The microphone 35 is a microphone device that receives sounds emitted toward the external device 30. During video communication, the sound picked up by the microphone 35 is transmitted to the in-car device 10 via the server 50 as the sound of the voice call.

FIG. 2 is a perspective view showing the vicinity of the front seats of the automobile M equipped with the in-vehicle device 10. As an installation example, FIG. 2 shows a case where the in-vehicle device 10 is installed in the dashboard DB at the front seats of the automobile M.

The GPS receiver 11 is a device that receives signals (GPS signals) from GPS (Global Positioning System) satellites. The GPS receiver 11 is arranged, for example, on the dashboard DB; it may be arranged anywhere as long as it can receive GPS signals. The GPS receiver 11 can transmit the received GPS signals to the in-vehicle device 10, and the in-vehicle device 10 uses the GPS signals to acquire current position information of the automobile M.

The exterior camera 13 is an imaging device that captures the area ahead of the automobile M. In this embodiment, the exterior camera 13 is arranged on the dashboard DB so that its imaging direction faces forward; for example, it can capture the area ahead of the automobile M through the windshield. The exterior camera 13 may instead be provided near the rearview mirror RM or attached to the inside of the windshield FG. During video communication, the video captured by the exterior camera 13 is delivered to the external device 30.

The interior camera 14 is an imaging device that captures the interior of the automobile M. In this embodiment, the interior camera 14 is provided at the upper end of the windshield FG or on the ceiling near that upper end. For example, the interior camera 14 can capture the driver of the automobile M, and it may be provided so as to capture the interior of the automobile M including the front passenger seat and the rear seats.

For example, during video communication, the video captured by the interior camera 14 is delivered to the external device 30 together with the video captured by the exterior camera 13. In the external device 30, the image from the exterior camera 13 and the image from the interior camera 14 may be displayed switchably according to the user's selection, or may be displayed side by side.

The touch panel 15 is, for example, a touch panel monitor in which a display capable of displaying video, such as a liquid crystal display, is combined with a touch pad. The touch panel 15 is arranged, for example, in the center console of the dashboard DB; it only needs to be placed where it is visible to the driver and within the driver's reach, and may, for example, be mounted on top of the dashboard DB.

The touch panel 15 can display screens under the control of the in-vehicle device 10, and can transmit to the in-vehicle device 10 a signal representing an input operation received from the user. For example, the touch panel 15 may display car navigation guidance, and operations related to the car navigation function, such as setting a destination, may be performed via the touch panel 15.

The touch panel 15 may also display information related to video communication, such as a screen for accepting an operation to request the server 50 to match the user with a user of an external device 30, or a screen asking whether video communication with the matched user of the external device 30 is acceptable. For example, by input operations on the touch panel 15, the user of the in-vehicle device 10 can enter the information required to request matching with a user of the external device 30 and choose whether to accept or decline an inquiry about video communication.

The speaker 17 is provided, for example, on the cabin side of the A-pillar AP. The speaker 17 can emit sounds such as music and voice under the control of the in-vehicle device 10. During video communication, the speaker 17 outputs the voice-call audio received from the external device 30.

When notifications related to video communication are given by voice, the speaker 17 also emits audio such as an inquiry asking whether video communication with the user of the external device 30 determined by matching is acceptable. A pre-recorded message in the voice of the user of the external device 30 may be played as part of that inquiry.

The microphone 19 is a microphone device that picks up sound inside the vehicle and is arranged, for example, on the dashboard DB. The microphone 19 may be provided anywhere, such as on the rearview mirror RM or the steering wheel, as long as it can pick up sound inside the vehicle. During video communication, the sound picked up by the microphone 19 is transmitted to the external device 30 as the audio of the voice call.

Operation inputs for requesting matching with a user of the external device 30, or for accepting or declining an inquiry about video communication, may also be made by voice via the microphone 19.

FIG. 3 is a block diagram showing the configuration of the in-vehicle device 10. For example, the in-vehicle device 10 is a device in which a storage unit 23, a control unit 25, and a communication unit 27 cooperate via a system bus.

The automobile M is also equipped with an acceleration sensor 21. The acceleration sensor 21 can measure the acceleration of the automobile M and output a signal indicating the measured acceleration. The acceleration sensor 21 can detect acceleration in the traveling direction of the automobile M as seen from above, that is, in the longitudinal direction, and can also detect, for example, acceleration in the lateral direction (width direction) perpendicular to the traveling direction. The in-vehicle device 10 may acquire current position information of the automobile M based on, for example, the acceleration indicated by the sensor signal of the acceleration sensor 21 in addition to the GPS signal from the GPS receiver 11.

The storage unit 23 is a storage device configured by, for example, a hard disk device, an SSD (solid state drive), or a flash memory. The storage unit 23 stores various programs executed in the in-vehicle device 10, such as an operating system and terminal software.

The various programs may be obtained, for example, from another server device or the like via a network, or may be recorded on a recording medium and read via a drive device. In other words, the various programs stored in the storage unit 23 can be transmitted via a network, and can also be recorded on a computer-readable recording medium and transferred.

The storage unit 23 also stores map information including road maps. The map information is used, for example, for car navigation guidance display.

The control unit 25 includes a CPU (Central Processing Unit), a ROM (Read Only Memory), a RAM (Random Access Memory), and the like, and functions as a computer. The control unit 25 realizes various functions by having the CPU read and execute the various programs stored in the ROM and the storage unit 23. In this embodiment, the control unit 25 provides the voice-call and video-delivery functions used during video communication, as well as the car navigation function.

The control unit 25 is communicably connected to the devices provided in the automobile M, namely the GPS receiver 11, the exterior camera 13, the interior camera 14, the touch panel 15, the speaker 17, the microphone 19, and the acceleration sensor 21. The control unit 25 acquires data from these devices and also supplies data to them.

Specifically, the control unit 25 sequentially acquires GPS signals from the GPS receiver 11 and signals indicating the acceleration measured by the acceleration sensor 21. The control unit 25 acquires current position information of the automobile M based on, for example, the GPS signals and the signals from the acceleration sensor.
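
One way the GPS fix and the acceleration signal could be combined is simple dead reckoning between fixes. The following is a minimal sketch under that assumption; the class, the method names, and the flat-earth degree conversion are illustrative only and are not taken from the embodiment.

```python
import time

class PositionEstimator:
    """Minimal dead-reckoning sketch: use a GPS fix when available and
    otherwise advance the last known position from accelerometer readings."""

    def __init__(self, lat: float, lon: float):
        self.lat, self.lon = lat, lon        # last known position (degrees)
        self.v_north = 0.0                   # velocity estimate, north (m/s)
        self.v_east = 0.0                    # velocity estimate, east (m/s)
        self.t_last = time.monotonic()

    def on_gps_fix(self, lat: float, lon: float) -> None:
        # A fresh GPS fix overrides the dead-reckoned estimate.
        self.lat, self.lon = lat, lon
        self.t_last = time.monotonic()

    def on_acceleration(self, a_north: float, a_east: float) -> None:
        # Integrate acceleration (m/s^2) into velocity, then into position.
        now = time.monotonic()
        dt, self.t_last = now - self.t_last, now
        self.v_north += a_north * dt
        self.v_east += a_east * dt
        self.lat += (self.v_north * dt) / 111_000.0   # ~111 km per degree of latitude
        self.lon += (self.v_east * dt) / 111_000.0    # longitude scaling simplified for brevity

    def current_position(self) -> tuple[float, float]:
        return self.lat, self.lon
```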

The control unit 25 sequentially acquires the video captured by the exterior camera 13 and the interior camera 14. The control unit 25 acquires signals representing input operations made on the touch panel 15 and supplies image data to be displayed on the touch panel 15. The control unit 25 also supplies audio data to the speaker 17 and sequentially acquires the sound in the automobile M picked up by the microphone 19.

The communication unit 27 is a communication device that transmits and receives data to and from external equipment in accordance with instructions from the control unit 25. The communication unit 27 is, for example, a NIC (Network Interface Card) for connecting to the network NW.

The communication unit 27 is connected to the network NW described above and exchanges various data with the server 50. The communication unit 27 also exchanges various data with the external device 30 via the server 50.

For example, the control unit 25 can transmit information including a destination input by the user of the in-vehicle device 10 to the server 50 via the communication unit 27, and can receive route information or navigation information for that destination from the server 50.

During video communication, the control unit 25 sequentially transmits the video captured by the exterior camera 13 or the interior camera 14 to the server 50 via the communication unit 27. Also during video communication, the control unit 25 receives, via the communication unit 27, audio data sent from the external device 30 through the server 50, and sequentially transmits the sound inside the automobile M picked up by the microphone 19 to the server 50.

For example, while the automobile M is traveling, the control unit 25 sequentially transmits to the server 50, via the communication unit 27, the current position information of the automobile M acquired based on the GPS signals and the signals from the acceleration sensor.

The control unit 25 transmits information related to video communication input by an occupant to the server 50 via the communication unit 27. For example, the control unit 25 transmits to the server 50 information input by the driver via the touch panel 15 or the microphone 19, such as information indicating a request for matching with a user of the external device 30.

The control unit 25 also receives information related to video communication from the server 50 via the communication unit 27. For example, the control unit 25 receives an image for displaying a screen such as an inquiry screen asking whether video communication with the user of the external device 30 determined by matching is acceptable, and supplies the received image to the touch panel 15. For example, the control unit 25 receives audio data such as an inquiry asking whether video communication with the matched user of the external device 30 is acceptable, and supplies the received audio data to the speaker 17.

FIG. 4 is a block diagram showing an example of the configuration of the external device 30. For example, the external device 30 is a device in which a storage unit 37, a control unit 39, and a communication unit 41 cooperate via a system bus (not shown).

The storage unit 37 is configured by, for example, a hard disk device, an SSD (solid state drive), or a flash memory, and stores various programs such as an operating system and software for the external device 30.

The various programs may be obtained, for example, from another server device or the like via a network, or may be recorded on a recording medium and read via a drive device. In other words, the various programs stored in the storage unit 37 can be transmitted via a network, and can also be recorded on a computer-readable recording medium and transferred.

The control unit 39 includes a CPU (Central Processing Unit), a ROM (Read Only Memory), a RAM (Random Access Memory), and the like, and functions as a computer. The CPU reads and executes the various programs stored in the ROM and the storage unit 37 to realize various functions.

The control unit 39 is communicably connected to the touch panel 31, the speaker 33, and the microphone 35 described above. The control unit 39 can receive signals indicating input operations on the touch panel 31 and audio input signals from the microphone 35. The control unit 39 can also transmit video or image signals to the touch panel 31 to display them, and transmit audio signals to the speaker 33 to output sound.

Specifically, the control unit 39 causes the touch panel 31 to display the video of the video communication, and causes the speaker 33 to output the audio of the video communication picked up in the automobile M.

The control unit 39 can also accept input operations related to video communication made by the user via the touch panel 31 or the microphone 35. For example, the control unit 39 can accept, via the touch panel 31 or the microphone 35, input of information indicating a request for matching with a user of the in-vehicle device 10, or information indicating whether to accept or decline video communication with the matched user of the in-vehicle device 10.

For example, the control unit 39 causes the touch panel 31 to display screens such as a screen for accepting an operation to request matching with a user of the in-vehicle device 10, or a screen asking whether video communication with the matched user of the in-vehicle device 10 is acceptable. The control unit 39 may also cause the speaker 33 to output audio such as an inquiry asking whether video communication with the matched user of the in-vehicle device 10 is acceptable.

The communication unit 41 is connected to the network NW described above and exchanges various data with the server 50. The communication unit 41 also exchanges, via the server 50, various data with the in-vehicle device 10, such as audio and route information transmitted from the in-vehicle device 10 and audio acquired by the external device 30.

The control unit 39 of the external device 30 can receive, via the communication unit 41, information for displaying screens such as the above-described screen asking whether video communication with the matched user of the in-vehicle device 10 is acceptable.

During video communication, the control unit 39 sequentially receives, via the communication unit 41, the images from the exterior camera 13 and the interior camera 14 transmitted from the in-vehicle device 10 through the server 50.

The control unit 39 can also transmit, via the communication unit 41, audio data of the sound picked up by the microphone 35 to the in-vehicle device 10 for the voice call of the video communication, and can receive, via the communication unit 41, the audio data transmitted from the in-vehicle device 10 for that voice call.

For example, the control unit 39 of the external device 30 can receive, via the communication unit 41 and through the server 50, the current position information of the automobile M transmitted from the in-vehicle device 10. The control unit 39 can also receive route information or navigation information of the automobile M from the server 50 via the communication unit 41. For example, during video communication, the navigation information of the automobile M may also be displayed on the external device 30.

FIG. 5 is a block diagram showing the configuration of the server 50. For example, the server 50 is a device in which a storage unit 51, a communication unit 53, and a control unit 55 cooperate via a system bus.

As described above, during video communication the server 50 sequentially receives from the in-vehicle device 10 the video captured by the exterior camera 13 or the interior camera 14 and acquired by the in-vehicle device 10, and sequentially transmits the received video to the external device 30.

The server 50 also has a function like that of a SIP server, establishing a voice call between the in-vehicle device 10 and the external device 30 during video communication and transferring the data of that voice call.

The server 50 also has a function of receiving from the in-vehicle device 10 the current position information of the automobile M and information on a destination set by a user who is an occupant of the automobile M, and generating a route to the destination based on the current position information and the destination information.

Furthermore, the server 50 has a function of matching a user of the in-vehicle device 10 with a user of the external device 30 and carrying out video communication between the matched user of the in-vehicle device 10 and user of the external device 30.

In this embodiment, a user of the in-vehicle device 10 or a user of the external device 30 who wishes to find a partner for video communication is simply referred to as a requester, without distinction. For example, when the requester is a user of the in-vehicle device 10, the server 50 matches the requester with users of the plurality of external devices 30 who have registered with the server 50 in advance.

The storage unit 51 is configured by, for example, a hard disk device, an SSD (solid state drive), or the like, and stores various programs such as an operating system and software for the server 50.

The various programs may be obtained, for example, from another server device or the like via a network, or may be recorded on a recording medium and read via a drive device. In other words, the various programs stored in the storage unit 51 can be transmitted via a network, and can also be recorded on a computer-readable recording medium and transferred.

The storage unit 51 stores, for example, an information processing program by which the server 50 matches a user of the in-vehicle device 10 with a user of the external device 30 and carries out video communication between the matched users.

The storage unit 51 also has a registration information holding unit that holds registration information, that is, information that the users of the plurality of in-vehicle devices 10 and the users of the plurality of external devices 30 have registered with the server 50.

As an example, a case will be described in which the registration information is stored separately as registration information of users of the in-vehicle devices 10 and registration information of users of the external devices 30; however, the registration information may be stored together in a single database.

FIG. 6 shows data LD1, an example of the registration information of users of the in-vehicle devices 10. The registration information of a user of an in-vehicle device 10 is stored, for example, for each pair of an in-vehicle device ID that identifies each in-vehicle device 10 and a user ID that identifies each user of that in-vehicle device 10.

The registration information includes user type information indicating whether the user corresponding to each user ID is a user of an in-vehicle device 10 or a user of an external device 30. The registration information also includes a desired type, which indicates whether the user corresponding to each user ID is a "guided person" who wishes to be guided or a "guide" who can provide guidance.

As an example, in this embodiment the user type is "driver" for a user of an in-vehicle device 10 and "passenger" for a user of an external device 30.

Furthermore, when the desired type is guided person, the registration information includes a desired guidance location, which is information indicating a place for which guidance is desired. When the desired type is guide, the registration information includes a guideable location, which is information indicating a place for which that guide can provide guidance.

The guideable location and the desired guidance location may be, for example, information indicating a point, such as a position on a map or an address, or information indicating an area, such as a city name, a municipality, or "near XX". For example, a plurality of places may be registered as guideable locations or desired guidance locations.

As guideable locations, for example, places with which the user of the in-vehicle device 10 is familiar are registered in advance, such as the user's hometown, place of work, or everyday living area.

When the user of the in-vehicle device 10 is traveling in the automobile M, the current location of the in-vehicle device 10, that is, the current location of the automobile M, may also be set as a guideable location. For example, the server 50 may acquire the current position information of the automobile M from the in-vehicle device 10 and set it as a guideable location of the user of that in-vehicle device 10.

Also, for example, when a route is set in the in-vehicle device 10, major waypoints on that route may be registered as guideable locations. A major waypoint may be specified by the server 50 as a predetermined area including a guidance target point, that is, a point that can serve as a landmark during guidance, such as an intersection or a building.

In the example shown in FIG. 6, a predetermined area including "XX Temple" can be specified as a major waypoint. For example, the server 50 may receive the route information of the automobile M from the in-vehicle device 10, specify major waypoints based on the received route information, and set them as guideable locations.
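
As a rough illustration of how major waypoints might be picked out of a set route, the sketch below keeps every landmark that lies close to the route. The flat (x, y) coordinates, the radius, and the function name are assumptions made for brevity rather than details of the embodiment, which would work on the map information held by the server 50.

```python
import math

def major_waypoints(route_points: list[tuple[float, float]],
                    landmarks: dict[str, tuple[float, float]],
                    radius_m: float = 300.0) -> list[str]:
    """Keep landmarks (temples, intersections, ...) within radius_m of the route."""
    selected = []
    for name, (lx, ly) in landmarks.items():
        if any(math.hypot(lx - rx, ly - ry) <= radius_m for rx, ry in route_points):
            selected.append(name)
    return selected

# Example: a landmark near the route is reported as a major waypoint.
route = [(0.0, 0.0), (500.0, 0.0), (1000.0, 0.0)]
print(major_waypoints(route, {"XX Temple": (520.0, 100.0)}))  # ['XX Temple']
```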

The desired guidance location is set, for example, when the server 50 receives information indicating that a user of the in-vehicle device 10 whose desired type is guided person requests matching with a user of an external device 30. For example, information input by the user into the in-vehicle device 10, or a major waypoint or destination on a route set by the user, is set as the desired guidance location of that guided-person user of the in-vehicle device 10.

The desired guidance location is also registered by a user of the in-vehicle device 10 whose desired type is guided person when, for example, that user wishes to receive guidance in response to a request from a user of an external device 30 whose desired type is guide.

As with the guideable location, the desired guidance location may also include the current location or a major waypoint.

The registration information may further include, for example, information indicating when video communication is possible. In the example shown in FIG. 6, the registration information includes, as such information, information indicating the standby state of the in-vehicle device 10. For example, the standby state is "ON" when the in-vehicle device 10 is powered on. In addition, when the standby state is "ON", a scheduled standby time indicating until when video communication is possible may be registered.

When the desired type is guided person, the registration information of the user of the in-vehicle device 10 may include information indicating that a high level of guidance skill for the desired guidance location is required of the communication partner, for example information indicating whether a local person is preferred. When the desired type is guide, the registration information may include information indicating guidance skill for the guideable location, for example whether it is the user's hometown, and may include information indicating driving skill, such as driving history.

FIG. 7 shows data LD2, an example of the registration information of users of the external devices 30. The registration information of a user of an external device 30 is stored, for example, for each pair of an external device ID that identifies each external device 30 and a user ID that identifies each user of that external device 30.

Like the registration information of the users of the in-vehicle devices 10 described above, the registration information of the users of the external devices 30 includes user type information and a desired type. Likewise, it includes a desired guidance location indicating a place for which guidance is desired when the desired type is guided person, and a guideable location indicating a place for which that guide can provide guidance when the desired type is guide.

As guideable locations, for example, places with which the user of the external device 30 is familiar are registered in advance, such as the user's hometown, place of work, or everyday living area.

The desired guidance location is registered, for example, when a user of an external device 30 whose desired type is guided person transmits to the server 50 information indicating a request for matching with a user of an in-vehicle device 10.

The desired guidance location is also registered by a user of an external device 30 whose desired type is guided person when, for example, that user wishes to receive guidance in response to a request from a user of an in-vehicle device 10 whose desired type is guide.

As described above, the registration information may further include, for example, information indicating when video communication is possible. In the example shown in FIG. 7, the registration information includes, as information indicating when the user of the external device 30 can perform video communication, information indicating the standby state of the external device 30.

For example, the standby state is set to "ON" through a setting in the video communication application used on the external device 30. Even when the standby state is OFF, the application may still be able to receive a notification that a match with a user of an in-vehicle device 10 has been made. In addition, when the standby state is "ON", a scheduled standby time indicating until when video communication is possible may be registered.

When the desired type is guided person, the registration information of the user of the external device 30 may include information indicating that a high level of guidance skill for the desired guidance location is required of the communication partner, for example information indicating whether a local person is preferred. When the desired type is guide, the registration information may include information indicating guidance skill for the guideable location, for example whether it is the user's hometown, and may include information indicating driving-advice skill, such as driving history.

The information held in the registration information holding unit is not limited to the above examples, and may include other information to be used for matching.
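
The registration records of FIGS. 6 and 7 could be modeled roughly as below; the field names and types are hypothetical, chosen only to mirror the items described above (device and user IDs, user type, desired type, locations, and standby information), and are not the embodiment's actual schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Registration:
    """Illustrative shape of one registration record (cf. data LD1 and LD2)."""
    device_id: str                      # in-vehicle device ID or external device ID
    user_id: str
    user_type: str                      # "driver" (in-vehicle device 10) or "passenger" (external device 30)
    desired_type: str                   # "guide" or "guided"
    guideable_locations: list[str] = field(default_factory=list)  # used when desired_type == "guide"
    desired_locations: list[str] = field(default_factory=list)    # used when desired_type == "guided"
    standby: bool = False               # whether video communication is currently possible
    standby_until: Optional[str] = None # scheduled standby time, e.g. "18:00"
    local_knowledge: Optional[str] = None  # optional skill hints (hometown, driving history, ...)
```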

The storage unit 51 also stores map information including road maps. The map information is used when the server 50 generates a route for car navigation, and when the server 50 obtains major waypoints on the route of the automobile M based on the current position of the automobile M.

The communication unit 53 is connected to the network NW described above and exchanges various data with the in-vehicle devices 10 and the external devices 30.

The control unit 55 includes a CPU (Central Processing Unit), a ROM (Read Only Memory), a RAM (Random Access Memory), and the like, and functions as a computer. The CPU reads and executes the various programs stored in the ROM and the storage unit 51 to realize various functions.

The control unit 55 has a function of acquiring the registration information of users of the in-vehicle devices 10 and the registration information of users of the external devices 30, and stores the acquired registration information in the registration information holding unit 51A.

The control unit 55 acquires the registration information, for example, by receiving via the communication unit 53 the registration information transmitted from each in-vehicle device 10 or each external device 30. Registration information of users of the in-vehicle devices 10 and of users of the external devices 30, as illustrated in FIGS. 6 and 7, is thus accumulated in the registration information holding unit 51A.

The control unit 55 also has a function of acquiring request information, which is transmitted to the server 50 when a user of an in-vehicle device 10 or a user of an external device 30, acting as a requester in the information processing system 100, asks the server 50 to find a partner for video communication.

FIG. 8 shows data RD1, an example of the request information. Like the registration information described above, the request information includes an external device ID or an in-vehicle device ID and a user ID. The request information also includes information indicating a request to find a partner for video communication.

Furthermore, like the registration information described above, the request information includes a user type indicating whether the user is a user of an in-vehicle device 10 or a user of an external device 30, and a desired type indicating whether the user is a guided person who wishes to be guided or a guide who can provide guidance. When the desired type is guided person, the request information includes a desired guidance location, that is, a place for which guidance is desired; when the desired type is guide, it includes a guideable location, that is, a place for which that guide can provide guidance.

The request information is generated, for example, from an input operation by the user on the in-vehicle device 10 or on the external device 30. For example, on a request information input screen, the desired type is selected and a guideable location or desired guidance location is entered. The desired type and the guideable location or desired guidance location may also be input by voice, for example "find me a communication partner who can guide me around XX city".

For example, when the desired type of the user of the in-vehicle device 10 is guided person, a voice input such as "find me a communication partner who can guide me around Kyoto city" sets the desired type to guided person and sets the desired guidance location to Kyoto city.

Even when the user of the in-vehicle device 10 does not input a desired guidance location, if a route has been set, route information may be transmitted to the server 50 instead of a desired guidance location. In that case, for example, the server 50 may set major waypoints or the destination on the route as desired guidance locations based on the route information.

For example, when the desired type of the user of the external device 30 is guide, places where the user frequently drives, places such as shops the user frequently visits, and places in the user's everyday living area are entered as guideable locations. For example, a user of the external device 30 who is a guide may set the time during which he or she is recruiting people who want guidance, that is, the scheduled time during which video communication is possible, as a scheduled standby time. In that case, the scheduled standby time is, for example, added to the request information and transmitted to the server 50.

In the example shown in FIG. 8, the user transmitting the request information is a user of an external device 30, so the data RD1 includes an external device ID. For example, as shown in FIG. 8, when the flag of the "request" item is ON, it indicates a request to find a partner for video communication. In the example shown in FIG. 8, the user is a user of an external device 30, so the user type is "passenger". When the user transmitting the request information is a user of an in-vehicle device 10, the request information includes an in-vehicle device ID instead of an external device ID, and the user type is "driver".

In the example shown in FIG. 8, the request is guided-person request information whose desired type is "guided person", and it includes the desired guidance location for which the requester, who is the guided person, wishes to be guided. When the desired type is "guide", the request information instead includes the requester's guideable location.

As described above, both the registration information and the request information include the user type, the desired type, and the guideable location or desired guidance location corresponding to the desired type. In this embodiment, the user type, the desired type, and the guideable location or desired guidance location corresponding to the desired type are collectively referred to as desired information. The desired information is used for matching a user of an in-vehicle device 10 with a user of an external device 30 in this embodiment.
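
Request information (cf. data RD1) carries the same "desired information" fields as the registration records. A hypothetical record, again with assumed field names chosen only for illustration, might look like this:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Request:
    """Illustrative shape of request information (cf. data RD1)."""
    device_id: str                      # external device ID or in-vehicle device ID
    user_id: str
    request: bool = True                # ON: asking the server 50 to find a video communication partner
    user_type: str = "passenger"        # "driver" or "passenger"
    desired_type: str = "guided"        # "guide" or "guided"
    desired_locations: list[str] = field(default_factory=list)    # when the requester wants to be guided
    guideable_locations: list[str] = field(default_factory=list)  # when the requester offers guidance
    standby_until: Optional[str] = None # scheduled standby time a guide may announce
```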

By acquiring the registration information and the request information, the control unit 55 acquires the desired information that can be used for the matching of this embodiment, and thus functions as a desired information acquisition unit 57.

In other words, the control unit 55 functions as the desired information acquisition unit 57 that executes a step (desired information acquisition step) of acquiring a plurality of pieces of desired information, each of which includes user type information indicating whether the user is a user of the in-vehicle device 10 as a first device or a user of the external device 30 as a second device, and a desired type indicating whether the user is a guided person who wishes to be guided or a guide who can provide guidance, and which includes a desired guidance location, that is, a place for which guidance is desired, when the desired type is guided person, and a guideable location, that is, a place for which the guide can provide guidance, when the desired type is guide. In other words, the control unit 55 has the desired information acquisition unit 57 as a functional unit.

For example, when receiving from a user of an external device 30 request information whose desired type is guided person, the desired information acquisition unit 57 functions as a desired information acquisition unit that acquires, from the user of the second device, guided-person request information including a desired guidance location, that is, a place for which guidance is desired.

For example, when receiving from a user of an in-vehicle device 10 request information whose desired type is guided person, the desired information acquisition unit 57 functions as a desired information acquisition unit that acquires, from the user of the first device, guided-person request information including a desired guidance location, that is, a place for which guidance is desired. In this case, by acquiring, for example, the registration information of users of the external devices 30, the desired information acquisition unit 57 also functions as a guideable information acquisition unit that acquires guideable information including whether the user of the second device can provide guidance and, if so, a guideable location, that is, a place for which guidance can be provided.

The desired information may further include information indicating when video communication is possible, such as the information indicating the standby state described above. The desired information is used for matching a user of an in-vehicle device 10 with a user of an external device 30.

Upon acquiring request information, the control unit 55 executes processing as a matching unit 59 that matches a user of an in-vehicle device 10 with a user of an external device 30 based on the desired information included in the acquired request information and the desired information included in the registration information. In other words, the control unit 55 has the matching unit 59 as a functional unit.

For example, the matching unit 59 compares the desired guidance location with the guideable locations between the requester and each of the users of the plurality of in-vehicle devices 10 or users of the external devices 30 who have registered registration information (also referred to as registrants). The matching unit 59 matches a user of an in-vehicle device 10 with a user of an external device 30 based on the comparison results.

For example, the matching unit 59 performs the comparison between the desired guidance location and the guideable location for each registrant whose user type differs from the user type indicated by the request information and whose desired type differs from the desired type indicated by the request information.

As described above, the desired guidance location and the guideable location are, for example, information indicating a point or information indicating an area. The comparison between a desired guidance location and a guideable location is performed, for example, based on criteria such as the distance between points, the distance between areas, the distance between a point and an area, or whether a point or an area is contained within another area.

For example, the comparison may be based on whether the area indicating the desired guidance location (also referred to as the desired area) is contained in the area indicating the guideable location (also referred to as the possible area), or on whether the desired area and the possible area are within a predetermined distance of each other (adjacent or nearby). When there are a plurality of desired areas, the comparison may be based on how many of them overlap with the guideable areas. The comparison result may, for example, be calculated as a degree of match between the desired guidance location and the guideable location and expressed as a numerical value.
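
A toy version of this location comparison, producing a numeric degree of match, might look as follows. Real matching would use the map information in the storage unit 51 (containment, point-to-area distance, and so on); here locations are plain area labels and an optional adjacency table stands in for "within a predetermined distance", so the function and its scoring weights are assumptions.

```python
def match_score(desired_areas: list[str],
                guideable_areas: list[str],
                adjacent: dict[str, set[str]] | None = None) -> int:
    """Degree of match between desired guidance locations and guideable locations."""
    adjacent = adjacent or {}
    score = 0
    for wanted in desired_areas:
        if wanted in guideable_areas:
            score += 2                                   # same area: strongest contribution
        elif any(wanted in adjacent.get(area, set()) for area in guideable_areas):
            score += 1                                   # adjacent / nearby area
    return score

# Example: one exact overlap and one nearby area give a degree of match of 3.
print(match_score(["Kyoto city", "Otsu city"], ["Kyoto city"],
                  adjacent={"Kyoto city": {"Otsu city"}}))
```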

An example of the comparison between a desired guidance location and a guideable location will be described with reference to FIG. 9. FIG. 9 shows a case where an area AR including XX Temple T exists as a major waypoint on a route R along which the automobile M is scheduled to travel from a point P to a point Q.

For example, the driver of the automobile M (referred to as user A), who is a user of the in-vehicle device 10, plans to travel from point P, where his or her home is located, to point Q, where his or her workplace is located. The driver has already registered registration information with the desired type "guide" and "Kyoto city" as a guideable location. When a route is set in the in-vehicle device 10, the area AR including "XX Temple" is specified as a major waypoint and added to the registration information.

For example, when a user of an external device 30 (referred to as user B) transmits request information with the desired type "guided person" and the desired guidance location "XX Temple, Kyoto city", the area AR including XX Temple T is specified as the desired guidance location.

The matching unit 59 compares user B's desired guidance location with the guideable locations of each registrant whose user type is "driver" and whose desired type is "guide", that is, each registrant including user A. In this case, the comparison result is that the area AR, which is user B's desired guidance location, matches user A's guideable location.

For example, a familiar place need not be registered in advance as a guideable location; in that case, the current location or a major waypoint is used for matching.

For example, when the matching unit 59 calculates a degree of match as the comparison result, it assigns priorities to the registrants for whom a degree of match was calculated, in descending order of the degree of match, and extracts a predetermined number of registrants from the top of the priority order as candidates for the requester's video communication partner.

For example, the matching unit 59 determines the requester's video communication partner from among the extracted candidates. For example, the matching unit 59 notifies the registrant with the highest priority among the extracted candidates, and when consent is obtained from that registrant, determines that registrant as the requester's video communication partner. In this way, a user of an in-vehicle device 10 and a user of an external device 30 are matched.

When the user type is driver and the current location or a major waypoint is registered as a guideable location in addition to a familiar area, a registrant whose familiar area coincides with the current location or major waypoint may be given a higher priority so as to be matched more easily. Conversely, when no familiar area is set as the driver's guideable location, the priority may be lowered so that the driver is matched less easily.

When a registrant is in the standby state, that is, registered as being available for video communication, matching of registrants in the standby state may be prioritized.

 制御部55は、マッチング部59によって車載装置10のユーザと外部装置30のユーザとのマッチングがなされると、当該マッチングがなされた車載装置10と外部装置30との間でビデオ通信を確立させる。制御部55は、マッチングがなされた車載装置10と外部装置30との音声通信を行う通信部として機能する。 When the matching unit 59 matches the user of the in-car device 10 with the user of the external device 30, the control unit 55 establishes video communication between the matched in-car device 10 and external device 30. The control unit 55 functions as a communication unit that performs audio communication between the matched in-car device 10 and external device 30.

 以下、情報処理システム100において実行される各装置間の通信シーケンス及びサーバ50による制御ルーチンについて説明する。 The following describes the communication sequence between the devices in the information processing system 100 and the control routine by the server 50.

 [仮想同乗者が依頼者となる場合]
 図10は、本実施例の情報処理システム100において、外部装置30のユーザが依頼者となる場合に実行されるシーケンスの一例を示すシーケンス図である。本シーケンスにおいて、複数の登録者が、上述したような登録情報の登録を済ませていることを前提とする。
[When a virtual passenger is the requester]
FIG. 10 is a sequence diagram showing an example of a sequence executed when a user of the external device 30 becomes the requester in the information processing system 100 of this embodiment. In this sequence, it is assumed that a plurality of registrants have completed the registration of the registration information as described above.

 まず、外部装置30において、ユーザからの依頼情報の入力操作が受け付けられ(ステップS11)、依頼情報がサーバ50に送信される(ステップS12)。 First, the external device 30 accepts an input operation of request information from a user (step S11), and the request information is sent to the server 50 (step S12).

 サーバ50は、依頼情報を受信すると、マッチング処理を行って(ステップS13)、当該外部装置30のユーザと車載装置10のユーザである複数の登録者のうちの1のユーザとをマッチングする。 When the server 50 receives the request information, it performs a matching process (step S13) to match the user of the external device 30 with one of the multiple registered users who are users of the in-vehicle device 10.

 サーバ50は、マッチング処理を行うと、マッチングがなされた車載装置10に、ビデオ通信による自動車Mへの仮想同乗を希望することを示す同乗希望の通知を送信する(ステップS14)。 When the server 50 has completed the matching process, it transmits a notification of a desire to ride to the matched in-vehicle device 10, indicating that the user wishes to virtually ride in the automobile M via video communication (step S14).

 車載装置10は、同乗希望の通知を受信すると、自動車Mの運転者による承諾の入力操作を受け付け(ステップS15)、許諾の通知をサーバ50に送信する(ステップS16)。 When the in-vehicle device 10 receives the notification of a passenger request, it accepts the input operation of consent by the driver of the vehicle M (step S15) and transmits a notification of consent to the server 50 (step S16).

 サーバ50は、許諾の通知を受信すると、外部装置30に、ビデオ通信を開始するためのURLを送信する(ステップS17)。外部装置30のユーザがURLをタップして外部装置30が当該URLにアクセスすると(ステップS18)、サーバ50は、車載装置10と外部装置30との間の音声通話を確立させつつ、自動車Mにおいて撮影された映像を車載装置10から外部装置30に配信させるビデオ通信を開始する(ステップS19)。 When the server 50 receives the notification of approval, it sends a URL for starting video communication to the external device 30 (step S17). When the user of the external device 30 taps the URL and the external device 30 accesses the URL (step S18), the server 50 establishes a voice call between the in-car device 10 and the external device 30 and starts video communication in which the video captured in the automobile M is distributed from the in-car device 10 to the external device 30 (step S19).
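
A minimal sketch of this server-side flow (steps S12 to S19) is given below; the function and callback names are illustrative stand-ins for the matching unit 59 and the notification and communication functions of the server 50, and the URL format is an assumption.

```python
import uuid

def handle_ride_request(request_info, match, ask_consent, send):
    """Sketch of the server-side flow of FIG. 10 (steps S12 to S19).

    `match`, `ask_consent` and `send` stand in for the matching unit 59 and
    the notification/communication functions of the server 50.
    """
    partner = match(request_info)                       # S13: matching process
    if partner is None or not ask_consent(partner):     # S14-S16: consent check
        send(request_info["requester"], "no suitable partner found")
        return None
    url = f"https://example.invalid/session/{uuid.uuid4()}"  # S17: issue a URL
    send(request_info["requester"], url)
    return url  # video communication (S19) starts once the URL is opened (S18)

# Toy wiring: one registered driver who always consents.
handle_ride_request(
    {"requester": "user-B", "desired_place": "XX Temple, Kyoto"},
    match=lambda req: {"device": "in-vehicle device 10", "user": "user-A"},
    ask_consent=lambda partner: True,
    send=lambda user, message: print(f"to {user}: {message}"),
)
```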

 例えば、外部装置30のユーザが案内を希望する場合であって、マッチングがなされた車載装置10のユーザが案内可能場所を設定していない場合は、ビデオ通信において、少なくとも開始時においては、音声通話を行わずに、映像の配信のみを行うようにしてもよい。 For example, if the user of the external device 30 desires guidance, but the user of the matched in-vehicle device 10 has not set a location where guidance is available, the video communication may be configured to only deliver video without conducting a voice call, at least at the start.

 例えば、本シーケンスの外部装置30のユーザ種別が「被案内者」の場合、依頼情報を送信する動機の1つとして、旅行の下見が挙げられる。例えば、旅行を予定している外部装置30のユーザが、案内希望場所について土地勘のある運転者から案内を受けることができる。例えば、当該ユーザは、ビデオ通信の音声通話によって案内希望場所についての情報提供を受けたり、配信される映像を見たりすることで旅行の下調べをすることができる。 For example, if the user type of the user of the external device 30 in this sequence is "guided person," one motivation for sending request information is preliminary scouting for a planned trip. For example, a user of the external device 30 who is planning a trip can receive guidance about the desired guidance location from a driver who is familiar with that area. For example, the user can research the trip in advance by receiving information about the desired guidance location through the voice call of the video communication and by watching the distributed video.

 例えば、本シーケンスの外部装置30のユーザ種別が「案内者」の場合、依頼情報を送信する動機として、案内スキルの活用や隙間時間の活用が挙げられる。例えば運転に不慣れな場所を運転中の運転者とマッチングされることで、その場所に関する外部装置30のユーザの案内スキルを積極的に活用することができる。例えば、依頼情報に、希望情報の一部として案内をすることが可能なタイミングを示す情報を含めることで、隙間時間を有効に活用することができる。さらに、例えば案内の提供者にポイント等のインセンティブが与えられるようにすることで、外部装置30のユーザが案内可能な依頼者となる動機づけとなる。 For example, if the user type of the external device 30 in this sequence is a "guide," the motivation for sending request information may be the utilization of guidance skills or the utilization of spare time. For example, by being matched with a driver who is driving in an unfamiliar place, the guidance skills of the user of the external device 30 for that place can be actively utilized. For example, by including in the request information, as part of the desired information, information indicating the timing at which guidance can be provided, spare time can be utilized effectively. Furthermore, for example, by providing an incentive such as points to the provider of the guidance, the user of the external device 30 is motivated to become a requester who can provide guidance.

 [運転者が依頼者となる場合]
 図11は、本実施例の情報処理システム100において、車載装置10のユーザが依頼者となる場合に実行されるシーケンスの一例を示すシーケンス図である。まず、車載装置10において、ユーザからの依頼情報の入力操作が受け付けられる(ステップS21)。例えば、「京都市内を案内できる通信相手を探して」等の音声による入力操作が受け付けられる。その後、依頼情報がサーバ50に送信される(ステップS22)。
[When the driver is the requester]
FIG. 11 is a sequence diagram showing an example of a sequence executed when the user of the in-vehicle device 10 becomes the requester in the information processing system 100 of this embodiment. First, the in-vehicle device 10 accepts an input operation of request information from the user (step S21). For example, a voice input operation such as "Find a communication partner who can guide me around Kyoto City" is accepted. After that, the request information is transmitted to the server 50 (step S22).

 サーバ50は、依頼情報を受信すると、マッチング処理を行って(ステップS23)、当該車載装置10のユーザと、外部装置30のユーザである複数の登録者のうちの1のユーザとをマッチングする。 When the server 50 receives the request information, it performs a matching process (step S23) to match the user of the in-vehicle device 10 with one of the multiple registered users who are users of the external device 30.

 サーバ50は、マッチング処理を行うと、マッチングがなされた外部装置30に、ビデオ通信による自動車Mへの仮想同乗を登録者に依頼することを示す同乗依頼の通知を送信する(ステップS24)。 When the server 50 has completed the matching process, it sends a ride request notification to the matched external device 30, requesting the registered person to virtually ride in the automobile M via video communication (step S24).

 外部装置30は、同乗依頼の通知を受信すると、外部装置30のユーザによる承諾の入力操作を受け付け(ステップS25)、許諾の通知をサーバ50に送信する(ステップS26)。 When the external device 30 receives the notification of the ride request, it accepts the input operation of acceptance by the user of the external device 30 (step S25) and sends a notification of acceptance to the server 50 (step S26).

 サーバ50は、許諾の通知を受信すると、車載装置10に許諾の通知を送信し(ステップS27)、外部装置30に、ビデオ通信を開始するためのURLを送信する(ステップS28)。外部装置30のユーザがURLをタップして外部装置30が当該URLにアクセスすると(ステップS29)、サーバ50は、車載装置10と外部装置30との間の音声通話を確立させつつ、自動車Mにおいて撮影された映像を車載装置10から外部装置30に配信させるビデオ通信を開始する(ステップS30)。 When the server 50 receives the notification of approval, it transmits the notification of approval to the in-car device 10 (step S27) and transmits a URL for starting video communication to the external device 30 (step S28). When the user of the external device 30 taps the URL and the external device 30 accesses the URL (step S29), the server 50 establishes a voice call between the in-car device 10 and the external device 30 and starts video communication in which the video captured in the automobile M is distributed from the in-car device 10 to the external device 30 (step S30).

 例えば、本シーケンスの車載装置10のユーザ種別が「被案内者」の場合、依頼情報を送信する動機の1つとして、不慣れな場所での運転の不安解消が挙げられる。 For example, if the user type of the in-vehicle device 10 in this sequence is a "guided person," one of the motivations for sending request information is to relieve anxiety about driving in unfamiliar places.

 例えば、本シーケンスの車載装置10のユーザ種別が「案内者」の場合、依頼情報を送信する動機の1つとして、日常の移動に付加価値をつけることや、退屈や眠気の解消が挙げられる。 For example, if the user type of the in-vehicle device 10 in this sequence is a "guide," one of the motivations for sending request information may be to add value to daily travel or to relieve boredom or drowsiness.

 [制御ルーチン]
 図12は、サーバ50の制御部55が、車載装置10のユーザと外部装置30のユーザとのマッチングを行って、マッチングがなされた車載装置10と外部装置30との間でビデオ通信を行う際に実行する制御ルーチンの一例である制御ルーチンRT1を示すフローチャートである。制御部55は、例えば、サーバ50に電源が投入されると、制御ルーチンRT1を繰り返し実行する。
[Control routine]
FIG. 12 is a flowchart showing a control routine RT1, which is an example of a control routine executed by the control unit 55 of the server 50 when matching a user of the in-vehicle device 10 with a user of the external device 30 and performing video communication between the matched in-vehicle device 10 and external device 30. For example, when the server 50 is powered on, the control unit 55 repeatedly executes the control routine RT1.

 制御部55は、制御ルーチンRT1を開始すると、依頼情報が存在するか否かを判定する(ステップS101)。ステップS101において、例えば、制御部55は、希望情報取得部57が、依頼者の希望情報を含みかつビデオ通信の相手を探すことを依頼することを示す情報を含む依頼情報を受信したか否かを判定する。 When the control unit 55 starts the control routine RT1, it determines whether or not request information is present (step S101). In step S101, for example, the control unit 55 determines whether or not the desired information acquisition unit 57 has received request information that includes the desired information of the requester and information indicating a request to find a partner for video communication.

 ステップS101において、依頼情報が存在しないと判定する(ステップS101:NO)と、制御部55は、制御ルーチンRT1を終了する。 If it is determined in step S101 that no request information exists (step S101: NO), the control unit 55 ends the control routine RT1.

 ステップS101において、依頼情報が存在すると判定する(ステップS101:YES)と、制御部55は、依頼情報を取得する(ステップS102)。ステップS102において、例えば、制御部55は、希望情報取得部57が受信した依頼情報をマッチング部59に供給する。 If it is determined in step S101 that request information exists (step S101: YES), the control unit 55 acquires the request information (step S102). In step S102, for example, the control unit 55 supplies the request information received by the desired information acquisition unit 57 to the matching unit 59.

 ステップS102の実行後、制御部55は、相手候補抽出サブルーチンをマッチング部59に実行させて、依頼者のビデオ通信の相手となり得る登録者を抽出する(ステップS103)。 After executing step S102, the control unit 55 causes the matching unit 59 to execute a partner candidate extraction subroutine to extract registered persons who can be potential video communication partners for the requester (step S103).

 ステップS103の実行後、制御部55は、ステップS103によって1以上の登録者が抽出されたか否かを判定する(ステップS104)。 After executing step S103, the control unit 55 determines whether or not one or more registered users have been extracted by step S103 (step S104).

 ステップS104において、登録者が抽出されていないと判定する(ステップS104:NO)と、制御部55は、該当者がいないことを依頼者に通知する(ステップS105)。ステップS105において、例えば、該当者がいないことを示す情報が依頼者の車載装置10又は外部装置30に送信される。 If it is determined in step S104 that a registered person has not been extracted (step S104: NO), the control unit 55 notifies the requester that there is no corresponding person (step S105). In step S105, for example, information indicating that there is no corresponding person is transmitted to the requester's in-vehicle device 10 or external device 30.

 ステップS104において、登録者が抽出されたと判定する(ステップS104:YES)と、制御部55は、通信相手決定サブルーチンを実行し、依頼者の通話相手として1の登録者を決定するための処理を実行する(ステップS106)。ステップS106において、例えば、制御部55は、マッチング部59に通信相手決定サブルーチンを実行させる。ステップS106において、例えば、抽出された複数の登録者のうち、ステップS103において算出されたスコアが高い順に、依頼者とビデオ通信を行うことについての承諾を求める通知が送信される。 If it is determined in step S104 that a registrant has been extracted (step S104: YES), the control unit 55 executes a communication partner determination subroutine, that is, processing for determining one registrant as the requester's call partner (step S106). In step S106, for example, the control unit 55 causes the matching unit 59 to execute the communication partner determination subroutine. In step S106, for example, notifications requesting consent to video communication with the requester are sent to the extracted registrants in descending order of the score calculated in step S103.

 ステップS106の実行後、制御部55は、通信相手が決定されたか否かを判定する(ステップS107)。ステップS107において、例えば、ステップS106においていずれの登録者からもビデオ通信の承諾が得られなかった場合に、通信相手が決定されなかったと判定される。 After executing step S106, the control unit 55 determines whether or not a communication partner has been determined (step S107). In step S107, for example, if consent for video communication was not obtained from any of the registered users in step S106, it is determined that a communication partner has not been determined.

 ステップS107において、通信相手が決定されなかったと判定する(ステップS107:NO)と、制御部55は、ステップS105に移り、該当者がいないことを依頼者に通知する。 If it is determined in step S107 that a communication partner has not been determined (step S107: NO), the control unit 55 proceeds to step S105 and notifies the requester that there is no match.

 ステップS107において、通信相手が決定されたと判定する(ステップS107:YES)と、制御部55は、依頼情報を送信した依頼者と、決定された登録者との間で、車載装置10と外部装置30とのビデオ通信を開始する(ステップS108)。 If it is determined in step S107 that the communication partner has been determined (step S107: YES), the control unit 55 starts video communication between the in-vehicle device 10 and the external device 30 between the requester who sent the request information and the determined registered person (step S108).

 ステップS108の実行後、又はステップS105の実行後、制御部55は、制御ルーチンRT1を終了する。 After executing step S108 or step S105, the control unit 55 ends the control routine RT1.

 ステップS103~ステップS106において、マッチング部59は、希望情報に基づいて、第1装置のユーザと第2装置のユーザとのマッチングを行うマッチングステップを実行する。 In steps S103 to S106, the matching unit 59 executes a matching step of matching a user of the first device with a user of the second device based on the desired information.

 ステップS107~ステップS108において、制御部55は、マッチングがなされた車載装置10と外部装置30との音声通信を行うステップ(通信ステップ)を実行する通信部として機能する。 In steps S107 and S108, the control unit 55 functions as a communication unit that executes a step (communication step) of performing voice communication between the matched in-vehicle device 10 and the external device 30.

 ステップS106~ステップS108において、制御部55は、例えばステップS103において1の外部装置30に対して1又は複数の車載装置10が抽出された場合に、当該抽出された1又は複数の車載装置10のうちの1の車載装置10と1の外部装置30との音声通信を行う通信部として機能する。 In steps S106 to S108, for example, if one or more in-vehicle devices 10 are extracted for one external device 30 in step S103, the control unit 55 functions as a communication unit that performs voice communication between one of the extracted one or more in-vehicle devices 10 and one external device 30.

 また、ステップS106~ステップS108において、制御部55は、例えばステップS103において1の車載装置10に対して1又は複数の外部装置30が抽出された場合に、当該抽出された1又は複数の外部装置30のうちの1の外部装置30と1の車載装置10との音声通信を行う通信部として機能する。 In addition, in steps S106 to S108, when one or more external devices 30 are extracted for one in-vehicle device 10 in step S103, the control unit 55 functions as a communication unit that performs voice communication between one of the extracted one or more external devices 30 and one in-vehicle device 10.
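
The overall flow of control routine RT1 can be summarised as the following sketch; the callback names are illustrative, and the subroutines RT2 and RT3 described next are represented only as stubs.

```python
def control_routine_rt1(get_request, extract_candidates, decide_partner,
                        start_video, notify_no_match):
    """Sketch of control routine RT1 (FIG. 12) as a single pass."""
    request = get_request()                        # S101-S102: request information
    if request is None:
        return                                     # S101: NO -> end the routine
    candidates = extract_candidates(request)       # S103: subroutine RT2
    if not candidates:                             # S104: NO
        notify_no_match(request)                   # S105
        return
    partner = decide_partner(request, candidates)  # S106: subroutine RT3
    if partner is None:                            # S107: NO
        notify_no_match(request)                   # S105
        return
    start_video(request, partner)                  # S108

# Toy wiring with trivial stand-ins for the subroutines.
control_routine_rt1(
    get_request=lambda: {"requester": "user-B"},
    extract_candidates=lambda req: ["user-A"],
    decide_partner=lambda req, cands: cands[0],
    start_video=lambda req, partner: print("video communication with", partner),
    notify_no_match=lambda req: print("no suitable partner"),
)
```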

 図13は、制御ルーチンRT1のステップS103においてマッチング部59が実行するルーチンの一例である相手候補抽出サブルーチンRT2を示すフローチャートである。 FIG. 13 is a flowchart showing a partner candidate extraction subroutine RT2, which is an example of a routine executed by the matching unit 59 in step S103 of the control routine RT1.

 マッチング部59は、相手候補抽出サブルーチンRT2を開始すると、依頼者のユーザ種別と異なるユーザ種別の登録者を抽出する(ステップS201)。ステップS201において、マッチング部59は、依頼情報が示すユーザ種別と異なるユーザ種別の登録者の登録情報を、記憶部51の登録情報保持部51Aから読み出す。例えば、依頼者のユーザ種別が、外部装置30のユーザを示す「同乗者」である場合は、ユーザ種別が車載装置10のユーザを示す「運転者」である登録情報を読み出す。 When the matching unit 59 starts the partner candidate extraction subroutine RT2, it extracts registered persons with a user type different from the user type of the requester (step S201). In step S201, the matching unit 59 reads out registration information of registered persons with a user type different from the user type indicated by the request information from the registration information holding unit 51A of the memory unit 51. For example, if the user type of the requester is "passenger" indicating a user of the external device 30, the matching unit 59 reads out registration information whose user type is "driver" indicating a user of the in-vehicle device 10.

 ステップS201の実行後、マッチング部59は、ステップS201において抽出された登録者の中から、依頼者の希望種別と異なる希望種別の登録者を抽出する(ステップS202)。ステップS202において、例えば、依頼情報が示すユーザ種別が「被案内者」である場合には、ユーザ種別が「案内者」である登録情報を抽出する。 After executing step S201, the matching unit 59 extracts, from among the registrants extracted in step S201, registrants whose desired type differs from the desired type of the requester (step S202). In step S202, for example, if the desired type indicated by the request information is "guided person," registration information whose desired type is "guide" is extracted.

 ステップS202の実行後、マッチング部59は、依頼者と、ステップS202において抽出された1又は複数の登録者の各々との間で、案内希望場所と案内可能場所との一致度を算出する(ステップS203)。ステップS203において、マッチング部59は、1の車載装置10のユーザに対して複数の外部装置30のユーザの各々との間で、又は、1の外部装置30のユーザに対して複数の車載装置10のユーザの各々との間で、案内希望場所と案内可能場所との一致度を算出する。 After executing step S202, the matching unit 59 calculates the degree of match between the desired guidance location and the guidable location for the requester and each of the one or more registrants extracted in step S202 (step S203). In step S203, the matching unit 59 calculates the degree of match between the desired guidance location and the guidable location between the user of one in-vehicle device 10 and each of the users of multiple external devices 30, or between the user of one external device 30 and each of the users of multiple in-vehicle devices 10.

 ステップS203において、例えば、上述したように、案内希望場所を示す地点又は領域と、案内可能場所を示す地点又は領域との距離等に基づいて、一致度が算出される。 In step S203, for example, as described above, the degree of match is calculated based on the distance or the like between the point or area indicating the desired guidance location and the point or area indicating the guidable location.
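
One possible way to turn such a distance into a degree of match is sketched below; the embodiment only states that the distance is used, so the planar distance approximation, the linear decay, and the 10 km scale are assumptions made for illustration.

```python
import math

def approx_km(lat1, lon1, lat2, lon2):
    """Small-area planar approximation of the distance in kilometres."""
    dy = (lat2 - lat1) * 111.0
    dx = (lon2 - lon1) * 111.0 * math.cos(math.radians((lat1 + lat2) / 2))
    return math.hypot(dx, dy)

def match_degree(desired, guidable, scale_km=10.0):
    """Degree of match in [0, 1]: 1.0 for identical points, 0 beyond scale_km."""
    d = approx_km(desired["lat"], desired["lon"], guidable["lat"], guidable["lon"])
    return max(0.0, 1.0 - d / scale_km)

# Desired guidance location vs. a registrant's guidable location (toy values).
print(match_degree({"lat": 34.995, "lon": 135.785},
                   {"lat": 35.011, "lon": 135.768}))
```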

 ステップS203の実行後、ステップS203において算出された一致度に基づいて、1又は複数の登録者の各々に、優先順位を付与する(ステップS204)。ステップS204において、例えば、一致度が高い順に優先順位が付与される。例えば、一致度に差が無い場合に、他のパラメータに基づいて優先順位が付与されてもよい。例えば、依頼情報に案内スキルの高さや運転スキルの高さに関する依頼者の要求事項が含まれる場合に、当該要求事項に対応する登録情報内の項目の適合度が高い順に、優先順位が付与されてもよい。 After step S203 is executed, a priority is assigned to each of one or more registered users based on the degree of match calculated in step S203 (step S204). In step S204, for example, the priority is assigned in descending order of the degree of match. For example, if there is no difference in the degree of match, the priority may be assigned based on other parameters. For example, if the request information includes the requester's requirements regarding the level of guidance skills or driving skills, the priority may be assigned in descending order of the degree of suitability of the items in the registration information that correspond to the requirements.

 ステップS203及びステップS204において、マッチング部59は、依頼者と、抽出された1又は複数の登録者の各々との間で、1の車載装置10のユーザに対して複数の外部装置30のユーザの各々を組み合わせるか又は1の外部装置30のユーザに対して複数の車載装置10のユーザを組み合わせて複数の組み合わせを生成し、当該複数の組み合わせについて、案内希望場所と案内可能場所との照合結果に応じた優先順位を付ける。 In steps S203 and S204, the matching unit 59 generates multiple combinations between the requester and each of the extracted one or more registered users by combining a user of one in-vehicle device 10 with each of the users of multiple external devices 30, or by combining a user of one external device 30 with users of multiple in-vehicle devices 10, and prioritizes the multiple combinations according to the result of matching between the desired location for guidance and the locations that can be guided to.

 ステップS204の実行後、マッチング部59は、ステップS204において付与した優先順位に基づいて、所定数の登録者を抽出する(ステップS205)。ステップS205において、例えば、3~5人程度の登録者が抽出される。ステップS205において、例えば、一致度の閾値が設けられ、一致度が閾値を超える登録者が抽出されるようにしてもよい。 After executing step S204, the matching unit 59 extracts a predetermined number of registrants based on the priority assigned in step S204 (step S205). In step S205, for example, about 3 to 5 registrants are extracted. In step S205, for example, a threshold value for the degree of match may be set, and registrants whose degree of match exceeds the threshold may be extracted.
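
Steps S204 and S205 can be sketched as a simple rank-and-truncate operation, as below; the `degree` callback stands in for the value computed in step S203, and the optional threshold corresponds to the variation described above.

```python
def extract_candidates(requester, registrants, degree, top_n=5, threshold=None):
    """Sketch of steps S204-S205: rank registrants and keep the top few.

    `degree(requester, registrant)` is assumed to return the degree of match
    computed in step S203; `threshold` is the optional cut-off mentioned above.
    """
    scored = [(degree(requester, r), r) for r in registrants]
    scored.sort(key=lambda pair: pair[0], reverse=True)   # S204: priority order
    if threshold is not None:
        scored = [pair for pair in scored if pair[0] > threshold]
    return [r for _, r in scored[:top_n]]                 # S205: top candidates

# Toy example with three registrants and precomputed degrees of match.
degrees = {"A": 0.9, "B": 0.4, "C": 0.7}
print(extract_candidates("user-X", ["A", "B", "C"],
                         degree=lambda _req, r: degrees[r], top_n=2))  # ['A', 'C']
```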

 ステップS205の実行後、マッチング部59は、相手候補抽出サブルーチンRT2を終了する。 After executing step S205, the matching unit 59 ends the partner candidate extraction subroutine RT2.

 本ルーチンのステップS203～ステップS205において、例えば、外部装置30のユーザが希望種別を被案内者とする依頼者である場合に、マッチング部59は、外部装置30のユーザの被案内希望情報及び車載装置10の現在位置または自動車Mにおいて設定された経路に基づいて、1又は複数の車載装置10を抽出する抽出部として機能する。 In steps S203 to S205 of this routine, for example, if the user of the external device 30 is a requester whose desired type is "guided person," the matching unit 59 functions as an extraction unit that extracts one or more in-vehicle devices 10 based on the guidance request information of the user of the external device 30 and on the current position of the in-vehicle device 10 or the route set in the automobile M.

 また、本ルーチンのステップS203～ステップS205において、例えば、車載装置10のユーザが希望種別を被案内者とする依頼者である場合に、マッチング部59は、車載装置10のユーザの被案内情報及び外部装置30が案内可能か否か及び案内可能である場合には案内可能場所を含む案内可能情報に基づいて、1又は複数の外部装置30を抽出する抽出部として機能する。 In addition, in steps S203 to S205 of this routine, for example, if the user of the in-vehicle device 10 is a requester whose desired type is "guided person," the matching unit 59 functions as an extraction unit that extracts one or more external devices 30 based on the guidance request information of the user of the in-vehicle device 10 and on guidable information that includes whether the user of the external device 30 can provide guidance and, if guidance is possible, the locations where that guidance can be provided.

 なお、例えば、本ルーチンにおいて、マッチング部59は、車載装置10の現在位置が、外部装置30のユーザの案内希望場所から近すぎる場合には、当該車載装置10のユーザをビデオ通信の相手候補としての抽出対象から除外してもよい。言い換えれば、外部装置30のユーザの案内希望場所から所定距離内（例えば1km以内など、数分内で到着する範囲）にある車載装置10のユーザは、マッチングの対象から除外されてもよい。それによって、例えば、ビデオ通信が開始された直後に目的地に到着し、ほとんど案内がなされないままビデオ通信が終了するような事態を防止することができる。 Note that, for example, in this routine, if the current location of the in-vehicle device 10 is too close to the location to which the user of the external device 30 wishes to be guided, the matching unit 59 may exclude the user of that in-vehicle device 10 from being extracted as a video communication partner candidate. In other words, a user of an in-vehicle device 10 that is within a predetermined distance of the location to which the user of the external device 30 wishes to be guided (for example, within 1 km, a range that would be reached within a few minutes) may be excluded from matching. This makes it possible to prevent, for example, a situation in which the destination is reached immediately after video communication has begun and the video communication ends with little guidance having been given.

 なお、例えば、本ルーチンにおいて、マッチング部59は、車載装置10の現在位置と、外部装置30の現在位置とが所定距離内(例えば1km以内など)である場合に、当該車載装置10のユーザと当該外部装置30のユーザとをマッチングさせないようにしてもよい。 Note that, for example, in this routine, the matching unit 59 may not match the user of the in-vehicle device 10 with the user of the external device 30 if the current location of the in-vehicle device 10 and the current location of the external device 30 are within a predetermined distance (e.g., within 1 km).

 例えば、本ルーチンにおいて、マッチング部59は、1の車載装置10のユーザに対するビデオ通信の相手を複数の外部装置30の中から抽出する場合に、外部装置30各々の現在位置を取得してもよい。当該現在位置が、例えば案内可能場所又は案内希望場所としての自動車Mの現在地又は主要経由地から近すぎる場合に、当該外部装置30のユーザをビデオ通信の相手候補としての抽出対象から除外してもよい。 For example, in this routine, when extracting a video communication partner for the user of one in-vehicle device 10 from among multiple external devices 30, the matching unit 59 may obtain the current location of each external device 30. If that current location is too close to, for example, the current location or a major stopover of the automobile M serving as the guidable location or the desired guidance location, the user of that external device 30 may be excluded from being extracted as a video communication partner candidate.

 言い換えれば、外部装置30の現在位置が、車載装置10を搭載する自動車Mの現在地、目的地、又は自動車Mにおいて設定された経路上の場所を含む場所から所定距離内である場合に、当該外部装置30のユーザは、マッチングの対象から除外されてもよい。それによって、例えば、外部装置30のユーザによって自動車Mが特定されることを回避し、車載装置10のユーザのプライバシーを保護することができる。また、別の態様として、ビデオ通信中に、外部装置30の現在位置と、自動車Mの現在地とが所定距離内まで接近した場合に、通話を終了させるような制御をしても良い。 In other words, if the current location of the external device 30 is within a predetermined distance from a location including the current location of the automobile M in which the in-car device 10 is mounted, the destination, or a location on a route set in the automobile M, the user of the external device 30 may be excluded from matching targets. This, for example, can prevent the automobile M from being identified by the user of the external device 30, and protect the privacy of the user of the in-car device 10. As another aspect, control may be performed to end the call when the current location of the external device 30 and the current location of the automobile M come within a predetermined distance during video communication.
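
The distance-based exclusion rules described in the preceding paragraphs can be sketched as a single eligibility check, as below; the 1 km thresholds follow the example values in the text, and the `distance_km` callback is any distance function supplied by the caller.

```python
def eligible(vehicle_pos, external_pos, desired_place, distance_km,
             min_guide_km=1.0, min_separation_km=1.0):
    """Apply the distance-based exclusion rules to one driver/partner pair."""
    # Exclude drivers already (almost) at the requested place: the ride would
    # end before any meaningful guidance is given.
    if distance_km(vehicle_pos, desired_place) < min_guide_km:
        return False
    # Exclude pairs whose devices are too close to each other, e.g. so the
    # vehicle cannot easily be identified by the external user.
    if distance_km(vehicle_pos, external_pos) < min_separation_km:
        return False
    return True

# Toy 1-D positions and distance function, for illustration only.
dist = lambda a, b: abs(a - b)
print(eligible(vehicle_pos=0.0, external_pos=5.0, desired_place=3.0,
               distance_km=dist))  # True: both distances exceed 1 km
```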

 なお、上記のように、所定の条件に該当する登録者をマッチングの対象から除外するかどうかは、依頼者が選択できるようにしてもよい。 As mentioned above, the requester may be able to choose whether or not to exclude registered users who meet certain conditions from matching targets.

 図14は、制御ルーチンRT1のステップS106においてマッチング部59が実行するルーチンの一例である通信相手決定サブルーチンRT3を示すフローチャートである。 FIG. 14 is a flowchart showing a communication partner determination subroutine RT3, which is an example of a routine executed by the matching unit 59 in step S106 of the control routine RT1.

 マッチング部59は、通信相手決定サブルーチンRT3を開始すると、相手候補抽出サブルーチンRT2で抽出された登録者のうち、優先順位が一番高い登録者に、ビデオ通信依頼の通知を行う(ステップS301)。ステップS301において、例えば、依頼者が外部装置30のユーザである場合には、優先順位が一番高い車載装置10のユーザに対して、同乗希望の通知が送信される。また、ステップS301において、例えば、依頼者が車載装置10のユーザである場合には、外部装置30のユーザに対して、同乗依頼の通知が送信される。 When the matching unit 59 starts the communication partner determination subroutine RT3, it notifies the registrant with the highest priority among the registrants extracted in the partner candidate extraction subroutine RT2 of a video communication request (step S301). In step S301, for example, if the requester is a user of the external device 30, a notification of a desire to ride along is sent to the user of the in-vehicle device 10 who has the highest priority. Also, in step S301, for example, if the requester is a user of the in-vehicle device 10, a notification requesting a ride-along is sent to the user of the external device 30.

 ステップS301の実行後、マッチング部59は、ビデオ通信が承諾されたか否かを判定する(ステップS302)。ステップS302において、例えば、ステップS301において送信した通知に対して承諾を示す回答をサーバ50が受信した場合に、ビデオ通信が承諾されたと判定される。ステップS301において、例えば、ビデオ通信を拒否する旨の回答が受信された場合、又は所定時間内に承諾を示す回答が受信されなかった場合に、ビデオ通信が承諾されなかったと判定される。 After executing step S301, the matching unit 59 determines whether the video communication has been accepted (step S302). In step S302, for example, if the server 50 receives a response indicating acceptance of the notification sent in step S301, it is determined that the video communication has been accepted. In step S302, for example, if a response rejecting the video communication is received, or if a response indicating acceptance is not received within a predetermined time, it is determined that the video communication has not been accepted.

 ステップS302において、ビデオ通信が承諾されたと判定する(ステップS302:YES)と、マッチング部59は、承諾した登録者を依頼者との通信相手に決定する(ステップS303)。ステップS303において、1の車載装置10のユーザ又は1の外部装置30のユーザが通信相手として決定される。 If it is determined in step S302 that the video communication has been accepted (step S302: YES), the matching unit 59 determines the registrant who accepted the request as the communication partner with the requester (step S303). In step S303, the user of one in-vehicle device 10 or the user of one external device 30 is determined as the communication partner.

 ステップS302において、ビデオ通信が承諾されなかったと判定する(ステップS302:NO)と、相手候補抽出サブルーチンRT2で抽出された登録者のうち、優先順位が次に高い登録者が存在するか否かを判定する(ステップS304)。 If it is determined in step S302 that video communication has not been accepted (step S302: NO), it is determined whether or not there is a registered user with the next highest priority among the registered users extracted in the partner candidate extraction subroutine RT2 (step S304).

 ステップS304において、優先順位が次に高い登録者が存在しないと判定する(ステップS304:NO)と、マッチング部59は、通信相手を決定することなく通信相手決定サブルーチンRT3を終了する。 If it is determined in step S304 that there is no registrant with the next highest priority (step S304: NO), the matching unit 59 ends the communication partner determination subroutine RT3 without determining a communication partner.

 ステップS304において、優先順位が次に高い登録者が存在すると判定する(ステップS304:YES)と、マッチング部59は、当該優先順位が次に高い登録者にビデオ通信依頼の通知を送信する(ステップS305)。 If it is determined in step S304 that a registered user with the next highest priority exists (step S304: YES), the matching unit 59 sends a notification of a video communication request to the registered user with the next highest priority (step S305).

 ステップS305の実行後、マッチング部59は、ステップS302に移り、ビデオ通信の承諾がされたか否かを判定する。 After executing step S305, the matching unit 59 proceeds to step S302 and determines whether or not the video communication has been accepted.

 本ルーチンによって、優先順位が高い順にビデオ通信の依頼を送信し、いずれかの登録者によって承諾がなされた場合には1の登録者が通信相手として決定される。いずれの登録者にも承諾されない場合には、通信相手が決定されず、上述したように、制御ルーチンRT1のステップS105において、該当者がいないことが依頼者に通知される。 This routine sends requests for video communication in descending order of priority, and if any registered person accepts the request, that registered person is selected as the communication partner. If none of the registered people accepts the request, no communication partner is selected, and as described above, in step S105 of control routine RT1, the requester is notified that there is no match.
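
Subroutine RT3 thus reduces to asking the candidates in priority order and stopping at the first consent, as in the following sketch; `ask_consent` is an illustrative callback in which a refusal or a timeout both count as a negative answer.

```python
def decide_partner(candidates, ask_consent):
    """Sketch of subroutine RT3: ask candidates in priority order, stop at the
    first consent; a refusal or a timeout both count as a negative answer."""
    for registrant in candidates:        # candidates are already sorted by RT2
        if ask_consent(registrant):      # S301/S305 notification + S302 check
            return registrant            # S303: this registrant is the partner
    return None                          # S304: NO -> no communication partner

# Example: the first candidate declines, the second accepts.
answers = {"A": False, "C": True}
print(decide_partner(["A", "C"], ask_consent=lambda r: answers[r]))  # prints C
```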

 本ルーチンにおいて、マッチング部59は、優先順位の高い順に、車載装置10のユーザ又は外部装置30のユーザに、相手候補抽出サブルーチンRT2のステップS203及びステップS204において生成した複数の組み合わせの少なくとも一部を通知する。 In this routine, the matching unit 59 notifies the user of the in-vehicle device 10 or the user of the external device 30, in descending order of priority, of at least some of the multiple combinations generated in steps S203 and S204 of the partner candidate extraction subroutine RT2.

 なお、本ルーチンにおいて、マッチング部59は、例えば、単に優先順位が一番高い登録者を、依頼者との通信相手として決定してもよい。 In addition, in this routine, the matching unit 59 may simply determine, for example, the registrant with the highest priority as the communication partner with the requester.

 なお、例えば、本ルーチンのステップS301において、相手候補抽出サブルーチンRT2で抽出された1又は複数の登録者の中から、依頼者が通信相手を選択できるようにしてもよい。 In addition, for example, in step S301 of this routine, the requester may be able to select a communication partner from one or more registered users extracted in the partner candidate extraction subroutine RT2.

 以上、詳細に説明したように、本発明の情報処理装置は、移動体としての自動車Mとともに移動する第1装置としての車載装置10と、移動体外の端末である第2装置としての外部装置30との間の音声通信に関する情報処理を行う情報処理装置であって、第1装置のユーザであるか又は第2装置のユーザであるかを示すユーザ種別情報と、案内をされることを希望する被案内者か又は案内をすることが可能な案内者かの区別を示す希望種別と、を含みかつ希望種別が被案内者である場合には案内を希望する場所である案内希望場所を含み、希望種別が案内者である場合には当該案内者の案内が可能な場所である案内可能場所を含む複数の希望情報を取得する希望情報取得部と、希望情報に基づいて、第1装置のユーザと第2装置のユーザとのマッチングを行うマッチング部と、マッチングがなされた第1装置と第2装置との音声通信を行う通信部と、を有する。 As described above in detail, the information processing device of the present invention is an information processing device that processes information regarding voice communication between an in-vehicle device 10 as a first device that moves with an automobile M as a mobile body, and an external device 30 as a second device that is a terminal outside the mobile body, and has a desired information acquisition unit that acquires multiple desired information including user type information indicating whether the user is a user of the first device or a user of the second device, and a desired type indicating whether the user is a person who wishes to be guided or a guide who can provide guidance, and including a desired guide location where the user wishes to be guided if the desired type is a person who is guided, and a guideable location where the guide can provide guidance if the desired type is a guide, a matching unit that matches users of the first device with users of the second device based on the desired information, and a communication unit that performs voice communication between the matched first device and the second device.

 上述の通り、本発明では、希望情報に希望種別が含まれるので、希望情報に基づいて、例えば、希望情報が案内者であるユーザと、希望種別が被案内者であるユーザとをマッチングすることができる。従って、マッチングがなされた第1装置と第2装置との音声通信によって、案内をされることを希望するユーザが、案内をすることが可能なユーザから案内を受けることができ、第1装置のユーザと第2装置のユーザとの双方が当該音声通信によって目的を達成することができる。 As described above, in the present invention, the desired information includes the desired type, so that, for example, a user whose desired type is "guide" can be matched, based on the desired information, with a user whose desired type is "guided person." Therefore, through voice communication between the matched first device and second device, a user who wishes to be guided can receive guidance from a user who is able to provide guidance, and both the user of the first device and the user of the second device can achieve their goals through that voice communication.

 従って、本発明によれば、移動体の運転者と移動体外のユーザとの通話の際に、互いに適した相手とマッチングすることを可能とする情報処理装置、情報処理方法、情報処理プログラム及び記憶媒体を提供することができる。 Therefore, according to the present invention, it is possible to provide an information processing device, an information processing method, an information processing program, and a storage medium that can match a driver of a mobile body and a user outside the mobile body with a suitable partner when they make a call.

 例えば、本実施例の情報処理装置は、第2装置のユーザから案内を希望する場所である案内希望場所を含む被案内希望情報を取得する希望情報取得部と、被案内希望情報及び第1装置の現在位置または移動体において設定されたに経路に基づいて、1又は複数の前記第1装置を抽出する抽出部と、当該抽出された1又は複数の第1装置のうちの1の第1装置と前記第2装置との音声通信を行う通信部と、を有する。 For example, the information processing device of this embodiment has a desired information acquisition unit that acquires guided information including a guided location from a user of a second device, which is a location to which the user wishes to be guided, an extraction unit that extracts one or more of the first devices based on the guided information and the current location of the first device or a route set in a mobile body, and a communication unit that performs voice communication between one of the extracted one or more first devices and the second device.

 このような構成により、例えば、外部装置30のユーザが案内を希望する場所を含む経路を走行中の自動車Mに搭載された車載装置10が抽出され得る。外部装置30のユーザは、例えば、抽出されたうちの1の車載装置10のユーザとの音声通信によって、案内希望場所についての案内を受けることができる。 With this configuration, for example, an in-vehicle device 10 installed in an automobile M traveling along a route that includes a location to which a user of an external device 30 wishes to receive guidance can be extracted. The user of the external device 30 can receive guidance about the desired location, for example, through voice communication with the user of one of the extracted in-vehicle devices 10.

 例えば、本実施例の情報処理装置は、第1装置のユーザから案内を希望する場所である案内希望場所を含む被案内希望情報を取得する希望情報取得部と、第2装置のユーザが案内可能か否か及び案内可能である場合には案内が可能な場所である案内可能場所を含む案内可能情報を取得する案内可能情報取得部と、当該被案内希望情報及び案内可能情報に基づいて、1又は複数の第2装置を抽出する抽出部と、当該抽出された1又は複数の第2装置のうちの1の第2装置と前記第1装置との音声通信を行う通信部と、を有する。 For example, the information processing device of this embodiment has a desired information acquisition unit that acquires guided desired information including a guided location from a user of a first device, which is a location to which the user desires guidance; a guideable information acquisition unit that acquires guideable information including whether or not the user of a second device is guideable and, if guideable, which is a location to which guidance is possible; an extraction unit that extracts one or more second devices based on the guided desired information and the guideable information; and a communication unit that performs voice communication between one of the extracted one or more second devices and the first device.

 このような構成により、例えば、車載装置10のユーザが案内を希望する場所について案内が可能な外部装置30のユーザが使用する外部装置30が抽出され得る。車載装置10のユーザは、例えば、抽出されたうちの1の外部装置30のユーザとの音声通信によって、案内希望場所についての案内を受けることができる。 With this configuration, for example, an external device 30 used by a user of an external device 30 capable of providing guidance to a location to which the user of the in-vehicle device 10 wishes to be guided can be extracted. The user of the in-vehicle device 10 can receive guidance to the desired location, for example, through voice communication with the user of one of the extracted external devices 30.

 上述した実施例における車載装置10、サーバ50及び外部装置30の構成、ルーチン等は、例示に過ぎず、用途等に応じて、適宜選択または変更することができる。 The configurations, routines, etc. of the in-vehicle device 10, server 50, and external device 30 in the above-described embodiment are merely examples, and can be appropriately selected or changed depending on the application, etc.

 例えば、上記の実施例においては、サーバ50の登録情報保持部に登録情報が保持されることについて説明したが、これに限られない。例えば、登録情報保持部に保持される情報は、サーバ50とは別の外部のサーバに記憶されてもよい。 For example, in the above embodiment, the registration information is stored in the registration information storage unit of the server 50, but this is not limited to the above. For example, the information stored in the registration information storage unit may be stored in an external server separate from the server 50.

 上記の実施例においては、情報処理システム100においてビデオ通信を行う例について説明したが、これに限られない。本発明は、情報処理システム100において、外部装置30と車載装置10との間で音声通話のみ行う場合に適用可能である。つまり、本発明は、車載装置10から映像が配信されない態様での通信にも適用可能である。また、ビデオ通信の開始直後の一定時間は映像の配信のみとし、音声通話は遅れて開始されるようにしてもよい。 In the above embodiment, an example of video communication in the information processing system 100 has been described, but the present invention is not limited to this. The present invention is applicable to cases where only voice calls are made between the external device 30 and the in-car device 10 in the information processing system 100. In other words, the present invention is also applicable to communication in a form in which no video is distributed from the in-car device 10. Furthermore, for a certain period of time immediately after the start of video communication, only video may be distributed, and the voice call may be started later.

 また、上記の実施例においては、車載装置10と外部装置30とがサーバ50を介してビデオ通信を行うこととしたが、ビデオ通信は車載装置10と外部装置30との間で直接なされてもよい。 In addition, in the above embodiment, the in-car device 10 and the external device 30 perform video communication via the server 50, but the video communication may be performed directly between the in-car device 10 and the external device 30.

 また、ビデオ通信の際に、車載装置10から外部装置30に上記映像配信が行われつつ、情報処理システム100とは別の経路で並行して車載装置10と外部装置30との間の音声通話が確立されていても良い。 Furthermore, during video communication, while the above-mentioned video distribution is being performed from the in-vehicle device 10 to the external device 30, a voice call may be established between the in-vehicle device 10 and the external device 30 in parallel via a path separate from the information processing system 100.

 上記の実施例において、自動車Mの乗員として主に運転者が外部装置30のユーザとマッチングされる例について説明したが、助手席や後部座席に乗っている乗員が外部装置30のユーザとマッチングされてもよい。 In the above embodiment, an example was described in which the driver is primarily matched with the user of the external device 30 as the occupant of the automobile M, but occupants in the passenger seat or the back seat may also be matched with the user of the external device 30.

 上記実施例において、自動車Mの前方及び内部を撮影した映像が配信される例について説明したが、これに限られない。例えば、自動車Mには、自動車Mの側方又は後方を撮影するカメラ、又は360度カメラが備えられていてもよく、これらのカメラによって撮影された映像がビデオ通信時に配信されてもよい。 In the above embodiment, an example was described in which video footage of the front and interior of the automobile M was distributed, but this is not limited to this. For example, the automobile M may be equipped with a camera that captures the side or rear of the automobile M, or a 360-degree camera, and video footage captured by these cameras may be distributed during video communication.

 上記実施例において、車載装置10は、車載ナビゲーション装置であるとしたが、車載装置10は、ナビゲーション機能を有していなくともよい。その場合、例えば、車載装置10が自動車Mの現在位置情報をサーバ50に送信することで、案内可能場所としての現在地が登録されてもよく、現在位置の推移に基づいて主要経由地が予測されて登録されてもよい。 In the above embodiment, the in-vehicle device 10 is an in-vehicle navigation device, but the in-vehicle device 10 does not need to have a navigation function. In that case, for example, the in-vehicle device 10 may transmit current position information of the automobile M to the server 50, and the current location may be registered as a navigable location, or major intermediate points may be predicted and registered based on the change in the current location.

 例えば、車載装置10は、車載装置10と同様の構成を有する端末装置と車外カメラ13とタッチパネル15とが一体化された構成であってもよい。具体的には、例えば、車載装置10は、上記車載装置10と同様の機能を発揮するアプリケーションを搭載したカメラ付きのスマートフォン、タブレットまたはPC等の端末装置であってもよい。この場合、車載装置10は、内蔵カメラが自動車Mのフロントガラスを通して自動車Mの前方を撮影可能なように、例えばクレードル等でダッシュボードDB上に取り付けられ得る。 For example, the in-vehicle device 10 may be configured by integrating a terminal device having a similar configuration to the in-vehicle device 10 with an external camera 13 and a touch panel 15. Specifically, for example, the in-vehicle device 10 may be a terminal device such as a smartphone, tablet, or PC with a camera that is equipped with an application that performs the same functions as the in-vehicle device 10. In this case, the in-vehicle device 10 may be mounted on the dashboard DB, for example, by a cradle or the like, so that the built-in camera can capture an image of the front of the automobile M through the windshield of the automobile M.

 また、車載装置10は、自動車Mの運転者に提示する画面を表示しない構成であってもよい。例えば、車載装置10は、ドライブレコーダのような構成を有していてもよく、例えば、車外カメラ13と一体となった装置であってもよい。具体的には、車載装置10は、例えば、車外カメラ13の筐体内に上記した車載装置10のビデオ通信機能を果たすハードウェアを内蔵したような装置であってもよい。この場合、車載装置10は、上記において説明したような種々の表示出力を行わないこととしてもよい。 Furthermore, the in-vehicle device 10 may be configured not to display a screen to be presented to the driver of the automobile M. For example, the in-vehicle device 10 may have a configuration similar to that of a drive recorder, and may be, for example, a device integrated with the exterior camera 13. Specifically, the in-vehicle device 10 may be, for example, a device in which hardware that performs the above-mentioned video communication function of the in-vehicle device 10 is built into the housing of the exterior camera 13. In this case, the in-vehicle device 10 may not perform the various display outputs as described above.

 上記の実施例において、外部装置30はスマートフォンである場合について説明したが、これに限られない。外部装置30は、外部装置30のユーザがビデオ通信に利用できる端末装置であって、ビデオ通信に関する表示又はメッセージの提示、ビデオ通信を行うために必要な操作入力の受付、音声データの送受信並びに映像の受信及び表示が可能に構成されていればよい。例えば、外部装置30は、タブレット、PC、ウェアラブル端末等の端末装置であってもよい。 In the above embodiment, the external device 30 is described as being a smartphone, but this is not limited thereto. The external device 30 is a terminal device that the user of the external device 30 can use for video communication, and is configured to be able to display or present messages related to video communication, accept operational inputs necessary for video communication, send and receive audio data, and receive and display video. For example, the external device 30 may be a terminal device such as a tablet, PC, or wearable device.

 上記の実施例においては、車載装置10が自動車Mに搭載される例を説明したが、車載装置10は、自転車、バイク、船舶等他の移動体に搭載されていてもよい。 In the above embodiment, an example was described in which the in-vehicle device 10 was mounted on an automobile M, but the in-vehicle device 10 may also be mounted on other moving objects such as bicycles, motorbikes, and boats.

10 車載装置
13 車外カメラ
14 車内カメラ
15 タッチパネル
17 スピーカ
19 マイク
21 加速度センサ
23 記憶部
25 制御部
27 通信部
30 外部装置
31 タッチパネル
33 スピーカ
35 マイク
37 記憶部
39 制御部
41 通信部
50 サーバ
51 記憶部
53 通信部
55 制御部
57 希望情報取得部
59 マッチング部
Reference Signs List
10 In-vehicle device
13 Outside-vehicle camera
14 Inside-vehicle camera
15 Touch panel
17 Speaker
19 Microphone
21 Acceleration sensor
23 Memory unit
25 Control unit
27 Communication unit
30 External device
31 Touch panel
33 Speaker
35 Microphone
37 Memory unit
39 Control unit
41 Communication unit
50 Server
51 Memory unit
53 Communication unit
55 Control unit
57 Desired information acquisition unit
59 Matching unit

Claims (15)

 移動体とともに移動する第1装置と、前記移動体外の端末である第2装置との間の音声通信に関する情報処理を行う情報処理装置であって、
 前記第1装置のユーザであるか又は前記第2装置のユーザであるかを示すユーザ種別情報と、案内をされることを希望する被案内者か又は案内をすることが可能な案内者かの区別を示す希望種別と、を含みかつ前記希望種別が前記被案内者である場合には案内を希望する場所である案内希望場所を含み、前記希望種別が前記案内者である場合には前記案内者の前記案内が可能な場所である案内可能場所を含む複数の希望情報を取得する希望情報取得部と、
 前記希望情報に基づいて、前記第1装置のユーザと前記第2装置のユーザとのマッチングを行うマッチング部と、
 前記マッチングがなされた前記第1装置と前記第2装置との前記音声通信を行う通信部と、を有することを特徴とする情報処理装置。
An information processing device for processing information regarding voice communication between a first device that moves together with a mobile object and a second device that is a terminal outside the mobile object,
a desire information acquiring unit that acquires a plurality of desire information including user type information indicating whether the user is a user of the first device or a user of the second device, and a desire type indicating whether the user is a guided person who wishes to be guided or a guide who is capable of providing guidance, and including a desired guide place that is a place where the user desires to be guided if the desire type is the guided person, and including a guideable place that is a place where the guide is capable of providing the guidance if the desire type is the guider;
a matching unit that matches a user of the first device with a user of the second device based on the desired information;
and a communication unit for performing the voice communication between the first device and the second device that have been matched.
 前記マッチング部は、前記希望種別が前記案内者であるユーザと前記被案内者であるユーザとをマッチングすることを特徴とする請求項1に記載の情報処理装置。
The information processing device according to claim 1, characterized in that the matching unit matches a user whose desired type is the guide with a user whose desired type is the guided person.

 前記マッチング部は、前記案内可能場所と、前記案内希望場所との照合結果に基づいて、前記第1装置のユーザと前記第2装置のユーザとのマッチングを行うことを特徴とする請求項1又は2に記載の情報処理装置。
The information processing device according to claim 1 or 2, characterized in that the matching unit matches the user of the first device with the user of the second device based on a comparison result between the guideable location and the guided location.

 前記マッチング部は、前記第1装置のユーザと前記第2装置のユーザとのマッチングにおいて、1の前記第1装置のユーザに対して複数の前記第2装置のユーザの各々を組み合わせるかまたは1の前記第2装置のユーザに複数の前記第1装置のユーザの各々を組み合わせて複数の組み合わせを生成し、前記複数の組み合わせについて、前記案内可能場所と前記案内希望場所との照合結果に応じた優先順位を付けることを特徴とする請求項1に記載の情報処理装置。
The information processing device according to claim 1, characterized in that, in matching the users of the first device and the users of the second device, the matching unit generates a plurality of combinations by combining one user of the first device with each of the users of the second device, or by combining one user of the second device with each of the users of the first device, and prioritizes the plurality of combinations according to the comparison result between the navigable locations and the desired location for guidance.

 前記マッチング部は、前記優先順位の高い順に、前記1の前記第1装置のユーザ又は前記1の前記第2装置のユーザに前記複数の組み合わせの少なくとも一部を通知することを特徴とする請求項4に記載の情報処理装置。
The information processing device according to claim 4, characterized in that the matching unit notifies the user of the one of the first devices or the user of the one of the second devices of at least some of the multiple combinations in order of the highest priority.

 前記第1装置のユーザの前記希望種別が前記案内者である場合に、前記案内可能場所は、前記移動体の現在地、目的地、又は前記移動体において設定された経路上の場所を含むことを特徴とする請求項1に記載の情報処理装置。
The information processing device according to claim 1, characterized in that, when the desired type of the user of the first device is the guide, the guiding possible locations include the current location of the mobile object, a destination, or a location on a route set in the mobile object.

 前記希望情報は、前記第2装置のユーザの前記希望種別が前記案内者である場合に、前記第2装置のユーザが案内をすることが可能なタイミングを示す情報を含むことを特徴とする請求項1に記載の情報処理装置。
The information processing device according to claim 1, characterized in that the desired information includes information indicating a timing at which the user of the second device is available to provide guidance when the desired type of the user of the second device is to be a guide.

 前記マッチング部は、前記第1装置のユーザと前記第2装置のユーザとのマッチングにおいて、前記第2装置のユーザの前記案内希望場所から所定距離内にある第1装置のユーザを前記マッチングの対象から除外することを特徴とする請求項1に記載の情報処理装置。
The information processing device according to claim 1, characterized in that, in matching between a user of the first device and a user of the second device, the matching unit excludes a user of the first device who is within a predetermined distance from the desired location of the user of the second device from the matching targets.

 前記マッチング部は、前記第1装置のユーザと前記第2装置のユーザとのマッチングにおいて、前記第2装置の現在位置が、前記第1装置とともに移動する前記移動体の現在地、目的地、又は前記移動体において設定された経路上の場所を含む場所から所定距離内である第2装置のユーザを前記マッチングの対象から除外することを特徴とする請求項1又は6に記載の情報処理装置。
The information processing device according to claim 1 or 6, characterized in that, in matching between a user of the first device and a user of the second device, the matching unit excludes from the matching target a user of the second device whose current location of the second device is within a predetermined distance from a location including a current location of the mobile object moving together with the first device, a destination, or a location on a route set by the mobile object.

 前記マッチング部は、前記第1装置の現在位置と、前記第2装置の現在位置との距離が所定距離内である場合に、当該第1装置のユーザと当該第2装置のユーザとをマッチングさせないことを特徴とする請求項1に記載の情報処理装置。
The information processing device according to claim 1, characterized in that the matching unit does not match the user of the first device with the user of the second device when the distance between the current location of the first device and the current location of the second device is within a predetermined distance.

 移動体とともに移動する第1装置と、前記移動体外の端末である第2装置との間の音声通信に関する情報処理を行う情報処理装置であって、
 前記第2装置のユーザから案内を希望する場所である案内希望場所を含む被案内希望情報を取得する希望情報取得部と、
 前記被案内希望情報及び前記第1装置の現在位置または前記移動体において設定されたに経路に基づいて、1又は複数の前記第1装置を抽出する抽出部と、
 当該抽出された1又は複数の前記第1装置のうちの1の前記第1装置と前記第2装置との前記音声通信を行う通信部と、を有することを特徴とする情報処理装置。
An information processing device for processing information regarding voice communication between a first device that moves together with a mobile object and a second device that is a terminal outside the mobile object,
a desired information acquiring unit that acquires guided information including a guided location from a user of the second device;
an extraction unit that extracts one or a plurality of the first devices based on the guidance request information and a current location of the first device or a route set in the mobile object;
and a communication unit that performs the voice communication between one of the extracted one or more first devices and the second device.
 移動体とともに移動する第1装置と、前記移動体外の端末である第2装置との間の音声通信に関する情報処理を行う情報処理装置であって、
 前記第1装置のユーザから案内を希望する場所である案内希望場所を含む被案内希望情報を取得する希望情報取得部と、
 前記第2装置のユーザが案内可能か否か及び案内可能である場合には案内が可能な場所である案内可能場所を含む案内可能情報を取得する案内可能情報取得部と、
 前記被案内希望情報及び前記案内可能情報に基づいて、1又は複数の前記第2装置を抽出する抽出部と、
 当該抽出された1又は複数の前記第2装置のうちの1の前記第2装置と前記第1装置との前記音声通信を行う通信部と、を有することを特徴とする情報処理装置。
An information processing device for processing information regarding voice communication between a first device that moves together with a mobile object and a second device that is a terminal outside the mobile object,
a desired information acquiring unit that acquires guided information including a guided location from a user of the first device;
a guideable information acquisition unit that acquires guideable information including whether or not the user of the second device can provide guidance and, if so, a guideable place that is a place where guidance is possible;
an extraction unit that extracts one or more of the second devices based on the guidance request information and the guidable information;
an information processing apparatus comprising: a communication unit that performs the voice communication between one of the extracted one or more second devices and the first device;
 移動体とともに移動する第1装置と、前記移動体外の端末である第2装置との間の音声通信に関する情報処理を行う情報処理装置によって実行される情報処理方法であって、
 前記第1装置のユーザであるか又は前記第2装置のユーザであるかを示すユーザ種別情報と、案内をされることを希望する被案内者か又は案内をすることが可能な案内者かの区別を示す希望種別と、を含みかつ前記希望種別が前記被案内者である場合には案内を希望する場所である案内希望場所を含み、前記希望種別が前記案内者である場合には前記案内者の前記案内が可能な場所である案内可能場所を含む複数の希望情報を取得する希望情報取得ステップと、
 前記希望情報に基づいて、前記第1装置のユーザと前記第2装置のユーザとのマッチングを行うマッチングステップと、
 前記マッチングがなされた前記第1装置と前記第2装置との前記音声通信を行う通信ステップと、を含むことを特徴とする情報処理方法。
1. An information processing method executed by an information processing device for processing information regarding voice communication between a first device that moves with a mobile object and a second device that is a terminal outside the mobile object, comprising:
a desire information acquiring step of acquiring a plurality of desire information including user type information indicating whether the user is a user of the first device or a user of the second device, and a desire type indicating whether the user is a guided person who wishes to be guided or a guide who is capable of providing guidance, and including a desired guide place that is a place where the user desires to be guided if the desire type is the guided person, and including a guideable place that is a place where the guide is capable of providing the guidance if the desire type is the guider;
a matching step of matching a user of the first device with a user of the second device based on the desired information;
and a communication step of performing the voice communication between the first device and the second device for which the matching has been achieved.
 コンピュータを備え、移動体とともに移動する第1装置と、前記移動体外の端末である第2装置との間の音声通信に関する情報処理を行う情報処理装置によって実行される情報処理プログラムであって、前記コンピュータに、
 前記第1装置のユーザであるか又は前記第2装置のユーザであるかを示すユーザ種別情報と、案内をされることを希望する被案内者か又は案内をすることが可能な案内者かの区別を示す希望種別と、を含みかつ前記希望種別が前記被案内者である場合には案内を希望する場所である案内希望場所を含み、前記希望種別が前記案内者である場合には前記案内者の前記案内が可能な場所である案内可能場所を含む複数の希望情報を取得する希望情報取得ステップと、
 前記希望情報に基づいて、前記第1装置のユーザと前記第2装置のユーザとのマッチングを行うマッチングステップと、
 前記マッチングがなされた前記第1装置と前記第2装置との前記音声通信を行う通信ステップと、
 を実行させるための情報処理プログラム。
An information processing program executed by an information processing device that includes a computer and processes information regarding voice communication between a first device that moves with a mobile object and a second device that is a terminal outside the mobile object, the information processing program comprising:
a desire information acquiring step of acquiring a plurality of desire information including user type information indicating whether the user is a user of the first device or a user of the second device, and a desire type indicating whether the user is a guided person who wishes to be guided or a guide who is capable of providing guidance, and including a desired guide place that is a place where the user desires to be guided if the desire type is the guided person, and including a guideable place that is a place where the guide is capable of providing the guidance if the desire type is the guider;
a matching step of matching a user of the first device with a user of the second device based on the desired information;
a communication step of performing the voice communication between the first device and the second device that have been matched;
An information processing program for executing the above.
 コンピュータを備え、移動体とともに移動する第1装置と、前記移動体外の端末である第2装置との間の音声通信に関する情報処理を行う情報処理装置に、
 前記第1装置のユーザであるか又は前記第2装置のユーザであるかを示すユーザ種別情報と、案内をされることを希望する被案内者か又は案内をすることが可能な案内者かの区別を示す希望種別と、を含みかつ前記希望種別が前記被案内者である場合には案内を希望する場所である案内希望場所を含み、前記希望種別が前記案内者である場合には前記案内者の前記案内が可能な場所である案内可能場所を含む複数の希望情報を取得する希望情報取得ステップと、
 前記希望情報に基づいて、前記第1装置のユーザと前記第2装置のユーザとのマッチングを行うマッチングステップと、
 前記マッチングがなされた前記第1装置と前記第2装置との前記音声通信を行う通信ステップと、
 を実行させるための情報処理プログラムを記憶するコンピュータが読み取り可能な記憶媒体。
An information processing device includes a computer, the information processing device processing information related to voice communication between a first device that moves together with a mobile object and a second device that is a terminal outside the mobile object,
a desire information acquiring step of acquiring a plurality of desire information including user type information indicating whether the user is a user of the first device or a user of the second device, and a desire type indicating whether the user is a guided person who wishes to be guided or a guide who is capable of providing guidance, and including a desired guide place that is a place where the user desires to be guided if the desire type is the guided person, and including a guideable place that is a place where the guide is capable of providing the guidance if the desire type is the guider;
a matching step of matching a user of the first device with a user of the second device based on the desired information;
a communication step of performing the voice communication between the first device and the second device that have been matched;
A computer-readable storage medium storing an information processing program for executing the above.
PCT/JP2023/023186 2023-02-10 2023-06-22 Information processing device, information processing method, information processing program, and storage medium WO2024166412A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2023-019272 2023-02-10
JP2023019272 2023-02-10

Publications (1)

Publication Number Publication Date
WO2024166412A1 true WO2024166412A1 (en) 2024-08-15

Family

ID=92262764

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/023186 WO2024166412A1 (en) 2023-02-10 2023-06-22 Information processing device, information processing method, information processing program, and storage medium

Country Status (1)

Country Link
WO (1) WO2024166412A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015227869A (en) * 2014-05-07 2015-12-17 アクセラス株式会社 Voice guide support system and program thereof
JP2020076683A (en) * 2018-11-09 2020-05-21 トヨタ自動車株式会社 Mobile object and remote guide system
WO2022196429A1 (en) * 2021-03-15 2022-09-22 ソニーグループ株式会社 Information processing device, information processing method, and program


Similar Documents

Publication Publication Date Title
EP2995908A1 (en) Apparatus, system and method for clustering points of interest in a navigation system
JP2003177026A (en) Navigation apparatus and selecting method for information
US20090105933A1 (en) System for providing visual information of a remote location to a user of a vehicle
JPH10232135A (en) Image-data collecting method, image-data providing method, map forming method, position-data providing method, navigation device and vehicle
JP6324196B2 (en) Information processing apparatus, information processing method, and information processing system
US20100161209A1 (en) Routing a User to a Parked Vehicle
JP2020061062A (en) Driving support device, vehicle, driving support system, driving support method, and computer program for driving support
WO2005024346A1 (en) Portable communication unit with navigation means
JP2020095475A (en) Matching method, matching server, matching system, and program
WO2021253955A1 (en) Information processing method and apparatus, and vehicle and display device
US20070115433A1 (en) Communication device to be mounted on automotive vehicle
JP2006084461A (en) Image acquisition system
JP2007263802A (en) On-board display device and its display method
JP2018124097A (en) On-vehicle device, information providing method, and information providing system
US11915202B2 (en) Remote meeting and calendar support for the in-vehicle infotainment unit
JP2020119131A (en) Server, server control method, server control program, communication terminal, terminal control method, and terminal control program
JP7451901B2 (en) Communication devices, communication methods and programs
JP5018665B2 (en) Data communication system and first in-vehicle device, second in-vehicle device, and data storage device used therefor
JP2006250874A (en) Navigation device and guidance method for own vehicle relative position
WO2024166412A1 (en) Information processing device, information processing method, information processing program, and storage medium
JP2010160066A (en) Navigation apparatus
JP2004157692A (en) Inter-vehicle communication information processing apparatus
JP2024104640A (en) Information processing device, information processing method, information processing program, and storage medium
JP2022103977A (en) Information providing device, information providing method, and program
KR20180035452A (en) Infotainment System Mounted on Vehicle and Operation Method Of The System

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23921239

Country of ref document: EP

Kind code of ref document: A1