CN110871813A - Control method and device of virtual robot, vehicle, equipment and storage medium - Google Patents
- Publication number: CN110871813A
- Application number: CN201811015804.3A
- Authority
- CN
- China
- Prior art keywords
- virtual robot
- state
- vehicle
- data
- trigger event
- Prior art date: 2018-08-31
- Legal status: Pending
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/08—Interaction between the driver and the control system
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R16/00—Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
- B60R16/02—Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements
- B60R16/023—Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements for transmission of signals between vehicle parts or subsystems
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/12—Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
- H04L67/125—Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks involving control of end-device applications over a network
Abstract
The invention provides a control method of a virtual robot, a control device, a vehicle, an electronic device, and a storage medium. The method comprises the following steps: acquiring source data; extracting a trigger event from the source data; and controlling the virtual robot to execute a target behavior matched with the trigger event, wherein the target behavior comprises at least one of an action, an expression, and a voice. By controlling the virtual robot to make actions and expressions in different states according to the acquired source data, the method greatly enriches the actions and expressions the virtual robot can display; matched with voice output, the virtual robot becomes more vivid, three-dimensional, and emotionally expressive, which improves the interest of the interaction between the user and the virtual robot.
Description
Technical Field
The present invention relates to the field of vehicle control technologies, and in particular, to a method and an apparatus for controlling a virtual robot, a vehicle, a device, and a storage medium.
Background
With the rising popularity of vehicles, vehicles have gradually become an indispensable part of people's travel. As a result, the intelligence and enjoyability of vehicle control draw more and more attention, and users increasingly want emotional companionship while driving.
In the related art, a user can install a vehicle-mounted mechanical-arm device and a power supply in a vehicle and fix a smartphone on the mechanical arm to form a physical robot; rotating the mechanical arm drives the smartphone to face forward or backward in a vertical plane, so that the robot can perform different actions. However, this approach is limited by the robot's physical form: the actions it can display are mechanical and monotonous, and it cannot make vivid, three-dimensional motions that fit the current scene, so interaction between the user and the robot holds little interest.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
To this end, a first object of the present invention is to provide a control method for a virtual robot. The method determines, from source data collected over the vehicle-wide CAN network, a trigger event for changing the form of the virtual robot, and then controls the virtual robot to execute a target behavior matched with the trigger event, the target behavior comprising preset actions, expressions, and voices. The virtual robot is thus made to perform different actions and expressions according to the specific scene the user is in, which greatly enriches the number of actions and expressions the virtual robot can display; unconstrained by physical conditions, its actions are more vivid and lifelike. Combined with the virtual robot's expressions and voice interaction, this makes the virtual robot more vivid, three-dimensional, and emotionally expressive, and improves the interest of the interaction between the user and the virtual robot.
A second object of the present invention is to provide a control device for a virtual robot.
A third object of the invention is to propose a vehicle.
A fourth object of the invention is to propose an electronic device.
A fifth object of the invention is to propose a non-transitory computer-readable storage medium.
In order to achieve the above object, an embodiment of a first aspect of the present invention provides a method for controlling a virtual robot, including:
acquiring source data;
extracting a trigger event from the source data;
controlling the virtual robot to execute a target behavior matched with the trigger event; wherein the target behavior comprises at least one of an action, an expression and a voice.
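To make the three steps concrete, the following is a minimal Python sketch of the acquire-extract-execute pipeline. All names (SourceData, extract_trigger_event, BEHAVIORS) and the example trigger rules are hypothetical illustrations, not part of the claimed method.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SourceData:
    # Hypothetical subset of the source data described above.
    vehicle_speed_mps: float = 0.0
    seatbelt_fastened: bool = True
    voice_request: Optional[str] = None

def extract_trigger_event(data: SourceData) -> Optional[str]:
    """Step 2: derive a trigger event from the source data (simplified)."""
    if data.voice_request is not None:
        return "voice_interaction"
    if not data.seatbelt_fastened and data.vehicle_speed_mps > 0:
        return "seatbelt_warning"
    return None

# Hypothetical mapping from trigger event to a target behavior made up of
# an action, an expression, and a voice line.
BEHAVIORS = {
    "voice_interaction": ("explain", "smile", "Here is your answer."),
    "seatbelt_warning": ("point_at_seat", "worried", "Please fasten your seatbelt."),
}

def control_virtual_robot(data: SourceData) -> None:
    event = extract_trigger_event(data)
    if event is None:
        return  # no trigger event: the robot keeps its current display
    action, expression, voice = BEHAVIORS[event]  # step 3: match target behavior
    print(f"robot: action={action}, expression={expression}, say={voice!r}")

control_virtual_robot(SourceData(vehicle_speed_mps=15.0, seatbelt_fastened=False))
```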
In addition, the control method of the virtual robot according to the above embodiment of the present invention may further include the following additional technical features:
in one embodiment of the present invention, controlling the virtual robot to execute the target behavior matched with the trigger event includes: identifying a current state of the virtual robot, wherein the current state is one of a sleep state, a play state, a learning state, a working state and an event state; judging whether the state of the virtual robot needs to be switched or not according to the trigger event; if the state of the virtual robot does not need to be switched, matching the target behavior from the behaviors included in the current state according to the trigger event; if the state of the virtual robot needs to be switched, switching from the current state to a target state, and matching the target behavior from the behaviors included in the target state according to the trigger event; and controlling the virtual robot to execute the target behavior.
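One plausible realization of this state logic is a small finite-state machine. The sketch below uses the five state names from the text; the event-to-state table and the behavior lookup are assumptions made for illustration only.

```python
from enum import Enum, auto

class RobotState(Enum):
    SLEEP = auto()
    PLAY = auto()
    LEARNING = auto()
    WORKING = auto()
    EVENT = auto()

# Hypothetical: the state each kind of trigger event belongs to.
TRIGGER_STATE = {
    "voice_interaction": RobotState.WORKING,
    "vehicle_fault": RobotState.EVENT,
}

class VirtualRobot:
    def __init__(self) -> None:
        self.state = RobotState.SLEEP

    def handle(self, trigger: str) -> None:
        target_state = TRIGGER_STATE[trigger]
        if self.state is not target_state:
            # The state needs to be switched before matching a behavior.
            print(f"switching {self.state.name} -> {target_state.name}")
            self.state = target_state
        # Match the target behavior from the behaviors of the current state.
        print(f"executing behavior for {trigger!r} in {self.state.name}")

robot = VirtualRobot()
robot.handle("voice_interaction")  # switches SLEEP -> WORKING, then executes
robot.handle("vehicle_fault")      # switches WORKING -> EVENT, then executes
```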
In an embodiment of the present invention, after identifying the current state of the virtual robot, the method further includes: and monitoring the state data of the virtual robot and/or the state data of the vehicle in real time, determining the next state of the virtual robot according to the state data of the virtual robot and/or the vehicle, and controlling the virtual robot to switch from the current state to the next state.
In one embodiment of the present invention, the control method of the virtual robot further includes: each state of the virtual robot comprises at least one state behavior matched with the state; after controlling the virtual robot to execute the target behavior, the method further includes: and controlling the virtual robot to randomly execute at least one state behavior in the current state.
In one embodiment of the present invention, extracting the trigger event from the source data includes: judging whether the source data comprises a voice interaction request, if so, extracting keywords from the voice interaction request, and determining a target interaction scene corresponding to the voice interaction request according to the keywords; and determining the trigger event according to the target interaction scene.
In one embodiment of the present invention, extracting the trigger event from the source data includes: extracting a voice control request from the source data, and extracting vehicle components and control instructions to be controlled from the voice control request; determining the triggering event according to the vehicle component and the control instruction.
In one embodiment of the present invention, extracting the trigger event from the source data includes: extracting vehicle driving data and driver's operation data from the source data; determining the current driving state of the vehicle according to the vehicle driving data and the operation data of the driver; and determining the trigger event according to the current driving state.
In one embodiment of the present invention, extracting the trigger event from the source data includes: extracting state data of the vehicle from the source data; judging whether the vehicle has a fault according to the state data of the vehicle; and if the fault exists, identifying the fault type of the vehicle, and determining the trigger event according to the fault type.
In one embodiment of the present invention, extracting the trigger event from the source data includes: extracting environment data of the environment where the vehicle is located from the source data; identifying the environmental state of the environment according to the environmental data; determining the trigger event according to the environment state.
In an embodiment of the present invention, after the virtual robot executes the target behavior, the method further includes: and judging whether the virtual robot carries out state switching before executing the target behavior, and if the virtual robot carries out the state switching, controlling the virtual robot to return to the state before switching.
The control method of the virtual robot in the embodiment of the invention first obtains source data, then extracts a trigger event from the source data, and finally controls the virtual robot to execute a target behavior matched with the trigger event, where the target behavior comprises at least one of an action, an expression, and a voice. The method determines, from source data collected over the vehicle-wide CAN network, a trigger event for changing the form of the virtual robot, and then controls the virtual robot to execute the matched target behavior comprising preset actions, expressions, and voices. The virtual robot thus performs different actions and expressions according to the specific scene the user is in, which greatly enriches the number of actions and expressions it can display; unconstrained by physical conditions, its actions are more vivid and lifelike. Combined with the virtual robot's expressions and voice interaction, this makes the virtual robot more vivid, three-dimensional, and emotionally expressive, and improves the interest of the interaction between the user and the virtual robot.
In order to achieve the above object, a second embodiment of the present invention provides a control apparatus for a virtual robot, including:
the data acquisition module is used for acquiring source data;
the extraction module is used for extracting the trigger event from the source data;
and the control module is used for controlling the virtual robot to execute a target behavior matched with the trigger event, wherein the target behavior comprises at least one of action, expression and voice.
The control device of the virtual robot in the embodiment of the invention first obtains source data, then extracts a trigger event from the source data, and finally controls the virtual robot to execute a target behavior matched with the trigger event, where the target behavior comprises at least one of an action, an expression, and a voice. The device determines, from source data collected over the vehicle-wide CAN network, a trigger event for changing the form of the virtual robot, and then controls the virtual robot to execute the matched target behavior comprising preset actions, expressions, and voices. The virtual robot thus performs different actions and expressions according to the specific scene the user is in, which greatly enriches the number of actions and expressions it can display; unconstrained by physical conditions, its actions are more vivid and lifelike. Combined with the virtual robot's expressions and voice interaction, this makes the virtual robot more vivid, three-dimensional, and emotionally expressive, and improves the interest of the interaction between the user and the virtual robot.
In order to achieve the above object, an embodiment of a third aspect of the present invention proposes a vehicle including the control device of a virtual robot as described in the above embodiments.
In order to achieve the above object, a fourth aspect of the present invention provides an electronic device, including a processor and a memory, wherein the processor runs a program corresponding to an executable program code by reading the executable program code stored in the memory, so as to implement the control method of the virtual robot as described in the above embodiments.
In order to achieve the above object, a fifth aspect embodiment of the present invention proposes a non-transitory computer-readable storage medium on which a computer program is stored, which when executed by a processor implements the control method of a virtual robot as described in the above embodiments.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic structural diagram of a virtual robot architecture according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a control method for a virtual robot according to an embodiment of the present invention;
fig. 3 is a schematic diagram illustrating state transition of a virtual robot according to an embodiment of the present invention;
fig. 4 is a schematic diagram illustrating music listening actions of a virtual robot according to an embodiment of the present invention;
fig. 5 is a schematic diagram illustrating a walking action of a virtual robot according to an embodiment of the present invention;
fig. 6 is a schematic diagram illustrating a defogging operation of a virtual robot according to an embodiment of the present invention;
fig. 7 is a schematic diagram of a fan operation of a virtual robot according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a control apparatus of a virtual robot according to an embodiment of the present invention; and
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
The following describes a control method, apparatus, and device of a virtual robot according to an embodiment of the present invention with reference to the drawings.
The virtual robot in the embodiment of the invention runs on the vehicle-mounted multimedia system and can show different images, actions, and expressions through the display screen of the vehicle-mounted multimedia system. The control method of the virtual robot in the embodiment of the present invention may be executed by the virtual robot architecture provided in the embodiment of the present invention.
Fig. 1 is a schematic diagram of the connection between the virtual robot and external devices according to an embodiment of the present invention. As shown in fig. 1, the virtual robot runs on the vehicle-mounted multimedia system, which serves as its main running carrier and can establish a connection with an artificial intelligence platform through a mobile 4G/5G network or a wireless access network, so that the virtual robot can acquire data from the network and interact with the artificial intelligence platform. The vehicle's electronic devices access the vehicle-wide CAN network through a CAN gateway and are then connected with the vehicle-mounted multimedia system, so that the virtual robot can acquire local data such as vehicle state data, driving data, and environment data.
Fig. 2 is a flowchart illustrating a control method of a virtual robot according to an embodiment of the present invention. As shown in fig. 2, the method includes:
step 101, source data is acquired.
The source data includes vehicle-mounted data acquired while the user uses the vehicle, the driver's operation data, environment data, voice requests sent by the user, and the like.
The vehicle-mounted data may include vehicle travel data, vehicle warning information, and status data of the vehicle itself. For example, the vehicle travel data may include driving speed, driving time, fuel consumption, and the like; the vehicle warning information may include whether the user has fastened the seat belt, alarm information sent by the instrument detection module, and so on. The environment data may include weather data, temperature data inside and outside the vehicle, environment quality data, road data, and traffic data, such as the ambient temperature outside the vehicle, the air quality inside the vehicle, or the traffic conditions on the road section ahead. The driver's operation data may include behavior data of the user controlling vehicle devices or operating the vehicle-mounted multimedia system, such as the air-conditioning temperature set by the user, the gear engaged by the user, or the click position on the vehicle-mounted multimedia display screen. The voice request may be voice interaction information sent by the user through audio equipment, such as chat content, a query request, or a control instruction.
In a specific implementation, the vehicle-mounted electronic modules and corresponding sensors mounted on the vehicle collect the vehicle-mounted data, the environment data, and the driver's operation data, and then send the collected data to the vehicle-mounted multimedia system through the vehicle-wide CAN network. For example, a sensor fixed on the main shaft of the transmission detects the transmission's rotating speed and sends the speed information over the CAN network; the vehicle-mounted multimedia system converts it to obtain the current vehicle speed. As another example, sensors and trigger switches mounted in a seat detect the seat angle, whether the user has fastened the seat belt, and whether the seat is occupied, and send this seat information over the CAN network. Furthermore, the user can send a voice request through audio equipment such as a microphone; after analog-to-digital conversion, the audio equipment sends the voice information to the vehicle-mounted multimedia system in a wireless or wired manner.
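As a rough illustration of the last step (turning a raw CAN frame into usable speed information), consider the sketch below. The frame ID, byte layout, and scale factor are invented for the example; real decodings come from the vehicle maker's CAN signal definitions.

```python
from typing import Optional

SPEED_FRAME_ID = 0x1A0  # hypothetical CAN arbitration ID for the speed frame

def decode_speed(arbitration_id: int, data: bytes) -> Optional[float]:
    """Decode vehicle speed (m/s) from a raw CAN frame, if it is the speed frame."""
    if arbitration_id != SPEED_FRAME_ID:
        return None
    raw = int.from_bytes(data[0:2], "big")  # assumed: 16-bit value in bytes 0-1
    return raw * 0.01                       # assumed scale: 0.01 m/s per bit

# 0x09C4 = 2500, so this frame decodes to 25.0 m/s.
print(decode_speed(0x1A0, bytes([0x09, 0xC4, 0, 0, 0, 0, 0, 0])))
```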
Step 102, extracting a trigger event from the source data.
In the embodiment of the invention, the vehicle's current trigger event can be determined from the source data. For example, vehicle-mounted data can be extracted from the source data, whether the vehicle has a fault can be judged based on that data, and a trigger event matching the fault can then be determined. For another example, the environmental state may be identified according to the environment data in the source data, and a trigger event matching the environmental state determined. As another example, a trigger event matching a voice request may be determined based on the voice request in the source data.
In the embodiment of the invention, a trigger event is an event that triggers the virtual robot to execute certain behaviors. The virtual robot can execute different behaviors according to the trigger event and display the corresponding actions, expressions, or voices to the user.
In specific implementation, the virtual robot may acquire the trigger event in different manners according to different actual situations.
As a first example, the virtual robot determines whether the source data includes a voice interaction request; if such a request exists, it extracts keywords from the request, determines the target interaction scene corresponding to the voice interaction request according to the keywords, and determines the trigger event according to the target interaction scene.
In this example, the voice interaction request may include voice chat, intelligent question answering, and the like, where intelligent question answering refers to questions raised by the user about removing a vehicle fault or controlling a vehicle device. The virtual robot determines the chat topic or question raised by the user according to the extracted keywords, thereby determining the target interaction scene for voice interaction with the user. The chat topic types may include weather, navigation, jokes, stocks, flights, trains, constellations, network search, and voice interaction (general knowledge, greetings, and the like), and the target interaction scene comprises the specific content of the interaction between the virtual robot and the user under that topic type. For example, when the topic type is weather, the target interaction scene includes the virtual robot reporting to the user that the current weather is cloudy or rainy; when the topic type is jokes, the target interaction scene includes the virtual robot telling the user a joke after searching for one; when the topic type is ticket information, the target interaction scene includes the virtual robot showing the user the current remaining tickets and recommending available tickets; and when the user asks the virtual robot how to turn on the vehicle's fog lights, the target interaction scene includes the virtual robot's answer to the user. In this way, a specific target interaction scene is determined according to the voice interaction request sent by the user, and a corresponding trigger event can be determined for each target interaction scene, improving the diversity of trigger events.
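A toy version of this keyword-to-scene step might look like the following. Real deployments would rely on the artificial intelligence platform's language understanding rather than substring matching, and the keyword lists here are placeholders.

```python
# Hypothetical keyword lists per chat-topic type.
TOPIC_KEYWORDS = {
    "weather": ("weather", "rain", "cloudy"),
    "joke": ("joke", "funny"),
    "tickets": ("flight", "train", "ticket"),
    "vehicle_qa": ("fog light", "fog lights"),
}

def target_interaction_scene(request: str) -> str:
    """Map a voice interaction request to a target interaction scene."""
    text = request.lower()
    for topic, keywords in TOPIC_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return topic
    return "chat"  # default: general voice interaction

print(target_interaction_scene("Will it rain today?"))               # -> weather
print(target_interaction_scene("How do I turn on the fog lights?"))  # -> vehicle_qa
```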
As a second example, the virtual robot extracts a voice control request from the source data, extracts a vehicle component and a control instruction to be controlled from the voice control request, and determines a trigger event according to the vehicle component and the control instruction.
In this example, the voice control request is a voice control instruction sent to the virtual robot when it is inconvenient for the user to manually operate the in-vehicle device, and the virtual robot can control the vehicle device according to the voice instruction of the user.
As a third example, the virtual robot extracts vehicle driving data and driver's operation data from the source data, determines a current driving state of the vehicle according to the vehicle driving data and the driver's operation data, and determines a trigger event according to the current driving state.
As a fourth example, the virtual robot extracts the state data of the vehicle from the source data, determines whether the vehicle has a fault according to the state data of the vehicle, identifies the fault type of the vehicle if the vehicle has the fault, and determines the trigger event according to the fault type.
As a fifth example, the virtual robot extracts environment data of the environment where the vehicle is located from the source data, identifies an environment state of the environment where the vehicle is located according to the environment data, and determines a trigger event according to the environment state. The environmental data includes current weather, temperature inside and outside the vehicle, air quality, road traffic conditions, and the like.
In this way, various types of trigger events are extracted according to the source data of the vehicle's current scene, which makes it convenient to subsequently control the virtual robot to execute the behaviors corresponding to those trigger events, giving the user a vivid, concrete display, interacting with the user, and providing help such as entertainment, driving assistance, vehicle diagnosis, and message pushing.
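Taken together, the five examples amount to a dispatch over the different kinds of source data. A compressed sketch of such a dispatcher is shown below; the field names and thresholds are assumptions for illustration only.

```python
def extract_trigger(data: dict) -> str:
    """Route source data to one of the extraction paths described above."""
    if "voice" in data:  # examples 1 and 2: interaction vs. control request
        text = data["voice"].lower()
        if text.startswith(("turn on", "open", "set")):
            return "voice_control"
        return "voice_interaction"
    if data.get("fault_code"):  # example 4: vehicle fault
        return f"fault:{data['fault_code']}"
    if data.get("speed_mps", 0.0) > data.get("speed_limit_mps", float("inf")):
        return "overspeed"  # example 3: driving state
    if data.get("cabin_pm25", 0) > 75:  # example 5: environment (threshold assumed)
        return "poor_air_quality"
    return "none"

print(extract_trigger({"voice": "turn on the air conditioner"}))      # voice_control
print(extract_trigger({"speed_mps": 33.0, "speed_limit_mps": 27.0}))  # overspeed
```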
And 103, controlling the virtual robot to execute a target behavior matched with the trigger event, wherein the target behavior comprises at least one of action, expression and voice.
In a specific implementation, when trigger events exist in the source data, the virtual robot is controlled to execute the behaviors matching the different trigger events. As one possible implementation, a mapping relationship between trigger events and target behaviors may be pre-constructed; after the trigger event is determined, the mapping relationship is queried to obtain the target behavior matched with the trigger event.
The control method of the virtual robot in the embodiment of the invention first obtains source data, then extracts a trigger event from the source data, and finally controls the virtual robot to execute a target behavior matched with the trigger event, where the target behavior comprises at least one of an action, an expression, and a voice. The method determines, from source data collected over the vehicle-wide CAN network, a trigger event for changing the form of the virtual robot, and then controls the virtual robot to execute the matched target behavior comprising preset actions, expressions, and voices. The virtual robot thus performs different actions and expressions according to the specific scene the user is in, which greatly enriches the number of actions and expressions it can display; unconstrained by physical conditions, its actions are more vivid and lifelike. Combined with the virtual robot's expressions and voice interaction, this makes the virtual robot more vivid, three-dimensional, and emotionally expressive, and improves the interest of the interaction between the user and the virtual robot.
Further, in the embodiment of the present invention, a plurality of states are set in advance for the virtual robot; for example, five states may be set: sleep, learning, play, working, and event. The current state of the virtual robot can then be identified. Specifically, the state of the virtual robot may be recognized based on the state data of the virtual robot and the state data of the vehicle. Optionally, the current state of the virtual robot may be identified according to the type of the identified trigger event.
As an example, as shown in fig. 3, when the user starts the virtual robot, the virtual robot performs initialization, and a timer on the vehicle-mounted multimedia system then starts counting to record the virtual robot's running time after initialization. After initialization the virtual robot enters the sleep state while acquiring the time data sent by the timer. After being in the sleep state for more than X hours it enters the play state; after being in the play state for more than X hours it enters the learning state; after being in the learning state for more than X hours it enters the play state again; and after being in the play state for more than Y hours it re-enters the sleep state, cycling in this way, where X and Y are preset positive integers.
Further, as shown in fig. 3, when the virtual robot completes a voice request sent by the user in the working state, the timing device counts the time elapsed since the request was completed, and the virtual robot acquires the vehicle's speed data and driving-state data to determine which state to return to after the working state ends. Specifically, if no new voice command is received more than 10 s after the previous command is finished and the current vehicle speed is less than 10 m/s, the virtual robot enters the sleep state. If no new voice command is received more than 10 s after the previous command is finished, the current speed is more than 10 m/s, and the vehicle drive system is in electric drive (EV) mode, the virtual robot enters the learning state. If no new voice command is received more than 10 s after the previous command is finished, the current vehicle speed is more than 10 m/s, and the vehicle drive system is in hybrid electric drive (HEV) mode, the virtual robot enters the play state. And if the virtual robot is currently in the event state and the vehicle-mounted multimedia system recognizes from the real-time source data that the condition that triggered the event state is no longer met, the virtual robot is controlled to return to the state it was in when it entered the event state. In this way, the virtual robot is controlled to switch among the five states, ensuring that its current state fits the scene the vehicle is in.
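These return rules form a small decision table. The sketch below encodes them directly, with the 10 s and 10 m/s thresholds taken from the text and the drive-mode names as hypothetical labels.

```python
from enum import Enum, auto

class DriveMode(Enum):
    EV = auto()   # electric drive
    HEV = auto()  # hybrid electric drive

def state_after_working(idle_seconds: float, speed_mps: float, mode: DriveMode) -> str:
    """State the robot returns to after finishing a voice request."""
    if idle_seconds <= 10:
        return "working"  # still waiting for a possible follow-up command
    if speed_mps < 10:
        return "sleep"
    return "learning" if mode is DriveMode.EV else "play"

print(state_after_working(12, 5, DriveMode.EV))    # -> sleep
print(state_after_working(12, 15, DriveMode.EV))   # -> learning
print(state_after_working(12, 15, DriveMode.HEV))  # -> play
```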
As shown in fig. 3, the virtual robot may switch between the above states, and the vehicle-mounted multimedia system controls the switching. It can be understood that, because the state of the vehicle changes continuously, the state of the virtual robot changes correspondingly. The vehicle-mounted multimedia system monitors in real time the state data of the virtual robot and/or the state data of the vehicle, together with the trigger conditions extracted from the current source data; it determines the next state of the virtual robot according to those data and controls the virtual robot to switch from the current state to the next state. States can thus be switched according to the vehicle's state data and the trigger conditions of the scene the vehicle is in, so that the virtual robot executes target behaviors that fit the current scene, improving the timeliness and accuracy of the virtual robot's behavior.
Furthermore, in the embodiment of the invention, a plurality of different state behaviors can be set for the same state so as to avoid visual fatigue of the user and provide rich display contents for the user.
As an example, when the virtual robot is in the sleep, learning, or play state, it may perform different behaviors: after being controlled to execute a target behavior, the virtual robot may be controlled to randomly execute the other behaviors included in the current state. For instance, in the play state, the virtual robot may show an interested expression and perform six actions: dancing in a grass skirt, listening to music with earphones (as shown in fig. 4), hiding behind a wall, walking (as shown in fig. 5), punching, and rolling, while a child-like voice is output through the loudspeaker. The six play-state actions may be switched freely according to preset times and an arrangement order: once an action has been performed for its preset time, the virtual robot automatically performs the next one. For example, after the grass-skirt dance, the virtual robot can perform the action of putting on earphones to listen to music.
Table 1
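A sketch of this timed, randomized cycling of the play-state behaviors is given below; the six behavior names come from the example above, while the per-behavior durations are assumptions.

```python
import itertools
import random

# Play-state behaviors from the example; durations in seconds are assumed.
PLAY_BEHAVIORS = [
    ("grass-skirt dance", 8),
    ("listen to music with earphones", 10),
    ("hide behind a wall", 5),
    ("walk", 6),
    ("punch", 4),
    ("roll", 4),
]

def play_state_loop(steps: int = 6) -> None:
    """Cycle through the behaviors in a random order, one after another."""
    order = random.sample(PLAY_BEHAVIORS, k=len(PLAY_BEHAVIORS))
    for name, duration in itertools.islice(itertools.cycle(order), steps):
        print(f"performing {name!r} for {duration}s")  # stand-in for the animation

play_state_loop()
```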
Further, in the embodiment of the present invention, different trigger events may be assigned to different states. After a trigger event is acquired, it is necessary to judge whether the current state of the virtual robot is consistent with the state to which the trigger event belongs.
The assignment of trigger events to states is explained below, taking as an example the trigger events set for the working state and the event state.
Specifically, the trigger events set for the working state are voice-interaction-type events, covering topics such as weather, navigation, jokes, stocks, flights, trains, constellations, network search, and voice interaction. When the trigger event currently extracted by the vehicle-mounted multimedia system is a voice interaction event, it can be judged whether the virtual robot is currently in the working state; if it is, the trigger event matches the current state, and a target behavior can be determined from the behaviors included in that state. Specifically, the target behavior may be determined according to the actual trigger event, as shown in fig. 2.
In the working state, the virtual robot executes the behavior matched with the particular trigger event. As a first example, when the trigger event is voice vehicle control, the virtual robot shows a confident expression, performs the action of clicking a control button with a remote control, and after completing the control command sends the user a voice message that the command has been completed.
As a second example, when the trigger event is a voice question and answer, the virtual robot performs different actions according to the voice message sent by the user. If the user's question is an operational one, such as how to turn on the fog lights, the virtual robot shows a smiling expression, puts on glasses and a doctoral cap, and holds a piece of chalk while performing the action of explaining to the user; it acquires data from the artificial intelligence platform through the 4G/5G network and then plays a voice explanation of how to turn on the fog lights. If the user's question concerns a fault, such as how to handle an engine fault light, the virtual robot shows a tense expression, puts on a worker's clothes, and performs the action of repairing a device with a wrench; it acquires data from the artificial intelligence platform through the 4G/5G network and then plays a voice explanation of how to deal with the engine fault. In this way, the target behavior of the virtual robot assists the user in solving problems that occur while driving, providing help with troubleshooting and driving assistance.
As a third example, when the trigger event is voice interaction, the virtual robot performs a preset action matching the target event according to the data type in the user's voice request; a part of these actions is shown in Table 2. It should be noted that when the user interacts with the virtual robot by voice, the user may send simple interactive sentences and control instructions, which may include a name set for the virtual robot (for example, "juidi"). The virtual robot is controlled to execute different actions, expressions, and voice responses according to the user's voice interaction information, giving the virtual robot emotion, letting it share the user's joys and sorrows, and making it the user's emotional companion.
Table 2
Further, in the event state, the virtual robot performs a preset behavior matching the trigger event according to the vehicle driving data, the driver's operation data, and the type and specific content of the environment data; a part of these behaviors is shown in Table 3. In this way, during the running of the vehicle, the target behavior of the virtual robot is determined according to the trigger event extracted from the source data, so the behavior executed by the virtual robot is related to the vehicle's state in real time.
Table 3
In this way, the virtual robot executes different actions and expressions according to the specific scene the user is in, matched with voice output, so that its behavior is more vivid and three-dimensional.
After the virtual robot has executed the target behavior, it may be determined whether the virtual robot switched states before executing the target behavior; if so, the virtual robot may be controlled to return to the state before the switch. For example, if the user drives over the speed limit while the virtual robot is in the sleep state, the virtual robot enters the event state, shows a panicked expression, and broadcasts a voice warning to the effect of "You are speeding, please pay attention to driving safety"; when the user reduces the vehicle speed to within the specified limit, the virtual robot ends the event state and returns to the previous sleep state.
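A minimal sketch of this save-and-restore behavior, with hypothetical state names, might be:

```python
from typing import Optional

class StateKeeper:
    """Remember the pre-event state so the robot can fall back to it."""

    def __init__(self, initial: str = "sleep") -> None:
        self.current = initial
        self._before_event: Optional[str] = None

    def enter_event_state(self) -> None:
        self._before_event = self.current
        self.current = "event"

    def event_cleared(self) -> None:
        if self._before_event is not None:
            self.current = self._before_event  # return to the pre-switch state
            self._before_event = None

keeper = StateKeeper("sleep")
keeper.enter_event_state()  # overspeed detected: sleep -> event
keeper.event_cleared()      # speed back within the limit
print(keeper.current)       # -> sleep
```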
It should be noted that when the virtual robot is in the sleep, play, or learning state, it is waiting for a voice instruction from the user or for an event to present, so its target behavior in those states may simply be to show the user a preset standby picture. When the virtual robot is in the working or event state, it executes the actions and expressions matched with the target event and sends the corresponding voice information through audio equipment such as a loudspeaker. The virtual robot thus executes different actions and expressions in different states according to the specific scene the user is in.
In summary, the control method of the virtual robot according to the embodiment of the present invention first obtains source data, then extracts a trigger event from the source data, and finally controls the virtual robot to execute a target behavior matched with the trigger event, where the target behavior comprises at least one of an action, an expression, and a voice. The method determines, from source data collected over the vehicle-wide CAN network, a trigger event for changing the form of the virtual robot, and then controls the virtual robot to execute the matched target behavior comprising preset actions, expressions, and voices. The virtual robot thus performs different actions and expressions according to the specific scene the user is in, which greatly enriches the number of actions and expressions it can display; unconstrained by physical conditions, its actions are more vivid and lifelike. Combined with the virtual robot's expressions and voice interaction, this makes the virtual robot more vivid, three-dimensional, and emotionally expressive, and improves the interest of the interaction between the user and the virtual robot.
In order to implement the above embodiments, an embodiment of the present invention further provides a control device for a virtual robot. Fig. 8 is a schematic structural diagram of a control device of a virtual robot according to an embodiment of the present invention.
As shown in fig. 8, the control device for a virtual robot includes: a data acquisition module 100, an extraction module 200 and a control module 300.
The data obtaining module 100 is configured to obtain source data.
An extracting module 200, configured to extract the trigger event from the source data.
And a control module 300, configured to control the virtual robot to execute a target behavior matched with the trigger event, where the target behavior includes at least one of an action, an expression, and a voice.
In a possible implementation manner of the embodiment of the present invention, the extraction module 200 is specifically configured to determine whether the source data includes a voice interaction request, extract keywords from the voice interaction request if such a request exists, determine the target interaction scene corresponding to the voice interaction request according to the keywords, and then determine the trigger event according to the target interaction scene.
In a possible implementation manner of the embodiment of the present invention, the extracting module 200 is further configured to extract a voice control request from the source data, extract a vehicle component and a control instruction to be controlled from the voice control request, and further determine a trigger event according to the vehicle component and the control instruction.
In a possible implementation manner of the embodiment of the present invention, the extraction module 200 is further configured to extract vehicle driving data and driver operation data from the source data, determine a current driving state of the vehicle according to the vehicle driving data and the driver operation data, and further determine the trigger event according to the current driving state.
In a possible implementation manner of the embodiment of the present invention, the extraction module 200 is further configured to extract status data of the vehicle from the source data, determine whether the vehicle has a fault according to the status data of the vehicle, identify a fault type of the vehicle if the vehicle has the fault, and determine the trigger event according to the fault type.
In a possible implementation manner of the embodiment of the present invention, the extraction module 200 is further configured to extract environmental data of an environment where the vehicle is located from the source data, identify an environmental state of the environment where the vehicle is located according to the environmental data, and further determine the trigger event according to the environmental state.
Further, the control module 300 is further configured to: identify the current state of the virtual robot, where the current state is one of the sleep, play, learning, working, and event states; judge, according to the trigger event, whether the state of the virtual robot needs to be switched; if not, match the target behavior from the behaviors included in the current state according to the trigger event; if so, switch from the current state to the target state and match the target behavior from the behaviors included in the target state according to the trigger event; and finally control the virtual robot to execute the target behavior.
In a possible implementation manner of the embodiment of the present invention, the control module 300 is further configured to monitor the state data of the virtual robot and/or the state data of the vehicle in real time, determine a next state of the virtual robot according to the state data of the virtual robot and/or the vehicle, and control the virtual robot to switch from the current state to the next state.
Further, the control module 300 is further configured to determine whether the virtual robot performs state switching before executing the target behavior after controlling the virtual robot to perform the target behavior, and if the virtual robot performs the state switching, control the virtual robot to return to the state before switching.
It should be noted that the foregoing explanation of the embodiment of the control method for the virtual robot is also applicable to the control device for the virtual robot in this embodiment, and details are not repeated here.
In summary, the control apparatus for a virtual robot according to the embodiment of the present invention first obtains source data, then extracts a trigger event from the source data, and finally controls the virtual robot to execute a target behavior matched with the trigger event, where the target behavior comprises at least one of an action, an expression, and a voice. The device determines, from source data collected over the vehicle-wide CAN network, a trigger event for changing the form of the virtual robot, and then controls the virtual robot to execute the matched target behavior comprising preset actions, expressions, and voices. The virtual robot thus performs different actions and expressions according to the specific scene the user is in, which greatly enriches the number of actions and expressions it can display; unconstrained by physical conditions, its actions are more vivid and lifelike. Combined with the virtual robot's expressions and voice interaction, this makes the virtual robot more vivid, three-dimensional, and emotionally expressive, and improves the interest of the interaction between the user and the virtual robot.
In order to implement the above embodiments, the present invention also proposes a vehicle including the control device of the virtual robot as described in the above embodiments.
In order to implement the above embodiments, the present invention further provides an electronic device.
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 9, the electronic device 120 includes: a processor 121 and a memory 122; the memory 122 is used for storing executable program code; the processor 121 executes a program corresponding to the executable program code by reading the executable program code stored in the memory 122, for implementing the control method of the virtual robot as described in the above embodiments.
In order to implement the above embodiments, the present invention also proposes a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the control method of the virtual robot as described in the above embodiments.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions, and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.
Claims (14)
1. A control method of a virtual robot is characterized by comprising the following steps:
acquiring source data;
extracting a trigger event from the source data;
controlling the virtual robot to execute a target behavior matched with the trigger event; wherein the target behavior comprises at least one of an action, an expression, and a voice.
2. The method of claim 1, further comprising:
identifying a current state of the virtual robot, wherein the current state is one of a sleep state, a play state, a learning state, a working state and an event state;
the controlling the virtual robot to execute the target behavior matched with the trigger event comprises the following steps:
judging whether the state of the virtual robot needs to be switched or not according to the trigger event;
if the state of the virtual robot does not need to be switched, matching the target behavior from the behaviors included in the current state according to the trigger event;
if the state of the virtual robot needs to be switched, switching from the current state to a target state, and matching the target behavior from the behaviors included in the target state according to the trigger event;
and controlling the virtual robot to execute the target behavior.
3. The method of claim 2, wherein after identifying the current state of the virtual robot, further comprising:
and monitoring the state data of the virtual robot and/or the state data of the vehicle in real time, determining the next state of the virtual robot according to the state data of the virtual robot and/or the vehicle, and controlling the virtual robot to switch from the current state to the next state.
4. The method of claim 2, wherein each state of the virtual robot includes at least one state behavior matching a state;
after the controlling the virtual robot to execute the target behavior, the method further includes:
and controlling the virtual robot to randomly execute at least one state behavior in the current state.
5. The method according to any one of claims 1-4, wherein said extracting trigger events from said source data comprises:
judging whether the source data comprises a voice interaction request, if so, extracting keywords from the voice interaction request, and determining a target interaction scene corresponding to the voice interaction request according to the keywords;
and determining the trigger event according to the target interaction scene.
6. The method according to any one of claims 1-4, wherein the extracting a trigger event from the source data comprises:
extracting a voice control request from the source data, and extracting, from the voice control request, the vehicle component to be controlled and a control instruction;
and determining the trigger event according to the vehicle component and the control instruction.
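Claims 5 and 6 describe two speech-driven extraction paths. A toy sketch, assuming a keyword table and a simple command grammar that the claims themselves do not specify, could be:

```python
import re
from typing import Optional

# Hypothetical keyword -> interaction-scene table (claim 5).
SCENE_KEYWORDS = {"story": "storytelling", "song": "music", "joke": "chitchat"}

# Hypothetical command grammar for control requests (claim 6).
COMMAND = re.compile(r"(open|close)\s+the\s+(window|sunroof|trunk)")

def trigger_from_interaction(utterance: str) -> Optional[str]:
    """Keyword spotting selects the target interaction scene, which in
    turn determines the trigger event."""
    for keyword, scene in SCENE_KEYWORDS.items():
        if keyword in utterance:
            return f"interact:{scene}"
    return None

def trigger_from_control(utterance: str) -> Optional[str]:
    """Extract the vehicle component and control instruction, then
    derive the trigger event from the pair."""
    match = COMMAND.search(utterance)
    if match:
        instruction, component = match.groups()
        return f"control:{component}:{instruction}"
    return None

print(trigger_from_interaction("tell me a story"))      # interact:storytelling
print(trigger_from_control("please open the sunroof"))  # control:sunroof:open
```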
7. The method according to any one of claims 1-4, wherein the extracting a trigger event from the source data comprises:
extracting vehicle driving data and driver's operation data from the source data;
determining the current driving state of the vehicle according to the vehicle driving data and the operation data of the driver;
and determining the trigger event according to the current driving state.
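A hypothetical illustration of claim 7, with invented thresholds standing in for whatever signal fusion a real system would use:

```python
# Hypothetical thresholds; a production system would fuse many more
# CAN-bus signals than the three used here.
def classify_driving_state(speed_kmh: float, brake: bool, throttle: float) -> str:
    if speed_kmh < 1.0:
        return "launching" if throttle > 0.0 else "parked"
    if brake and speed_kmh > 80.0:
        return "hard_braking"
    if speed_kmh > 120.0:
        return "high_speed"
    return "cruising"

def trigger_from_driving(speed_kmh: float, brake: bool, throttle: float) -> str:
    # A "hard_braking" trigger might map to a startled expression,
    # "high_speed" to a gentle spoken reminder, and so on.
    return "driving:" + classify_driving_state(speed_kmh, brake, throttle)

print(trigger_from_driving(130.0, False, 0.4))  # driving:high_speed
```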
8. The method according to any one of claims 1-4, wherein the extracting a trigger event from the source data comprises:
extracting state data of the vehicle from the source data;
judging whether the vehicle has a fault according to the state data of the vehicle;
and if a fault exists, identifying the fault type of the vehicle, and determining the trigger event according to the fault type.
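Claim 8 could be illustrated as follows; the status codes and fault table are made up for the example:

```python
from typing import Optional

# Hypothetical fault table keyed by made-up status codes.
FAULT_TYPES = {0x01: "engine_overheat", 0x02: "low_tire_pressure"}

def trigger_from_fault(vehicle_status: dict) -> Optional[str]:
    """Check the vehicle state data for a fault; if one exists, identify
    its type and derive the trigger event from it."""
    code = vehicle_status.get("fault_code", 0)
    if code == 0:
        return None  # no fault, hence no trigger event
    return "fault:" + FAULT_TYPES.get(code, "unknown")

print(trigger_from_fault({"fault_code": 0x02}))  # fault:low_tire_pressure
```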
9. The method according to any one of claims 1-4, wherein the extracting a trigger event from the source data comprises:
extracting environment data of the environment where the vehicle is located from the source data;
identifying the environmental state of the environment according to the environmental data;
and determining the trigger event according to the environment state.
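And a matching hypothetical sketch for claim 9, with illustrative sensor thresholds:

```python
from typing import Optional

# Illustrative thresholds only; the claim does not fix which sensors
# or limits define an environment state.
def trigger_from_environment(lux: float, rain_mm_h: float, pm25: float) -> Optional[str]:
    if rain_mm_h > 0.5:
        return "env:raining"   # e.g. suggest closing the windows
    if lux < 50.0:
        return "env:dark"      # e.g. remind the driver about headlights
    if pm25 > 150.0:
        return "env:poor_air"  # e.g. propose recirculation mode
    return None

print(trigger_from_environment(lux=800.0, rain_mm_h=2.0, pm25=30.0))  # env:raining
```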
10. The method of claim 2, further comprising, after the virtual robot has performed the target behavior:
judging whether the virtual robot performed state switching before executing the target behavior, and if so, controlling the virtual robot to return to the state it was in before the switching.
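A minimal sketch of the save-and-restore logic of claim 10, again with hypothetical names:

```python
from typing import Optional

class StateKeeper:
    """Remember the pre-switch state and restore it once the target
    behavior has finished."""

    def __init__(self, initial: str):
        self.current = initial
        self._previous: Optional[str] = None

    def switch_for_event(self, target: str) -> None:
        self._previous = self.current  # record the state before switching
        self.current = target

    def restore_after_behavior(self) -> None:
        if self._previous is not None:  # only if a switch actually happened
            self.current, self._previous = self._previous, None

keeper = StateKeeper("play")
keeper.switch_for_event("event")
# ... the target behavior executes in the "event" state ...
keeper.restore_after_behavior()
print(keeper.current)  # play
```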
11. A control device for a virtual robot, characterized by comprising:
a data acquisition module configured to acquire source data;
an extraction module configured to extract a trigger event from the source data; and
a control module configured to control the virtual robot to execute a target behavior matched with the trigger event, wherein the target behavior comprises at least one of an action, an expression, and a voice.
12. A vehicle, characterized by comprising the control device of a virtual robot according to claim 11.
13. An electronic device, characterized by comprising a memory and a processor;
wherein the processor reads executable program code stored in the memory and runs a program corresponding to the executable program code, so as to implement the control method of a virtual robot according to any one of claims 1-10.
14. A non-transitory computer-readable storage medium having a computer program stored thereon, characterized in that the program, when executed by a processor, implements the control method of a virtual robot according to any one of claims 1-10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811015804.3A CN110871813A (en) | 2018-08-31 | 2018-08-31 | Control method and device of virtual robot, vehicle, equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110871813A true CN110871813A (en) | 2020-03-10 |
Family
ID=69716493
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811015804.3A Pending CN110871813A (en) | 2018-08-31 | 2018-08-31 | Control method and device of virtual robot, vehicle, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110871813A (en) |
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102103707A (en) * | 2009-12-16 | 2011-06-22 | 群联电子股份有限公司 | Emotion engine, emotion engine system and control method of electronic device |
CN103324100A (en) * | 2013-05-02 | 2013-09-25 | 郭海锋 | Emotion vehicle-mounted robot driven by information |
CN103697900A (en) * | 2013-12-10 | 2014-04-02 | 郭海锋 | Method for early warning on danger through augmented reality by vehicle-mounted emotional robot |
CN104002818A (en) * | 2014-05-27 | 2014-08-27 | 昆山市智汽电子科技有限公司 | Interaction and automation integrated vehicle-mounted system |
CN107614308A (en) * | 2015-05-05 | 2018-01-19 | B.G.内盖夫技术与应用有限公司 | Generalized Autonomic robot driver system |
CN106325228A (en) * | 2015-06-26 | 2017-01-11 | 北京贝虎机器人技术有限公司 | Method and device for generating control data of robot |
CN106325065A (en) * | 2015-06-26 | 2017-01-11 | 北京贝虎机器人技术有限公司 | Robot interactive behavior control method, device and robot |
CN107197384A (en) * | 2017-05-27 | 2017-09-22 | 北京光年无限科技有限公司 | The multi-modal exchange method of virtual robot and system applied to net cast platform |
CN107423809A (en) * | 2017-07-07 | 2017-12-01 | 北京光年无限科技有限公司 | The multi-modal exchange method of virtual robot and system applied to net cast platform |
CN107656519A (en) * | 2017-09-30 | 2018-02-02 | 北京新能源汽车股份有限公司 | Driving control method and device for electric vehicle |
CN108198554A (en) * | 2018-01-29 | 2018-06-22 | 深圳市共进电子股份有限公司 | The control method of domestic robot work system based on interactive voice |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112528000A (en) * | 2020-12-22 | 2021-03-19 | 北京百度网讯科技有限公司 | Virtual robot generation method and device and electronic equipment |
CN112528000B (en) * | 2020-12-22 | 2024-07-02 | 北京百度网讯科技有限公司 | Virtual robot generation method and device and electronic equipment |
CN113212448A (en) * | 2021-04-30 | 2021-08-06 | 恒大新能源汽车投资控股集团有限公司 | Intelligent interaction method and device |
CN113859255A (en) * | 2021-10-12 | 2021-12-31 | 国汽智控(北京)科技有限公司 | Control method, device and equipment of vehicle actuator and storage medium |
CN114356083A (en) * | 2021-12-22 | 2022-04-15 | 阿波罗智联(北京)科技有限公司 | Virtual personal assistant control method and device, electronic equipment and readable storage medium |
CN114979029A (en) * | 2022-05-16 | 2022-08-30 | 百果园技术(新加坡)有限公司 | Control method, device, equipment and storage medium of virtual robot |
WO2023221979A1 (en) * | 2022-05-16 | 2023-11-23 | 广州市百果园信息技术有限公司 | Control method and apparatus for virtual robot, and device, storage medium and program product |
CN114979029B (en) * | 2022-05-16 | 2023-11-24 | 百果园技术(新加坡)有限公司 | Control method, device, equipment and storage medium of virtual robot |
CN115092072A (en) * | 2022-06-08 | 2022-09-23 | 中国第一汽车股份有限公司 | Vehicle state display method, device, equipment and storage medium |
Similar Documents
Publication | Title |
---|---|
CN110871813A (en) | Control method and device of virtual robot, vehicle, equipment and storage medium |
CN110875940B (en) | Application program calling method, device and equipment based on virtual robot |
CN107367841B (en) | Moving object, system, and storage medium |
US20190355019A1 (en) | Information processing apparatus and information processing system |
CN109878441B (en) | Vehicle control method and device |
WO2022204925A1 (en) | Image obtaining method and related equipment |
CN110876047A (en) | Vehicle exterior projection method, device, equipment and storage medium |
CN104581355A (en) | Autonomous vehicle media control |
CN110008879A (en) | Vehicle-mounted personalization audio-video frequency content method for pushing and device |
CN110202587B (en) | Information interaction method and device, electronic equipment and storage medium |
CN108922307A (en) | Drive simulating training method, device and driving simulation system |
WO2024104045A1 (en) | Method for acquiring operation instruction on basis of compartment area, and display method and related device |
CN115195637A (en) | Intelligent cabin system based on multimode interaction and virtual reality technology |
CN107878465A (en) | Mobile member control apparatus and moving body |
CN112455365A (en) | Monitoring method, system, equipment and storage medium for rear row seats in vehicle |
CN111625094A (en) | Interaction method and device for intelligent rearview mirror, electronic equipment and storage medium |
CN110929078A (en) | Automobile voice image reloading method, device, equipment and storage medium |
CN207157062U (en) | A kind of vehicle-mounted display device and vehicle |
CN110871810A (en) | Vehicle, vehicle equipment and driving information prompting method based on driving mode |
CN115534850B (en) | Interface display method, electronic device, vehicle and computer program product |
CN111045636A (en) | Vehicle function display method and system |
WO2019114019A1 (en) | Scene generation method for self-driving vehicle and intelligent glasses |
CN112052325A (en) | Voice interaction method and device based on dynamic perception |
WO2023241185A1 (en) | Human-computer interaction method and apparatus |
CN115219151B (en) | Vehicle testing method, system, electronic equipment and medium |
Legal Events
Code | Title | Description |
---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200310 |