
CN110654389B - Vehicle control method and device and vehicle - Google Patents

Vehicle control method and device and vehicle

Info

Publication number
CN110654389B
Authority
CN
China
Prior art keywords
vehicle
rule
interactive instruction
driving
driving scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910849832.3A
Other languages
Chinese (zh)
Other versions
CN110654389A (en)
Inventor
王创杰
张翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Xiaopeng Motors Technology Co Ltd
Original Assignee
Guangzhou Xiaopeng Motors Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Xiaopeng Motors Technology Co Ltd filed Critical Guangzhou Xiaopeng Motors Technology Co Ltd
Priority to CN201910849832.3A priority Critical patent/CN110654389B/en
Publication of CN110654389A publication Critical patent/CN110654389A/en
Application granted granted Critical
Publication of CN110654389B publication Critical patent/CN110654389B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00 Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/18 Propelling the vehicle
    • B60W40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08 Interaction between the driver and the control system

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Human Computer Interaction (AREA)
  • Control Of Driving Devices And Active Controlling Of Vehicle (AREA)

Abstract

The application discloses a control method for a vehicle. The control method comprises the following steps: detecting the current state of the vehicle, determining the driving scene of the vehicle according to the current state, generating an interactive instruction according to the driving scene, and controlling the vehicle according to the interactive instruction. According to the vehicle control method, the driving scene the vehicle is currently in is determined from the detected current state, so that an appropriate interactive instruction is generated for that scene in a targeted manner and the vehicle is controlled according to the instruction. This makes interaction between the vehicle and the user more convenient, avoids the potential safety hazards caused when the user initiates interaction while the vehicle is being driven, and improves the user experience. The application also discloses a control device and a vehicle.

Description

Vehicle control method and device and vehicle
Technical Field
The present disclosure relates to the field of automotive technologies, and in particular, to a control method and a control device for a vehicle, and a vehicle.
Background
In the related art, interaction between a user and an on-board system must be initiated by the user. With the development of intelligent automobile technology, vehicles offer more and more functions, and interaction between the user and the vehicle becomes increasingly complex; when the user initiates interaction while the vehicle is being driven, potential safety hazards can arise. As a result, the vehicle is insufficiently intelligent and the user experience is poor.
Disclosure of Invention
In view of this, embodiments of the present application provide a control method for a vehicle, a control device, and a vehicle.
The application provides a control method of a vehicle, comprising the following steps:
detecting a current state of the vehicle;
determining the driving scene of the vehicle according to the current state;
generating an interactive instruction according to the driving scene; and
controlling the vehicle according to the interactive instruction.
In certain embodiments, detecting the current state of the vehicle comprises:
determining the current state based on a change in sensor data of the vehicle.
In certain embodiments, determining the driving scenario of the vehicle based on the current state comprises:
determining the driving scene according to the current state and cloud server data of the vehicle.
In some embodiments, generating the interactive instruction according to the driving scene includes:
generating an interactive instruction corresponding to the driving scene according to the trigger rule of the interactive instruction and the driving scene.
In one embodiment, the trigger rule includes a main rule and an auxiliary rule, and generating the interactive instruction corresponding to the driving scene according to the trigger rule of the interactive instruction and the driving scene includes:
matching a main rule corresponding to the driving scene;
matching an auxiliary rule corresponding to the driving scene; and
generating an interactive instruction corresponding to the driving scene according to the main rule and the auxiliary rule.
In one embodiment, generating the interactive instruction corresponding to the driving scene according to the main rule and the auxiliary rule comprises:
ranking the interactive instructions according to the auxiliary rules; and
generating a final interactive instruction according to the ranking result.
Controlling the vehicle according to the interactive instruction then comprises:
controlling the vehicle according to the final interactive instruction.
In one embodiment, controlling the vehicle according to the interactive instructions comprises:
determining the execution type of the interactive instruction; and
controlling the vehicle to interact according to the execution type.
The application provides a control device of a vehicle, including:
the detection module is used for detecting the current state of the vehicle;
the determination module is used for determining the driving scene of the vehicle according to the current state;
the information generation module is used for generating an interactive instruction according to the driving scene; and
the control module is used for controlling the vehicle according to the interactive instruction.
The present application provides a vehicle comprising a processor configured to:
detecting a current state of the vehicle;
determining the driving scene of the vehicle according to the current state;
generating an interactive instruction according to the driving scene; and
controlling the vehicle according to the interactive instruction.
In one embodiment, a vehicle includes a sensor coupled to a processor, the processor configured to:
determine the current state based on a change in sensor data of the vehicle.
In one embodiment, the processor is configured to:
determine the driving scene according to the current state and cloud server data of the vehicle.
In one embodiment, the processor is configured to:
match a main rule corresponding to the driving scene;
match an auxiliary rule corresponding to the driving scene; and
generate an interactive instruction corresponding to the driving scene according to the main rule and the auxiliary rule.
In one embodiment, the processor is configured to:
rank the interactive instructions according to the auxiliary rules;
generate a final interactive instruction according to the ranking result; and
control the vehicle according to the final interactive instruction.
In one embodiment, the processor is configured to:
determine the execution type of the interactive instruction; and
control the vehicle to interact according to the execution type.
A vehicle is provided that includes one or more processors, a memory, and one or more programs, wherein the one or more programs are stored in the memory and executed by the one or more processors, the programs comprising instructions for performing the control method described above.
A computer-readable storage medium is provided, on which a computer program is stored which, when executed by one or more processors, causes the processors to carry out the above-mentioned control method of a vehicle.
According to the control method, the control device, the vehicle and the computer-readable storage medium described above, the driving scene the vehicle is currently in is determined from the detected current state of the vehicle, an appropriate interactive instruction is generated for that scene in a targeted manner, and the vehicle is controlled according to the interactive instruction, so that interaction between the vehicle and the user is more convenient. Moreover, because the vehicle actively issues the response instruction based on the current conditions rather than waiting for the user to initiate the interaction, potential safety hazards caused by user-initiated interaction in certain driving scenes can be avoided, and the user experience is improved.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow chart of a method for controlling a vehicle according to certain embodiments of the present application;
FIG. 2 is a schematic illustration of the structure of a vehicle according to certain embodiments of the present application;
FIG. 3 is a block schematic diagram of a control device according to certain embodiments of the present application;
FIGS. 4-9 are schematic flow charts of vehicle control methods according to certain embodiments of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
With the development of automobile intelligence, interaction between vehicles and users is increasingly widely used. Traditionally, interaction between a vehicle and a user must be initiated by the user, who interacts with the vehicle through an on-board screen or buttons. This interaction mode is limited and inflexible, and in certain scenes interacting with the vehicle can even create safety hazards.
Referring to fig. 1, the present application provides a control method for a vehicle, including:
S10: detecting a current state of the vehicle;
S20: determining the driving scene of the vehicle according to the current state;
S30: generating an interactive instruction according to the driving scene; and
S40: controlling the vehicle according to the interactive instruction.
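As a rough, illustrative sketch only, the overall flow of S10 to S40 might look as follows in Python (all function names, state fields and scene labels here are hypothetical assumptions, not taken from this application):

```python
# Hypothetical sketch of steps S10-S40; names and thresholds are illustrative.

def detect_current_state(sensors: dict) -> dict:
    """S10: the current state is taken from the latest sensor readings."""
    return dict(sensors)

def determine_driving_scene(state: dict) -> str:
    """S20: classify the driving scene from the current state."""
    if state.get("speed", 0) > 0 and state.get("driver_seat_occupied"):
        return "driving"
    if state.get("driver_seat_occupied"):
        return "getting_on"
    return "about_to_board"

def generate_instructions(scene: str) -> list:
    """S30: generate the interactive instructions that belong to the scene."""
    candidates = {
        "about_to_board": ["turn_on_air_conditioner"],
        "getting_on": ["adjust_seat", "start_navigation"],
        "driving": ["play_music"],
    }
    return candidates.get(scene, [])

def control(sensors: dict) -> list:
    """S40: run the pipeline and return the instructions to execute."""
    return generate_instructions(determine_driving_scene(detect_current_state(sensors)))

print(control({"speed": 30.0, "driver_seat_occupied": True}))  # ['play_music']
```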
Referring to fig. 2 and 3, the present embodiment provides a vehicle 100. The vehicle 100 includes a detector 12 and a processor 14. The detector 12 is used to acquire the current state of the vehicle 100. The processor 14 is configured to detect the current state of the vehicle 100, determine the driving scene of the vehicle 100 according to the current state, generate an interactive instruction according to the driving scene, and control the vehicle 100 according to the interactive instruction.
The embodiment of the present application also provides a control device 110 of the vehicle 100, and the control method of the vehicle 100 of the embodiment of the present application may be implemented by the control device 110.
Specifically, the control device 110 includes a detection module 112, a determination module 114, an information generation module 116, and a control module 118. S10 may be implemented by the detection module 112, S20 may be implemented by the determination module 114, S30 may be implemented by the information generation module 116, and S40 may be implemented by the control module 118. In other words, the detection module 112 is used to detect the current state of the vehicle 100, the determination module 114 is used to determine the driving scene of the vehicle 100 according to the current state, the information generation module 116 is configured to generate an interactive instruction according to the driving scene, and the control module 118 is configured to control the vehicle 100 based on the interactive instruction.
In the control method, the control device 110 and the vehicle 100 of the embodiments of the application, the processor 14 determines which driving scene the vehicle is currently in according to the detected current state of the vehicle, generates an appropriate interactive instruction for that scene in a targeted manner, and controls the vehicle according to the interactive instruction, so that interaction between the vehicle and the user is more convenient. Moreover, because the vehicle actively issues the response instruction according to the current conditions rather than requiring the user to initiate the interaction, potential safety hazards caused by interaction between the user and the vehicle in certain driving scenes can be avoided, and the user experience is improved.
Specifically, the state of the vehicle 100 may be derived from a combination of one or more items of vehicle information, which may include geographic location information, vehicle body information, surrounding environment information, network information, in-vehicle information, detected user information, and the like. For example, the current state of the vehicle 100 may be derived from the battery information, geographic location information and surrounding environment information of the vehicle.
Further, the geographic location information may include the destination location, the current location of the vehicle 100, the home location, the workplace location, and the like.
The vehicle body information may include the battery charge level, the charging state, the gear, the mileage, the vehicle speed, door opening and closing, engine state, seat information, and the like; for example, the vehicle speed is zero, the doors are closed, a seat is occupied, and so on.
The surrounding environment information may include weather information, ambient light, information about surrounding vehicles, and the like.
The network information may include whether a network is available, the network speed, the network type, and the like. The in-vehicle information may include the in-vehicle temperature, humidity, the number of occupants, driver information, and the like.
The detector 12 may be any of various sensors disposed on the vehicle 100, such as a speed sensor, an acceleration sensor, a temperature sensor, and the like; the specific kinds and numbers are not limited. The detectors 12 may be distributed throughout the vehicle to obtain vehicle information, for example obtaining the in-vehicle temperature from a temperature sensor. In this way, when the vehicle information changes, the detector 12 may transmit the detected information to the processor 14 through the on-board bus, and the processor 14 may then determine which driving scene the vehicle is in.
In operation, the processor 14 detects the information of the vehicle 100 in real time to determine the current state, and can determine which driving scene the vehicle 100 is in according to the different states of the vehicle 100.
The driving scenes of the vehicle 100 may include an about-to-board scene, a getting-on scene, a driving scene, an abnormal driving scene, an about-to-get-off scene, a getting-off scene, and the like.
In the about-to-board scene, the vehicle state may be: the doors are closed and locked, no one is in the vehicle, the current time falls within the period when the user usually boards, a vehicle sensor recognizes the user in front of the vehicle, and the like.
In the getting-on scene, the vehicle state may be: the vehicle speed is zero, the driver's seat is detected to be occupied, the engine is started, the air conditioner is turned on, and the like.
In the driving scene, the vehicle state may be: the vehicle speed is greater than zero, a person is detected in the driver's seat, the gear changes, the surrounding environment changes, the current position changes, and the like.
In the abnormal driving scene, the vehicle state may be: a vehicle body component fails, traffic congestion or a traffic accident is detected on the road ahead, the battery power is insufficient, fatigued driving is detected, and the like.
In the about-to-get-off scene, the vehicle state may be: the destination is about to be reached, the vehicle body is abnormal, the vehicle speed is zero, a parking space is detected, and the like.
In the getting-off scene, the vehicle state may be: the number of people in the vehicle changes, the doors are closed and locked, the engine is off, the gear is in park, and the like.
In this way, the driving scene can be determined from the current state of the vehicle. For example, when a seat changes from unoccupied to occupied, the user's boarding habits can be analyzed from the door state, the seat belt state, the speed information and historical data, so as to judge whether the scene is the getting-on scene or the driving scene. If the seat changes from unoccupied to occupied, the doors are closed, the seat belt is fastened, the speed is non-zero, and the pattern matches the user's boarding habits, the scene can be judged to be the driving scene; otherwise, it is the getting-on scene. If the vehicle 100 is determined to be in the getting-on scene, it transitions to the next driving scene once the vehicle states all satisfy that scene, as sketched below.
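Purely as an illustration of this decision, the getting-on-versus-driving judgment described above might be sketched like this (the predicate names are hypothetical):

```python
# Hypothetical sketch of the getting-on vs. driving decision described above.

def classify_scene(seat_was_empty: bool, seat_now_occupied: bool,
                   doors_closed: bool, belt_fastened: bool,
                   speed: float, matches_boarding_habit: bool) -> str:
    """Decide between the getting-on scene and the driving scene."""
    if not (seat_was_empty and seat_now_occupied):
        return "unknown"  # the seat transition is the entry condition
    if doors_closed and belt_fastened and speed > 0 and matches_boarding_habit:
        return "driving"
    return "getting_on"

print(classify_scene(True, True, True, True, 12.0, True))    # driving
print(classify_scene(True, True, False, False, 0.0, False))  # getting_on
```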
It should be noted that each driving scene includes multiple interactive instructions, and when the processor 14 determines that the vehicle 100 is currently in a certain driving scene, one or more interactive instructions in that scene may be triggered.
Specifically, the interactive instructions in the about-to-board scene may include: an instruction to turn on the air conditioner, an instruction to start the vehicle's monitoring system, an instruction to open the doors, and the like. The interactive instructions in the getting-on scene may include: an instruction to adjust the air conditioner, an instruction to adjust the seat, an instruction to adjust the rearview mirrors, an instruction to start navigation, an instruction to enable steering wheel control operations, and the like. The interactive instructions in the driving scene may include: an instruction to play music, an instruction to start air circulation, an instruction to remind the user to rest, and the like. The interactive instructions in the abnormal driving scene may include: an emergency braking instruction, a real-time vehicle condition reminder instruction, an instruction to remind the user to change the driving route, and the like. The interactive instructions in the about-to-get-off scene may include: an instruction to remind the user to charge, an instruction to inform the user of a parking space, and the like. The interactive instructions in the getting-off scene may include: an automatic parking instruction, an automatic locking instruction, and the like.
Further, when the vehicle 100 is in a certain driving scene, the processor 14 may trigger the interactive instructions in that scene, then determine the interactive instruction or series of interactive instructions most suitable for the user according to the driving scene and the current state of the vehicle 100, and control the vehicle 100 according to those instructions to carry out the interaction the user needs. For example, in the about-to-board scene, if the current time falls within the period when the user usually boards and the temperature in the vehicle is too high, an air conditioner interaction instruction is generated according to the vehicle state and the air conditioner is adjusted to the user's preferred temperature.
In some examples, the processor 14 may determine that the vehicle 100 is in the driving scene and that the user is on the way home by detecting the speed, gear and geographic location information of the vehicle 100, combined with the user's historical data. Traffic information is pushed to the vehicle 100 in real time. When the vehicle 100 learns of a change in the road conditions of the target route, for example when the route ahead is congested, the vehicle 100 may automatically respond according to the current state and enter the abnormal driving scene; the interactive instruction most suitable for responding to the abnormality is then selected according to the current geographic location, vehicle speed, battery cruising state, and the like. For example, the instruction may be a route-changing interaction instruction, a slow-down interaction instruction, or an energy-saving interaction instruction started to deal with a possible abnormal condition. Likewise, when the road ahead is in poor condition and the vehicle 100 is about to enter a bumpy section, the processor 14 determines that the vehicle has entered the abnormal driving scene and, in combination with the current conditions, prompts the user to slow down or controls the vehicle 100 to start another interactive instruction for dealing with the road condition.
The vehicle 100 of the present application may include an on-board system, which is commonly involved in the interaction between the vehicle 100 and the user. The on-board system may be composed of an intelligent driving system, a life service system, a safety protection system, a location service system, a vehicle service system, and the like, each of which in turn comprises subdivided systems and functions; for example, the intelligent driving system may include an intelligent sensing system, a driving assistance system, and the like. The life service system may include audio and video entertainment, information inquiry, various life services, and other functions. The various systems of the vehicle 100 participate together to meet the user's needs and thereby produce the interactive instructions the user desires.
In some examples, the life service system may include an air conditioning system, i.e., a control system covering temperature, humidity, air cleanliness and air circulation. In the driving scene, the processor 14 may cooperate with the systems of the vehicle 100 to obtain state changes of the vehicle 100, generate an interactive instruction for cooling the vehicle 100, and control the air conditioning system to adjust the temperature of the vehicle 100 to meet the user's needs.
For the present application, when the state of the vehicle 100 changes, the processor 14 processes the current state of the vehicle 100 to trigger the corresponding driving scene. The processor 14 may therefore be a processor provided by the vehicle 100 specifically for handling the driving scenes and generating interactive instructions to control the vehicle 100, or the processor of the driving system of the vehicle 100 itself, which is not limited here.
Referring to fig. 4, in some embodiments, S10 includes:
s11: the current state is determined based on a change in sensor data of the vehicle.
In certain embodiments, S11 may be implemented by the detection module 112. In other words, the detection module 112 is configured to determine the current state based on a change in sensor data of the vehicle.
In some embodiments, the processor 14 determines the current state from changes in sensor data from the vehicle 100.
Specifically, the vehicle body includes various sensors coupled to the processor 14. The sensors may acquire various data changes of the vehicle 100 or its surroundings and transmit them to the processor 14 via an on-board signal bus, and the processor 14 processes them to determine the current state. The sensors may include, for example, an image sensor, a stability sensor, a speed sensor, a tire pressure sensor, a steering wheel sensor, and the like. For example, the speed sensor detects changes in the speed of the vehicle 100; when the speed changes, the reading is transmitted to the processor 14, and the processor 14 determines that the current state of the vehicle 100 is that the speed has changed. In this way, the current state can be determined based on changes in the data acquired by the sensors. At the same time, the sensors may monitor the vehicle 100 at all times to ensure that the vehicle 100 is free of safety anomalies.
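A minimal sketch of this change-based detection, assuming the processor keeps the previous snapshot of sensor readings (the field names are illustrative):

```python
# Hypothetical sketch: derive the current state from changes in sensor data.

def changed_readings(previous: dict, latest: dict) -> dict:
    """Return only the sensor readings that differ from the last snapshot."""
    return {key: value for key, value in latest.items()
            if previous.get(key) != value}

previous = {"speed": 0.0, "tire_pressure": 2.4}
latest = {"speed": 15.0, "tire_pressure": 2.4}
print(changed_readings(previous, latest))  # {'speed': 15.0}: the speed changed
```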
Referring to fig. 5, in some embodiments, S20 further includes:
S21: determining the driving scene according to the current state and cloud server data of the vehicle.
In some embodiments, S21 may be implemented by the detection module 112 and the determination module 114. In other words, the detection module 112 is further configured to detect the cloud server data of the vehicle 100, and the determination module 114 is configured to determine the driving scene according to the current state of the vehicle 100 and the cloud server data of the vehicle 100.
In some embodiments, the processor 14 is configured to determine a driving scenario based on the current state and cloud server data of the vehicle 100.
Specifically, the vehicle 100 includes a communication module, which may receive cloud server data and send information data related to the vehicle 100 to the cloud server. The communication module of the vehicle 100 maintains close, real-time communication with the cloud server through a long connection; a long connection means that after the communication module of the vehicle 100 communicates with the cloud server, the connection state is maintained so that subsequent communication does not need to repeat the connection step, ensuring that the vehicle 100 can receive and send data in real time. The driving scene can therefore be determined by combining the current state of the vehicle 100 with the cloud server data received by the communication module, so that the driving scene is determined accurately.
Further, the cloud server includes a cloud processor that can process the cloud server data and send it to the communication module of the vehicle. The cloud server data may include the geographic location of the vehicle 100, weather information, data sent by the communication module of the vehicle 100, and the like. For example, when map navigation is started, the vehicle 100 may need to obtain its real-time dynamic information from the cloud server. The processor 14 of the vehicle 100 comprehensively processes the current state of the vehicle 100 and the received cloud server data to determine the current driving scene. For example, determining the current driving scene may require acquiring the current position data of the vehicle 100 and combining it with the current state of the vehicle 100: the communication module sends a request to the cloud server to obtain the current position of the vehicle 100, and the driving scene is then confirmed from the speed of the vehicle 100 and whether the driver's seat is occupied. If the current position of the vehicle 100 is unchanged, the speed of the vehicle 100 is zero, and the driver's seat is occupied, it is comprehensively judged that the vehicle 100 is in the getting-on scene. In addition, the cloud server data may store historical user behavior data, which the processor can use when determining the driving scene; for example, when the scene is determined to be the driving scene, the user's historical behavior data can further indicate that the vehicle is on the way home.
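The position-unchanged example above can be sketched as follows (a simplified illustration only; the field names are assumptions):

```python
# Hypothetical sketch: combine the local state with cloud server data.

def determine_scene(state: dict, cloud: dict) -> str:
    """Mirror the example above: unchanged position + zero speed + occupied
    driver's seat is judged to be the getting-on scene."""
    position_unchanged = cloud.get("position") == cloud.get("last_position")
    if position_unchanged and state.get("speed") == 0 and state.get("driver_seat_occupied"):
        return "getting_on"
    if state.get("speed", 0) > 0:
        return "driving"
    return "unknown"

state = {"speed": 0, "driver_seat_occupied": True}
cloud = {"position": (23.13, 113.26), "last_position": (23.13, 113.26)}
print(determine_scene(state, cloud))  # getting_on
```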
In addition, the vehicle 100 may be connected to a mobile phone or another mobile terminal, and receive and send the cloud server data of the vehicle 100 through that terminal.
It should be noted that receiving and sending the cloud server data of the vehicle 100 requires a network. In some extreme environments, the vehicle 100 may not receive a network signal, or the network delay may be high, so that the vehicle 100 cannot receive and send its cloud server data. In that case the processor 14 determines that the vehicle 100 has entered an abnormal condition and takes measures appropriate to the current situation, for example interacting with the user to indicate that a response is being made.
In some examples, when the vehicle 100 is in the driving scene, the communication module sends the state of the vehicle 100 to the cloud server and receives the driving route conditions of the vehicle 100 in real time. When the vehicle 100 learns of an abnormality ahead, for example an accident or congestion, the vehicle 100 enters the abnormal driving scene according to the current position, vehicle speed, battery cruising state, and the like, so that a response can be made in time.
Referring to fig. 6, in some embodiments, S30 includes:
S31: generating an interactive instruction corresponding to the driving scene according to the trigger rule of the interactive instruction and the driving scene.
In some embodiments, S31 may be implemented by the information generation module 116. In other words, the information generation module 116 is configured to generate the interactive instruction corresponding to the driving scene according to the trigger rule of the interactive instruction and the driving scene.
In some embodiments, the processor 14 is configured to generate the interactive instruction corresponding to the driving scene according to the trigger rule of the interactive instruction and the driving scene.
Specifically, each interactive instruction included in a driving scene is provided with a corresponding trigger rule, and the interactive instruction can be activated only when the relevant rule is triggered. The trigger rules may include rules set by the cloud server, rules set when the vehicle 100 leaves the factory, and rules the vehicle 100 learns from the user's behavior habits through machine learning. When the rule of an interactive instruction is triggered, the interactive instruction corresponding to the driving scene can be generated. The corresponding interactive instruction may be one instruction or several.
A rule set by the vehicle 100 through machine learning according to the user's behavior habits means that the vehicle 100 can automatically set the relevant rule according to the user's driving and usage habits; understandably, the longer the user uses the vehicle 100, the more accurate the rule becomes. Of course, the vehicle 100 may also recognize user information, and the processor 14 may set different rules for different users according to their driving or behavior habits, so that in the same driving scene the vehicle 100 generates different instructions for different users. For example, for the in-vehicle temperature setting, the processor 14 may set different temperatures for different drivers.
After the processor 14 determines the driving scene according to the current state and the cloud server data of the vehicle 100, it may further match, against the current state and the cloud server data, the trigger rules set for the interactive instructions included in that driving scene. If the current state and the cloud server data of the vehicle 100 trigger the rule corresponding to an interactive instruction, the corresponding interactive instruction is generated according to the triggered rule.
Further, the trigger rule of an interactive instruction sets certain conditions for generating the instruction, and the corresponding interactive instruction can be generated only when the conditions are met. For example, an air conditioner starting instruction set in the about-to-board scene may require that the in-vehicle temperature be too high or too low and that the current time fall within the period when the user is about to board before the instruction can be generated.
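Such a condition can be written as a simple predicate; the following is only a sketch under assumed names and thresholds:

```python
# Hypothetical sketch of a trigger rule for the air conditioner instruction
# in the about-to-board scene; thresholds and names are illustrative.

def air_conditioner_rule(cabin_temp: float, hour: int,
                         usual_boarding_hours: range,
                         comfortable: tuple = (18.0, 26.0)) -> bool:
    too_hot_or_cold = not (comfortable[0] <= cabin_temp <= comfortable[1])
    boarding_soon = hour in usual_boarding_hours
    return too_hot_or_cold and boarding_soon  # both conditions must hold

if air_conditioner_rule(cabin_temp=35.0, hour=8, usual_boarding_hours=range(7, 10)):
    print("generate instruction: turn on the air conditioner")
```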
In some examples, the current state of the vehicle and the cloud server data determine that the current scene is the driving scene, and the user changes the destination. If it is comprehensively determined from the battery power, the number of people in the vehicle 100 and the current position that the remaining power is insufficient for the vehicle 100 to reach the destination, the rule of the charging interaction instruction is triggered, and the charging interaction instruction is generated.
Referring to fig. 7, in some embodiments, S30 further includes:
S32: matching a main rule corresponding to the driving scene;
S33: matching an auxiliary rule corresponding to the driving scene;
S34: generating an interactive instruction corresponding to the driving scene according to the main rule and the auxiliary rule.
In some embodiments, S32-S34 may be implemented by the information generation module 116. In other words, the information generation module 116 is configured to match a main rule corresponding to the driving scene, match an auxiliary rule corresponding to the driving scene, and generate the interactive instruction corresponding to the driving scene according to the main rule and the auxiliary rule.
In some embodiments, the processor 14 is configured to match a main rule corresponding to the driving scene, match an auxiliary rule corresponding to the driving scene, and generate the interactive instruction corresponding to the driving scene according to the main rule and the auxiliary rule.
Specifically, when interactive instructions are generated from the trigger rules and the driving scene, several kinds of interactive instructions may be generated, or one trigger rule may generate several interactive instructions. For example, in the getting-on scene, the door switch may be set as a trigger rule: if the door is opened, the rule is triggered, and the interactive instructions corresponding to the rule may include a seat adjustment instruction, a vehicle 100 fault reminder instruction, a seat belt fastening reminder instruction, and the like. Several rules can therefore be set for each interactive instruction, so that the output instruction best meets the user's needs.
Further, each interactive instruction is provided with a main rule and auxiliary rules. The main rule is a necessary rule for activating the interactive instruction: only when the main rule is triggered can the driving scene be determined and the interactive instruction in that scene activated; otherwise, the instruction is ignored. For example, only when the sensor detects that the door is opened and the driver's seat is occupied is the getting-on scene judged and the seat adjustment instruction or rearview mirror adjustment instruction started; "the door is opened and the driver's seat is occupied" is thus the main rule of the seat adjustment instruction. An auxiliary rule is a reference rule for activating the corresponding interactive instruction: the more auxiliary rules of an instruction are triggered, the better the instruction fits the situation, and among several candidate instructions the auxiliary rules can screen out the one that best meets the user's needs. The interactive instruction corresponding to the driving scene can therefore be generated from the main rule and the auxiliary rules, so that the instruction that best meets the user's needs is selected.
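The division of labor between the two kinds of rules might be sketched as follows (an illustration only; the rule and instruction names are assumptions):

```python
# Hypothetical sketch: the main rule is mandatory, while auxiliary rules
# only measure how well an instruction fits the situation.
from typing import Optional

def match_instruction(main_rule_met: bool, auxiliary_rules_met: list) -> Optional[int]:
    if not main_rule_met:
        return None  # main rule not triggered: the instruction is ignored
    return sum(bool(r) for r in auxiliary_rules_met)  # more rules met, better fit

# Seat adjustment: main rule is "door opened and driver's seat occupied".
fit = match_instruction(main_rule_met=True,
                        auxiliary_rules_met=[True, False, True])
print(fit)  # 2: two auxiliary rules support this instruction
```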
Referring to fig. 8, in some embodiments, S30 further includes:
S35: ranking the interactive instructions according to the auxiliary rules;
S36: generating a final interactive instruction according to the ranking result;
S40 includes:
S41: controlling the vehicle according to the final interactive instruction.
In certain embodiments, S35 and S36 may be implemented by the information generation module 116, and S41 may be implemented by the control module 118. In other words, the information generation module 116 is configured to rank the interactive instructions according to the auxiliary rules and generate the final interactive instruction according to the ranking result, and the control module 118 is configured to control the vehicle based on the final interactive instruction.
In some embodiments, the processor 14 is configured to rank the interactive instructions according to the auxiliary rules, generate the final interactive instruction according to the ranking result, and control the vehicle according to the final interactive instruction.
It should be noted that, because triggering the rules may generate several interactive instructions, and each generated interactive instruction may carry several auxiliary rules, while several interactive instructions cannot all be output at once to control the vehicle, the processor 14 evaluates the instructions according to the auxiliary rules each one carries, outputs the interactive instruction that best meets the requirement, and controls the vehicle according to that final instruction.
Specifically, the processor 14 collects the generated interactive instructions into a list, sets a score for each auxiliary rule carried by each interactive instruction, sums these scores to obtain a total auxiliary-rule score for each instruction, and ranks the interactive instructions by total score to generate the final interactive instruction.
Further, the processor 14 may set each score according to the importance of the auxiliary rule: the more important the rule, the larger the score for the corresponding interactive instruction. All scores are added to obtain the total, and the instructions are ranked by total score to generate the final interactive instruction. For example, in the getting-on scene, the triggered auxiliary rules may be that the driver's seat is occupied and that the steering wheel control button is operated; the driver's seat being occupied is important, so its score is set to 5, while operating the steering wheel control button is less important, so its score is set to 2. The scores of the auxiliary rules may be set before the vehicle 100 leaves the factory, set by the vehicle 100 according to the user's behavior habits, or set by the user.
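Using the example weights above (5 for the occupied driver's seat, 2 for the steering wheel control button), the score-based ranking might be sketched like this (instruction and rule names are hypothetical):

```python
# Hypothetical sketch of auxiliary-rule scoring with the example weights above.

WEIGHTS = {"driver_seat_occupied": 5, "steering_wheel_control_operated": 2}

def total_score(triggered_rules: list) -> int:
    """Sum the scores of the auxiliary rules an instruction carries."""
    return sum(WEIGHTS.get(rule, 0) for rule in triggered_rules)

candidates = {
    "adjust_seat": ["driver_seat_occupied"],
    "enable_steering_wheel_controls": ["driver_seat_occupied",
                                       "steering_wheel_control_operated"],
}
ranked = sorted(candidates, key=lambda name: total_score(candidates[name]),
                reverse=True)
print(ranked[0])  # the highest-scoring instruction becomes the final instruction
```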
In addition, the processor 14 may rank the instructions by the number of auxiliary rules each one carries: the instruction with the most triggered auxiliary rules becomes the final interactive instruction, and the vehicle is controlled accordingly. For example, in the abnormal driving scene the candidate instructions may include a music-playing instruction and an energy-saving instruction, where the auxiliary rule triggering the music-playing instruction is that the user habitually plays music in this period, while the auxiliary rules triggering the energy-saving instruction include insufficient battery power, detected congestion ahead, and a vehicle body component fault. The processor 14 then starts the energy-saving instruction as the final interactive instruction.
Referring to fig. 9, in an embodiment, S40 further includes:
S42: determining the execution type of the interactive instruction;
S43: controlling the vehicle to interact according to the execution type.
In some embodiments, S42 and S43 may be implemented by the control module 118. In other words, the control module 118 is configured to determine the execution type of the interactive instruction and control the vehicle 100 to interact according to the execution type.
In some embodiments, processor 14 is configured to determine a type of execution of the interactive instructions and control vehicle 100 to interact based on the type of execution.
Specifically, different driving scenes include many interactive instructions of different types, so each interactive instruction needs an execution type for interacting with the user. The execution types mainly include: message reminder, voice interaction, timed execution, entering the next driving scene, and the like. The vehicle interacts with the user according to the execution type, and it is finally determined whether to control the vehicle 100 to execute the corresponding interactive instruction.
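Dispatching by execution type might look like the following sketch (the type labels follow the list above; everything else is an assumption):

```python
# Hypothetical sketch: dispatch an interactive instruction by its execution type.

def execute(instruction: str, execution_type: str) -> str:
    if execution_type == "message":
        return f"show a message reminder for {instruction}"
    if execution_type == "voice":
        return f"start voice interaction for {instruction}"
    if execution_type == "timed":
        return f"schedule {instruction} for timed execution"
    if execution_type == "next_scene":
        return f"{instruction}: transition to the next driving scene"
    raise ValueError(f"unknown execution type: {execution_type}")

print(execute("remind_user_to_rest", "voice"))
```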
Further, the vehicle 100 may include various interactive systems that enable interaction between the user and the vehicle 100, so that the vehicle 100 can execute interactive instructions. When interacting with the user, the vehicle 100 may present a voice message, an image message, or both. For example, in the driving scene, when an instruction to enter the abnormal driving scene is generated, the vehicle 100 may remind the user by voice and display the abnormal situation ahead as an image; the vehicle 100 is then controlled to enter the abnormal driving scene, after which interactive instructions are regenerated according to the current state of the vehicle 100 to deal with the abnormal driving scene.
It should be noted that in some driving scenes the user's attention and line of sight are mainly focused outside the vehicle, since the conditions outside must be observed at all times, making it difficult to devote part of the attention to the inside of the vehicle 100; the vehicle 100 can therefore flexibly select an interaction mode, or take other measures, according to the current conditions.
The embodiment of the application also provides a computer-readable storage medium: one or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors 14, cause the processors 14 to perform the control method of any of the embodiments described above.
The embodiment of the application also provides a vehicle. The vehicle includes sensors, an on-board system, a memory, and one or more processors 14, with one or more programs stored in the memory and configured to be executed by the one or more processors 14. The programs include instructions for executing the control method of any one of the above embodiments.
The processor 14 may be used to provide computing and control capabilities to support the operation of the entire vehicle driving system. The memory of the vehicle provides an environment in which the computer-readable instructions stored in it can run.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing relevant hardware. The program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the above method embodiments. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), or the like.
The above examples express only several embodiments of the present application, and their description is relatively specific and detailed, but they are not to be construed as limiting the scope of the application. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (13)

1. A control method of a vehicle, characterized by comprising:
detecting a current state of the vehicle;
determining the driving scene of the vehicle according to the current state, wherein the driving scene comprises a scene before driving, a scene during driving and a scene after driving;
generating interactive instructions according to the driving scene, the interactive instructions comprising instructions for vehicle devices in the driving scene; and
controlling the vehicle according to the interactive instruction;
the generating of the interactive instruction according to the driving scene comprises:
generating an interactive instruction corresponding to the driving scene according to the trigger rule of the interactive instruction and the driving scene;
the trigger rule includes a main rule and an auxiliary rule, the main rule being a necessary rule for activating an interactive instruction and the auxiliary rule being a reference rule for activating the corresponding interactive instruction, and the generating of the interactive instruction corresponding to the driving scene according to the trigger rule of the interactive instruction and the driving scene includes:
matching the main rule corresponding to the driving scene;
matching the auxiliary rule corresponding to the driving scene; and
generating an interactive instruction corresponding to the driving scene according to the main rule and the auxiliary rule.
2. The control method of claim 1, wherein the detecting of the current state of the vehicle comprises:
determining the current state based on a change in sensor data of the vehicle.
3. The control method of claim 1, wherein the determining of the driving scene of the vehicle according to the current state comprises:
determining the driving scene according to the current state and cloud server data of the vehicle.
4. The control method according to claim 1, wherein generating the interactive instruction corresponding to the driving scene according to the main rule and the auxiliary rule includes:
ranking the interactive instructions according to the auxiliary rules;
generating a final interactive instruction according to the ranking result;
and the controlling of the vehicle according to the interactive instruction comprises:
controlling the vehicle according to the final interactive instruction.
5. The control method of claim 1, wherein the controlling of the vehicle according to the interactive instruction comprises:
determining the execution type of the interactive instruction; and
controlling the vehicle to interact according to the execution type.
6. A control apparatus of a vehicle, characterized by comprising:
the detection module is used for detecting the current state of the vehicle;
the determination module is used for determining the driving scene of the vehicle according to the current state;
the information generation module is used for generating an interactive instruction according to the driving scene; and
the control module is used for controlling the vehicle according to the interactive instruction;
the information generation module is further configured to:
generating an interactive instruction corresponding to the driving scene according to the trigger rule of the interactive instruction and the driving scene;
the trigger rule includes a main rule and an auxiliary rule, the main rule being a necessary rule for activating the interactive instruction and the auxiliary rule being a reference rule for activating the corresponding interactive instruction, and the information generation module is specifically configured for:
matching the main rule corresponding to the driving scene;
matching the auxiliary rule corresponding to the driving scene; and
generating an interactive instruction corresponding to the driving scene according to the main rule and the auxiliary rule.
7. A vehicle, comprising a processor configured to:
detecting a current state of the vehicle;
determining the driving scene of the vehicle according to the current state, wherein the driving scene comprises a scene before driving, a scene during driving and a scene after driving;
generating interactive instructions according to the driving scene, the interactive instructions comprising instructions for vehicle devices in the driving scene;
controlling the vehicle according to the interactive instruction;
generating an interactive instruction corresponding to the driving scene according to a trigger rule of the interactive instruction and the driving scene, wherein the trigger rule comprises a main rule and an auxiliary rule, the main rule being a necessary rule for activating the interactive instruction and the auxiliary rule being a reference rule for activating the corresponding interactive instruction;
matching the main rule corresponding to the driving scene;
matching the auxiliary rule corresponding to the driving scene; and
generating an interactive instruction corresponding to the driving scene according to the main rule and the auxiliary rule.
8. The vehicle of claim 7, comprising a sensor coupled to the processor, the processor configured to:
determining the current state based on a change in sensor data of the vehicle.
9. The vehicle of claim 8, wherein the processor is configured to:
determining the driving scene according to the current state and cloud server data of the vehicle.
10. The vehicle of claim 9, wherein the processor is configured to:
ranking the interactive instructions according to the auxiliary rules;
generating a final interactive instruction according to the ranking result; and
controlling the vehicle according to the final interactive instruction.
11. The vehicle of claim 7, wherein the processor is further configured to:
determining the execution type of the interactive instruction; and
controlling the vehicle to interact according to the execution type.
12. A vehicle, characterized by comprising:
one or more processors, memory; and
one or more programs, wherein the one or more programs are stored in the memory and executed by the one or more processors, the programs comprising instructions for performing the control method of any of claims 1-5.
13. A computer readable storage medium having stored thereon a computer program which, when executed by one or more processors, causes the processors to execute a control method of a vehicle according to any one of claims 1-5.
CN201910849832.3A 2019-09-09 2019-09-09 Vehicle control method and device and vehicle Active CN110654389B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910849832.3A CN110654389B (en) 2019-09-09 2019-09-09 Vehicle control method and device and vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910849832.3A CN110654389B (en) 2019-09-09 2019-09-09 Vehicle control method and device and vehicle

Publications (2)

Publication Number Publication Date
CN110654389A CN110654389A (en) 2020-01-07
CN110654389B (en) 2022-02-11

Family

ID=69038061

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910849832.3A Active CN110654389B (en) 2019-09-09 2019-09-09 Vehicle control method and device and vehicle

Country Status (1)

Country Link
CN (1) CN110654389B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111292448B (en) * 2020-02-20 2021-10-08 深圳市春晖信档案技术服务有限公司 File cabinet, centralized file management terminal and file management method
CN113799717A (en) * 2020-06-12 2021-12-17 广州汽车集团股份有限公司 Fatigue driving relieving method and system and computer readable storage medium
CN111942307A (en) * 2020-08-12 2020-11-17 华人运通(上海)云计算科技有限公司 Scene generation method, device, system, equipment and storage medium
CN112130547B (en) * 2020-09-28 2024-05-03 广州小鹏汽车科技有限公司 Vehicle interaction method and device
CN114312815B (en) * 2020-09-30 2024-05-07 比亚迪股份有限公司 Driving prompt method and device and automobile
CN112721909B (en) * 2021-01-27 2022-04-08 浙江吉利控股集团有限公司 Vehicle control method and system and vehicle
CN113415250B (en) * 2021-06-16 2023-01-06 Oppo广东移动通信有限公司 Device control method, device, electronic device and storage medium
CN114475479A (en) * 2022-01-20 2022-05-13 奇瑞汽车股份有限公司 Automobile control method and device and computer storage medium

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8597027B2 (en) * 2009-11-25 2013-12-03 Loren J. Staplin Dynamic object-based assessment and training of expert visual search and scanning skills for operating motor vehicles
US9988037B2 (en) * 2014-04-15 2018-06-05 Ford Global Technologies, Llc Driving scenario prediction and automatic vehicle setting adjustment
CN104290745B (en) * 2014-10-28 2017-02-01 奇瑞汽车股份有限公司 Driving method of semi-automatic driving system for vehicle
CN105799710B (en) * 2016-03-11 2019-02-05 北京理工大学 A kind of autonomous instruction car system of interactive mode
EP3264391A1 (en) * 2016-06-30 2018-01-03 Honda Research Institute Europe GmbH Method and system for assisting a driver in driving a vehicle and vehicle on which such system is mounted
US10209708B2 (en) * 2016-07-28 2019-02-19 Lytx, Inc. Determining driver engagement with autonomous vehicle
CN108958804A (en) * 2017-05-25 2018-12-07 蔚来汽车有限公司 Man-machine interaction method suitable for application scenarios related with vehicle
CN109428968B (en) * 2017-08-24 2021-03-09 中兴通讯股份有限公司 Method and device for controlling terminal and storage medium
US11093829B2 (en) * 2017-10-12 2021-08-17 Honda Motor Co., Ltd. Interaction-aware decision making
CN108197526A (en) * 2017-11-23 2018-06-22 西安艾润物联网技术服务有限责任公司 Detection method, system and computer readable storage medium
US11130630B2 (en) * 2017-11-27 2021-09-28 Amazon Technologies, Inc. Collision prevention for autonomous vehicles
US10788829B2 (en) * 2017-12-20 2020-09-29 International Business Machines Corporation Self-driving vehicle passenger management
CN108891422B (en) * 2018-07-09 2020-04-24 深圳市易成自动驾驶技术有限公司 Control method and device of intelligent vehicle and computer readable storage medium
CN108974003A (en) * 2018-08-09 2018-12-11 北京智行者科技有限公司 A kind of exchange method
CN109445732A (en) * 2018-09-30 2019-03-08 上海友衷科技有限公司 The Vehicular information display method and its display system of identity-based identification
CN109448409A (en) * 2018-10-30 2019-03-08 百度在线网络技术(北京)有限公司 Method, apparatus, equipment and the computer storage medium of traffic information interaction
CN109887373B (en) * 2019-01-30 2021-11-23 北京津发科技股份有限公司 Driving behavior data acquisition method, evaluation method and device based on vehicle driving
CN110015308B (en) * 2019-04-03 2021-02-19 广州小鹏汽车科技有限公司 Human-vehicle interaction method and system and vehicle
CN110070738A (en) * 2019-05-27 2019-07-30 广州小鹏汽车科技有限公司 Drive function recommended method, device and vehicle
CN110209278A (en) * 2019-05-30 2019-09-06 广州小鹏汽车科技有限公司 People-car interaction method, apparatus, storage medium and controlling terminal

Also Published As

Publication number Publication date
CN110654389A (en) 2020-01-07

Similar Documents

Publication Publication Date Title
CN110654389B (en) Vehicle control method and device and vehicle
US10717412B2 (en) System and method for controlling a vehicle using secondary access methods
US11150652B2 (en) Method for operating a driver assistance device of a motor vehicle
US10852720B2 (en) Systems and methods for vehicle assistance
WO2016067593A1 (en) Vehicle control apparatus
CN106467106A (en) System and method for driver assistance
US11190155B2 (en) Learning auxiliary feature preferences and controlling the auxiliary devices based thereon
EP3410070B1 (en) Information processing apparatus and information processing method
WO2018101978A1 (en) Vehicle tutorial system and method for sending vehicle tutorial to tutorial manager device
US11285966B2 (en) Method and system for controlling an autonomous vehicle response to a fault condition
JP2010247799A (en) Control system for in-vehicle apparatus
US10911589B2 (en) Vehicle control device
CN111223479A (en) Operation authority control method and related equipment
KR20230016163A (en) Method and system for improving user warning in autonomous vehicles
CN109584871B (en) User identity recognition method and device of voice command in vehicle
JP5617942B2 (en) In-vehicle device control system
US11210722B2 (en) Adaptive vehicle feature matching system
US20220229432A1 (en) Autonomous vehicle camera interface for wireless tethering
US11845315B2 (en) Intelligent power management while in camp mode
CN115635978A (en) Vehicle human-computer interaction method and device and vehicle
CN115546920A (en) AI intelligent vehicle system based on Hilens
US20240025437A1 (en) Driver assistance system for vehicle
US20240317304A1 (en) Information processing device and information processing system
US20240025432A1 (en) Driver assistance system for vehicle
US20240142777A1 (en) Inebriation test system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant