
CN112061048B - Scene triggering method, device, equipment and storage medium - Google Patents

Scene triggering method, device, equipment and storage medium Download PDF

Info

Publication number
CN112061048B
CN112061048B
Authority
CN
China
Prior art keywords
module
state
driving state
triggering
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010930419.2A
Other languages
Chinese (zh)
Other versions
CN112061048A (en)
Inventor
丁磊
郑洲
王昶旭
赵叶霖
卢加浚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Human Horizons Shanghai Internet Technology Co Ltd
Original Assignee
Human Horizons Shanghai Internet Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Human Horizons Shanghai Internet Technology Co Ltd filed Critical Human Horizons Shanghai Internet Technology Co Ltd
Priority to CN202010930419.2A priority Critical patent/CN112061048B/en
Publication of CN112061048A publication Critical patent/CN112061048A/en
Application granted granted Critical
Publication of CN112061048B publication Critical patent/CN112061048B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R16/00Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
    • B60R16/02Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R16/00Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
    • B60R16/02Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements
    • B60R16/037Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements for occupant comfort, e.g. for automatic adjustment of appliances according to personal settings, e.g. seats, mirrors, steering wheel
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a scene triggering method, apparatus, device and storage medium, wherein the method includes: triggering a vehicle-mounted monitoring system to acquire current sign data of a driver according to an execution request of a target scene; determining the driving state of the driver according to the current sign data; and triggering a corresponding scene execution module to execute a function corresponding to the driving state according to the driving state. According to the technical scheme of the embodiments of the application, the driving state of the driver is monitored during driving, and the corresponding scene execution module is then triggered to execute the corresponding function; for example, when the voice interaction module is triggered, the driver is reminded in time and given corresponding suggestions, so that driving safety is improved, intelligent vehicle services are realized, and the user experience is improved.

Description

Scene triggering method, device, equipment and storage medium
Technical Field
The present application relates to the field of intelligent vehicle technologies, and in particular, to a method, an apparatus, a device, and a storage medium for scene triggering.
Background
Various output devices are provided on a vehicle, such as a display screen, an atmosphere lamp, seats, a sound system and an air conditioner. These output devices typically perform their functions alone and cannot cooperate with one another to realize a scenario. As vehicle intelligence has evolved, users have come to expect vehicles to provide more diverse services.
Disclosure of Invention
The embodiments of the application provide a scene triggering method, apparatus, device and storage medium to solve the problems in the related art. The technical scheme is as follows:
in a first aspect, an embodiment of the present application provides a scene triggering method, where the method includes:
triggering the vehicle-mounted monitoring system to acquire current sign data of the driver according to the execution request of the target scene;
determining the driving state of the driver according to the current sign data;
and triggering the corresponding scene execution module to execute the function corresponding to the driving state according to the driving state.
In a second aspect, an embodiment of the present application provides a scene triggering apparatus, including:
the first triggering module is used for triggering the vehicle-mounted monitoring system to acquire current sign data of a driver according to an execution request of a target scene;
the driving state determining module is used for determining the driving state of the driver according to the current sign data;
and the second triggering module is used for triggering the corresponding scene execution module to execute the function corresponding to the driving state according to the driving state.
In a third aspect, an embodiment of the present application provides a scene trigger device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the scene triggering method of the embodiments of the present application.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing computer instructions which, when executed by a processor, implement the scene triggering method of the embodiments of the present application.
The advantages or beneficial effects of the above technical solution include at least the following: the driving state of the driver is monitored during driving, and the corresponding scene execution module is then triggered to execute the corresponding function; for example, when the voice interaction module is triggered, the driver is reminded in time and given corresponding suggestions, so that driving safety is improved, intelligent vehicle services are realized, and the user experience is improved.
The foregoing summary is provided for the purpose of description only and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features of the present application will be readily apparent by reference to the drawings and following detailed description.
Drawings
In the drawings, like reference numerals refer to the same or similar parts or elements throughout the several views unless otherwise specified. The figures are not necessarily to scale. It is appreciated that these drawings depict only some embodiments in accordance with the disclosure and are therefore not to be considered limiting of its scope.
Fig. 1 is an application architecture diagram of a scene triggering method according to an embodiment of the present application;
Fig. 2 is a flowchart of a scene triggering method according to an embodiment of the present application;
Fig. 3 is a timing diagram of an application example of a scene triggering method according to an embodiment of the present application;
Fig. 4 is a flowchart of an application example of a scene triggering method according to an embodiment of the present application;
Fig. 5 is a schematic diagram of a scene triggering apparatus according to an embodiment of the present application;
Fig. 6 is a schematic diagram of a scene triggering device according to an embodiment of the present application.
Detailed Description
In the following, only certain exemplary embodiments are briefly described. As those skilled in the art will recognize, the described embodiments may be modified in various different ways, all without departing from the spirit or scope of the present application. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
The application provides a scene triggering method for triggering a corresponding scene execution module at a vehicle end to execute a corresponding function so as to realize a target scene.
In one example, as shown in fig. 1, each user (including a first user) may edit a scene on a mobile terminal or at the vehicle end (A1), upload the edited scene to the cloud scene server (A2), and save it in the scene database (A3). The mobile terminal includes mobile intelligent devices such as mobile phones and tablet computers. The management terminal may send a scene query request to the cloud scene server to query scenes (B1 and B2). The cloud scene server queries one or more scenes to be pushed from the scene database and sends them to the management terminal as the scene query result, from which the management terminal obtains the scenes to be pushed (B3). The management terminal screens the scenes to be pushed to obtain curated scenes (B4); it may also configure some initial scenes as curated scenes. The management terminal then sends a scene push request to the message management center to request that the curated scenes be pushed (B5), and the message management center pushes the curated scenes to the corresponding vehicle ends (B6).
After the vehicle end receives the pushed scenes, it can synchronize them to a scene service module at the vehicle end (C1). A condition management module at the vehicle end monitors the current scene conditions (C2), and when the current conditions meet a scene's trigger condition, a scene engine module at the vehicle end executes that scene (C3).
After the vehicle end executes a scene, the execution result may be uploaded to the cloud scene server (C4). The cloud scene server is provided with a big data center and preset data tracking ("buried") points (C5).
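To make the vehicle-end flow concrete, the following is a minimal Python sketch of a condition management module that checks synchronized scenes against their trigger conditions and hands matches to a scene engine (steps C1 to C3). The class and field names are illustrative assumptions, not the patent's implementation.

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class Scene:
        name: str
        trigger: Callable[[Dict], bool]    # predicate over current vehicle-end conditions
        actions: List[str]                 # functions for the scene engine to execute

    class ConditionManager:
        """Illustrative vehicle-end condition management module."""
        def __init__(self, scenes: List[Scene]):
            self.scenes = scenes           # scenes synchronized from the cloud (C1)

        def poll(self, conditions: Dict) -> List[Scene]:
            # Monitor current conditions (C2); return scenes whose triggers are met.
            return [s for s in self.scenes if s.trigger(conditions)]

    manager = ConditionManager([
        Scene(name="health_companion",
              trigger=lambda c: c.get("scene_request") == "health_companion",
              actions=["monitor_driver", "report_state"]),
    ])
    for scene in manager.poll({"scene_request": "health_companion"}):
        print("scene engine executes:", scene.name, scene.actions)   # (C3)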
Fig. 2 shows a flowchart of a scene triggering method according to an embodiment of the present application. As shown in fig. 2, the method may include:
step S201, according to the execution request of the target scene, triggering the vehicle-mounted monitoring system to obtain the current sign data of the driver.
The target scene may be a scene selected by the user. For example, the user may select a target scene from a plurality of scenes pre-stored in the scene service module via a vehicle-mounted scene application (APP) at the vehicle end, thereby triggering an execution request for the target scene. Ways of selecting the target scene via the vehicle-mounted scene APP include, but are not limited to, selection on the screen interface and voice triggering.
In addition, the user may also select a target scene via the scene APP on the mobile terminal, which then sends an execution request for the target scene to the vehicle end through network communication between the mobile terminal and the vehicle end. In the embodiments of the application, the target scene may be a health-companion/safe-mode scene.
The on-board monitoring system may be a Driver Monitoring System (DMS) including a driver-facing monitoring camera that monitors in real time whether the driver is present and the driver's current state.
After receiving the execution request of the target scene, the vehicle end automatically triggers the vehicle-mounted monitoring system to obtain the current sign data of the driver. The sign data may be physiological sign data of the driver.
Step S202: determining the driving state of the driver according to the current sign data.
The driving state is used to characterize whether the driver is fit to drive.
Step S203: triggering a corresponding scene execution module to execute a function corresponding to the driving state according to the driving state.
Each scene has scene configuration information including information of a scene execution module and information of an execution function. As for the target scene in this embodiment, the configuration information of the target scene includes the scene execution module that needs to be triggered and the function that needs to be executed by the scene execution module in different driving states.
In this embodiment, the scene execution modules may include a fragrance module, an (in-vehicle) atmosphere lamp, a seat module, an intelligent air conditioner, a voice interaction module, and the like. The voice interaction module may be an artificial intelligence robot (AI robot), such as an avatar designed by a developer for voice interaction with the vehicle-end user.
According to the method, the driving state of the driver is monitored during driving, and the corresponding scene execution module is then triggered to execute the corresponding function. For example, if it is detected that the driver is not fit to continue driving, the fragrance module can be triggered to release a fresh fragrance, the seat module to turn on ventilation and a light massage, the intelligent air conditioner to adjust its air volume to a medium level, and the voice interaction module to give a timely reminder and corresponding suggestions, so that driving safety is improved, intelligent vehicle services are realized, and the user experience is improved.
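To make the three steps concrete, here is a minimal Python sketch of how S201 to S203 might be wired together; all names (get_signs, classify, modules, scene_config) are hypothetical placeholders for illustration, not interfaces from the patent.

    def handle_scene_request(get_signs, classify, modules, scene_config):
        signs = get_signs()                            # S201: acquire current sign data
        state = classify(signs)                        # S202: determine the driving state
        for module, function in scene_config[state]:   # S203: trigger configured modules
            modules[module](function)

    # Toy wiring: stand-in callables in place of the DMS and scene execution modules.
    handle_scene_request(
        get_signs=lambda: {"hrv": 48.0},
        classify=lambda s: "third" if s["hrv"] < 50 else "first",
        modules={"fragrance": print, "voice": print},
        scene_config={"first": [("fragrance", "floral")],
                      "third": [("fragrance", "fresh"), ("voice", "suggest_autopilot")]},
    )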
In one example, the driver's driving state may be determined in conjunction with historical sign data for the driver. The step S202 may include: carrying out face recognition on a driver to obtain identity information of the driver; obtaining historical sign data of the driver according to the identity information; and determining the driving state according to the historical sign data and the current sign data.
For example, when the driver approaches the vehicle, the face recognition device on the vehicle's B-pillar performs face recognition on the driver to obtain the driver's identity information, such as an identity (ID), and further determines the driver's control authority over the vehicle. Upon entering the target scene, this identity information is used to retrieve the corresponding historical sign data, i.e., the driver's historical sign data.
The driving state of the driver can be determined by analyzing and comparing the historical sign data with the current sign data, for example by classifying the current physical state as a good or a negative result. When the current physical state is good, it characterizes a driving state in which the driver is fit to drive; when the current physical state is a negative result, it characterizes a driving state in which the driver is not fit to drive.
The sign data may include heart rate variability (HRV) data, blood pressure changes, respiration data, gender, age, and the like. Classification criteria, such as parameter thresholds or data ranges corresponding to a good and to a negative current physical state, may be preset. For example, by analyzing and comparing the current sign data with the historical sign data against the preset classification criteria, it can be determined whether the current physical state is a good or a negative result.
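One possible realization of such a comparison is sketched below; the field names and the 20% deviation threshold are illustrative assumptions, not values given by the patent.

    from typing import Dict, List

    def physical_state(current: Dict[str, float], history: List[Dict[str, float]],
                       max_deviation: float = 0.20) -> str:
        # Compare each current sign against the driver's historical baseline;
        # a large deviation on any sign yields a "negative" physical state.
        for key in ("hrv", "blood_pressure", "respiration_rate"):
            baseline = sum(h[key] for h in history) / len(history)
            if abs(current[key] - baseline) / baseline > max_deviation:
                return "negative"
        return "good"

    state = physical_state(
        {"hrv": 42.0, "blood_pressure": 138.0, "respiration_rate": 18.0},
        [{"hrv": 55.0, "blood_pressure": 120.0, "respiration_rate": 16.0}],
    )
    print(state)   # "negative": HRV deviates more than 20% from the baseline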
In yet another example, the driving state of the driver may be determined in conjunction with the driver's facial emotion. Step S202 may include: determining the current physical state of the driver according to the current sign data; identifying the facial emotion of the driver; and determining the driving state according to the facial emotion and the current physical state.
Determining the current physical state of the driver according to the current sign data may include determining it according to the current sign data together with the driver's historical sign data, using the method described above.
The facial emotion of the driver can be obtained from the monitoring of the on-board monitoring system. For example, the DMS acquires a face image of the driver containing the driver's facial features, and the facial features are input into a trained emotion recognition model to obtain the driver's facial emotion. The recognized facial emotion may be classified as a good or a negative result.
Facial emotions may include anger, disgust, fear, happiness, calmness, sadness, surprise, and the like. Classification criteria for facial emotion can be preset, such as which emotion types correspond to a good facial emotion and which correspond to a negative one.
In one embodiment, determining the driving state according to the facial emotion and the current physical state may include: determining that the driving state is a first preset state when both the current physical state and the facial emotion are good; and/or determining that the driving state is a second preset state when the current physical state is good and the facial emotion is a negative result; and/or determining that the driving state is a third preset state when both the current physical state and the facial emotion are negative results.
Different scene execution modules can then be triggered to execute different functions according to the driving state. For example: when the driving state is the first preset state, triggering the fragrance module and the atmosphere lamp to execute a function corresponding to the first preset state; and/or when the driving state is the second preset state, triggering the fragrance module, the atmosphere lamp and the seat module to execute a function corresponding to the second preset state, and triggering the voice interaction module to enter a chat function; and/or when the driving state is the third preset state, triggering the fragrance module, the seat module and the air-conditioning module to execute a function corresponding to the third preset state, and triggering the voice interaction module to prompt the driver to switch the vehicle into the automatic driving function.
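The two-input decision and the per-state dispatch described above can be pictured with the following Python sketch; the module and function names are illustrative stand-ins for the patent's scene execution modules.

    def driving_state(physical: str, emotion: str) -> str:
        if physical == "good" and emotion == "good":
            return "first"              # first preset state
        if physical == "good" and emotion == "negative":
            return "second"             # second preset state
        return "third"                  # both negative: third preset state

    ACTIONS = {
        "first":  [("fragrance", "floral"), ("atmosphere_lamp", "orange_flow")],
        "second": [("fragrance", "fresh"), ("atmosphere_lamp", "blue_flow"),
                   ("seat", "ventilation_and_massage"), ("voice", "enter_chat")],
        "third":  [("fragrance", "fresh"), ("seat", "ventilation_light_massage"),
                   ("air_conditioner", "medium_airflow"), ("voice", "suggest_autopilot")],
    }

    state = driving_state("good", "negative")
    print(state, ACTIONS[state])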
An example of the above method is described below in conjunction with fig. 3 and 4.
(1) The user, who may be the driver, selects a target scene. The user can select the target scene through a scene card on the screen interface at the vehicle end, or trigger it by voice.
(2) Identity recognition: the face recognition device on the vehicle's B-pillar performs face recognition on the driver to obtain the driver's ID, and the driver's historical sign data is retrieved from the network database according to the ID.
(3) The DMS monitors the driver's current sign data and facial emotion, from which the physical state, i.e. the driver's health information, is determined.
(4) Displaying the driver's health information: the driver health information page of the personal center is displayed on the central control screen.
(5) After the driving state of the driver is determined, the voice interaction module (such as an AI iRobot) is triggered to report the corresponding driving state result. When the driving state is the first preset state, the AI iRobot announces by voice: "You are in good condition in every respect; have a pleasant journey." When the driving state is the second preset state, the AI iRobot announces: "All your physical signs are normal, but is something on your mind?" When the driving state is the third preset state, the AI iRobot announces: "Your current condition is not suitable for continuing to drive the vehicle."
(6) The corresponding scene execution components are triggered to execute the corresponding healing functions. When the driving state is the first preset state, the fragrance module and the atmosphere lamp are triggered to execute the function corresponding to the first preset state (the fragrance module switches to a floral fragrance, and the in-vehicle atmosphere lamp presents an orange flowing effect). When the driving state is the second preset state, the fragrance module, the atmosphere lamp and the seat module are triggered to execute the function corresponding to the second preset state (the fragrance module switches to a fresh fragrance, the in-vehicle atmosphere lamp presents a blue flowing effect, and the seat module turns on ventilation and a deep massage). When the driving state is the third preset state, the fragrance module, the seat module and the air-conditioning module are triggered to execute the function corresponding to the third preset state (the fragrance module switches to a fresh fragrance, the seat ventilation air volume is set to a medium level, and the seat gives a light massage).
(7) Triggering the voice interaction module: when the driving state is the second preset state, the voice interaction module (such as the AI iRobot) is triggered to enter a chat function. For example, when the facial emotion is fatigue or sadness, the AI iRobot starts a chat accordingly based on the obtained emotion index (the facial emotion recognition result). When the driving state is the third preset state, the voice interaction module is triggered to prompt the driver to switch the vehicle into the automatic driving function. For example, the AI iRobot announces by voice: "The road section ahead is suitable for using the automatic driving function."
In one embodiment, the time at which each scene execution module is triggered may be determined by the timeline information in the scene configuration information. For example, step (5) is entered 5 s after the target scene is started, step (6) is entered 15 s after the target scene is started, and step (7) is entered 20 s after the target scene is started.
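A timeline-driven scheduler of this kind might look like the sketch below. The 5 s/15 s/20 s offsets are taken from the example above; time.sleep and the time_scale parameter are simplifications for illustration, as a real vehicle system would use its own event loop.

    import time

    TIMELINE = [            # (offset in seconds from scene start, step to run)
        (5,  "report_driving_state"),       # step (5)
        (15, "run_healing_functions"),      # step (6)
        (20, "trigger_voice_interaction"),  # step (7)
    ]

    def run_scene(execute, time_scale=1.0):
        start = time.monotonic()
        for offset, step in TIMELINE:
            # Wait until this step's point on the timeline, then execute it.
            time.sleep(max(0.0, start + offset * time_scale - time.monotonic()))
            execute(step)

    run_scene(lambda step: print("executing:", step), time_scale=0.01)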
It should be noted that the scene configuration information may be preset according to the actual situation; for example, it may already be set in the scene pushed to the vehicle end by the message management center, i.e., set by the management terminal or edited by the user in advance. The scene configuration information can also be edited and updated by the user after the target scene has been delivered to the vehicle end. For example, one or more items in the scene configuration information of the target scene may be hot-updated according to the user's edit information. For the target scene of this embodiment, the scene configuration information includes the scene execution modules that need to be triggered and the functions to be executed in different driving states.
In one example, the user may edit the timeline, or edit a function or parameter to be executed by a particular scene execution module, for example the voice interaction content of the voice interaction module or the display effect of the atmosphere lamp. Through hot update, the user-edited content can conveniently update the scene configuration information at the vehicle end, realizing personalized scenes and further improving the user experience. In addition, a target scene edited by one user can be shared with other users, enabling scene sharing and interaction among users.
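A minimal hot-update sketch follows, assuming the scene configuration is held as a dictionary; the keys shown ("timeline", "voice_text") are hypothetical illustrations, not fields defined by the patent.

    def hot_update(scene_config, user_edits):
        # Overwrite only the edited items; the rest of the configuration
        # on the vehicle end stays untouched.
        for key, value in user_edits.items():
            scene_config[key] = value
        return scene_config

    config = {"timeline": [5, 15, 20], "voice_text": "default greeting"}
    hot_update(config, {"voice_text": "greeting edited by the user"})
    print(config)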
An embodiment of the present application further provides a scene triggering apparatus, as shown in fig. 5, the apparatus may include:
the first triggering module 501 is configured to trigger the vehicle-mounted monitoring system to obtain current sign data of the driver according to an execution request of a target scene;
a driving state determining module 502, configured to determine a driving state of the driver according to the current sign data;
and a second triggering module 503, configured to trigger the corresponding scene execution module to execute a function corresponding to the driving state according to the driving state.
In one embodiment, the driving state determination module 502 includes:
the face recognition unit is used for carrying out face recognition on the driver to obtain the identity information of the driver;
the acquisition unit is used for acquiring historical sign data of the driver according to the identity information;
and the driving state determining unit is used for determining the driving state according to the historical sign data and the current sign data.
In one embodiment, the driving state determination module 502 includes:
the body state determining unit is used for determining the current body state of the driver according to the current sign data;
an emotion recognition unit for recognizing facial emotion of the driver;
and the driving state determining unit is used for determining the driving state according to the facial emotion and the current body state.
In one embodiment, the driving state determination unit is configured to:
determining that the driving state is a first preset state under the condition that the current body state and the facial emotion are good; and/or
Determining that the driving state is a second preset state under the condition that the current body state is good and the facial emotion is a negative result; and/or
And under the condition that the current body state and the facial emotion are negative results, determining that the driving state is a third preset state.
In one embodiment, the second triggering module 503 is configured to:
under the condition that the driving state is a first preset state, triggering the fragrance module and the atmosphere lamp to execute a function corresponding to the first preset state; and/or
Under the condition that the driving state is a second preset state, triggering the fragrance module, the atmosphere lamp and the seat module to execute a function corresponding to the second preset state, and triggering the voice interaction module to enter a chat function; and/or
And under the condition that the driving state is a third preset state, triggering the fragrance module, the seat module and the air conditioning module to execute a function corresponding to the second preset state, and triggering the voice interaction module to prompt the driver to control the vehicle to enter an automatic driving function.
In one embodiment, the second triggering module 503 is configured to:
and after the driving state is determined, triggering a voice interaction module to broadcast a corresponding driving state result.
In one embodiment, the apparatus may further comprise:
and the hot updating module is used for hot updating one or more items of contents in the scene configuration information of the target scene according to the user editing information, and the scene configuration information comprises the scene execution module which needs to be triggered and the functions which need to be executed under different driving states.
The functions of each module in each apparatus in the embodiment of the present application may refer to corresponding descriptions in the above method, and are not described herein again.
Fig. 6 shows a block diagram of a scene triggering device according to an embodiment of the present application. As shown in fig. 6, the device includes a memory 601 and a processor 602, the memory 601 storing instructions executable on the processor 602. The processor 602, when executing the instructions, implements any of the methods in the embodiments described above. There may be one or more memories 601 and processors 602. The terminal or server is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The terminal or server may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
The device may further include a communication interface 603 for communicating with external devices for interactive data transmission. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor 602 may process instructions for execution within the terminal or server, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device (such as a display device coupled to an interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, if desired. Also, multiple terminals or servers may be connected, with each device providing part of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 6, but this does not indicate only one bus or one type of bus.
Optionally, in a specific implementation, if the memory 601, the processor 602, and the communication interface 603 are integrated on a chip, the memory 601, the processor 602, and the communication interface 603 may complete mutual communication through an internal interface.
It should be understood that the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, and so on. A general-purpose processor may be a microprocessor or any conventional processor. Note that the processor may be a processor supporting the Advanced RISC Machines (ARM) architecture.
Embodiments of the present application provide a computer-readable storage medium (such as the memory 601 described above) storing computer instructions which, when executed by a processor, implement the methods provided in the embodiments of the present application.
Optionally, the memory 601 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function; the storage data area may store data created according to the use of a terminal or a server, and the like. Further, the memory 601 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, memory 601 may optionally include memory located remotely from processor 602, which may be connected to a terminal or server via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application also includes implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. All or part of the steps of the methods of the above embodiments may be implemented by instructing the relevant hardware through a program, which may be stored in a computer-readable storage medium and which, when executed, performs one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module may also be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. The storage medium may be a read-only memory, a magnetic or optical disk, or the like.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think of various changes or substitutions within the technical scope of the present application, and these should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (14)

1. A method for triggering a scene, comprising:
triggering the vehicle-mounted monitoring system to acquire current sign data of the driver according to the execution request of the target scene;
determining the driving state of the driver according to the current sign data;
according to the driving state, triggering a corresponding scene execution module to execute a function corresponding to the driving state, wherein the time at which each scene execution module is triggered is determined by timeline information in the scene configuration information of the target scene, the method comprising:
triggering a voice interaction module to broadcast a result of the driving state after the target scene has been started for a first duration;
after the target scene has been started for a second duration, triggering corresponding scene execution components to execute corresponding healing functions, including: triggering at least one of a fragrance module, an atmosphere lamp, a seat module and an air-conditioning module to execute a function corresponding to the driving state;
after the target scene has been started for a third duration, triggering the voice interaction module, including: triggering the voice interaction module to enter a chat function when the driving state is a second preset state; and triggering the voice interaction module to prompt the driver to switch the vehicle into an automatic driving function when the driving state is a third preset state;
wherein the first duration is less than the second duration, and the second duration is less than the third duration.
2. The method of claim 1, wherein determining the driving state of the driver from the current sign data comprises:
carrying out face recognition on the driver to obtain identity information of the driver;
obtaining historical sign data of the driver according to the identity information;
and determining the driving state according to the historical sign data and the current sign data.
3. The method of claim 1, wherein determining the driving state of the driver from the current sign data comprises:
determining the current physical state of the driver according to the current sign data;
identifying a facial emotion of the driver;
and determining the driving state according to the facial emotion and the current body state.
4. The method of claim 3, wherein determining the driving state from the facial emotion and the current body state comprises:
determining the driving state to be a first preset state under the condition that the current body state and the facial emotion are both good; and/or
Determining that the driving state is the second preset state in a case where the current body state is good and the facial emotion is a negative result; and/or
Determining that the driving state is the third preset state if both the current body state and the facial emotion are negative results.
5. The method of claim 4, wherein triggering at least one of the fragrance module, the atmosphere lamp, the seat module and the air-conditioning module to execute the function corresponding to the driving state after the target scene has been started for the second duration comprises:
under the condition that the driving state is a first preset state, triggering the fragrance module and the atmosphere lamp to execute a function corresponding to the first preset state; and/or
Under the condition that the driving state is a second preset state, triggering the fragrance module, the atmosphere lamp and the seat module to execute a function corresponding to the second preset state; and/or
And under the condition that the driving state is a third preset state, triggering the fragrance module, the seat module and the air conditioning module to execute a function corresponding to the third preset state.
6. The method of any of claims 1 to 5, further comprising:
and thermally updating one or more items of contents in the scene configuration information of the target scene according to user editing information, wherein the scene configuration information comprises scene execution modules which need to be triggered and functions which need to be executed in different driving states.
7. A scene trigger apparatus, comprising:
the first triggering module is used for triggering the vehicle-mounted monitoring system to acquire current sign data of a driver according to an execution request of a target scene;
the driving state determining module is used for determining the driving state of the driver according to the current sign data;
a second triggering module, configured to trigger a corresponding scene execution module to execute a function corresponding to the driving state according to the driving state, wherein the time at which each scene execution module is triggered is determined by timeline information in the scene configuration information of the target scene, the triggering including:
triggering a voice interaction module to broadcast a result of the driving state after the target scene has been started for a first duration;
after the target scene has been started for a second duration, triggering corresponding scene execution components to execute corresponding healing functions, including: triggering at least one of a fragrance module, an atmosphere lamp, a seat module and an air-conditioning module to execute a function corresponding to the driving state;
after the target scene has been started for a third duration, triggering the voice interaction module, including: triggering the voice interaction module to enter a chat function when the driving state is a second preset state; and triggering the voice interaction module to prompt the driver to switch the vehicle into an automatic driving function when the driving state is a third preset state;
wherein the first duration is less than the second duration, and the second duration is less than the third duration.
8. The apparatus of claim 7, wherein the driving state determination module comprises:
the face recognition unit is used for carrying out face recognition on the driver to obtain the identity information of the driver;
the acquisition unit is used for acquiring historical sign data of the driver according to the identity information;
and the driving state determining unit is used for determining the driving state according to the historical sign data and the current sign data.
9. The apparatus of claim 7, wherein the driving state determination module comprises:
the body state determining unit is used for determining the current body state of the driver according to the current sign data;
an emotion recognition unit for recognizing facial emotion of the driver;
and the driving state determining unit is used for determining the driving state according to the facial emotion and the current body state.
10. The apparatus according to claim 9, characterized in that the driving state determination unit is configured to:
determining the driving state to be a first preset state under the condition that the current body state and the facial emotion are both good; and/or
Determining that the driving state is the second preset state in a case where the current body state is good and the facial emotion is a negative result; and/or
Determining that the driving state is the third preset state if both the current body state and the facial emotion are negative results.
11. The apparatus of claim 10, wherein the second triggering module is configured to:
under the condition that the driving state is a first preset state, triggering a fragrance module and an atmosphere lamp to execute a function corresponding to the first preset state; and/or
Under the condition that the driving state is a second preset state, triggering the fragrance module, the atmosphere lamp and the seat module to execute a function corresponding to the second preset state; and/or
And under the condition that the driving state is a third preset state, triggering the fragrance module, the seat module and the air conditioning module to execute a function corresponding to the third preset state.
12. The apparatus of any one of claims 7 to 11, further comprising:
and the hot updating module is used for hot updating one or more items of contents in the scene configuration information of the target scene according to the user editing information, wherein the scene configuration information comprises scene execution modules which need to be triggered and functions which need to be executed under different driving states.
13. A scene trigger apparatus comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method of any one of claims 1 to 6.
14. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-6.
CN202010930419.2A 2020-09-07 2020-09-07 Scene triggering method, device, equipment and storage medium Active CN112061048B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010930419.2A CN112061048B (en) 2020-09-07 2020-09-07 Scene triggering method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010930419.2A CN112061048B (en) 2020-09-07 2020-09-07 Scene triggering method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112061048A CN112061048A (en) 2020-12-11
CN112061048B (en) 2022-05-20

Family

ID=73663943

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010930419.2A Active CN112061048B (en) 2020-09-07 2020-09-07 Scene triggering method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112061048B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112968946B (en) * 2021-02-01 2023-06-02 斑马网络技术有限公司 Scene recognition method and device for internet-connected vehicle and electronic equipment
CN113792059A (en) * 2021-09-10 2021-12-14 中国第一汽车股份有限公司 Scene library updating method, device, equipment and storage medium
CN113923245B (en) * 2021-10-16 2022-07-05 安徽江淮汽车集团股份有限公司 A self-defined scene control system for intelligent networking vehicle
CN114330778A (en) * 2021-12-31 2022-04-12 阿维塔科技(重庆)有限公司 Intelligent function management method and device for vehicle, vehicle and computer storage medium
CN114435383A (en) * 2022-01-28 2022-05-06 中国第一汽车股份有限公司 Control method, device, equipment and storage medium
CN114348012A (en) * 2022-01-30 2022-04-15 中国第一汽车股份有限公司 Driving mode control method and device and vehicle
CN114879780B (en) * 2022-06-20 2024-05-14 启明信息技术股份有限公司 In-vehicle temperature adjusting method and system based on vehicle control APP
CN117545133A (en) * 2023-11-02 2024-02-09 深圳市蓝鲸智联科技股份有限公司 Method for automatically changing color and brightness of atmosphere lamp according to external weather and driving mode

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108528371A (en) * 2018-03-07 2018-09-14 北汽福田汽车股份有限公司 Control method, system and the vehicle of vehicle
CN109131355A (en) * 2018-07-31 2019-01-04 上海博泰悦臻电子设备制造有限公司 Vehicle, vehicle device equipment and its vehicle-mounted scene interactive approach based on user's identification
CN110395260A (en) * 2018-04-20 2019-11-01 比亚迪股份有限公司 Vehicle, safe driving method and device
CN111166357A (en) * 2020-01-06 2020-05-19 四川宇然智荟科技有限公司 Fatigue monitoring device system with multi-sensor fusion and monitoring method thereof

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106004735B (en) * 2016-06-27 2019-03-15 京东方科技集团股份有限公司 The method of adjustment of onboard system and vehicle service

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108528371A (en) * 2018-03-07 2018-09-14 北汽福田汽车股份有限公司 Control method, system and the vehicle of vehicle
CN110395260A (en) * 2018-04-20 2019-11-01 比亚迪股份有限公司 Vehicle, safe driving method and device
CN109131355A (en) * 2018-07-31 2019-01-04 上海博泰悦臻电子设备制造有限公司 Vehicle, vehicle device equipment and its vehicle-mounted scene interactive approach based on user's identification
CN111166357A (en) * 2020-01-06 2020-05-19 四川宇然智荟科技有限公司 Fatigue monitoring device system with multi-sensor fusion and monitoring method thereof

Also Published As

Publication number Publication date
CN112061048A (en) 2020-12-11


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant