
CN110825503B - Theme switching method and device, storage medium and server

Info

Publication number
CN110825503B
CN110825503B (application CN201910969202.XA)
Authority
CN
China
Prior art keywords
emotion
mood
theme
user
determining
Prior art date
Legal status
Active
Application number
CN201910969202.XA
Other languages
Chinese (zh)
Other versions
CN110825503A (en)
Inventor
王建华
马琳
赵鑫
Current Assignee
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd
Priority to CN201910969202.XA
Publication of CN110825503A
Application granted
Publication of CN110825503B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72454User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Hospice & Palliative Care (AREA)
  • Computational Linguistics (AREA)
  • Psychiatry (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Child & Adolescent Psychology (AREA)
  • Environmental & Geological Engineering (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to the technical fields of data analysis, user profiling and user emotion profiling, and provides a theme switching method comprising the following steps: monitoring whether an application program is running in the foreground, and acquiring current face feature data and collecting voice information of the user in real time when it is; determining a first emotion of the user according to the face feature data, and determining a second emotion of the user according to the voice signal and the semantic information in the voice information; and determining a mood index of the user according to the first emotion and the second emotion, obtaining the theme content corresponding to the mood index according to a preset rule, and configuring the theme content as the current theme of the application program. The user's mood is judged from the user's expression, a theme that matches that mood more accurately is obtained, and the theme is updated automatically, so that the theme better fits the user's mood or reminds the user to keep a good mood, thereby adjusting the user's mood and improving the user experience.

Description

Theme switching method and device, storage medium and server
Technical Field
The invention relates to the technical fields of data analysis, user profiling and user emotion profiling, and in particular to a theme switching method and device, a storage medium and a server.
Background
With the development of computer technology, electronic devices are increasingly expected to be attractive. More and more users perform various functions with terminal devices to meet their own needs, such as reading text, watching videos, listening to music and playing games. To suit users' individual preferences, different types of themes can be downloaded from a theme application market and installed on suitable occasions, so that the electronic device displays the corresponding desktop wallpaper, lock-screen wallpaper, application icons and the like according to the theme. Various application programs and system programs contain a number of settable and replaceable themes, and different themes correspond to different display interfaces. Conventional theme setting is mainly aimed at the theme of the terminal device: the user applies a theme set manually according to personal preference or the default setting of the application program background. This conventional approach leaves the application theme style fixed and monotonous, makes setting inefficient, and keeps the setting procedure either overly simple or cumbersome; if the user does not manually change the default theme of the application program background, the application theme is not updated for a long time.
Disclosure of Invention
In order to solve the above technical problems, in particular that the current application theme style is fixed and monotonous, that setting is inefficient, and that the theme cannot adjust the user's mood, the following technical scheme is provided:
the theme switching method provided by the embodiment of the invention comprises the following steps:
monitoring whether an application program is running in the foreground, and acquiring current face feature data and collecting voice information of a user in real time when the application program is running in the foreground;
determining a first emotion of the user according to the face feature data, and determining a second emotion of the user according to the voice signal and the semantic information in the voice information;
determining a mood index of the user according to the first emotion and the second emotion;
and obtaining the theme content corresponding to the mood index according to a preset rule, and configuring the theme content as the current theme of the application program.
Optionally, the determining the second emotion of the user according to the voice signal and the semantic information in the voice information includes:
matching the voice signal with a preset voice signal, and determining a first sub-emotion of the voice signal;
converting the semantic information into text information, performing word segmentation on the text information, and determining a second sub-emotion according to the segmented text information;
and determining the second emotion based on the first sub-emotion, the second sub-emotion, a first weight preset for the first sub-emotion and a second weight preset for the second sub-emotion.
Optionally, the determining the mood index of the user according to the first emotion and the second emotion includes:
determining, according to a machine learning model, the positions of the first emotion and the second emotion in the Plutchik emotion model and the emotion weight value corresponding to each emotion in the Plutchik emotion model, and determining the mood index of the user according to the emotion values and the weight values.
Optionally, the determining, according to the machine learning model, the positions of the first emotion and the second emotion in the Plutchik emotion model and the emotion weight value corresponding to each emotion in the Plutchik emotion model, and determining the mood index of the user according to the emotion values and the weight values includes:
inputting the Plutchik emotion model into the machine learning model to obtain an emotion fusion model;
inputting the first emotion into the emotion fusion model to obtain a first emotion value and a first weight value of the first emotion in the Plutchik emotion model, and inputting the second emotion into the emotion fusion model to obtain a second emotion value and a second weight value of the second emotion in the Plutchik emotion model;
and summing the product of the first emotion value and the first weight value and the product of the second emotion value and the second weight value to obtain the mood index of the user.
Optionally, after determining the mood index of the user according to the first emotion and the second emotion, the method includes:
acquiring a plurality of mood indexes within a preset time period of the user, and calculating an average value of the plurality of mood indexes;
taking the average value as the mood index of the user.
Optionally, the obtaining the theme content corresponding to the mood index according to the preset rule, and configuring the theme content as the current theme of the application program includes:
when the mood index of the user is within a first preset threshold range, configuring a warm-color theme as a current theme of the application program;
and configuring the encouraging theme as the current theme of the application program when the mood index of the user is within a second preset threshold range.
Optionally, the theme switching method further includes:
determining a first mood index of the user according to the first emotion, and determining a second mood index of the user according to the second emotion;
acquiring a first theme content corresponding to the first mood index according to the preset rule, acquiring a second theme content corresponding to the second mood index according to the preset rule, comparing the first theme content with the second theme content, and determining whether the first theme content is consistent with the second theme content;
when the first theme content is inconsistent with the second theme content, acquiring the historical theme content that the application program was configured with when the historical first theme content was inconsistent with the historical second theme content, and configuring that historical theme content as the current theme content of the application program, wherein the first theme content is consistent with the historical first theme content and the second theme content is consistent with the historical second theme content.
The embodiment of the invention provides a theme switching device, which comprises:
the acquisition module is used for monitoring whether the application program is running in the foreground, and for acquiring current face feature data and collecting voice information of a user in real time when it is;
the determining module is used for determining a first emotion of the user according to the face feature data and determining a second emotion of the user according to the voice signal and the semantic information in the voice information;
the first mood index determining module is used for determining the mood index of the user according to the first emotion and the second emotion;
the first configuration module is used for obtaining the theme content corresponding to the mood index according to a preset rule and configuring the theme content as the current theme of the application program.
Optionally, the determining module includes:
the matching unit is used for matching the voice signal with a preset voice signal and determining a first sub-emotion of the voice signal;
the word segmentation unit is used for converting the semantic information into text information, performing word segmentation on the text information and determining a second sub-emotion according to the segmented text information;
the second emotion determining unit is used for determining the second emotion based on the first sub-emotion, the second sub-emotion, a first weight preset for the first sub-emotion and a second weight preset for the second sub-emotion.
Optionally, the first mood index determining module includes:
the mood index determining unit is used for determining, according to the machine learning model, the positions of the first emotion and the second emotion in the Plutchik emotion model and the emotion weight value corresponding to each emotion in the Plutchik emotion model, and for determining the mood index of the user according to the emotion values and the weight values.
The embodiment of the invention also provides a computer readable storage medium storing a computer program; when the program is executed by a processor, the theme switching method of any of the above technical schemes is realized.
The embodiment of the invention also provides a server, which comprises:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications being configured to perform the steps of the theme switching method according to any one of the above technical schemes.
Compared with the prior art, the invention has the following beneficial effects:
1. The theme switching method provided by the embodiment of the present application comprises: monitoring whether an application program is running in the foreground, and acquiring current face feature data and collecting voice information of a user in real time when it is; determining a first emotion of the user according to the face feature data, and determining a second emotion of the user according to the voice signal and the semantic information in the voice information; and determining a mood index of the user according to the first emotion and the second emotion, obtaining the theme content corresponding to the mood index according to a preset rule, and configuring the theme content as the current theme of the application program. The user's mood is judged from the user's expression, and a more accurate theme is obtained based on that mood: the user can be reminded to keep calm when the mood is bad, and optimism can be reinforced when the mood is good. On the basis of automatic theme updating, the theme therefore fits the user's mood more closely, the user is helped to keep a good mood, the user's mood is adjusted, and the user experience is improved.
2. The theme switching method provided in the embodiment of the present application determines the mood index of the user according to the first emotion and the second emotion by: determining, according to a machine learning model, the positions of the first emotion and the second emotion in the Plutchik emotion model and the emotion weight value corresponding to each emotion in the Plutchik emotion model, and determining the mood index of the user according to the emotion values and the weight values. On the basis of the face and voice collected by the device terminal and the user emotions determined by analyzing that face and voice information, the position of each emotion in the Plutchik emotion model is determined with a machine learning algorithm, the current mood index of the user is determined, and the application program updates the theme corresponding to that mood index. In this way the variety of application theme styles is increased and the user can be reminded at any time to keep a good mood.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic flow chart of an implementation of the theme switching method in an exemplary embodiment of the present invention;
FIG. 2 is a diagram of the Plutchik emotion model in an exemplary embodiment of the theme switching method;
FIG. 3 is a schematic diagram of determining the user's emotion from the Plutchik emotion model diagram in an exemplary embodiment of the theme switching method of the present invention;
FIG. 4 is a schematic structural diagram of an exemplary embodiment of the theme switching device;
FIG. 5 is a schematic structural diagram of an embodiment of a server according to the present invention.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the invention.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of stated features, integers, steps and operations, but do not preclude the presence or addition of one or more other features, integers, steps or operations.
It will be understood by those skilled in the art that all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs unless defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
It will be appreciated by those skilled in the art that references to "application," "application program," "application software," and similar concepts herein are intended to be equivalent concepts well known to those skilled in the art, and refer to computer software, organically constructed from a series of computer instructions and related data resources, suitable for electronic execution. Unless specifically specified, such naming is not limited by the type, level of programming language, nor by the operating system or platform on which it operates. Of course, such concepts are not limited by any form of terminal.
In one implementation manner, as shown in fig. 1, the theme switching method provided in the embodiment of the present application includes: s100, S200, S300, S400.
S100: monitoring whether an application program is running in the foreground, and acquiring current face feature data and collecting voice information of a user in real time when the application program is running in the foreground;
S200: determining a first emotion of the user according to the face feature data, and determining a second emotion of the user according to the voice signal and the semantic information in the voice information;
S300: determining a mood index of the user according to the first emotion and the second emotion;
S400: obtaining the theme content corresponding to the mood index according to a preset rule, and configuring the theme content as the current theme of the application program.
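Read as pseudocode, the four steps chain together as in the following minimal Python sketch. Every class, function and parameter name here is an illustrative assumption; the patent specifies no implementation.

    from dataclasses import dataclass

    @dataclass
    class EmotionInputs:
        face_features: object  # current face feature data (S100)
        voice_signal: object   # acoustic part of the collected voice information (S100)
        semantics: str         # semantic part of the collected voice information (S100)

    def switch_theme(app, inputs, recognize_face, recognize_voice, fuse, lookup_theme):
        # recognize_face / recognize_voice / fuse / lookup_theme are injected
        # stand-ins for the recognition, fusion and preset-rule lookup steps.
        if not app.is_foreground():        # S100: only act while in the foreground
            return
        first_emotion = recognize_face(inputs.face_features)                      # S200
        second_emotion = recognize_voice(inputs.voice_signal, inputs.semantics)   # S200
        mood_index = fuse(first_emotion, second_emotion)                          # S300
        app.set_theme(lookup_theme(mood_index))                                   # S400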
Under conventional circumstances, because the user is interacting with the mobile terminal, the terminal monitors in real time whether the application program is running in the foreground. When it is, the user is interacting with the application program on the mobile terminal; during this process the mobile terminal can start the camera device to capture the face and start the voice acquisition device to collect the user's voice information in real time, and emotion recognition can then be performed on the captured face, so that the theme of the application program corresponds more accurately to the user's emotion, and the currently captured face image is acquired. In addition, the face image of the user at unlocking can be obtained, so that after the user unlocks the device and opens the application program, the application program presents the theme corresponding to the user's mood. After the face image is acquired, the face feature data of the user is extracted in order to determine the user's emotion. The extraction of the face feature data can be realized as follows: a series of feature points is predefined and marked in different areas of the face, concentrated mainly on the eyebrows, eyes, mouth and chin; Gabor wavelet coefficients of the feature points are extracted through image convolution, and the matching distance of the Gabor features is taken as the similarity measure; after feature extraction, a facial expression recognition result is obtained through a multi-layer neural network. Alternatively, a face recognition method based on a convolutional neural network is adopted. A convolutional neural network is a feedforward neural network whose artificial neurons respond to surrounding units within a partial coverage area; it consists of one or more convolutional layers and a fully connected top layer (corresponding to a classical neural network), together with associated weights and pooling layers. The main steps are: locating the position of the face in the image and preprocessing the image (for example, removing irrelevant noise); and applying a network model comprising 5 convolutional layers and 3 stochastic pooling layers to the preprocessed image to obtain the recognition result. Because the user's emotion determined from the face image alone cannot accurately locate the corresponding theme, and in some cases the user may mask the emotion by concealing the emotional features of the face, the current emotion of the user cannot always be accurately determined.
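As a concrete illustration of the convolutional variant, the following is a minimal PyTorch sketch assuming a 48x48 grayscale face crop and seven expression classes; max pooling stands in for the stochastic ("random") pooling mentioned above, and every layer size is an assumption rather than a value from the patent.

    import torch
    import torch.nn as nn

    class ExpressionNet(nn.Module):
        """Five convolutional layers, three pooling layers, fully connected top."""
        def __init__(self, num_classes: int = 7):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),    # conv 1
                nn.MaxPool2d(2),                              # pooling 1: 48 -> 24
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),   # conv 2
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),   # conv 3
                nn.MaxPool2d(2),                              # pooling 2: 24 -> 12
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),  # conv 4
                nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(), # conv 5
                nn.MaxPool2d(2),                              # pooling 3: 12 -> 6
            )
            self.classifier = nn.Linear(128 * 6 * 6, num_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: a preprocessed (denoised, cropped) face image, shape (N, 1, 48, 48)
            x = self.features(x)
            return self.classifier(torch.flatten(x, 1))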
The voice signal is mainly matched against preset voice signals in a database (described in detail later), and the semantics are the meanings expressed by the user's words and sentences. The user's emotional features are identified from the voice information, and the emotion and mood of the user are determined jointly from the voice information and the information corresponding to the face features, which yields a more accurate mood state. Related application themes can then be pushed to the user based on that mood state, so that opening the application program helps adjust the user's mood, releasing bad emotions that affect it or maintaining good ones. Face data obtained when the mobile terminal performs face unlocking, or when the camera device captures the user's face at a suitable angle, often directly expresses the user's current emotion, but some users may hide their emotions. As noted above, an emotion derived from a single feature cannot fully or accurately represent the user's current emotion, so after determining the user's facial emotion and the emotion contained in the user's voice, the first emotion and the second emotion are fused to jointly determine the mood index of the user so that it is represented accurately. In this application, the Plutchik emotion model (see fig. 2) is used as the reference, and the first emotion and the second emotion are fused through a machine learning model to determine the user's mood index together; the specific determination process is described later and not repeated here. It should be noted that acquiring the face feature data and the voice information in this application has no fixed order: the two can be acquired and collected simultaneously, or the face feature data can be obtained first and the real-time voice information collected afterwards, or the reverse.
Optionally, the theme switching method further includes:
determining a first mood index of the user according to the first emotion, and determining a second mood index of the user according to the second emotion;
acquiring a first theme content corresponding to the first mood index according to the preset rule, acquiring a second theme content corresponding to the second mood index according to the preset rule, comparing the first theme content with the second theme content, and determining whether the first theme content is consistent with the second theme content;
when the first theme content is inconsistent with the second theme content, acquiring the historical theme content that the application program was configured with when the historical first theme content was inconsistent with the historical second theme content, and configuring that historical theme content as the current theme content of the application program, wherein the first theme content is consistent with the historical first theme content and the second theme content is consistent with the historical second theme content.
It should be noted that in the foregoing example, because the voice is collected over a period of time while the face features persist for a period of time, the mood indexes determined from the current face features and from the voice information collected in real time may be inconsistent, so the theme content cannot be well matched to the user's mood index. To achieve a better match between the application's theme content and the user's mood index, the historical theme content configured under the same condition is configured as the current theme of the application program, where the first theme content is consistent with the historical first theme content and the second theme content is consistent with the historical second theme content; that is, if the first theme content is warm-color, the historical first theme content is warm-color, and if the second theme content is encouraging, the historical second theme content is encouraging, and when the historical theme was set to the encouraging theme, the currently set theme of the application program is replaced with the encouraging theme. "History" here refers to a point in time before the first theme content and the second theme content were generated, and hence before the current face features and the real-time voice information were acquired. For example, the terminal monitors that the application program is running in the foreground at 10:00 and acquires the user's face features while the user uses the application program, but cannot collect the user's voice information at the same moment; the first emotion obtained from the face features is happiness, the mood index obtained from the first emotion is good, and the first theme content obtained from that mood index is the warm-color theme. At 10:01 the user's voice information is collected, the second emotion obtained from it is sadness, the mood index obtained from the second emotion is low, and the second theme content obtained from that mood index is the encouraging theme. Since the theme contents obtained from the face features and from the voice information are inconsistent, reference is made to the historical theme content configured under the same condition before 10:00, i.e. when the historical first theme content was the warm-color theme and the historical second theme content was the encouraging theme. The theme content the application program configures is then: if before 10:00 the application program was configured with the warm-color theme under the same condition, the warm-color theme is configured as the current theme content of the application program; if before 10:00 it was configured with the encouraging theme under the same condition, the encouraging theme is configured as the current theme content of the application program.
In one embodiment, when the first theme content and the second theme content are inconsistent, the number of times the first theme content has served as the historical theme content of the application program and the number of times the second theme content has served as the historical theme content are counted, and the historical theme with the larger count is configured as the current theme content of the application program. In this way the configured theme content fits the user's mood more closely; in particular, when the historical theme is determined for the user under this condition, the theme content of the application program and the user's mood match each other better, so the user's needs can be met.
Further, the application program can determine the theme through the face and the voice, or can determine a mood index from text input by the user and obtain a theme from it. That mood index is determined in the manner described elsewhere in this application and is not detailed here. When the two mood indexes, or the resulting theme contents, are inconsistent, one of them may be used as the reference. For example, if sadness is obtained from the voice but the text shows the user is very happy, the historical theme is consulted as described above and the theme consistent with the historical theme is applied. In addition, the theme of the application program in this application can be determined with reference to a plurality of mood indexes within a preset time period.
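A minimal sketch of this historical fallback, assuming the device logs past (first theme, second theme, configured theme) triples; the log structure and all names are illustrative, not from the patent.

    from collections import Counter

    def resolve_theme(first_theme: str, second_theme: str, history: list) -> str:
        """history: list of (hist_first, hist_second, hist_configured) tuples."""
        if first_theme == second_theme:
            return first_theme
        # The face- and voice-derived themes disagree: reuse what was configured
        # the last times the same disagreement occurred.
        matches = [cfg for h_first, h_second, cfg in history
                   if h_first == first_theme and h_second == second_theme]
        if matches:
            # Prefer the historically more frequent configuration.
            return Counter(matches).most_common(1)[0][0]
        return first_theme  # assumption: default to the face-derived theme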
Optionally, the determining the second emotion of the user according to the voice signal and the semantic information in the voice information includes:
matching the voice signal with a preset voice signal, and determining a first sub-emotion of the voice signal;
converting the semantic information into text information, performing word segmentation on the text information, and determining a second sub-emotion according to the segmented text information;
and determining the second emotion based on the first sub-emotion, the second sub-emotion, a first weight preset for the first sub-emotion and a second weight preset for the second sub-emotion.
As noted above, the voice signal needs to be matched against preset voice signals to determine part of the emotion contained in the user's voice. Correspondingly, in the embodiment provided by this application, the voice signal is mainly matched with the preset voice signals in the database to determine the first sub-emotion corresponding to the voice signal. For example, if the user roars loudly, a loud and rapid voice signal is obtained; after processing with speech emotion recognition technology, the decibel level and speech rate in the user's voice are identified and compared with the emotion features corresponding to preset decibel levels and speech rates in the database, and the first sub-emotion corresponding to the user's voice signal is determined. To determine the user's emotion more accurately, word recognition and word segmentation are performed on the semantic information in the user's voice signal; segmentation makes the user's current emotion readable from the words, and combining the words with the user's voice signal (the tone of voice while speaking) characterizes the user's current emotional features accurately. In some cases, users hide their emotions in the voice signal itself: for example, an angry user may speak in a calm tone while the semantics express anger. In other cases, the user's emotional features can be identified directly from the tone of speech, and combining them with the semantic information makes them more evident. The semantic signal can therefore often directly capture part of the user's emotional features, while the emotional features represented by the voice signal assist the expression of the second emotion, and presetting weights for the voice signal and the semantic signal within the second emotion characterizes the user's current emotion more accurately. The weights of the two emotional features can be determined with the Plutchik emotion model; the detailed determination process is described later and not repeated here. In addition, weight values corresponding to different sub-emotions can be preset in the database: a machine model is trained on big data to obtain the sub-emotion weights under different voice signals and their associated semantic information, i.e. the first weight of the first sub-emotion determined by the voice signal and the second weight of the second sub-emotion determined by the semantic signal. The first sub-emotion and the first weight are prestored in the database as a mapping, as are the second sub-emotion and the second weight. The second emotion is then determined from the two sub-emotions.
Accordingly, if the difference between the weights is greater than a preset threshold, the sub-emotion with the greater weight value is taken as the second emotion. For example, with a preset threshold of 60%: when the weight of the first sub-emotion is significantly greater than that of the second sub-emotion (for example, above 60%), the first sub-emotion is taken as the second emotion; and when the weight of the second sub-emotion is the greater one (for example, above 70%), the second sub-emotion is taken as the second emotion. Further, when the two weights are close to each other, i.e. 60% or less, the first sub-emotion and the second sub-emotion are the same or similar, and either sub-emotion may be taken as the second emotion.
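A minimal sketch of this selection rule, with the threshold taken from the examples above and everything else assumed:

    def determine_second_emotion(first_sub: str, w1: float,
                                 second_sub: str, w2: float,
                                 threshold: float = 0.6) -> str:
        if w1 > threshold:   # the voice-signal sub-emotion clearly dominates
            return first_sub
        if w2 > threshold:   # the semantic sub-emotion clearly dominates
            return second_sub
        # Weights are close, so the two sub-emotions are the same or similar;
        # either one may serve as the second emotion.
        return first_sub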
Optionally, the determining the mood index of the user according to the first emotion and the second emotion includes:
determining, according to a machine learning model, the positions of the first emotion and the second emotion in the Plutchik emotion model and the emotion weight value corresponding to each emotion in the Plutchik emotion model, and determining the mood index of the user according to the emotion values and the weight values.
Optionally, the determining, according to the machine learning model, the positions of the first emotion and the second emotion in the Plutchik emotion model and the emotion weight value corresponding to each emotion in the Plutchik emotion model, and determining the mood index of the user according to the emotion values and the weight values includes:
inputting the Plutchik emotion model into the machine learning model to obtain an emotion fusion model;
inputting the first emotion into the emotion fusion model to obtain a first emotion value and a first weight value of the first emotion in the Plutchik emotion model, and inputting the second emotion into the emotion fusion model to obtain a second emotion value and a second weight value of the second emotion in the Plutchik emotion model;
and summing the product of the first emotion value and the first weight value and the product of the second emotion value and the second weight value to obtain the mood index of the user.
Plutchik considers emotion to be multidimensional, with three dimensions: intensity, similarity and bipolarity. That is: (1) every emotion can appear at different intensities, such as different degrees of pleasure or sadness; (2) different emotions can feel similar, such as happiness and expectation, or aversion and fear; (3) bipolarity refers to feeling two completely opposite emotions, such as sadness and happiness. Plutchik describes the relationship among the three dimensions graphically with a petal-shaped solid (similar to an eight-color ring), each petal representing one basic class of emotion, namely happiness, acceptance, surprise, fear, sadness, hatred, anger and vigilance. The strongest emotions are located at the top of the solid, and intensity weakens toward the bottom; emotions at diagonally opposite positions show bipolarity; adjacent emotions are similar. The machine learning model can adjust itself automatically to improve the operation or behavior of the algorithm, and the mood index obtained for the user on the basis of the Plutchik emotion model is more accurate. The positions of the first emotion and the second emotion in the Plutchik emotion model (shown in fig. 2) and the emotion weight value corresponding to each emotion are determined according to the machine learning model, and the mood index of the user is determined from the emotion values and weight values. Correspondingly, the Plutchik emotion model is input into the machine learning model to obtain an emotion fusion model, so that the position of the user's emotion in the Plutchik emotion model can be determined through the machine learning model; because a calculated emotion value does not necessarily lie at the exact center of one emotion of the Plutchik model, the mood index of the user is determined from the position of the emotion in the model and the positional relation between the first emotion and the second emotion. In addition, the emotion fusion model uses first emotions and second emotions from big data as sample data, so the model can be trained on these samples, improving its accuracy and reducing errors; each emotion of the user can then be computed with the trained model, and the mood index determined more accurately.
By way of example, the first emotion and the second emotion are respectively input into the emotion fusion model; when the first emotion is anger positioned close to rage, and the second emotion is fear positioned in the middle of the fear region, it can be determined that the user's anger value is high. To be able to determine the mood index of a user, in this application the emotion value of a strongly expressed positive emotion is set to 1 (e.g. vigilance, ecstasy, admiration), a moderately expressed positive emotion to 0.5 (e.g. anticipation, joy, trust), and a weakly expressed positive emotion to 0.1 (e.g. interest, serenity, acceptance); the emotion value of a strongly expressed negative emotion is set to -1 (e.g. rage, loathing, grief, amazement, terror), a moderately expressed negative emotion to -0.5 (e.g. anger, disgust, sadness, surprise, fear), and a weakly expressed negative emotion to -0.1 (e.g. annoyance, boredom, pensiveness, distraction, apprehension).
The specific calculation of the mood index is explained below with reference to the foregoing example and fig. 3. A circle is drawn with the center of the petal wheel as its center, the radius being the distance from the center to the outer edge of a petal, such as the black line segment in fig. 3 (where the white line segment of the anger region covers part of its length). When the first emotion is anger positioned close to rage, the emotion weight is determined from how close the anger emotion lies to the center along the radius of its petal. For example, when the anger emotion lies at 70% of the radius segment (the white line segment) of the anger region, measured from the outer boundary of the region toward the center, it follows from the intensity interpretation of the Plutchik emotion model that the anger emotion value is relatively low, and the anger emotion weight in the mood index is set to 70%. Similarly, when the second emotion is fear located in the middle of the fear region, i.e. at 50% of the radius segment of the fear region (the black line segment in the fear region of the petal opposite the one with the drawn radius), the fear emotion weight in the mood index is 50%, the anger emotion weight is 70%, and the mood index of the user is then determined from these weights and the corresponding mood index calculation rule. From this example, the mood index of the user is: 0.7 x (-0.5) + 0.5 x (-0.5) = -0.6, indicating a low mood index. When an emotion point (such as anger) does not lie on the drawn radius, an arc parallel to the two boundaries of the anger region that intersect the radius is drawn through the point, so that the arc on which the anger emotion lies intersects the radius; the intersection point is taken as the position of the anger emotion within its region of the petal.
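The worked example translates directly into code. The -0.5 emotion values follow the bands defined above (anger and fear are moderately expressed negative emotions), and 0.7 and 0.5 are the radial-position weights from the example:

    def mood_index(emotions):
        """emotions: list of (emotion_value, weight) pairs from the Plutchik model."""
        return sum(value * weight for value, weight in emotions)

    # first emotion: anger (-0.5) at 70% of the petal radius;
    # second emotion: fear (-0.5) at 50% of the petal radius.
    print(mood_index([(-0.5, 0.7), (-0.5, 0.5)]))  # -0.6, a low mood index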
Further, when determining the positions of the first emotion and the second emotion within their corresponding emotions in the Plutchik emotion model, the face features corresponding to the first emotion are input into the machine learning model, and the voice signal and semantic information corresponding to the second emotion are input into the machine learning model. Based on training the machine model with big data, the specific positions of the first emotion and the second emotion within the corresponding Plutchik emotions are determined, and the weights of the first emotion and the second emotion in the user's mood index are then determined from those positions. Correspondingly, the machine learning model library contains, for each emotion, one or more of face features, voice signals and semantic information, the emotion being an emotion of the Plutchik emotion model, and the same emotion containing several face features and/or several voice signals and/or several pieces of voice information. Different face features within the same emotion represent different specific positions in the Plutchik emotion model, and likewise for different voice signals and different pieces of voice information within the same emotion. In determining, through the machine learning model, the specific position of the first emotion within its corresponding Plutchik emotion, the user's face features are matched with the face features in the model library; when the match succeeds, the specific position of the first emotion within its corresponding emotion is determined, and the weight of the first emotion is then determined by the method above. By analogy, the specific positions of the voice signal and the voice information within their corresponding emotions can be determined, and the weight of the second emotion determined. Since the voice signal and the voice information jointly determine the second emotion, the emotion values corresponding to the voice signal and the voice information in the Plutchik emotion model are multiplied by their corresponding weights, and the products are summed to obtain the user's mood index for the second emotion. To let the themes play a better role in adjusting the user's mood, in this method the mood index of the user is determined on the basis of these emotion proportions, and when the mood is bad, themes that can better relieve it are pushed to the user.
Further, the database stores the theme, or the theme type, corresponding to each user mood index. The theme corresponding to the user's mood index is extracted directly from the database according to that index, or a theme of the same theme type is downloaded over the network, and the new theme obtained is configured as the theme of the user's application program. The relationship between user mood indexes and themes stored in the database is shown in Table 1:
TABLE 1 Storage relationship between user mood indexes and themes

Theme      Mood index
Theme 1    Mood index A
Theme 2    Mood index B
Theme 3    Mood index C
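A minimal sketch of the Table 1 lookup; the patent only states that the database maps mood indexes (A, B, C, ...) to themes, so the numeric ranges below are assumptions:

    THEME_TABLE = [
        (0.3, 1.0, "theme_1_warm"),           # mood index A: good mood
        (-0.3, 0.3, "theme_2_neutral"),       # mood index B: neutral mood
        (-1.0, -0.3, "theme_3_encouraging"),  # mood index C: low mood
    ]

    def theme_for(mood_index: float) -> str:
        for low, high, theme in THEME_TABLE:
            if low <= mood_index <= high:
                return theme
        # Fallback: e.g. download a theme of the matching type over the network.
        return "theme_default"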
Optionally, after determining the mood index of the user according to the first emotion and the second emotion, the method includes:
acquiring a plurality of mood indexes within a preset time period of the user, and calculating an average value of the plurality of mood indexes;
taking the average value as the mood index of the user.
As noted above, when the current mood index of the user cannot be determined, a plurality of mood indexes within a preset period, such as the week preceding the current day, are obtained. For example, a plurality of mood indexes of the user during the week before Monday are obtained, and the state of the user's mood over that week is determined from them, which makes it convenient to determine the influence of the previous week's moods on the current state; the theme of the application program is thus determined based on the user's mood indexes over the previous week. Further, after the mood indexes within the user's preset period are determined, the average value of those mood indexes is computed, and the theme of the user's application program can then be determined by the method described above.
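A minimal sketch of this averaging fallback, assuming a log of timestamped mood indexes and a one-week window (the window length is an assumption):

    from datetime import datetime, timedelta

    def average_mood_index(records, now=None, days=7):
        """records: list of (timestamp, mood_index) pairs."""
        now = now or datetime.now()
        recent = [idx for ts, idx in records if now - ts <= timedelta(days=days)]
        if not recent:
            return None  # no data in the window; leave the theme unchanged
        return sum(recent) / len(recent)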
Optionally, the obtaining the theme content corresponding to the mood index according to the preset rule, and configuring the theme content as the current theme of the application program includes:
when the mood index of the user is within a first preset threshold range, configuring a warm-color theme as a current theme of the application program;
and configuring the encouraging theme as the current theme of the application program when the mood index of the user is within a second preset threshold range.
When the user's mood index is high (i.e. the user is in a good mood), the warm-color theme, or an upbeat and positive theme, is configured as the current theme of the application program. When the user's mood index is low (i.e. the user is in a bad mood), an encouraging theme is configured for the user. As explained above, the user's mood is judged from the user's expression, and a more accurate theme is obtained based on that mood: the user can be reminded to stay calm when the mood is bad, and optimism can be reinforced when the mood is good. On the basis of automatic theme updating, the theme therefore fits the user's mood more closely, the user is helped to keep a good mood, the user's mood is adjusted, and the user experience is improved.
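A minimal sketch of the two threshold ranges; the numeric bounds and the set_current_theme interface are assumptions, since the patent only names a first and a second preset threshold range:

    def configure_theme(app, mood_index: float,
                        good_range=(0.3, 1.0), low_range=(-1.0, -0.3)):
        if good_range[0] <= mood_index <= good_range[1]:
            app.set_current_theme("warm_color")   # good mood: warm-color theme
        elif low_range[0] <= mood_index <= low_range[1]:
            app.set_current_theme("encouraging")  # low mood: encouraging theme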
The embodiment of the invention also provides a theme switching device. In one implementation, as shown in fig. 4, the theme switching device includes an acquisition module 100, a determining module 200, a first mood index determining module 300 and a first configuration module 400:
an acquisition module 100, configured to monitor whether an application program is running in the foreground, and to acquire current face feature data and collect voice information of a user in real time when it is;
a determining module 200, configured to determine a first emotion of the user according to the face feature data, and to determine a second emotion of the user according to the voice signal and the semantic information in the voice information;
a first mood index determining module 300, configured to determine a mood index of the user according to the first emotion and the second emotion;
a first configuration module 400, configured to obtain the theme content corresponding to the mood index according to a preset rule and configure the theme content as the current theme of the application program.
Further, as shown in fig. 4, the theme switching device provided in the embodiment of the present invention further includes: a matching unit 210, configured to match the voice signal with a preset voice signal and determine a first sub-emotion of the voice signal; a word segmentation unit 220, configured to convert the semantic information into text information, perform word segmentation on the text information, and determine a second sub-emotion according to the segmented text information; and a second emotion determining unit 230, configured to determine the second emotion based on the first sub-emotion, the second sub-emotion, a first weight preset for the first sub-emotion and a second weight preset for the second sub-emotion. A first mood index determining unit 310 is configured to determine, according to a machine learning model, the positions of the first emotion and the second emotion in the Plutchik emotion model and the emotion weight value corresponding to each emotion in the Plutchik emotion model, and to determine the mood index of the user according to the emotion values and the weight values. A first input unit 311 is configured to input the Plutchik emotion model into the machine learning model to obtain an emotion fusion model; a second input unit 312 is configured to input the first emotion into the emotion fusion model to obtain a first emotion value and a first weight value of the first emotion in the Plutchik emotion model, and to input the second emotion into the emotion fusion model to obtain a second emotion value and a second weight value of the second emotion in the Plutchik emotion model; and a mood index obtaining unit 313 is configured to sum the product of the first emotion value and the first weight value and the product of the second emotion value and the second weight value to obtain the mood index of the user. A mood index obtaining module 510 is configured to obtain the mood indexes within a preset time period of the user and calculate their average value; a second mood index determining module 520 is configured to take the average value as the mood index of the user. A first configuration unit 410 is configured to configure the warm-color theme as the current theme of the application program when the mood index of the user is within a first preset threshold range; and a second configuration unit 420 is configured to configure the encouraging theme as the current theme of the application program when the mood index of the user is within a second preset threshold range.
A third mood index determining module 610 is configured to determine a first mood index of the user according to the first emotion and a second mood index of the user according to the second emotion. A comparison module 620 is configured to obtain a first theme content corresponding to the first mood index according to the preset rule, obtain a second theme content corresponding to the second mood index according to the preset rule, compare the first theme content with the second theme content, and determine whether they are consistent. A second configuration module 630 is configured to obtain, when the first theme content and the second theme content are inconsistent, the historical theme content that the application program was configured with when the historical first theme content and the historical second theme content were inconsistent, and to configure that historical theme content as the current theme content of the application program, wherein the first theme content is consistent with the historical first theme content and the second theme content is consistent with the historical second theme content.
The theme switching apparatus provided by the embodiment of the present invention can implement the above embodiments of the theme switching method; for the specific functional implementation, reference is made to the description in the method embodiments, which is not repeated here.
The embodiment of the invention provides a computer readable storage medium on which a computer program is stored; when the program is executed by a processor, the theme switching method of any one of the above technical solutions is implemented. The computer readable storage medium includes, but is not limited to, any type of disk (including floppy disks, hard disks, optical disks, CD-ROMs, and magneto-optical disks), ROM (Read-Only Memory), RAM (Random Access Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory, magnetic cards, and optical cards. That is, the storage medium includes any medium that stores or transmits information in a form readable by a device (e.g., a computer or a mobile phone), and may be a read-only memory, a magnetic disk, an optical disk, or the like.
Implementing the embodiments of the theme switching method, the mood of the user is judged from the user's expression and a more suitable theme is obtained based on that mood; on the basis of automatically updating the theme, the theme better matches the user's mood, the user is reminded to keep a good mood, the user's mood is adjusted, and the user experience is thereby improved. The theme switching method provided by the embodiment of the application comprises: monitoring whether an application program is running in the foreground, and acquiring current face feature data and voice information of the user in real time when the application program is running in the foreground; determining a first emotion of the user according to the face feature data, and determining a second emotion of the user according to the voice signal and the semantic information in the voice information; and determining the mood index of the user according to the first emotion and the second emotion, acquiring the theme content corresponding to the mood index according to a preset rule, and configuring that theme content as the current theme of the application program.

In general, during the user's interaction with the mobile terminal the face can be captured and the emotion can be recognized from the captured face, so that the theme of the application program corresponds more accurately to the user's emotion; for this reason the face image is acquired in real time. Alternatively, the face image captured while the user unlocks the device can be used, so that the theme corresponding to the user's mood is already presented once the user unlocks the device and opens the application program. After the face image is acquired, the face feature data of the user is extracted so that the user's emotion can be determined. However, an emotion determined from the face image alone may not allow the theme corresponding to the user's emotion to be found accurately: in some cases the user may deliberately hide the emotion, so that the emotional features of the face are masked and the current emotion cannot be determined reliably. The voice signal is therefore matched against preset voice signals in a database (described in detail later), and the semantics are the meanings expressed by the user's words and sentences; the user's emotional features are also recognized from this voice information. The voice information and the information corresponding to the face features then jointly determine the user's emotion and mood, yielding a more accurate mood state, on the basis of which a matching application theme can be pushed to the user. Opening the application program thus itself helps adjust the user's mood, allowing the user to release bad emotions that affect the mood or to maintain good ones.
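Purely as an illustrative outline of the steps just recited (every function and value below is an invented placeholder rather than the claimed implementation), the top-level flow could look like:

    # Self-contained, invented outline of the method's top-level flow.
    def emotion_from_face():
        return "joy"    # stand-in for face feature recognition

    def emotion_from_voice():
        return "trust"  # stand-in for voice-signal matching plus word segmentation

    def theme_for(index):
        return "warm_color_theme" if index < 0.4 else "encouraging_theme"

    def update_theme(app_in_foreground: bool):
        """One monitoring pass: act only while the application runs in the
        foreground, then derive both emotions and configure a theme."""
        if not app_in_foreground:
            return None
        first, second = emotion_from_face(), emotion_from_voice()
        index = 0.35    # placeholder for the Plutchik-based fusion sketched below
        return theme_for(index)

    print(update_theme(True))   # -> warm_color_theme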
Face data obtained when the mobile terminal performs face unlocking, or captured by the camera device when the user's face is at a suitable angle, usually expresses the user's current emotion directly; some users, however, may conceal their emotion. As noted above, an emotion derived from a single feature cannot fully or accurately represent the user's current emotion. Therefore, after the facial emotion of the user and the emotion contained in the user's voice have been determined, the first emotion and the second emotion are fused to jointly determine the mood index of the user, so that the mood index is represented accurately. This fusion of the first emotion and the second emotion is performed by a machine learning model based on the Plutchik emotion model (see fig. 2 for details).
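As a hedged illustration of the arithmetic in this fusion step (the emotion values and weight values would come from the trained emotion fusion model; the numbers below are invented), the mood index and its time-window average might be computed as:

    from statistics import mean

    def mood_index(first_value, first_weight, second_value, second_weight):
        # Unit 313 / claim 4: multiply each Plutchik emotion value by its
        # weight value; summing the two products is one plausible reading.
        return first_value * first_weight + second_value * second_weight

    def averaged_mood_index(indexes):
        # Modules 510/520 / claim 5: average the mood indexes collected
        # within a preset time period and use the mean as the mood index.
        return mean(indexes)

    samples = [mood_index(0.7, 0.6, 0.5, 0.4),   # invented example values
               mood_index(0.4, 0.6, 0.6, 0.4)]
    print(averaged_mood_index(samples))          # -> 0.55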
In addition, in another embodiment, the present invention further provides a server. As shown in fig. 5, the server includes a processor 503, a memory 505, an input unit 507, a display unit 509, and other components. Those skilled in the art will appreciate that the structure shown in fig. 5 does not constitute a limitation on all servers; a server may include more or fewer components than shown, or combine certain components. The memory 505 may be used to store an application 501 and various functional modules, and the processor 503 runs the application 501 stored in the memory 505 to perform the various functional applications and data processing of the device. The memory 505 may be an internal memory or an external memory, or include both internal and external memories. The internal memory may include read-only memory (ROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, or random access memory. The external memory may include a hard disk, a floppy disk, a ZIP disk, a U-disk, a tape, and the like. The memory disclosed herein includes, but is not limited to, these types of memory; the memory 505 is described by way of example only and not by way of limitation.
The input unit 507 is used to receive input signals as well as personal information and related physical condition information entered by the user. The input unit 507 may include a touch panel and other input devices. The touch panel can collect touch operations performed by the user on or near it (such as operations performed with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program; the other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as play control keys and switch keys), a trackball, a mouse, and a joystick. The display unit 509 may be used to display information entered by the user, information provided to the user, and the various menus of the computer device, and may take the form of a liquid crystal display, an organic light-emitting diode display, or the like. The processor 503 is the control center of the computer device: it connects the various parts of the whole computer using various interfaces and lines, and performs various functions and processes data by running or executing the software programs and/or modules stored in the memory 505 and invoking the data stored in the memory. The one or more processors 503 shown in fig. 5 can execute and implement the functions of the acquisition module 100, the determination module 200, the first mood index determination module 300, the first configuration module 400, the matching unit 210, the word segmentation unit 220, the second emotion determining unit 230, the first mood index determining unit 310, the first input unit 311, the second input unit 312, the mood index obtaining unit 313, the mood index obtaining module 510, the second mood index determination module 520, the first configuration unit 410, the second configuration unit 420, the third mood index determining module 610, the comparison module 620, and the second configuration module 630 shown in fig. 4.
In one embodiment, the server includes one or more processors 503, one or more memories 505, and one or more applications 501, wherein the one or more applications 501 are stored in the memory 505 and configured to be executed by the one or more processors 503, and the one or more applications 501 are configured to perform the theme switching method described in the above embodiments.
The server provided by the embodiment of the present invention can implement the embodiments of the theme switching method provided above; for the specific functional implementation, reference is made to the description in the method embodiments, which is not repeated here.
The foregoing describes only some embodiments of the present invention. It should be noted that those skilled in the art can make various modifications and adaptations without departing from the principles of the present invention, and such modifications and adaptations shall also fall within the protection scope of the present invention.

Claims (9)

1. A theme switching method, comprising:
monitoring whether an application program is running in the foreground, and acquiring current face feature data and voice information of a user in real time when the application program is running in the foreground;
determining a first emotion of the user according to the face feature data, and determining a second emotion of the user according to the voice signal and the semantic information in the voice information;
determining a mood index of the user according to the first emotion and the second emotion;
obtaining theme content corresponding to the mood index according to a preset rule, and configuring the theme content as the current theme of the application program;
the method further comprises the steps of:
determining a first mood index of the user according to the first emotion, and determining a second mood index of the user according to the second emotion;
acquiring a first theme content corresponding to the first mood index according to the preset rule, acquiring a second theme content corresponding to the second mood index according to the preset rule, comparing the first theme content with the second theme content, and determining whether the first theme content is consistent with the second theme content;
when the first theme content is inconsistent with the second theme content, acquiring a historical theme content of the application program from when the historical first theme content was inconsistent with the historical second theme content, and configuring the historical theme content as the current theme content of the application program; the first theme content is consistent with the historical first theme content, and the second theme content is consistent with the historical second theme content.
2. The theme switching method according to claim 1, wherein the determining the second emotion of the user according to the voice signal and the semantic information in the voice information includes:
matching the voice signal with a preset voice signal, and determining a first sub-emotion of the voice signal;
converting the semantic information into text information, performing word segmentation on the text information, and determining a second sub-emotion according to the segmented text information;
and determining the second emotion based on the first sub-emotion, the second sub-emotion, a first weight preset for the first sub-emotion, and a second weight preset for the second sub-emotion.
3. The theme switching method according to claim 2, wherein the determining the mood index of the user according to the first emotion and the second emotion includes:
determining the emotion of the first emotion and the second emotion in the Plutchik emotion model and the corresponding mood weight value of each emotion in the Plutchik emotion model according to a machine learning model, and determining the mood index of the user according to the emotion and the mood weight value.
4. The theme switching method according to claim 3, wherein determining, according to the machine learning model, the emotion of the first emotion and the second emotion in the Plutchik emotion model and the mood weight value corresponding to each emotion in the Plutchik emotion model, and determining the mood index of the user according to the emotion and the mood weight values includes:
inputting the Plutchik emotion model into the machine learning model to obtain an emotion fusion model;
inputting the first emotion into the emotion fusion model to obtain a first emotion value and a first weight value of the first emotion in the Plutchik emotion model, and inputting the second emotion into the emotion fusion model to obtain a second emotion value and a second weight value of the second emotion in the Plutchik emotion model;
and multiplying the first emotion value by the first weight value and multiplying the second emotion value by the second weight value to obtain the mood index of the user.
5. The theme switching method according to any one of claims 1 to 4, wherein after the determining the mood index of the user according to the first emotion and the second emotion, the method further comprises:
acquiring a plurality of mood indexes of the user within a preset time period, and calculating the average value of the mood indexes;
the average value is taken as the mood index of the user.
6. The theme switching method according to any one of claims 1 to 4, wherein the acquiring the theme content corresponding to the mood index according to the preset rule, and configuring the theme content as the current theme of the application program, includes:
When the mood index of the user is within a first preset threshold range, configuring a warm-color theme as a current theme of the application program;
and configuring the encouraging theme as the current theme of the application program when the mood index of the user is within a second preset threshold range.
7. A theme switching apparatus, comprising:
the acquisition module is used for monitoring whether the application program is running in the foreground, and acquiring current face feature data and voice information of a user in real time when the application program is running in the foreground;
the determining module is used for determining a first emotion of the user according to the face feature data and determining a second emotion of the user according to the voice signal and the semantic information in the voice information;
the first mood index determining module is used for determining the mood index of the user according to the first emotion and the second emotion;
the first configuration module is used for acquiring the theme content corresponding to the mood index according to a preset rule and configuring the theme content as the current theme of the application program;
the apparatus further comprises:
the third mood index determining module is used for determining a first mood index of the user according to the first emotion and determining a second mood index of the user according to the second emotion;
the comparison module is used for acquiring a first theme content corresponding to the first mood index according to the preset rule, acquiring a second theme content corresponding to the second mood index according to the preset rule, comparing the first theme content with the second theme content, and determining whether the first theme content is consistent with the second theme content;
the second configuration module is used for, when the first theme content is inconsistent with the second theme content, acquiring the historical theme content of the application program from when the historical first theme content was inconsistent with the historical second theme content, and configuring the historical theme content as the current theme content of the application program; the first theme content is consistent with the historical first theme content, and the second theme content is consistent with the historical second theme content.
8. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the theme switching method of any one of claims 1 to 6.
9. A server, comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications configured to perform the steps of the theme switching method of any of claims 1 to 6.
CN201910969202.XA 2019-10-12 2019-10-12 Theme switching method and device, storage medium and server Active CN110825503B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910969202.XA CN110825503B (en) 2019-10-12 2019-10-12 Theme switching method and device, storage medium and server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910969202.XA CN110825503B (en) 2019-10-12 2019-10-12 Theme switching method and device, storage medium and server

Publications (2)

Publication Number Publication Date
CN110825503A CN110825503A (en) 2020-02-21
CN110825503B true CN110825503B (en) 2024-03-19

Family

ID=69549165

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910969202.XA Active CN110825503B (en) 2019-10-12 2019-10-12 Theme switching method and device, storage medium and server

Country Status (1)

Country Link
CN (1) CN110825503B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112379962B (en) * 2020-11-25 2024-07-26 Oppo(重庆)智能科技有限公司 Desktop theme adjusting method, mobile terminal, server and storage medium
CN114666443A (en) * 2020-12-22 2022-06-24 成都鼎桥通信技术有限公司 Emotion-based application program running method and equipment
CN112768075A (en) * 2021-01-20 2021-05-07 西安闻泰电子科技有限公司 User health monitoring method, system, computer equipment and storage medium
CN112667196B (en) * 2021-01-28 2024-08-13 百度在线网络技术(北京)有限公司 Information display method and device, electronic equipment and medium
CN113163155B (en) * 2021-04-30 2023-09-05 咪咕视讯科技有限公司 User head portrait generation method and device, electronic equipment and storage medium
CN113947798B (en) * 2021-10-28 2024-10-25 平安科技(深圳)有限公司 Application program background replacement method, device, equipment and storage medium
CN114925276A (en) * 2022-05-26 2022-08-19 惠州高盛达智显科技有限公司 A kind of intelligent floor display method and system
CN115033146A (en) * 2022-06-29 2022-09-09 深圳市沃特沃德信息有限公司 Method and device for replacing application icon, computer equipment and storage medium
CN116909159B (en) * 2023-01-17 2024-07-09 广东维锐科技股份有限公司 Intelligent home control system and method based on mood index

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102868830A (en) * 2012-09-26 2013-01-09 广东欧珀移动通信有限公司 A mobile terminal theme switching control method and device
CN110087451A (en) * 2016-12-27 2019-08-02 本田技研工业株式会社 Mood improves device and mood ameliorative way
WO2018119924A1 (en) * 2016-12-29 2018-07-05 华为技术有限公司 Method and device for adjusting user mood
CN108305642A (en) * 2017-06-30 2018-07-20 腾讯科技(深圳)有限公司 The determination method and apparatus of emotion information
CN110325982A (en) * 2017-11-24 2019-10-11 微软技术许可有限责任公司 The abstract of multimedia document is provided in a session
CN108307037A (en) * 2017-12-15 2018-07-20 努比亚技术有限公司 Terminal control method, terminal and computer readable storage medium
CN109117734A (en) * 2018-07-18 2019-01-01 平安科技(深圳)有限公司 System theme setting method, device, computer equipment and storage medium
CN109672937A (en) * 2018-12-28 2019-04-23 深圳Tcl数字技术有限公司 TV applications method for switching theme, TV, readable storage medium storing program for executing and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
User emotion recognition based on multi-dimensional data feature fusion; 陈茜; 史殿习; 杨若松; 计算机科学与探索 (Journal of Frontiers of Computer Science and Technology); 2016-12-31; Vol. 10, No. 06; pp. 751-759 *

Also Published As

Publication number Publication date
CN110825503A (en) 2020-02-21

Similar Documents

Publication Publication Date Title
CN110825503B (en) Theme switching method and device, storage medium and server
CN111368609B (en) Speech interaction method based on emotion engine technology, intelligent terminal and storage medium
US12121823B2 (en) Automatic classification and reporting of inappropriate language in online applications
CN111459290B (en) Interactive intention determining method and device, computer equipment and storage medium
US20210042580A1 (en) Model training method and apparatus for image recognition, network device, and storage medium
CN108334583A (en) Affective interaction method and device, computer readable storage medium, computer equipment
CN110110169A (en) Man-machine interaction method and human-computer interaction device
US11907273B2 (en) Augmenting user responses to queries
US12254784B2 (en) Emotional evolution method and terminal for virtual avatar in educational metaverse
CN109493885A (en) Psychological condition assessment and adjusting method, device and storage medium, server
CN117637134A (en) Health management system, method, electronic device, and computer-readable storage medium
Serbaya [Retracted] Analyzing the Role of Emotional Intelligence on the Performance of Small and Medium Enterprises (SMEs) Using AI‐Based Convolutional Neural Networks (CNNs)
CN118587757A (en) AR-based emotional data processing method, device and electronic device
CN119498844A (en) A humanoid robot emotion analysis and processing method, system, device and medium
CN119399663A (en) Automatic interview method, device, computer equipment and storage medium based on artificial intelligence
CN113705312A (en) Hemoglobin detection method, device and computer readable storage medium
KR102630803B1 (en) Emotion analysis result providing device and emotion analysis result providing system
US20210304870A1 (en) Dynamic intelligence modular synthesis session generator for meditation
CN113988214A (en) Similar user recommendation method and device based on voice recognition result
Radha et al. Emotion Based Song Suggestion System for Tamil Language
CN112669832A (en) Semantic understanding method of intelligent device, intelligent device and management platform
EP3787849A1 (en) Method for controlling a plurality of robot effectors
CN117539356B (en) Meditation-based interactive user emotion perception method and system
CN120221037A (en) Digital human auxiliary diagnosis method, device, electronic equipment and readable storage medium
CN119303205A (en) A music emotion regulation and sleep aid method and system combined with emotional computing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant