WO2020175969A1 - Emotion recognition apparatus and emotion recognition method - Google Patents
Emotion recognition apparatus and emotion recognition method
- Publication number
- WO2020175969A1 (PCT/KR2020/002928)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- emotion
- unit
- recognition
- recognition unit
- party
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
Definitions
- the present invention relates to a device and a method for recognizing the emotions of the other party facing the user.
- Expensive devices that assist with vision impairment do not include technology to support communication ability. Expensive products cost three to five million won, and the actual supply rate is less than 5%.
- Companies such as Visionics, Viisage, and Miros are leading the market. They are actively researching not only face detection, lighting recognition, pose recognition, and expression recognition, which are the core technologies of face recognition, but also aging recognition, large-scale DB construction, and 3D facial shape restoration.
- the present invention is intended to provide a recognition device and a recognition method capable of accurately recognizing the emotion of the other party facing the user.
- the recognition device of the present invention may include: a photographing unit for photographing the other party facing the user; an emotion recognition unit for recognizing the external emotion of the other party through analysis of the image of the other party photographed by the photographing unit; and a display unit for expressing the external emotion by voice or vibration.
- the emotion recognition unit may analyze the image of the other party and recognize at least one of the other party's expression and gestures.
- the emotion recognition unit may recognize the external emotion by using the expression or the gesture.
- the recognition method of the present invention may include the steps of: photographing the surroundings; storing the photographed image; analyzing the photographed image and extracting the other party conversing with the user; when the other party is extracted, recognizing and correcting the external emotion of the other party; and displaying the corrected external emotion.
- the recognition device and recognition method of the present invention analyze the other party photographed by the camera and can recognize the other party's external emotion.
- External emotions such as facial expressions and gestures may show different strengths or different types of real emotions depending on the surrounding environment.
- the recognition device and recognition method of the present invention recognize the external emotion of the other party and can correct the recognized external emotion by using the surrounding environment, such as nearby objects or text. The corrected external emotion therefore reflects the surrounding environment, enabling accurate emotion recognition based on complex emotion recognition technology.
- the emotion of the other party can be analyzed and recognized in consideration of the surrounding environment.
- the present invention can accurately recognize the emotion of the other party.
- the emotion of the other party recognized through complex emotion recognition technology can be provided to the user.
- a user with low vision can accurately recognize the other party's emotions, and thus can communicate normally with the other party.
- the present invention relates to a device for people with low vision, based on complex emotion recognition technology, for improving communication ability.
- the present invention can aid vision by allowing low vision people to read characters, objects, and expressions that are difficult to distinguish.
- the present invention enables users with low vision to understand the surrounding situation, be aware of risks, and carry out everyday activities such as shopping. The present invention can also guide the user to communicate emotions normally with the other party.
- FIG. 1 is a schematic diagram showing the recognition device of the present invention.
- FIG. 2 is a schematic diagram showing an image photographed by a photographing unit.
- FIG. 3 is a flow chart showing the recognition method of the present invention.
- FIG. 4 is a diagram showing a computing device according to an embodiment of the present invention.
- the term 'and/or' includes a combination of a plurality of listed items or any one of a plurality of listed items.
- Emotion recognition technology is technology that aims to improve the quality of human life by measuring and analyzing human emotions and applying the results to product development or environmental design. It belongs to a field of technology that changes products or environments by scientifically measuring and analyzing complex emotions, such as the comfort, discomfort, ease, or unease felt in response to external physical stimuli experienced by an individual, and applying the results in an engineering manner.
- emotion recognition technology is also used to accurately recognize a user's emotions and provide related services.
- emotion recognition technology can provide emotion-based services in the entertainment, education, and medical fields by using the user's emotions, and the quality of service can be improved by checking the user's immediate reaction while the service is used and providing feedback according to that reaction.
- the conventional emotion recognition technology described above only improves the accuracy of the measured values of the biological response itself, and does not reflect differences between individuals or in the surrounding environment. That is, conventional emotion recognition technology establishes a standardized emotion recognition rule base that is assumed to apply statistically to all people, and this standardized rule base has been applied to every individual.
- the recognition device and recognition method of the present invention can be used to accurately grasp the emotion of the other party faced by the user.
- FIG. 1 is a schematic diagram showing the recognition device of the present invention.
- FIG. 2 is a schematic diagram showing an image photographed by the photographing unit 110.
- the recognition device shown in FIG. 1 may include a photographing unit 110, an emotion recognition unit 130, and a display unit 150.
- the photographing unit 110 may photograph the other party facing the user. For example, the photographing unit 110 can photograph the other party's face, arms, legs, and body.
- the other party's face photographed by the photographing unit 110 can be used to analyze and recognize the other party's expression.
- the other party's arms, legs, and body photographed by the photographing unit 110 can be used to analyze and recognize the other party's gestures.
- the photographing unit 110 may include various types of cameras that generate video images.
- the emotion recognition unit 130 may recognize the external emotion of the other party through analysis of the image of the other party photographed by the photographing unit 110.
- the other party's image may refer to an image that includes the other party among the images captured by the photographing unit 110.
- the emotion recognition unit 130 may extract the other party included in the other party image.
- the emotion recognition unit 130 can apply various expression recognition technologies and gesture recognition technologies to the image of the extracted other party, and grasp or recognize at least one of the other party's expressions and gestures.
- the emotion recognition unit 130 may recognize the emotion corresponding to the identified expression or gesture of the other party by applying an emotion recognition algorithm.
- the emotion recognition algorithm can be generated based on deep learning. Using the external appearance of the other party, such as expressions or gestures, the external emotion corresponding to the other party's outward expression of emotion can be recognized.
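- As a concrete illustration of the point above, a deep-learning emotion recognition algorithm of this kind could be a small image classifier over detected face crops. The sketch below is a minimal example in PyTorch; the patent does not specify an architecture, input size, or label set, so every such choice here is an assumption.

```python
# Minimal sketch of a deep-learning expression classifier such as the emotion
# recognition unit 130 might use. The architecture and input size are assumptions;
# the five basic emotion labels are the ones named later in this description.
import torch
import torch.nn as nn

BASIC_EMOTIONS = ["joy", "surprise", "sadness", "anger", "fear"]

class ExpressionEmotionNet(nn.Module):
    def __init__(self, num_emotions: int = len(BASIC_EMOTIONS)):
        super().__init__()
        # small CNN over a 64x64 grayscale face crop taken from the other party's image
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_emotions)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

def recognize_external_emotion(face_crop: torch.Tensor, model: nn.Module) -> str:
    """Return the most likely external emotion for a (1, 1, 64, 64) face crop."""
    with torch.no_grad():
        logits = model(face_crop)
    return BASIC_EMOTIONS[int(logits.argmax(dim=1))]

if __name__ == "__main__":
    model = ExpressionEmotionNet()
    dummy_face = torch.randn(1, 1, 64, 64)  # stand-in for a detected face crop
    print(recognize_external_emotion(dummy_face, model))
```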
- the other party's outward appearance, such as an expression or gesture, may be difficult for a low-vision user to perceive.
- the external emotion recognized by the emotion recognition unit 130 may be provided to the user by the display unit 150.
- the display unit 150 can express the external emotion of the other party by voice or vibration.
- the user can sense the voice or vibration of the display unit 150 by holding or wearing the display unit 150.
- the photographing unit 110 and the emotion recognition unit 130 may also be formed to be possessed or worn by the user. .
- the display unit 150 may include an audio output means, such as a speaker, that announces the other party's emotion by voice, for example "the other party is in a joyful state."
- the display unit 150 may include a vibration means for generating vibration signals according to patterns agreed upon with the user.
- the vibration means can indicate, for example, that the other party's emotion is a joyful state through a vibration matching the set pattern.
- the user can receive the external feeling through the display unit 150 while acquiring the voice of the other party by himself.
- the user can accurately grasp the other party's feelings through the detection of the external feeling.
- the user can perform emotional learning by matching the external emotion provided by the display unit 150 to the other party's voice being heard. Through this, the user can learn which kind of voice corresponds to which emotion of the other party.
- the external emotion that the other party expresses through expressions or gestures may vary depending on the surrounding environment. For example, in a library where quietness is required, the expressions or gestures with which the other party expresses a given emotion may differ from those expressed in a noisy market. For this reason, it may be difficult to accurately recognize the other party's emotion from the external emotion alone.
- the recognition device of the present invention may be provided with a collection unit 120 for collecting the voice of the other party.
- the emotion recognition unit 130 can analyze the voice by using a voice emotion recognition model such as deep learning or a voice characteristic analysis algorithm, and recognize the voice emotion of the other party.
- the sound emotion may include an emotion that can be recognized or perceived through the voice.
- the emotion recognition unit 130 may correct the external emotion by using the recognized sound emotion.
- the display unit 150 may display the corrected appearance emotion.
- the display unit 150 may display the external emotion corrected using the sound emotion.
- correction of the external emotion can mean correcting the intensity of the same kind of emotion.
- the display unit 150 may provide a plurality of pieces of display information that divide the same type of external emotion by intensity. For example, even for the single emotion of joy, it is advantageous for the display information to include intensity levels such as slight joy, normal joy, and great joy.
- the emotion recognition unit 130 can recognize the type of external emotion through the analysis of the image of the other party.
- the emotion recognition unit 130 can determine the strength or weakness of the recognized type of external emotion according to the strength of the voice.
- the display unit 150 can display specific display information indicating the type of the external emotion and the determined strength or weakness of that emotion.
- the emotion recognition unit 130 may compare the external emotion obtained from the image of the other party with the sound emotion obtained from the other party's voice.
- the emotion recognition unit 130 may correct the recognized external emotion if the external emotion and the sound emotion differ from each other and the intensity of the sound emotion satisfies a set value. For example, while the external emotion is recognized as 'joy', the sound emotion may be recognized as 'sadness'. In this case, if the intensity of the sound emotion, for example the strength, decibel level, or volume of the voice, is above the set value, the emotion recognition unit 130 can modify the external emotion from 'joy' to 'sadness'.
- the display unit 150 can express the external emotion modified by the emotion recognition unit 130.
- the emotion recognition unit 130 may provide the recognized external emotion to the display unit 150 as it is when the external emotion and the sound emotion are the same, or when the intensity of the sound emotion does not satisfy the set value. For example, if the intensity of the sound emotion is less than the set value, the emotion recognition unit 130 may ignore the sound emotion and provide the already recognized external emotion 'joy' to the display unit 150 as it is.
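- A minimal sketch of this correction rule, assuming the emotions are plain labels and the sound-emotion intensity is a decibel value; the 60 dB set value and all names below are illustrative assumptions rather than values from the description.

```python
# Sketch of the sound-emotion correction: the sound emotion overrides the
# external emotion only when the two differ and the voice intensity reaches
# the set value; otherwise the external emotion is displayed as it is.
def correct_with_sound_emotion(external_emotion: str,
                               sound_emotion: str,
                               voice_intensity_db: float,
                               set_value_db: float = 60.0) -> str:
    """Return the emotion that the display unit 150 should express."""
    if sound_emotion != external_emotion and voice_intensity_db >= set_value_db:
        # e.g. the image says 'joy' but a loud voice says 'sadness' -> trust the voice
        return sound_emotion
    # same emotion, or the voice is too weak to justify a correction
    return external_emotion

print(correct_with_sound_emotion("joy", "sadness", voice_intensity_db=72.0))  # sadness
print(correct_with_sound_emotion("joy", "sadness", voice_intensity_db=40.0))  # joy
```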
- An object recognition unit 170 can be provided to guide users with low vision.
- the photographing unit 110 can photograph surrounding objects.
- the object recognition unit 170 can recognize the surrounding objects photographed by the photographing unit 110.
- the display unit 150 can display object information corresponding to the recognition result of the object by voice or vibration. Even if the user does not visually recognize a surrounding object, the user can recognize surrounding objects audibly or tactilely with the help of the recognition device of the present invention.
- the emotion recognition unit 130 may identify persons other than the other party among the objects.
- the emotion recognition unit 130 may analyze the number of persons among the objects and, if the number of persons satisfies a set number, correct the intensity of the external emotion upward.
- the other party can refrain from expressing external emotions when other people are around. For example, even in a very happy situation, the other party can express joy calmly in consideration of the people around him.
- the emotion recognition unit 130 can determine the number of surrounding persons excluding the other party. If the identified number of persons satisfies the set number, the intensity of the currently recognized external emotion can be corrected upward. For example, suppose the other party's emotion is recognized as 'normal joy' through analysis of the other party's facial expression. If there are two surrounding persons, corresponding to the set number, the emotion recognition unit 130 may raise the intensity of the emotion by a set step. By this correction, the external emotion of the other party is corrected to 'very joyful' and may be displayed through the display unit 150.
- surrounding persons can react to the other party's emotional expression.
- a person who stays near the other party may be an acquaintance of the other party.
- an acquaintance of the other party can react to the other party's emotional expression.
- the other party who is aware of this fact can express his/her emotional expression calmly compared to the general situation.
- the external emotion expressed by the other party may therefore be weaker in intensity than the original emotion. In other words, the other party's actual emotion can be stronger than the emotion expressed externally.
- accordingly, when the emotion recognition unit 130 determines that a surrounding person is staying near the other party, the emotion recognition unit 130 can correct the intensity of the external emotion upward.
- the emotion recognition unit 130 may provide the display unit 150 with the external emotion whose intensity has been corrected upward, instead of the original external emotion.
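- A minimal sketch of this bystander-based correction, assuming a three-step intensity scale; the two-person set number and the one-step correction follow the example above, while the scale and the function name are hypothetical.

```python
# Sketch of the surrounding-person correction: if the number of nearby persons
# (excluding the other party) reaches the set number, raise the intensity of
# the recognized external emotion by the setting step.
INTENSITY_SCALE = ["slight", "normal", "very"]  # e.g. slight joy, normal joy, very joyful

def correct_for_bystanders(emotion_type: str,
                           intensity_index: int,
                           bystander_count: int,
                           set_number: int = 2,
                           setting_step: int = 1) -> tuple:
    """Return (emotion type, corrected intensity label)."""
    if bystander_count >= set_number:
        # the other party is likely restraining expression in front of others
        intensity_index = min(intensity_index + setting_step, len(INTENSITY_SCALE) - 1)
    return emotion_type, INTENSITY_SCALE[intensity_index]

# 'normal joy' observed with two bystanders -> corrected to 'very' joy
print(correct_for_bystanders("joy", intensity_index=1, bystander_count=2))
```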
- a character recognition unit 180 may be provided.
- the emotion recognition unit 130 may analyze the content of characters included in the image photographed by the photographing unit 110.
- the emotion recognition unit 130 may correct the external emotion according to the result of analyzing the content defined by the characters.
- the display unit 150 can display the appearance emotion corrected according to the analysis result of the content.
- the recognition device of the present invention analyzes characters and, through this, can correct the external emotion to an intensity higher than the intensity of the emotion outwardly expressed by the other party.
- the emotion recognition unit 130 can analyze various characters included in the image and determine the surrounding environment that the other party is facing.
- the environment in which the other party is facing corresponds to the current location of the other party.
- Content information such as facility information, text on books, letters, text messages, menu boards, product labels, screens, etc. that affect the emotions of the other party may be included.
- the emotion recognition unit 130 may correct the intensity upward so that, even if the other party's external emotion information is 'a little joy', the external emotion information is modified to 'very joyful'.
- the display unit 150 can then display the other party's external emotion as 'very joyful'. The user can accurately grasp the other party's actual emotion and respond to the other party accordingly.
- In the recognition apparatus, an object recognition unit 170, a character recognition unit 180, and a selection unit 190 may be provided.
- the object recognition unit 170 may recognize surrounding objects photographed by the photographing unit 110.
- the character recognition unit 180 can recognize characters photographed by the photographing unit 110.
- the selection unit 190 can select, according to the user's choice, which of the external emotion information, the object recognition result, and the character recognition result is to be displayed through the display unit 150.
- the user can use the object recognition unit 170 for the purpose of recognizing surrounding objects, or the user can use the character recognition unit 180 for the purpose of recognizing the surrounding characters.
- the display unit 150 may display the item selected by the selection unit 190.
- when emotion recognition is selected, the emotion recognition unit 130 may recognize the external emotion.
- at this time, at least one of the object recognition unit 170 and the character recognition unit 180 can recognize objects or characters while operating together with the emotion recognition unit 130.
- the external emotion can then be corrected using the resulting object information or content information.
- the storage unit 140 may be provided as a means for obtaining, in advance, the information used to correct the external emotion of the other party, so that the user can maintain an attitude of focusing only on the other party.
- a photographed image captured by the photographing unit 110 may be stored for a set time.
- the object recognition unit 170 or the character recognition unit 180 may be provided with a photographed image stored for a set time before the current point in time.
- the object recognition unit 170 may recognize surrounding objects included in the photographed image.
- the character recognition unit 180 may recognize characters included in the photographed image.
- the emotion recognition unit 130 may receive object information corresponding to the object recognition result from the object recognition unit 170, or may receive content information corresponding to the character recognition result from the character recognition unit 180.
- the emotion recognition unit 130 may correct the recognized external emotion by using the object information or the content information.
- the display unit 150 may display the appearance emotion corrected by the emotion recognition unit 130.
- analysis can be performed on a captured image that was photographed before the user came face-to-face with the other party and was stored in the storage unit 140.
- surrounding objects or characters are recognized from that image, and the emotion recognition unit 130 may correct the external emotion of the other party being faced by using the corresponding surrounding object or character.
- the recognition device of the present invention is manufactured in the form of glasses or a mounting device.
- the recognition device of the present invention may be worn by the user.
- the recognition device of the present invention may be manufactured in the form of a mobile device to follow the user or be worn by the user.
- the photographing unit 110 mounted on the glasses or the like may be formed to photograph the area in front of the user.
- according to the present embodiment, even when a user who begins to face the other party continuously looks only at the other party, the external emotion of the other party can be accurately corrected by using the pre-stored surrounding environment. As a result, the other party's actual emotion can be followed. A user who accurately knows the other party's actual emotion can respond correctly to that emotion.
- FIG. 3 is a flow chart showing the recognition method of the present invention.
- the recognition method of FIG. 3 can be performed by the recognition device of FIG. 1.
- the photographing unit 110 can photograph the surrounding area desired by the user (510).
- the storage unit 140 may store the photographed image (520).
- the emotion recognition unit 130 may analyze the photographed image and extract the other party who communicates with the user (530).
- the emotion recognition unit 130 may recognize and correct the external emotion of the other party (540).
- the display unit 150 may display the appearance emotion corrected by the emotion recognition unit 130 (550).
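- Taken together, steps 510 to 550 form a capture-store-extract-recognize/correct-display loop. The sketch below shows one way such a loop could be wired up; it is only an assumption-based outline, and every class and function name in it is hypothetical.

```python
# High-level sketch of the flow of FIG. 3 (steps 510-550). The recognition
# steps are passed in as plain callables; real implementations would wrap the
# photographing unit 110, emotion recognition unit 130, and display unit 150.
from dataclasses import dataclass, field

@dataclass
class RecognitionPipeline:
    stored_images: list = field(default_factory=list)

    def run_once(self, camera, extract_party, recognize_emotion,
                 correct_emotion, display) -> None:
        image = camera.read()                                   # step 510: photograph
        self.stored_images.append(image)                        # step 520: store
        party = extract_party(image)                            # step 530: extract the other party
        if party is None:
            return
        emotion = recognize_emotion(party)                      # step 540: recognize
        emotion = correct_emotion(emotion, self.stored_images)  # step 540: correct
        display(emotion)                                        # step 550: voice or vibration

if __name__ == "__main__":
    class FakeCamera:
        def read(self):
            return "frame"

    RecognitionPipeline().run_once(
        FakeCamera(),
        extract_party=lambda image: "other_party",
        recognize_emotion=lambda party: "joy",
        correct_emotion=lambda emotion, images: emotion,
        display=print,
    )
```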
- the step 540 of recognizing and correcting the external emotion can be subdivided as follows.
- the selection unit 190 may select any one of emotion recognition, object recognition, and character recognition according to the user's selection (541).
- the object recognition unit 170 can recognize surrounding objects photographed by the photographing unit 110 (543). At this time, the recognition of the surrounding objects can be performed in real time. The display unit 150 may display the information of the surrounding object recognized by the object recognition unit 170 by voice or vibration.
- the character recognition unit 180 can recognize the characters photographed by the photographing unit 110 (544). At this time, the character recognition can be performed in real time.
- the display unit 150 may display content information of a character recognized by the character recognition unit 180 by voice or vibration.
- the emotion recognition unit 130 may recognize a nearby object or recognize a text through the analysis of the photographed image previously stored in the storage unit 140.
- in contrast to the real-time recognition of steps 543 and 544, the recognition of objects or characters performed during emotion recognition has the characteristic of targeting the photographed image previously stored in the storage unit 140.
- the emotion recognition unit 130 can correct the external emotion by using the object information corresponding to the object recognition result or the content information corresponding to the character recognition result.
- the emotion recognition unit 130 may determine the level of quietness required by the environment through analysis of the object information or content information, and correct the intensity of the external emotion in proportion to the level of quietness.
- the quietness of the location can be determined through the text included in the photographed image.
- a quietness level per number of surrounding people included in a photographed image may be set. If the number of neighboring people is 1, it can be set as quietness level 1, if the number of neighbors is 2, quietness level 2, etc.
- if the other party's current emotion is joy of level 2 and the quietness level is 1, the emotion recognition unit 130 can correct the external emotion information to joy of level 3, raising the intensity of the current emotion by one level. If the quietness level is 2, the emotion recognition unit 130 can correct the external emotion information to joy of level 4, raising the intensity of the current emotion by two levels.
- if the current location is a library, a quietness level of 5 can be given; if the current location is a market, a quietness level of 1 can be given. If the other party's current emotion is joy of level 2 and the quietness level is 5, the emotion recognition unit 130 can correct the external emotion information to joy of level 7, raising the intensity of the current emotion by five levels. If the quietness level is 1, the emotion recognition unit 130 can correct the external emotion information to joy of level 3, raising the intensity of the current emotion by one level.
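- A minimal sketch of this quietness correction; the library and market levels and the level-2 joy examples follow the description above, while everything else (function names, the maximum level) is an assumption.

```python
# Sketch of the quietness correction: the intensity of the external emotion is
# raised in proportion to the quietness level inferred from recognized text or
# the number of nearby people.
QUIETNESS_BY_PLACE = {"library": 5, "market": 1}  # inferred from recognized characters

def correct_for_quietness(emotion_level: int, quietness_level: int, max_level: int = 10) -> int:
    """Raise the current emotion intensity by the quietness level."""
    return min(emotion_level + quietness_level, max_level)

print(correct_for_quietness(2, QUIETNESS_BY_PLACE["library"]))  # level-2 joy -> level 7
print(correct_for_quietness(2, QUIETNESS_BY_PLACE["market"]))   # level-2 joy -> level 3
```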
- the present invention can perform recognition of five basic emotions (eg, joy, surprise, sadness, anger, fear).
- the present invention can recognize 17 complex emotions, including intensity levels, according to a three-dimensional concept of positive versus negative, arousal versus non-arousal, and dominance versus submission.
- the present invention has a function of presenting a communication language. Specifically, the present invention can recognize and express up to 75 emotions (e.g., energetic, cool, disappointed, fearful, very surprised, despairing, embarrassed, anxious, annoyed, angry, irritable, wanting to leave, fun, and shy).
- the present invention uses deep learning artificial intelligence techniques.
- the present invention can recognize a variety of complex emotions that were previously difficult to recognize by using a high-dimensional combination method in complex expressions.
- in order to compensate for the recognition rate, which decreases significantly as the number of object classes increases, the present invention first performs environment recognition and narrows the candidates down to objects that can be found within the identified environment, so that the recognition rate can be raised.
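- A minimal sketch of this environment-first narrowing; the environment-to-object mapping, the class names, and the function name are all illustrative assumptions.

```python
# Sketch of environment-aware object recognition: recognize the environment
# first, then restrict the detector's candidate classes to objects expected in
# that environment so the recognition rate does not collapse as the total
# number of classes grows.
OBJECTS_BY_ENVIRONMENT = {
    "library": ["book", "desk", "chair", "person"],
    "market": ["fruit", "price tag", "cart", "person"],
}

def candidate_classes(environment: str, all_classes: list) -> list:
    allowed = set(OBJECTS_BY_ENVIRONMENT.get(environment, all_classes))
    return [name for name in all_classes if name in allowed]

ALL_CLASSES = ["book", "desk", "chair", "fruit", "price tag", "cart", "person", "car"]
print(candidate_classes("library", ALL_CLASSES))  # the detector only scores these classes
```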
- the present invention can input the motion flow of a sequence of images into a dictionary learning method, and design a recognition technology through pre-learning of the latent motions for each expression.
- the present invention can use a variety of conventional methods for extracting expression features, such as the following (a sketch of one of these appears after the list):
- PCA (Principal Component Analysis)
- ICA (Independent Component Analysis)
- ASM (Active Shape Model)
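- As an illustration of the first item in the list above, expression features can be extracted by projecting flattened face crops onto principal components. The sketch below uses scikit-learn purely as an example; the image size, component count, and random stand-in data are assumptions.

```python
# Minimal sketch of PCA-based expression feature extraction: flattened face
# crops are reduced to a low-dimensional feature vector that a downstream
# emotion classifier can consume.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
face_images = rng.random((100, 48 * 48))    # 100 flattened 48x48 face crops (stand-in data)

pca = PCA(n_components=20)
features = pca.fit_transform(face_images)   # low-dimensional expression features
print(features.shape)                       # (100, 20)
```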
- the present invention can use an object recognition algorithm. Specifically, object recognition and voice guidance algorithms that can detect about 100 kinds of objects and recognize indoor and outdoor surroundings can be used.
- the present invention can use a character recognition algorithm. Specifically, an algorithm capable of recognizing characters in 102 languages including Korean, Chinese, Japanese, and English can be used.
- the present invention can use a basic emotion real-time recognition algorithm. Specifically, an algorithm capable of real-time recognition of the five basic emotions (e.g., joy, surprise, sadness, anger, and fear) can be used.
- the present invention can use an algorithm for recognizing 17 complex expressions and emotions.
- an AU (action unit) detection and LSTM-based complex emotion recognition function can be used.
- for object recognition, the present invention can perform data collection and database construction, data learning and model creation, and can use YOLO.
- for character recognition, the present invention can perform data collection and database construction, data learning and model generation, and can use tesseract-ocr.
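- A minimal sketch of these two recognition paths, assuming the ultralytics YOLO wrapper and the pytesseract wrapper for tesseract-ocr; the description names YOLO and tesseract-ocr but not these specific packages, and "photo.jpg" is a placeholder input.

```python
# Sketch of the object-recognition and character-recognition paths.
from ultralytics import YOLO
import pytesseract
from PIL import Image

def recognize_objects(image_path: str) -> list:
    model = YOLO("yolov8n.pt")                  # pretrained detector used as a stand-in
    result = model(image_path)[0]
    return [model.names[int(c)] for c in result.boxes.cls]

def recognize_characters(image_path: str) -> str:
    # Korean plus English, in line with the multilingual OCR described above
    return pytesseract.image_to_string(Image.open(image_path), lang="kor+eng")

if __name__ == "__main__":
    print(recognize_objects("photo.jpg"))
    print(recognize_characters("photo.jpg"))
```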
- the present invention can implement an integrated system of object recognition, text recognition, emotion recognition, and communication-language speech output.
- the present invention can use landmark-point feature extraction and AU detection, LSTM-based expression recognition, a method of recognizing 23 expressions based on AU detection and LSTM features, and a method of improving the recognition rate of complex expressions through optimization of expression-DB-based expression recognition.
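- A minimal sketch of LSTM-based complex expression recognition over a sequence of facial action-unit (AU) activations; the 23-expression output follows the description above, while the AU count, hidden size, and the rest of the architecture are assumptions.

```python
# Sketch of an AU-sequence classifier: per-frame AU intensities are fed through
# an LSTM and the final hidden state is mapped to 23 complex expressions.
import torch
import torch.nn as nn

class AUSequenceClassifier(nn.Module):
    def __init__(self, num_aus: int = 17, hidden: int = 64, num_expressions: int = 23):
        super().__init__()
        self.lstm = nn.LSTM(num_aus, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_expressions)

    def forward(self, au_sequence: torch.Tensor) -> torch.Tensor:
        # au_sequence: (batch, time, num_aus) per-frame AU intensities
        _, (h_n, _) = self.lstm(au_sequence)
        return self.head(h_n[-1])               # logits over 23 complex expressions

if __name__ == "__main__":
    model = AUSequenceClassifier()
    clip = torch.randn(1, 30, 17)               # 30 frames of 17 AU intensities (assumed count)
    print(model(clip).argmax(dim=1))
```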
- the present invention can use emotional characteristic modeling based on complex expression recognition.
- the present invention can provide numerical indicators for emotional evaluation based on 23 expression recognition.
- the recognition device or method of the present invention can measure emotion as follows.
- the emotion recognition unit 130 may judge, recognize, or classify the emotion of the other party by using the expression extracted from the image of the other party.
- the expression may be the primary basis for judging the emotion. However, an expression commonly disappears within a few seconds after it is made. Also, in the case of Asians, who tend not to make large expressions, it can be difficult to rely on the expression alone for emotion judgment.
- 75 emotions can be measured or classified.
- emotions can be evaluated by taking into account the movement of the facial muscles, which may vary from person to person depending on muscle mass, habits, and so on.
- the emotion recognition unit 130 may judge, recognize, or classify the emotion of the other party by using the eyes extracted from the image of the other party.
- the eyes can be defined as a combination of the size of the eyes, the movement of the eyes, the direction of the gaze, the movement speed of the eyes, the movement of the muscles around the eyes, the movement of the eyelids, the degree of opening and closing of the eyelids, the gaze time, the gap between the brows, and the wrinkles.
- higher-order complex emotions that develop from mood, preferences, emotional atmosphere, and feelings can be grasped from the eyes; therefore, the eyes can be used to measure emotions such as emotional stability and instability, passion and indifference, shyness, guilt, disgust, curiosity, and empathy.
- the emotion recognition unit 130 may judge, recognize, or classify the emotion of the other party by using the movement of the neck extracted from the image of the other party.
- Neck movement can include the direction of the head, the angle between the head and the neck, the angle between the head and the back, the speed of the movement of the neck, the movement of the shoulder, and the degree and angle of the bending of the shoulder. Using neck movement, feelings such as depression and lethargy can be accurately measured.
- the emotion recognition unit 130 may judge, recognize, or classify the emotion of the other party by using the posture, gestures, and behavior extracted from the image of the other party.
- the emotion recognition unit 130 can comprehensively judge the movements of the neck, arms, back, and legs, and measure physical openness, aggression, physical pain, excitement (elevation), and favor.
- the emotion recognition unit 130 may judge, recognize, or classify the emotion of the other party by using the voice of the other party.
- the other party's emotions can be classified by measuring and classifying the pitch of the voice, the length of the voice, the waveform of the voice, the intonation of the voice, the loudness of the voice, the length of pauses between phrases, and the intensity of the voice.
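- A minimal sketch of extracting such voice characteristics (pitch, length, loudness) with the librosa library; the library choice is an assumption, "voice.wav" is a placeholder recording of the other party, and the returned values would still have to be mapped to emotions.

```python
# Sketch of measuring basic voice characteristics used for voice-based emotion
# classification: duration, average pitch, average intensity, and a rough
# voiced ratio as a proxy for pauses between phrases.
import librosa
import numpy as np

def voice_features(path: str) -> dict:
    y, sr = librosa.load(path, sr=None)
    f0, voiced_flag, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                      fmax=librosa.note_to_hz("C7"), sr=sr)
    rms = librosa.feature.rms(y=y)[0]
    return {
        "duration_s": len(y) / sr,                    # length of the voice
        "mean_pitch_hz": float(np.nanmean(f0)),       # height of the voice
        "mean_intensity": float(rms.mean()),          # loudness of the voice
        "voiced_ratio": float(np.mean(voiced_flag)),  # rough proxy for pauses
    }

if __name__ == "__main__":
    print(voice_features("voice.wav"))
```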
- the emotion recognition unit 130 may judge the other person’s emotions by using the vocabulary analyzed through the voice of the other party.
- the emotion recognition unit 130 may judge, recognize, or classify the other party's emotion by using specific emotion-related vocabulary among the words expressed in the other party's voice.
- a plurality of elements capable of grasping the emotions of the other party may be input through various input means such as a photographing unit and a collection unit.
- the emotion recognition unit 130 receives elements (emotion analysis information) obtained through multiple channels and can process them as follows.
- the factors can include expressions, eyes, movements of the neck, gestures, voices, vocabulary, etc.
- the emotion recognition unit analyzes the other party's image to grasp the other party's gestures and expressions.
- the emotion recognition unit can grasp at least one of the voice of the other party and the vocabulary spoken by the other party through the collection unit.
- the emotion recognition unit 130 may receive the plurality of elements in the order in which they are obtained in time.
- the emotion recognition unit 130 may select the specific elements used to recognize the emotion of the other party according to whether each element satisfies a set intensity or whether each element lasts for a set time.
- the emotion recognition unit 130 can analyze the intensity of each element. For example, assume the expression is level 1, the eyes level 3, the neck movement level 2, the gesture level 5, the voice level 2, and the vocabulary level 0. When the set intensity is 3, the emotion recognition unit 130 can recognize the emotion of the other party by using only the eyes and the gesture, whose intensity is 3 or higher.
- the emotion recognition unit 130 can analyze the duration of each element. For example, assume the expression lasts 3 seconds, the eyes 4 seconds, the neck movement 1 second, the gesture 2 seconds, the voice 3 seconds, and the vocabulary 2 seconds. When the set time is 3 seconds, the emotion recognition unit 130 can recognize, determine, or classify the other party's emotion using only the expression, the eyes, and the voice, whose duration is 3 seconds or more.
- the emotion recognition unit 130 may give priority for classifying emotions in the order of gesture, voice, expression, vocabulary, and neck movement. Alternatively, the emotion recognition unit 130 can give priority for classifying emotions in the order of gesture, voice, expression, vocabulary, and eyes. According to this embodiment, when various complex emotion elements are identified, a priority for accurately grasping the emotion of the other party can be assigned to each complex emotion element.
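- A minimal sketch of this multi-channel selection, keeping only the elements whose intensity or duration reaches the set values and letting the highest-priority surviving channel decide; the data structure and the final decision rule are assumptions beyond what the description states.

```python
# Sketch of selecting emotion-analysis elements by intensity/duration and then
# applying the stated channel priority (gesture > voice > expression >
# vocabulary > neck movement / eyes).
from dataclasses import dataclass

PRIORITY = ["gesture", "voice", "expression", "vocabulary", "neck", "eyes"]

@dataclass
class Element:
    channel: str       # e.g. "expression", "eyes", "gesture"
    emotion: str       # emotion suggested by this channel
    intensity: int     # analysed strength level
    duration_s: float  # how long the cue lasted

def classify(elements: list, set_intensity: int = 3, set_time: float = 3.0) -> str:
    usable = [e for e in elements
              if e.intensity >= set_intensity or e.duration_s >= set_time]
    if not usable:
        return "unknown"
    usable.sort(key=lambda e: PRIORITY.index(e.channel))
    return usable[0].emotion   # highest-priority usable channel decides

print(classify([
    Element("expression", "joy", intensity=1, duration_s=3.0),
    Element("eyes", "joy", intensity=3, duration_s=4.0),
    Element("gesture", "surprise", intensity=5, duration_s=2.0),
    Element("voice", "joy", intensity=2, duration_s=3.0),
]))  # the gesture passes the intensity check and has top priority -> 'surprise'
```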
- the computing device TN100 of FIG. 4 may be a device described in this specification (e.g., the recognition device).
- the computing device TN100 may include at least one processor (TN110), a transmission/reception device (TN120), and a memory (TN130).
- the computing device TN100 may further include a storage device (TN140), an input interface device, and an output interface device (TN160).
- the components included in the computing device TN100 may be connected by a bus (TN170) and communicate with each other.
- the processor (TN110) can execute a program command stored in at least one of the memory (TN130) and the storage device (TN140).
- the processor (TN110) may be a central processing unit (CPU), a graphics processing unit (GPU), or a dedicated processor on which the methods according to the embodiments of the present invention are performed.
- the processor (TN110) can be configured to implement the procedures, functions, and methods described in connection with the embodiments of the present invention.
- the processor (TN110) can control each component of the computing device TN100.
- Each of the memory (TN130) and the storage device (TN140) can store various information related to the operation of the processor (TN110).
- the memory (TN130) and the storage device (TN140) may each be composed of at least one of a volatile storage medium and a nonvolatile storage medium.
- the memory (TN130) may consist of at least one of read only memory (ROM) and random access memory (RAM).
- the transmitting and receiving device (TN120) can transmit or receive a wired signal or a wireless signal.
- the transmitting and receiving device (TN120) is connected to the network and can perform communication.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
A recognition apparatus is provided. The recognition apparatus may comprise: a capturing unit for capturing an image of a counterpart facing a user; an emotion recognition unit for recognizing an external emotion of the counterpart through analysis of the counterpart's image captured by the capturing unit; and a display unit for displaying the external emotion by voice or vibration.
Description
Specification
Title of Invention: Emotion Recognition Device and Emotion Recognition Method
Technical Field
[1] The present invention relates to a device and a method for recognizing the emotions of the other party facing the user.
Background
[2] The number of people who lack emotional communication ability in human relationships, including people with low vision, is increasing. Many people lack emotional communication ability because of congenital disabilities, acquired diseases, aging, or the absence of communication education. This causes pain to the person and to family, relatives, and colleagues.
[3] The inability to read emotions can lead to isolation in human relationships. The number of people who have difficulty reading, understanding, and concentrating on the other person's emotions due to congenital or acquired problems is increasing. The number of elderly people and people suffering from depression is also on the rise; they are isolated in human relationships because they cannot read the other person's emotions from facial expressions.
[4] Expensive devices that assist with vision impairment do not include technology to support communication ability. Expensive products cost three to five million won, and the actual supply rate is less than 5%.
[5] Attempts to apply face recognition, expression recognition, and motion recognition technologies in various fields are increasing at home and abroad, and the related markets are also expanding.
[6] Specifically, research on biometric technology related to face recognition and expression recognition has been under way since the early 2000s, centered on IT companies and university research institutes. Leading research institutions include the research team at Carnegie Mellon University, the MIT Media Lab in the US, INRIA in Europe, and the ATR laboratory in Japan. Among companies, Visionics, Viisage, and Miros in the US are leading the market. They are actively researching not only face detection, lighting recognition, pose recognition, and expression recognition, which are the core technologies of face recognition, but also aging recognition, large-scale DB construction, and 3D facial shape restoration.
Detailed Description of the Invention
Technical Problem
[7] The present invention is intended to provide a recognition device and a recognition method capable of accurately recognizing the emotion of the other party facing the user.
Means for Solving the Problem
[8] The recognition device of the present invention may include: a photographing unit for photographing the other party facing the user; an emotion recognition unit for recognizing the external emotion of the other party through analysis of the image of the other party photographed by the photographing unit; and a display unit for expressing the external emotion by voice or vibration.
[9] The emotion recognition unit may analyze the image of the other party and recognize at least one of the other party's expression and gestures.
[10] The emotion recognition unit may recognize the external emotion by using the expression or the gesture.
[11] The recognition method of the present invention may include the steps of: photographing the surroundings; storing the photographed image; analyzing the photographed image and extracting the other party conversing with the user; when the other party is extracted, recognizing and correcting the external emotion of the other party; and displaying the corrected external emotion.
[12] The step of recognizing and correcting the external emotion may include:
[13] recognizing the external emotion by using the other party's expression or gesture projected in the photographed image, recognizing a nearby object or a character through analysis of the previously stored photographed image, and correcting the external emotion by using object information corresponding to the recognition result of the object or content information corresponding to the recognition result of the character.
Effects of the Invention
[14] The recognition device and recognition method of the present invention analyze the other party photographed by the camera and can recognize the other party's external emotion.
[15] External emotions such as facial expressions and gestures may correspond to real emotions of different intensity or of a different kind depending on the surrounding environment.
[16] The recognition device and recognition method of the present invention recognize the external emotion of the other party and can correct the recognized external emotion of the other party by using the surrounding environment, such as objects or text. The external emotion corrected by the present invention reflects the surrounding environment, enabling accurate emotion recognition based on complex emotion recognition technology.
[17] According to the present invention, the emotion of the other party can be analyzed and recognized in consideration of the surrounding environment. By presenting a complex emotion recognition technology that takes the surrounding environment into account, the present invention can accurately recognize the emotion of the other party.
[18] The emotion of the other party recognized through the complex emotion recognition technology can be provided to the user. A user with low vision can accurately recognize the other party's emotions and thus can communicate normally with the other party.
[19] According to an embodiment of the present invention, the communication ability of a user with low vision can be improved by using complex emotion recognition technology.
[20] The present invention relates to a device for people with low vision, based on complex emotion recognition technology, for improving communication ability. The present invention can aid vision by allowing people with low vision to read characters, objects, and facial expressions that are difficult for them to distinguish. In addition, the present invention can guide a user with low vision to understand the surrounding situation, be aware of risks, and carry out everyday activities such as shopping. The present invention can also guide the user to communicate emotions normally with the other party.
Brief Description of the Drawings
[21] FIG. 1 is a schematic diagram showing the recognition device of the present invention.
[22] FIG. 2 is a schematic diagram showing an image photographed by the photographing unit.
[23] FIG. 3 is a flow chart showing the recognition method of the present invention.
[24] FIG. 4 is a diagram showing a computing device according to an embodiment of the present invention.
Modes for Carrying Out the Invention
[25] Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those with ordinary knowledge in the technical field to which the present invention belongs can easily implement them. However, the present invention may be implemented in several different forms and is not limited to the embodiments described herein. In the drawings, portions irrelevant to the description are omitted in order to clearly describe the present invention, and similar reference numerals are attached to similar portions throughout the specification.
[26] In this specification, redundant descriptions of the same component are omitted.
[27] In addition, in this specification, when an element is referred to as being 'connected' or 'coupled' to another element, it may be directly connected or coupled to that other element, but it should be understood that another element may exist in between. On the other hand, when an element is referred to as being 'directly connected' or 'directly coupled' to another element, it should be understood that no other element exists in between.
[28] In addition, the terms used in this specification are only used to describe specific embodiments and are not intended to limit the present invention.
[29] In addition, in this specification, a singular expression may include a plural expression unless the context clearly indicates otherwise.
[30] In addition, in this specification, terms such as 'include' or 'have' are only intended to designate the existence of the features, numbers, steps, operations, components, parts, or combinations thereof described in the specification, and it should be understood that they do not preclude in advance the existence or possible addition of one or more other features, numbers, steps, operations, components, parts, or combinations thereof.
[31] In addition, in this specification, the term 'and/or' includes a combination of a plurality of listed items or any one of a plurality of listed items.
[34] 감정 인식 기술이란인간의감정을측정하여 이를분석함으로써 제품개발이나 환경 설계에 적용하여 인간의삶의 질적 향상을도모하는기술이다.이는개인의 경험을통해 얻어지는외부물리적자극에 대한쾌적함,불쾌함,안락함,불편함 등의복합적인감정을과학적으로측정 및분석하여 이를공학적으로적용시켜 제품이나환경을변경시키는기술분야에속한다. [34] Emotion recognition technology is a technology that aims to improve the quality of human life by measuring human emotions and analyzing them, applying them to product development or environmental design. This is the comfort of external physical stimulation obtained through personal experiences, It belongs to the field of technology that changes the product or environment by scientifically measuring and analyzing complex emotions such as discomfort, comfort, and discomfort, and applying them engineeringly.
[35] 이러한감정 인식기술은인간의특성을파악하려는생체측정기술,인간의 오감센서 및감정처리 기술,감정디자인기술,마이크로가공기술,및사용성 평가나가상현실기술등의분야로나눌수있다.감정 인식 기술은인간의 생체적 및심리적 적합성을고려한전자제품및소프트웨어 인터페이스개발에 이용되고있다. [35] These emotion recognition technologies can be divided into fields such as biometric technology to understand human characteristics, human five sense sensors and emotion processing technology, emotion design technology, micro processing technology, and usability evaluation and virtual reality technology. Recognition technology is being used to develop electronic products and software interfaces that consider human biometric and psychological suitability.
[36] 또한,감정 인식 기술은사용자의감정을정확히 인식하여 이에 관련된 [36] In addition, emotion recognition technology accurately recognizes the emotions of users and
서비스를제공하는데이용되고있다.예를들어감정 인식 기술은오락분야, 교육분야,및의료분야등에 있어서사용자의감정을이용하여사용자에게감정 기반서비스를제공할수있고,서비스이용시에사용자의즉각적인반응을 확인하여그반응에 따른피드백을제공함으로써서비스의 질적 향상을도모할 수있다. For example, emotion recognition technology can provide emotion-based services to users by using the emotions of users in entertainment, education, and medical fields, and respond immediately to users when using the service. The quality of service can be improved by checking and providing feedback according to the response.
[37] 이러한감정 인식기술에서 ,사용자의 생체반응을측정하는방법으로는 [37] In this emotion recognition technology, as a method of measuring a user's biological response
자율신경겨 | (ANS, Autonomic Nervous System)즉정방법 ,중주신경겨 | (CNS, Central Nervous System)즉정을위한뇌파즉정방법 ,또는얼굴영상 (Behavior Response) 촬영방법등이 있다.또한,생체반응측정의 정확도를높이기 위하여,하나의 생체반응만을이용하여사용자의감정을추출하는것보다,다수의 생체반응을 통합하여분석하는멀티모달 (Multi-modal)감정 인식기술이 있다.즉,초기의 감정 인식기술은자율신경계측정,중추신경계측정,얼굴영상측정을통한 단편적인접근방법을이용하였으나,이후복수의 생체반응을통합하여 분석하는멀티모달시스템으로발전하고있다. Autonomic nerve bran | (ANS, Autonomic Nervous System) Immediate method, quartet nerve freckles | (CNS, Central Nervous System) There are EEG methods for immediate improvisation, or methods for photographing a face image (Behavior Response). In addition, in order to increase the accuracy of measuring the biological response, the user's emotions are extracted using only one biological response. Rather than that, there is a multi-modal emotion recognition technology that integrates and analyzes a number of biological responses. In other words, the initial emotion recognition technology is a fragmentary approach through autonomic nervous system measurement, central nervous system measurement, and facial image measurement. Was used, but since then, it has developed into a multimodal system that integrates and analyzes multiple biological reactions.
[38] However, the conventional emotion recognition technology described above only improves the accuracy of the measured biological-response values themselves and fails to reflect differences in individual emotion perception and in the surrounding environment. That is, conventional technology establishes a standardized, statistically derived emotion recognition rule base applicable to everyone and applies this standardized rule base to every individual.
[39] However, a person's biological responses and the resulting emotions do not appear uniformly; they differ from individual to individual. Therefore, when biological responses and the corresponding emotions are determined according to a rule base standardized for all users, differences in individual emotion perception cannot be discriminated accurately.
[40] The recognition apparatus and recognition method of the present invention can be used to accurately grasp the emotion of the other party whom the user is facing.
[41]
[42] Fig. 1 is a schematic diagram showing the recognition apparatus of the present invention. Fig. 2 is a schematic diagram showing an image captured by the photographing unit 110.
[43] The recognition apparatus shown in Fig. 1 may include a photographing unit 110, an emotion recognition unit 130, and a display unit 150.
[44] The photographing unit 110 may photograph the other party facing the user. For example, the photographing unit 110 may photograph the other party's face, arms, legs, and body.
[45] The other party's face photographed by the photographing unit 110 may be used to analyze and recognize the other party's facial expression. The other party's arms, legs, and body photographed by the photographing unit 110 may be used to analyze and recognize the other party's gestures. The photographing unit 110 may include any of various types of cameras that generate video images.
[46] The emotion recognition unit 130 may recognize the other party's appearance emotion through analysis of the other-party image captured by the photographing unit 110.
[47] The other-party image may refer to an image, among those captured by the photographing unit 110, that contains the other party. The emotion recognition unit 130 may extract the other party contained in the other-party image, apply various expression recognition and gesture recognition techniques to the extracted information, and thereby identify or recognize at least one of the other party's facial expression and gesture.
[48] The emotion recognition unit 130 may apply an emotion recognition algorithm to the identified facial expression or gesture of the other party and recognize the other party's emotion. The emotion recognition algorithm may be generated based on deep learning. The appearance emotion, which corresponds to the other party's outward expression of emotion, may be recognized from the other party's outward appearance, such as facial expressions or gestures.
[49] The appearance emotion may be difficult for a low-vision user to perceive. The appearance emotion recognized by the emotion recognition unit 130 may be provided to the user by the display unit 150.
[50] The display unit 150 may indicate the other party's appearance emotion by voice or vibration. The user can sense the voice or vibration of the display unit 150 by carrying or wearing it. The photographing unit 110 and the emotion recognition unit 130 may also be formed so that the user can carry or wear them.
[51] For example, the display unit 150 may include audio output means, such as a speaker, that announces the other party's emotion with a phrase such as "the other party is in a joyful state." As another example, the display unit 150 may include vibration means that generates a vibration signal agreed with the user; the vibration means may indicate that the other party's emotion is joy through a vibration that follows a set routine.
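As a dependency-free illustration of the display behavior described in [50]-[51], the following sketch maps an emotion kind and intensity level to a spoken announcement or to a vibration routine. The emotion labels, the intensity scale, and the pulse patterns are illustrative assumptions, not values specified in this disclosure.

```python
# Minimal sketch of a display unit that announces an emotion by voice text
# or by a vibration routine agreed with the user. Labels and patterns are
# illustrative assumptions.

VIBRATION_PATTERNS = {
    # emotion kind -> list of (on_ms, off_ms) pulses
    "joy":     [(100, 100), (100, 100)],
    "sadness": [(400, 200)],
    "anger":   [(100, 50), (100, 50), (100, 50)],
}

def announce(kind: str, intensity: int) -> str:
    """Build the spoken sentence for a speaker-based display unit."""
    return f"The other party is in a {kind} state (level {intensity})."

def vibrate(kind: str) -> list[tuple[int, int]]:
    """Return the pulse routine for a vibration-based display unit."""
    return VIBRATION_PATTERNS.get(kind, [(200, 200)])

# Example: display "joy, level 2" by voice and by vibration.
print(announce("joy", 2))
print(vibrate("joy"))
```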
[52] While hearing the other party's voice directly, the user can be provided with the appearance emotion through the display unit 150. By sensing the appearance emotion, the user can accurately grasp the other party's emotion. The user can also perform emotion learning by matching the appearance emotion provided by the display unit 150 to the other party's voice that the user is hearing; through this, the user can learn what the other party's voice sounds like when the other party feels a particular emotion.
[53] Meanwhile, the appearance emotion shown through facial expressions or gestures may vary with the surrounding environment. For example, the expression or gesture with which the other party conveys a particular emotion in a library, where quietness is required, may differ from that in a noisy market. For this reason, it may be difficult to accurately recognize the other party's emotion from the appearance emotion alone.
[54] Various means may be provided to correct the appearance emotion so that an appearance emotion that follows the other party's actual emotion as closely as possible is derived.
[55] For example, the recognition apparatus of the present invention may be provided with a collection unit 120 that collects the other party's voice.
[56] The emotion recognition unit 130 may analyze the voice using a voice emotion recognition model, for example one based on deep learning, or a voice characteristic analysis algorithm, and may recognize the other party's sound emotion. The sound emotion may include emotions that can be recognized or sensed through the voice.
[57] The emotion recognition unit 130 may correct the appearance emotion using the recognized sound emotion. The display unit 150 may display the appearance emotion corrected using the sound emotion.
[58] Correction of the appearance emotion may mean adjusting the intensity of the same kind of emotion.
[59] To indicate the intensity of an emotion, the display unit 150 may be provided with a plurality of display information items that distinguish the strength of the same kind of appearance emotion. For example, even for the emotion of joy, it is advantageous to include intensity, such as slight joy, normal joy, and great joy. The emotion recognition unit 130 may recognize the kind of appearance emotion through analysis of the other-party image and may determine the strength of the recognized kind of appearance emotion according to the loudness of the voice. The display unit 150 may display specific display information that indicates both the kind of appearance emotion and the result of the strength determination. The other party's voice may grow louder as his or her emotion becomes more intense during the conversation.
[60] The emotion recognition unit 130 may determine the kind of the other party's emotion, such as joy, anger, or sadness, from the other-party image, and may determine the intensity of the determined kind of emotion using the loudness of the voice.
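The following sketch illustrates the division of labor described in [59]-[60]: the kind of emotion comes from image analysis, while its strength is graded from the measured voice level. The decibel thresholds and the three-step scale are illustrative assumptions.

```python
# Sketch: the emotion *kind* comes from the image, while its *strength* is
# graded from the measured voice level. Thresholds and the three-step scale
# (slight/normal/very) are illustrative assumptions.

def grade_intensity(voice_level_db: float) -> int:
    """Map voice loudness to a discrete strength level (1=slight, 2=normal, 3=very)."""
    if voice_level_db < 55.0:
        return 1
    if voice_level_db < 70.0:
        return 2
    return 3

def label(kind_from_image: str, voice_level_db: float) -> str:
    names = {1: "slightly", 2: "normally", 3: "very"}
    return f"{names[grade_intensity(voice_level_db)]} {kind_from_image}"

print(label("joyful", 62.0))  # -> "normally joyful"
print(label("joyful", 78.0))  # -> "very joyful"
```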
[61] Meanwhile, an emotion identified from a source obtained through other means, such as the collection unit 120, may conflict with the appearance emotion. In this case, changing the kind of the appearance emotion according to the emotion recognized from the other source may also constitute correction of the appearance emotion.
[62] When the appearance emotion and the sound emotion differ from each other and the intensity of the sound emotion satisfies a set value, the emotion recognition unit 130 may modify the appearance emotion recognized through analysis of the other-party image. For example, the appearance emotion may be recognized as 'joy' while the sound emotion is recognized as 'sadness'. In this case, if the intensity of the sound emotion, for example the loudness, decibel level, or volume of the voice, is equal to or greater than the set value, the emotion recognition unit 130 may modify the appearance emotion from 'joy' to 'sadness'. The display unit 150 may display the appearance emotion modified by the emotion recognition unit 130.
[63] When the appearance emotion and the sound emotion are the same, or when the intensity of the sound emotion does not satisfy the set value, the emotion recognition unit 130 may provide the previously identified appearance emotion to the display unit 150 as it is. For example, if the intensity of the sound emotion is below the set value, the emotion recognition unit 130 may ignore the sound emotion and provide the previously recognized appearance emotion, 'joy', to the display unit 150 unchanged.
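A minimal sketch of the correction rule in [62]-[63] follows: when the sound emotion disagrees with the appearance emotion and its strength reaches the set value, the appearance emotion is replaced; otherwise it is kept. The 70 dB set value is an illustrative assumption.

```python
# Sketch of the override rule: a conflicting and sufficiently strong sound
# emotion replaces the appearance emotion; otherwise the appearance emotion
# is kept. The set value of 70 dB is an illustrative assumption.

SOUND_STRENGTH_THRESHOLD_DB = 70.0

def correct_with_sound(appearance_kind: str,
                       sound_kind: str,
                       sound_level_db: float) -> str:
    if sound_kind != appearance_kind and sound_level_db >= SOUND_STRENGTH_THRESHOLD_DB:
        return sound_kind          # e.g. "joy" corrected to "sadness"
    return appearance_kind         # same emotion or weak sound: keep as recognized

print(correct_with_sound("joy", "sadness", 75.0))  # -> "sadness"
print(correct_with_sound("joy", "sadness", 60.0))  # -> "joy"
```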
[64] An object recognition unit 170 may be provided to guide a low-vision user.
[65] The photographing unit 110 may photograph surrounding objects. The object recognition unit 170 may recognize the surrounding objects photographed by the photographing unit 110. The display unit 150 may indicate object information corresponding to the recognition result by voice or vibration. Even if the user cannot visually perceive surrounding objects, the user can perceive them audibly or tactilely with the help of the recognition apparatus of the present invention.
[66] Meanwhile, in order to accurately grasp the other party's emotion, the recognition apparatus may make use of the surrounding objects.
[67] For example, the emotion recognition unit 130 may identify persons other than the other party among the recognized objects. The emotion recognition unit 130 may analyze the number of such persons and, if that number satisfies a set count, may correct the intensity of the appearance emotion upward.
[68] The other party may restrain outward expressions of emotion when other people are nearby. For example, even in a very joyful situation, the other party may express joy calmly out of consideration for the people around. To reflect such an environment, the emotion recognition unit 130 may determine the number of nearby persons other than the other party and, if that number satisfies the set count, may correct the intensity of the currently identified appearance emotion upward. Suppose the other party's emotion is recognized as 'normal joy' through analysis of the other party's expression; if two other persons, corresponding to the set count, are present nearby, the emotion recognition unit 130 may raise the intensity by a set number of steps, so that the other party's appearance emotion is corrected to 'great joy' and displayed through the display unit 150.
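The bystander rule in [67]-[68] can be sketched as follows; the set count of two persons, the one-step boost, and the seven-level intensity cap are illustrative assumptions.

```python
# Sketch of the bystander-count rule: enough nearby persons other than the
# other party raise the recognized intensity by a set number of steps.

SET_PERSON_COUNT = 2
BOOST_STEPS = 1
MAX_LEVEL = 7

def boost_for_bystanders(intensity: int, bystander_count: int) -> int:
    if bystander_count >= SET_PERSON_COUNT:
        return min(intensity + BOOST_STEPS, MAX_LEVEL)
    return intensity

# "normal joy" (level 2) with two bystanders is reported one step higher (level 3).
print(boost_for_bystanders(2, bystander_count=2))  # -> 3
```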
[69] Meanwhile, even if the number of nearby persons satisfies the set count, the other party's emotional expression may differ depending on whether those persons are moving.
[70] If the nearby persons are moving relative to the other party, it can be determined that the other party is located, for example, on a street where passers-by are present. In this case, the passers-by are unlikely to react to the other party's emotional expression, and the other party, aware of this, can express his or her emotions frankly, much as in an ordinary situation.
[71] If the nearby persons are stationary around the other party, they may react to the other party's emotional expression. For example, when several people are waiting at a crosswalk, another person may react to an intense emotional expression by the other party. Alternatively, a person standing still near the other party may be an acquaintance of the other party, and in this case the acquaintance may react to the other party's emotional expression. The other party, aware of this, may express his or her emotions more calmly than in an ordinary situation. In this case, the outwardly expressed emotion may be weaker in intensity than the original emotion, so the other party's actual emotion may be stronger than the expressed one.
[72] Whether a person other than the other party, among the surrounding objects photographed by the photographing unit 110, is moving can be analyzed.
[73] If the nearby persons are determined to be moving, the emotion recognition unit 130 may provide the appearance emotion to the display unit 150 as it is.
[74] If the nearby persons are determined to be stationary, the emotion recognition unit 130 may correct the intensity of the appearance emotion upward and provide the display unit 150 with the intensity-corrected appearance emotion instead of the original one.
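The movement rule in [69]-[74] reduces to a simple branch, sketched below; the one-step boost and the level cap are illustrative assumptions.

```python
# Sketch: moving bystanders (e.g. passers-by on a street) leave the appearance
# emotion unchanged; stationary bystanders cause the intensity to be raised.

def correct_for_bystander_motion(intensity: int, bystanders_moving: bool) -> int:
    if bystanders_moving:
        return intensity              # passers-by: emotion expressed freely
    return min(intensity + 1, 7)      # stationary bystanders: expression restrained

print(correct_for_bystander_motion(2, bystanders_moving=True))   # -> 2
print(correct_for_bystander_motion(2, bystanders_moving=False))  # -> 3
```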
[75] The recognition apparatus of the present invention may be provided with a character recognition unit 180 that recognizes text photographed by the photographing unit 110. The emotion recognition unit 130 may analyze the content of the text contained in the image captured by the photographing unit 110 and may correct the appearance emotion according to the result of analyzing what the text means. The display unit 150 may display the appearance emotion corrected according to the content analysis result.
[76] The other party may express his or her current emotion calmly regardless of whether other persons are present nearby. In this case, the other party's actual emotion may be stronger than the outwardly expressed one. Through analysis of text, the recognition apparatus of the present invention can correct the appearance emotion to an intensity higher than that of the emotion the other party has expressed.
[77] The emotion recognition unit 130 may analyze the various pieces of text contained in the image and grasp the environment in which the other party is placed. That environment may include facility information corresponding to the other party's current location, and content information such as text from books, letters, text messages, menu boards, product labels, or screens that affects the other party's emotion.
[78] For example, analysis of the content of a text message on the smartphone the other party is looking at may reveal 'You have been accepted to company XXX.' In this case, even if the other party's appearance information currently indicates 'slight joy', the emotion recognition unit 130 may correct the intensity upward and modify the appearance information to 'great joy'. The display unit 150 may display the other party's appearance emotion as 'great joy', and the user can deal with the other party while accurately understanding the other party's actual emotion.
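A minimal sketch of the text-based correction in [75]-[78] follows: recognized text near the other party is scanned for content implying a stronger emotion, and the intensity is raised. The keyword list and the two-step boost are illustrative assumptions.

```python
# Sketch: recognized text (a message, sign, label, ...) that implies a stronger
# emotion raises the intensity of the appearance emotion.

POSITIVE_TRIGGERS = ("accepted", "passed", "congratulations")

def correct_with_text(kind: str, intensity: int, recognized_text: str) -> tuple[str, int]:
    text = recognized_text.lower()
    if kind == "joy" and any(word in text for word in POSITIVE_TRIGGERS):
        return kind, min(intensity + 2, 7)   # e.g. "slight joy" -> "great joy"
    return kind, intensity

print(correct_with_text("joy", 1, "You have been accepted to company XXX"))  # -> ('joy', 3)
```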
[79] The recognition apparatus may be provided with an object recognition unit 170, a character recognition unit 180, and a selection unit 190.
[80] As described above, the object recognition unit 170 may recognize surrounding objects photographed by the photographing unit 110.
[81] The character recognition unit 180 may recognize text photographed by the photographing unit 110.
[82] The selection unit 190 may select, according to the user's choice, which of the appearance information, the object recognition result, and the text recognition result is to be presented through the display unit 150. By means of the selection unit 190, the user can use the object recognition unit 170 for recognizing surrounding objects, or the character recognition unit 180 for recognizing surrounding text. The display unit 150 may present the target selected by the selection unit 190.
[83] When the appearance information is selected by the selection unit 190, the emotion recognition unit 130 may recognize the appearance emotion.
[84] When the appearance information is selected by the selection unit 190, at least one of the object recognition unit 170 and the character recognition unit 180 may operate together with the emotion recognition unit 130 to recognize objects or text.
[85] When object information corresponding to the object recognition result is received from the object recognition unit 170, the emotion recognition unit 130 may modify the appearance emotion using that object information.
[86] When content information corresponding to the text recognition result is received from the character recognition unit 180, the emotion recognition unit 130 may modify the appearance emotion using that content information.
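The interplay of the selection unit, the recognizers, and the display described in [79]-[86] can be sketched as a simple dispatcher; the function names and the mode strings are illustrative assumptions, and the recognizers are injected as callables.

```python
# Sketch of the selection-unit dispatch: the user picks what the display unit
# should present, and when appearance information is selected the object and
# text recognizers feed their results back as context for emotion correction.

def run_device(mode: str, image, recognize_emotion, recognize_objects, recognize_text):
    """mode is one of 'emotion', 'object', 'text' (the user's selection)."""
    if mode == "object":
        return recognize_objects(image)
    if mode == "text":
        return recognize_text(image)
    # 'emotion': appearance emotion, plus object and text context for correction.
    emotion = recognize_emotion(image)
    context = {"objects": recognize_objects(image), "text": recognize_text(image)}
    return emotion, context
```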
[87] Meanwhile, while dealing with the other party, it may be difficult for the user to point the photographing unit 110 elsewhere in order to correct the appearance emotion, because the other party might misunderstand this as the user ignoring him or her.
[88] A storage unit 140 may be provided as a means of obtaining the other party's appearance emotion in advance, so that the user can appear to concentrate solely on the other party when facing him or her.
[89] The storage unit 140 may store images captured by the photographing unit 110 for a set period of time.
[90] When the other-party image is provided to the emotion recognition unit 130 at the current point in time, the object recognition unit 170 or the character recognition unit 180 may be provided with the captured images stored during the set period before the current point in time.
[91] The object recognition unit 170 may recognize surrounding objects contained in the stored captured images.
[92] The character recognition unit 180 may recognize text contained in the stored captured images.
[93] The emotion recognition unit 130 may receive object information corresponding to the object recognition result from the object recognition unit 170, or content information corresponding to the text recognition result from the character recognition unit 180.
[94] The emotion recognition unit 130 may modify the previously recognized appearance emotion using the object information or the content information. The display unit 150 may display the appearance emotion modified by the emotion recognition unit 130.
[95] According to this embodiment, when the user comes face to face with the other party, analysis may be performed on the captured images taken before the face-to-face moment and stored in the storage unit 140. Through this analysis, surrounding objects or text are recognized, and the emotion recognition unit 130 may modify the appearance emotion of the other party whom the user is facing by using those surrounding objects or that text.
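A possible sketch of the storage-based approach in [88]-[95]: frames captured during a set time before the face-to-face moment are held in a small buffer, and when the other party is detected those buffered frames, rather than the live view, are analyzed for objects and text. The buffer length and function names are illustrative assumptions.

```python
# Sketch of a pre-stored frame buffer used for context-based correction, so
# the camera never has to be pointed away from the other party.

from collections import deque

class FrameBuffer:
    def __init__(self, max_frames: int = 150):   # e.g. about 5 s at 30 fps
        self.frames = deque(maxlen=max_frames)

    def add(self, frame):
        self.frames.append(frame)

    def context_for_correction(self, recognize_objects, recognize_text):
        """Analyze the pre-stored frames instead of the live other-party image."""
        objects, texts = [], []
        for frame in self.frames:
            objects.extend(recognize_objects(frame))
            texts.extend(recognize_text(frame))
        return {"objects": objects, "texts": texts}
```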
[96] The recognition apparatus of the present invention may be manufactured in the form of eyeglasses or a stand-mounted device and worn by the user, or it may be manufactured as a mobile device that follows or is worn by the user. The photographing unit 110 provided on the eyeglasses or the like may be arranged to capture the area in front of the user. According to this embodiment, even while the user who has begun to face the other party keeps looking at the other party, the other party's appearance emotion can be corrected accurately using the previously stored surrounding environment. As a result, an appearance emotion that follows the other party's actual emotion can be provided to the user, and a user who accurately grasps the other party's actual emotion can respond to it appropriately.
[97] Fig. 3 is a flow chart showing the recognition method of the present invention.
[98] The recognition method of Fig. 3 may be performed by the recognition apparatus of Fig. 1.
[99] The photographing unit 110 may photograph the surroundings desired by the user (S10).
[100] The storage unit 140 may store the captured images (S20).
[101] The emotion recognition unit 130 may analyze the captured images and extract the other party conversing with the user (S30).
[102] When the other party is extracted from the captured images, the emotion recognition unit 130 may recognize and correct the other party's appearance emotion (S40).
[103] The display unit 150 may display the appearance emotion corrected by the emotion recognition unit 130 (S50).
[104] The step of recognizing and correcting the appearance emotion (S40) may be subdivided as follows.
[105] The selection unit 190 may select one of emotion recognition, object recognition, and character recognition according to the user's choice (S41).
[106] When object recognition is selected (S41), the object recognition unit 170 may recognize surrounding objects photographed by the photographing unit 110 (S43). The recognition of surrounding objects may be performed in real time. The display unit 150 may present information on the surrounding objects recognized by the object recognition unit 170 by voice or vibration.
[107] When character recognition is selected (S41), the character recognition unit 180 may recognize text photographed by the photographing unit 110 (S44). The text recognition may be performed in real time. The display unit 150 may present the content information of the text recognized by the character recognition unit 180 by voice or vibration.
[108] When emotion recognition is selected (S41), the emotion recognition unit 130 may recognize the appearance emotion using the other party's facial expression or gesture appearing in the captured images (S42).
[109] The emotion recognition unit 130 may recognize surrounding objects or text through analysis of the captured images previously stored in the storage unit 140. The object or text recognition performed during emotion recognition may also be carried out in real time; in addition, it is distinguished by the fact that it targets the captured images previously stored in the storage unit 140.
[110] The emotion recognition unit 130 may correct the appearance emotion using the object information corresponding to the object recognition result or the content information corresponding to the text recognition result.
[111] Through analysis of the object information or the content information, the emotion recognition unit 130 may determine a quietness level indicating how quiet the environment is required to be, and may correct the intensity of the appearance emotion in proportion to the quietness level.
[112] The quietness level of the location may be determined from the text contained in the captured images.
[113] For example, a quietness level may be set per number of nearby persons contained in the captured images, such as quietness level 1 for one nearby person and quietness level 2 for two nearby persons. If the other party's current emotion is joy level 2 and the quietness level is 1, the emotion recognition unit 130 may correct the appearance information to joy level 3, one step higher. If the other party's current emotion is joy level 2 and the quietness level is 2, the emotion recognition unit 130 may correct the appearance information to joy level 4, two steps higher.
[114] If the current location identified from the text in the captured images is a library, a quietness level of 5 may be assigned; if it is a market, a quietness level of 1 may be assigned. If the other party's current emotion is joy level 2 and the quietness level is 5, the emotion recognition unit 130 may correct the appearance information to joy level 7, five steps higher. If the other party's current emotion is joy level 2 and the quietness level is 1, the emotion recognition unit 130 may correct the appearance information to joy level 3, one step higher.
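The quietness rule in [111]-[114] can be sketched directly from the worked examples above; the place-to-quietness mapping reproduces the library and market values, and the seven-level cap is an illustrative assumption.

```python
# Sketch of quietness-proportional intensity correction, reproducing the
# worked examples above (joy level 2 + quietness 5 at a library -> joy level 7).

PLACE_QUIETNESS = {"library": 5, "market": 1}   # values from the examples above

def quietness_from_people(bystander_count: int) -> int:
    return bystander_count            # 1 person -> 1, 2 persons -> 2, ...

def quietness_from_place(place: str) -> int:
    return PLACE_QUIETNESS.get(place.lower(), 0)

def correct_for_quietness(intensity: int, quietness: int, max_level: int = 7) -> int:
    return min(intensity + quietness, max_level)

print(correct_for_quietness(2, quietness_from_place("library")))  # -> 7
print(correct_for_quietness(2, quietness_from_place("market")))   # -> 3
print(correct_for_quietness(2, quietness_from_people(2)))         # -> 4
```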
[115] The present invention can recognize five basic emotions (e.g., joy, surprise, sadness, anger, fear). The present invention can also recognize 17 complex emotions (e.g., including complex emotions according to the three-dimensional concept of positive/negative, arousal/non-arousal, and dominance/submission, or including intensity levels).
[116] The present invention has a communication-language presentation function. Specifically, the present invention can recognize and express up to 75 emotions (e.g., lively, admiring, disappointed, fearful, greatly surprised, despairing, bewildered, anxious, annoyed, nervous, wanting to escape, amused, embarrassed, and so on).
[117] With regard to emotion recognition, the present invention may use a method of recognizing facial expressions effectively by applying deep-learning artificial intelligence techniques. By using a high-dimensional combination method on complex expressions, the present invention can recognize a variety of complex emotions that were previously difficult to recognize.
[118] With regard to object recognition or emotion recognition, in order to compensate for the recognition rate, which drops markedly as the number of candidate objects grows, the present invention first recognizes the environment and then raises the recognition rate by narrowing the candidates to objects that can be found in the identified environment.
[119] In expression recognition, the present invention can design a recognition technique that feeds the motion flow of an image sequence into a dictionary learning method and pre-learns a latent motion dictionary per expression. Compared with conventional expression-feature extraction methods such as PCA (Principal Component Analysis), ICA (Independent Component Analysis), ASM (Active Shape Model), and AAM (Active Appearance Model), features can be found that are robust to changes in lighting conditions and differences in face color.
[120] The present invention may use an object recognition algorithm. Specifically, an object recognition and voice guidance algorithm covering about 100 objects that reveal indoor and outdoor atmospheres may be used.
[121] The present invention may use a character recognition algorithm. Specifically, an algorithm capable of recognizing characters in 102 languages, including Korean, Chinese, Japanese, and English, may be used.
[122] The present invention may use a real-time basic-emotion recognition algorithm. Specifically, an algorithm capable of real-time recognition of the five basic emotions (e.g., joy, surprise, sadness, anger, fear) may be used.
[123] The present invention may use algorithms for recognizing 17 complex expressions and emotions. Specifically, action unit (AU) detection and an LSTM-based complex emotion recognition function may be used.
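As one possible reading of the "AU detection + LSTM" combination named in [123], the following PyTorch sketch classifies a sequence of per-frame action-unit activation vectors into one of 17 complex emotions. The AU count, hidden size, and the assumption that an AU detector supplies the per-frame vectors are illustrative; this is not the disclosed implementation.

```python
# Minimal sketch: an LSTM over per-frame AU activation vectors, followed by a
# linear head producing logits over 17 complex emotions.
import torch
import torch.nn as nn

class AUSequenceEmotionClassifier(nn.Module):
    def __init__(self, num_action_units: int = 17, hidden_size: int = 64,
                 num_emotions: int = 17):
        super().__init__()
        self.lstm = nn.LSTM(num_action_units, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_emotions)

    def forward(self, au_sequence: torch.Tensor) -> torch.Tensor:
        # au_sequence: (batch, frames, num_action_units) AU activations per frame
        _, (h_n, _) = self.lstm(au_sequence)
        return self.head(h_n[-1])          # logits over the complex emotions

# Example: 8 clips, 30 frames each, 17 AU activations per frame.
logits = AUSequenceEmotionClassifier()(torch.rand(8, 30, 17))
print(logits.shape)  # torch.Size([8, 17])
```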
[124] With regard to object recognition, the present invention may perform data collection and database construction, data training and model generation, and object recognition using YOLO.
[125] With regard to character recognition, the present invention may perform data collection and database construction, data training and model generation, and character recognition using tesseract-ocr.
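The tesseract-ocr step named in [125] is commonly driven through the pytesseract wrapper; a sketch follows, assuming tesseract and the relevant language data (for example Korean and English) are installed locally. The file name in the example is hypothetical.

```python
# Sketch of the character-recognition step using the tesseract-ocr engine via
# the pytesseract wrapper; language packs must be installed separately.
from PIL import Image
import pytesseract

def read_text(image_path: str, langs: str = "kor+eng") -> str:
    """Return the text recognized in the captured image."""
    return pytesseract.image_to_string(Image.open(image_path), lang=langs)

# Example (hypothetical file name): the text read here would feed the
# content-based emotion correction described earlier.
# print(read_text("captured_frame.png"))
```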
[126] With regard to a real-time emotion analysis system supporting expression recognition, the present invention may perform collection of expression data suitable for emotion measurement and database construction, real-time emotion analysis, and conversion into a communication language.
[127] With regard to implementing an integrated system, the present invention can implement an integrated system of object recognition, character recognition, emotion recognition, and communication-language utterance.
[128] With regard to a complex expression recognition algorithm based on feature-vector extraction and AU detection for expression changes corresponding to 23 emotion changes, the present invention may use a landmark-point and LSTM-based feature extraction and AU detection method for expression recognition, a method of recognizing 23 expressions based on AU detection and LSTM features, and a method of improving the complex expression recognition rate by optimizing recognition against an expression database.
[129] The present invention may use emotional-characteristic modeling based on complex expression recognition.
[130] The present invention can provide numerical indicators for emotion evaluation based on recognition of 23 expressions.
[131] The emotion apparatus or emotion method of the present invention can measure emotion as follows.
[132] The emotion recognition unit 130 may judge, recognize, or classify the other party's emotion using the facial expression extracted from the other-party image. The facial expression can serve as the primary basis for judging emotion; however, an expression usually disappears within a few seconds of appearing, and East Asian people often do not make pronounced facial expressions, which makes expressions difficult to apply to emotion judgment. Seventy-five emotions can be measured or classified using facial expressions. In the present invention, emotion may be evaluated by taking facial muscle movements into account; since facial muscle movements vary from person to person depending on muscle mass, habits, and the like, data adapted to the specific other party need to be collected and analyzed.
[133] The emotion recognition unit 130 may judge, recognize, or classify the other party's emotion using the eye expression extracted from the other-party image. The eye expression may be defined as a combination of pupil size, pupil movement, gaze direction, speed of eye movement, movement of the muscles around the eyes, degree of eyelid opening and closing, gaze duration, and the spacing and wrinkles of the brow. Using the eye expression, sentiments (mood, disposition, emotional atmosphere, and the higher-order complex emotions that arise as the emotions develop) can be identified. Accordingly, the eye expression can be used to measure emotions such as emotional stability and instability, passion and indifference, shyness, guilt, disgust, curiosity, and empathy.
[134] The emotion recognition unit 130 may judge, recognize, or classify the other party's emotion using the neck movement extracted from the other-party image. The neck movement may include the direction of the head, the angle between the head and the neck, the angle between the head and the back, the speed of neck movement, shoulder movement, and the degree and angle of shoulder slouching. Using neck movement, emotions such as depression and lethargy can be measured accurately.
[135] The emotion recognition unit 130 may judge, recognize, or classify the other party's emotion using gestures, such as posture, body motion, and behavior, extracted from the other-party image. By comprehensively judging the movements of the neck, arms, back, and legs, the emotion recognition unit 130 can measure physical openness, aggression, physical pain, excitement (elation), and goodwill.
[136] The emotion recognition unit 130 may judge, recognize, or classify the other party's emotion using the other party's voice. The other party's emotion can be classified by measuring and classifying the pitch of the voice, the length of the voice, the waveform of the voice, the intonation, the length of passages in which the voice grows louder or softer, and the intensity of the voice. When no expression data are available, the voice can serve as the main data for grasping emotion, and when it matches the expression it can increase the accuracy of expression recognition.
[137] The emotion recognition unit 130 may judge the other party's emotion using the vocabulary analyzed from the other party's voice. The emotion recognition unit 130 may judge, recognize, or classify the other party's emotion using specific emotion-related words among the vocabulary the other party utters.
[138] Meanwhile, a plurality of cues from which the other party's emotion can be grasped may be input through various input means such as the photographing unit and the collection unit. The emotion recognition unit 130 may process the cues (emotion analysis information) obtained through the multiple channels as follows.
[139] The emotion recognition unit 130 may extract, from the other-party image or the other party's voice, a plurality of cues from which the other party's emotion can be grasped. These cues may include the facial expression, eye expression, neck movement, gesture, voice, and vocabulary. For example, the emotion recognition unit may identify at least one of the other party's gesture, facial expression, neck movement, and eye expression through analysis of the other-party image, and may identify at least one of the other party's voice and uttered vocabulary through the collection unit.
[140] The emotion recognition unit 130 may recognize the other party's emotion using the data obtained earliest in time among the plurality of cues. Alternatively, the emotion recognition unit 130 may select the specific cues used to recognize the other party's emotion according to whether each cue satisfies a set strength or whether each cue persists for a set time.
[141] For example, when the facial expression, eye expression, neck movement, gesture, voice, and vocabulary related to the other party are obtained, the emotion recognition unit 130 may analyze the strength of each cue. Suppose the facial expression is at level 1, the eye expression at level 3, the neck movement at level 2, the gesture at level 5, the voice at level 2, and the vocabulary at level 0. If the set strength is level 3, the emotion recognition unit 130 may recognize the other party's emotion using only the two cues at level 3 or higher, namely the eye expression and the gesture.
[142] As another example, when the facial expression, eye expression, neck movement, gesture, voice, and vocabulary related to the other party are obtained, the emotion recognition unit 130 may analyze the duration of each cue. Suppose the facial expression lasts 3 seconds, the eye expression 4 seconds, the neck movement 1 second, the gesture 2 seconds, the voice 3 seconds, and the vocabulary 2 seconds. If the set time is 3 seconds, the emotion recognition unit 130 may recognize, judge, or classify the other party's emotion using only the cues lasting 3 seconds or longer, namely the facial expression, the eye expression, and the voice.
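The strength and duration filters in [140]-[142] can be sketched directly; the numbers below reproduce the worked examples in those paragraphs.

```python
# Sketch of cue selection: only cues meeting the set strength or the set
# duration are used for emotion recognition.

def select_by_strength(levels: dict[str, int], set_level: int = 3) -> list[str]:
    return [cue for cue, level in levels.items() if level >= set_level]

def select_by_duration(durations: dict[str, float], set_seconds: float = 3.0) -> list[str]:
    return [cue for cue, sec in durations.items() if sec >= set_seconds]

levels = {"expression": 1, "eyes": 3, "neck": 2, "gesture": 5, "voice": 2, "vocabulary": 0}
durations = {"expression": 3, "eyes": 4, "neck": 1, "gesture": 2, "voice": 3, "vocabulary": 2}

print(select_by_strength(levels))     # -> ['eyes', 'gesture']
print(select_by_duration(durations))  # -> ['expression', 'eyes', 'voice']
```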
[143] Meanwhile, a plurality of cues having the same strength and the same duration at the same time may be obtained through the photographing unit or the collection unit. In this case, the emotion recognition unit 130 may assign priorities for classifying the emotion in the order of gesture, voice, facial expression, vocabulary, and neck movement, or in the order of gesture, voice, facial expression, vocabulary, and eye expression. According to this embodiment, when various complex emotion cues are identified, each cue can be assigned a priority that allows the other party's emotion to be grasped accurately.
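The tie-break in [143] amounts to picking the highest-priority cue from those that tie; a sketch with the first of the two orderings follows.

```python
# Sketch of the priority tie-break for cues with equal strength and duration.

PRIORITY = ["gesture", "voice", "expression", "vocabulary", "neck"]

def pick_cue(tied_cues: list[str]) -> str:
    return min(tied_cues, key=PRIORITY.index)

print(pick_cue(["voice", "expression", "neck"]))  # -> 'voice'
```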
[144]
[145] Fig. 4 is a diagram showing a computing device according to an embodiment of the present invention. The computing device TN100 of Fig. 4 may be any of the devices (e.g., the recognition apparatus) described in this specification.
[146] In the embodiment of Fig. 4, the computing device TN100 may include at least one processor TN110, a transceiver TN120, and a memory TN130. The computing device TN100 may further include a storage device TN140, an input interface device TN150, an output interface device TN160, and the like. The components included in the computing device TN100 may be connected by a bus TN170 and communicate with one another.
[147] The processor TN110 may execute program commands stored in at least one of the memory TN130 and the storage device TN140. The processor TN110 may be a central processing unit (CPU), a graphics processing unit (GPU), or a dedicated processor on which the methods according to embodiments of the present invention are performed. The processor TN110 may be configured to implement the procedures, functions, and methods described in connection with the embodiments of the present invention, and may control each component of the computing device TN100.
[148] Each of the memory TN130 and the storage device TN140 may store various information related to the operation of the processor TN110. Each of the memory TN130 and the storage device TN140 may consist of at least one of a volatile storage medium and a non-volatile storage medium. For example, the memory TN130 may consist of at least one of read-only memory (ROM) and random-access memory (RAM).
[149] The transceiver TN120 may transmit or receive wired signals or wireless signals, and may be connected to a network to perform communication.
[150] Meanwhile, embodiments of the present invention are not implemented only through the apparatus and/or method described above; they may also be implemented through a program that realizes functions corresponding to the configuration of the embodiments, or through a recording medium on which such a program is recorded, and such implementations can easily be achieved by a person of ordinary skill in the art to which the present invention belongs from the description of the embodiments above.
[151] Although the embodiments of the present invention have been described in detail above, the scope of the rights is not limited thereto, and various modifications and improvements made by those of ordinary skill in the art using the basic concept of the present invention defined in the following claims also fall within the scope of the present invention.
The scope of the rights is not limited thereto, and various modifications and improvements of ordinary technicians using the basic concept of the present invention defined in the following claims also belong to the scope of the present invention.
Claims
[Claim 1] A recognition apparatus comprising:
a photographing unit that photographs the other party facing a user;
an emotion recognition unit that recognizes an appearance emotion of the other party through analysis of an other-party image photographed by the photographing unit; and
a display unit that displays the appearance emotion by voice or vibration.
[Claim 2] The recognition apparatus of claim 1, wherein a collection unit that collects the other party's voice is provided,
the emotion recognition unit recognizes a sound emotion of the other party through analysis of the voice and corrects the appearance emotion using the recognized sound emotion, and
the display unit displays the corrected appearance emotion.
[Claim 3] The recognition apparatus of claim 1, wherein a collection unit that collects the other party's voice is provided,
the display unit is provided with a plurality of display information items that distinguish the strength of emotion for the same kind of appearance emotion,
the emotion recognition unit recognizes the kind of the appearance emotion through analysis of the other-party image,
the emotion recognition unit determines the strength of the recognized kind of appearance emotion according to the loudness of the voice, and
the display unit displays specific display information indicating both the kind of the appearance emotion and the result of the strength determination.
[Claim 4] The recognition apparatus of claim 1, wherein the emotion recognition unit recognizes at least one of the other party's facial expression and gesture through analysis of the other-party image, and
the emotion recognition unit recognizes the appearance emotion using the facial expression or the gesture.
[Claim 5] The recognition apparatus of claim 1, wherein a collection unit that collects the other party's voice is provided,
the emotion recognition unit recognizes a sound emotion of the other party through analysis of the voice collected by the collection unit,
when the appearance emotion and the sound emotion differ from each other and the intensity of the sound emotion satisfies a set value, the emotion recognition unit modifies the appearance emotion recognized through analysis of the other-party image,
the display unit displays the appearance emotion modified by the emotion recognition unit, and
when the appearance emotion and the sound emotion are the same, or the intensity of the sound emotion does not satisfy the set value, the emotion recognition unit provides the previously identified appearance emotion to the display unit as it is.
[Claim 6] The recognition apparatus of claim 1, wherein the photographing unit photographs surrounding objects,
an object recognition unit that recognizes the objects photographed by the photographing unit is provided, and
the display unit indicates object information corresponding to the recognition result of the objects by voice or vibration.
[Claim 7] The recognition apparatus of claim 1, wherein an object recognition unit that recognizes surrounding objects photographed by the photographing unit is provided,
the emotion recognition unit analyzes the objects recognized by the object recognition unit and corrects the appearance emotion according to the analysis result, and
the display unit displays the appearance emotion corrected according to the analysis result of the objects.
[Claim 8] The recognition apparatus of claim 1, wherein
an object recognition unit configured to recognize surrounding objects photographed by the photographing unit is provided; the emotion recognition unit identifies, among the objects, persons other than the other party; and
the emotion recognition unit analyzes the number of the persons and, when the number of the persons satisfies a set count, corrects the intensity of the appearance emotion upward.
[Claim 9] The recognition apparatus of claim 1, wherein
an object recognition unit configured to recognize surrounding objects photographed by the photographing unit is provided; the emotion recognition unit identifies, among the objects, persons other than the other party;
the emotion recognition unit analyzes whether the persons are moving; the emotion recognition unit, when it determines that the persons are moving, provides the appearance emotion to the display unit as it is; and
the emotion recognition unit, when it determines that the persons have stopped, corrects the intensity of the appearance emotion upward and provides the intensity-corrected appearance emotion to the display unit in place of the original appearance emotion.
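Claims 8 and 9 both adjust intensity based on bystanders (persons other than the conversation partner). A minimal combined sketch is given below; the count threshold and boost factor are illustrative assumptions.

```python
# Hypothetical sketch of claims 8 and 9: raise the intensity of the appearance
# emotion when enough bystanders are present, or when bystanders have stopped
# moving; otherwise pass the emotion through unchanged. Thresholds and the boost
# factor are illustrative assumptions.

BYSTANDER_COUNT_THRESHOLD = 3
INTENSITY_BOOST = 1.5

def adjust_intensity(base_intensity: float,
                     bystander_count: int,
                     bystanders_moving: bool) -> float:
    boosted = min(1.0, base_intensity * INTENSITY_BOOST)
    if bystander_count >= BYSTANDER_COUNT_THRESHOLD:
        return boosted            # claim 8: enough onlookers, raise the intensity
    if not bystanders_moving:
        return boosted            # claim 9: onlookers have stopped, raise the intensity
    return base_intensity         # otherwise provide the emotion as it is

print(adjust_intensity(0.5, bystander_count=4, bystanders_moving=True))   # -> 0.75
print(adjust_intensity(0.5, bystander_count=1, bystanders_moving=False))  # -> 0.75
print(adjust_intensity(0.5, bystander_count=1, bystanders_moving=True))   # -> 0.5
```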
[Claim 10] The recognition apparatus of claim 1, wherein
a character recognition unit configured to recognize characters photographed by the photographing unit is provided; the emotion recognition unit analyzes the content of the characters and corrects the appearance emotion according to the result of the content analysis; and
the display unit displays the appearance emotion corrected according to the result of the content analysis.
[Claim 11] The recognition apparatus of claim 1, wherein
an object recognition unit, a character recognition unit, and a selection unit are provided;
the object recognition unit recognizes surrounding objects photographed by the photographing unit; the character recognition unit recognizes characters photographed by the photographing unit;
the selection unit selects, according to the user's selection, the target to be displayed through the display unit from among the appearance information, the recognition result of the objects, and the recognition result of the characters; and
the display unit displays the target selected by the selection unit.
[Claim 12] The recognition apparatus of claim 11, wherein
when the appearance information is selected by the selection unit, the emotion recognition unit recognizes the appearance emotion;
when the appearance information is selected by the selection unit, at least one of the object recognition unit and the character recognition unit operates together with the emotion recognition unit to recognize the objects or the characters;
the emotion recognition unit, when object information corresponding to the recognition result of the objects is received from the object recognition unit, modifies the appearance emotion using the object information; and
the emotion recognition unit, when content information of the characters corresponding to the recognition result of the characters is received from the character recognition unit, modifies the appearance emotion using the content information.
[Claim 13] The recognition apparatus of claim 1, wherein
a storage unit is provided in which the photographed images captured by the photographing unit are stored for a set time;
an object recognition unit or a character recognition unit is provided which, when the image of the other party is provided to the emotion recognition unit at the current point in time, is supplied with the photographed images stored during the set time before the current point in time;
the object recognition unit recognizes surrounding objects included in the photographed images; the character recognition unit recognizes characters included in the photographed images; the emotion recognition unit receives, from the object recognition unit, object information corresponding to the object recognition result, or receives, from the character recognition unit, content information corresponding to the character recognition result;
the emotion recognition unit modifies the previously recognized appearance emotion using the object information or the content information; and
the display unit displays the appearance emotion modified by the emotion recognition unit.
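The storage behaviour in claim 13 amounts to a rolling buffer of recent frames that the object and character recognition units can consult at the moment the partner image arrives. The buffer length below is an assumed value for the sketch.

```python
# Hypothetical sketch of claim 13: keep the photographed images from the last
# few seconds so that, when the partner image reaches the emotion recognition
# unit, the object/character recognition units can examine what was captured
# just before the current point in time. The window length is an assumption.

import time
from collections import deque

STORAGE_WINDOW_SECONDS = 5.0

class ImageStore:
    def __init__(self):
        self._frames = deque()  # (timestamp, frame) pairs

    def add(self, frame, timestamp=None):
        now = timestamp if timestamp is not None else time.time()
        self._frames.append((now, frame))
        # Drop frames older than the set time.
        while self._frames and now - self._frames[0][0] > STORAGE_WINDOW_SECONDS:
            self._frames.popleft()

    def recent_frames(self, now=None):
        now = now if now is not None else time.time()
        return [f for t, f in self._frames if now - t <= STORAGE_WINDOW_SECONDS]

store = ImageStore()
store.add("frame_a", timestamp=100.0)
store.add("frame_b", timestamp=103.0)
print(store.recent_frames(now=104.0))  # -> ['frame_a', 'frame_b']
```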
[Claim 14] The recognition apparatus of claim 1, wherein
the emotion recognition unit extracts, from the image of the other party or the voice of the other party, a plurality of elements from which the emotion of the other party can be identified, and
the emotion recognition unit recognizes the emotion of the other party using the data acquired earliest in time among the plurality of elements, or selects the specific element to be used for recognizing the emotion of the other party according to whether each element satisfies a set intensity or whether each element persists for a set time.
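One way to read claim 14 is that each extracted cue carries a timestamp, an intensity, and a duration, and the unit either takes the earliest cue or filters cues by intensity and persistence. The field names and thresholds in this sketch are assumptions.

```python
# Hypothetical sketch of claim 14: among several cues (elements) extracted from
# the partner image and voice, either use the one acquired first in time, or keep
# only the cues that reach a set intensity and persist for a set duration.

from dataclasses import dataclass

INTENSITY_THRESHOLD = 0.6
DURATION_THRESHOLD = 1.0   # seconds

@dataclass
class Cue:
    name: str           # e.g. "gesture", "voice", "expression"
    emotion: str
    intensity: float
    duration: float     # how long the cue persisted, in seconds
    acquired_at: float  # timestamp of first observation

def emotion_from_earliest(cues):
    return min(cues, key=lambda c: c.acquired_at).emotion

def qualifying_cues(cues):
    return [c for c in cues
            if c.intensity >= INTENSITY_THRESHOLD and c.duration >= DURATION_THRESHOLD]

cues = [
    Cue("expression", "joy", intensity=0.7, duration=2.0, acquired_at=10.0),
    Cue("voice", "anger", intensity=0.5, duration=0.4, acquired_at=9.2),
]
print(emotion_from_earliest(cues))              # -> "anger" (the voice cue came first)
print([c.name for c in qualifying_cues(cues)])  # -> ['expression']
```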
[Claim 15] The recognition apparatus of claim 1, wherein
a collection unit configured to collect the voice of the other party is provided; the emotion recognition unit identifies at least one of the gesture, facial expression, neck movement, and eye gaze of the other party through analysis of the image of the other party; the emotion recognition unit identifies at least one of the voice and the vocabulary of the other party through the collection unit; and
the emotion recognition unit assigns a priority for classifying the emotion in the order of the gesture, the voice, the facial expression, the vocabulary, and the neck movement, or assigns a priority for classifying the emotion in the order of the gesture, the voice, the facial expression, the vocabulary, and the eye gaze.
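The priority ordering in claim 15 can be sketched as a simple fall-through over the available cues; the cue-name strings below are illustrative labels, while the two orderings follow the claim.

```python
# Hypothetical sketch of claim 15: classify the emotion from the highest-priority
# cue that is actually available, using the ordering recited in the claim
# (gesture > voice > expression > vocabulary > neck movement or eye gaze).

PRIORITY_WITH_NECK = ["gesture", "voice", "expression", "vocabulary", "neck_movement"]
PRIORITY_WITH_GAZE = ["gesture", "voice", "expression", "vocabulary", "eye_gaze"]

def classify(cue_emotions: dict, priority=PRIORITY_WITH_NECK):
    """cue_emotions maps an available cue name to the emotion it suggests."""
    for cue in priority:
        if cue in cue_emotions:
            return cue_emotions[cue]
    return None  # no usable cue

observed = {"expression": "sadness", "vocabulary": "neutral"}
print(classify(observed))                      # -> "sadness"
print(classify(observed, PRIORITY_WITH_GAZE))  # -> "sadness"
```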
[Claim 16] A recognition method performed by a recognition apparatus, the method comprising:
photographing the surroundings;
storing the photographed images;
analyzing the photographed images and extracting the other party who is conversing with the user;
when the other party is extracted, recognizing and correcting the appearance emotion of the other party; and
displaying the corrected appearance emotion,
wherein recognizing and correcting the appearance emotion comprises:
recognizing the appearance emotion using the facial expression or gesture of the other party captured in the photographed images;
recognizing surrounding objects or recognizing characters through analysis of the previously stored photographed images; and
correcting the appearance emotion using object information corresponding to the recognition result of the objects or content information corresponding to the recognition result of the characters.
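For orientation only, the overall flow of the method in claim 16 can be laid out as a short pipeline. Every function name and body below is a placeholder stand-in for a recited step, not an implementation from the patent.

```python
# Hypothetical end-to-end sketch of the method steps in claim 16.

def photograph_surroundings():
    return ["frame_1", "frame_2"]           # photographing step

def extract_conversation_partner(frames):
    return "partner_region" if frames else None

def recognize_appearance_emotion(partner_region):
    return "joy"                            # placeholder image-based result

def recognize_context(stored_frames):
    # Object/character recognition over the previously stored images.
    return {"objects": ["meeting_room"], "text": ""}

def correct_with_context(emotion, context):
    # Placeholder correction rule keyed on a recognized object.
    return f"restrained {emotion}" if "meeting_room" in context["objects"] else emotion

def display(result):
    print("display:", result)

frames = photograph_surroundings()
stored_frames = list(frames)                # storing step
partner = extract_conversation_partner(frames)
if partner is not None:
    emotion = recognize_appearance_emotion(partner)
    corrected = correct_with_context(emotion, recognize_context(stored_frames))
    display(corrected)                      # -> display: restrained joy
```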
[Claim 17] The recognition method of claim 16, wherein
a degree of quietness, indicating how strongly a quiet environment is required, is determined through analysis of the object information or the content information, and
the intensity of the appearance emotion is corrected in proportion to the degree of quietness.
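As an illustration of claim 17, the degree of quietness can be estimated from quiet-related cues among the recognized objects and text, and the intensity then scaled in proportion to it. The keyword list, scoring rule, and direction of the scaling are assumptions made for this sketch.

```python
# Hypothetical sketch of claim 17: estimate how strongly the surroundings call
# for quiet from recognized objects or text, then scale the intensity of the
# appearance emotion in proportion to that degree of quietness.

QUIET_KEYWORDS = {"library", "hospital", "silence", "quiet"}

def quietness_score(object_labels, text_content):
    """Fraction of quiet-related cues among the recognized objects and words."""
    cues = list(object_labels) + text_content.lower().split()
    if not cues:
        return 0.0
    hits = sum(1 for cue in cues if cue in QUIET_KEYWORDS)
    return hits / len(cues)

def corrected_intensity(base_intensity, quietness):
    # Scale the intensity in proportion to the required quietness (assumed direction).
    return min(1.0, base_intensity * (1.0 + quietness))

q = quietness_score(["library", "desk"], "please keep quiet")
print(round(q, 2))                  # -> 0.4
print(corrected_intensity(0.5, q))  # -> 0.7
```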
Recognition method for correcting the intensity of external emotion in proportion to the quietness.
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR20190023632 | 2019-02-28 | ||
KR10-2019-0023632 | 2019-02-28 | ||
KR10-2020-0025111 | 2020-02-28 | ||
KR1020200025111A KR102351008B1 (en) | 2019-02-28 | 2020-02-28 | Apparatus and method for recognizing emotions |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020175969A1 (en) | 2020-09-03 |
Family
ID=72239771
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2020/002928 WO2020175969A1 (en) | 2019-02-28 | 2020-02-28 | Emotion recognition apparatus and emotion recognition method |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2020175969A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113808623A (en) * | 2021-09-18 | 2021-12-17 | 武汉轻工大学 | Emotion recognition glasses for blind people |
CN117496333A (en) * | 2023-12-29 | 2024-02-02 | 深医信息技术(深圳)有限公司 | Medical equipment data acquisition method and medical equipment data acquisition system |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010134937A (en) * | 2008-12-08 | 2010-06-17 | Korea Electronics Telecommun | State recognition device, and state recognition method using the same |
KR20100100380A (en) * | 2009-03-06 | 2010-09-15 | 중앙대학교 산학협력단 | Method and system for optimized service inference of ubiquitous environment using emotion recognition and situation information |
KR20130009123A (en) * | 2011-07-14 | 2013-01-23 | 삼성전자주식회사 | Apparuatus and method for recognition of user's emotion |
KR20150081824A (en) * | 2014-01-07 | 2015-07-15 | 한국전자통신연구원 | Apparatus and method for controlling digital multimedia based on user emotion |
KR20180111467A (en) * | 2017-03-31 | 2018-10-11 | 삼성전자주식회사 | An electronic device for determining user's emotions and a control method thereof |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102351008B1 (en) | Apparatus and method for recognizing emotions | |
Vinola et al. | A survey on human emotion recognition approaches, databases and applications | |
US20210081056A1 (en) | Vpa with integrated object recognition and facial expression recognition | |
US9031293B2 (en) | Multi-modal sensor based emotion recognition and emotional interface | |
US9501743B2 (en) | Method and apparatus for tailoring the output of an intelligent automated assistant to a user | |
CN110349667B (en) | Autism assessment system combining questionnaire and multi-modal model behavior data analysis | |
JP2004310034A (en) | Interactive agent system | |
US20140212854A1 (en) | Multi-modal modeling of temporal interaction sequences | |
US20140212853A1 (en) | Multi-modal modeling of temporal interaction sequences | |
Prado et al. | Visuo-auditory multimodal emotional structure to improve human-robot-interaction | |
JP2019008570A (en) | Information processing device, information processing method, and program | |
KR20200092207A (en) | Electronic device and method for providing graphic object corresponding to emotion information thereof | |
WO2020175969A1 (en) | Emotion recognition apparatus and emotion recognition method | |
Sosa-Jiménez et al. | A prototype for Mexican sign language recognition and synthesis in support of a primary care physician | |
US20200226136A1 (en) | Systems and methods to facilitate bi-directional artificial intelligence communications | |
Zhang et al. | A survey on mobile affective computing | |
JP2017182261A (en) | Information processing apparatus, information processing method, and program | |
CN111310530B (en) | Sign language and voice conversion method and device, storage medium and terminal equipment | |
Mansouri Benssassi et al. | Wearable assistive technologies for autism: opportunities and challenges | |
US20240335952A1 (en) | Communication robot, communication robot control method, and program | |
JP2000194252A (en) | Ideal action support device, and method, system, and recording medium therefor | |
KR102630872B1 (en) | Apparatus and method for learning facial expression recognition | |
EP4170609A1 (en) | Automated filter selection for altering a stream | |
Egorow | Accessing the interlocutor: recognition of interaction-related interlocutor states in multiple modalities | |
Rahul et al. | Emotion Recognition Using Artificial Intelligence |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | 121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 20762976; Country of ref document: EP; Kind code of ref document: A1
 | NENP | Non-entry into the national phase | Ref country code: DE
 | 122 | EP: PCT application non-entry in European phase | Ref document number: 20762976; Country of ref document: EP; Kind code of ref document: A1