
WO2018060993A1 - Method and system for personality-weighted emotion analysis - Google Patents

Method and system for personality-weighted emotion analysis

Info

Publication number
WO2018060993A1
WO2018060993A1 (PCT/IL2017/051083)
Authority
WO
WIPO (PCT)
Prior art keywords
personality
emotion
individual
signal
classifier
Prior art date
Application number
PCT/IL2017/051083
Other languages
French (fr)
Inventor
Itzhak Wilf
Shai Gilboa
David GAVRIEL
Original Assignee
Faception Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Faception Ltd. filed Critical Faception Ltd.
Publication of WO2018060993A1 publication Critical patent/WO2018060993A1/en

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/48 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L 25/63 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F 18/24133 - Distances to prototypes
    • G06F 18/24137 - Distances to cluster centroïds
    • G06F 18/2414 - Smoothing the distance, e.g. radial basis function networks [RBFN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/28 - Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 - Facial expression recognition
    • G06V 40/175 - Static expression

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing (AREA)
  • Medical Informatics (AREA)
  • Acoustics & Sound (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • Psychiatry (AREA)
  • Hospice & Palliative Care (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Software Systems (AREA)
  • Child & Adolescent Psychology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a method for detecting emotions of individuals according to the personality of the individuals, from one or more emotion-related signals. The method comprises: a) training a personality-weighted emotion classifier; b) obtaining an emotion signal of an individual and at least one personality signal thereof; c) inputting the emotion signal and the at least one personality signal to the classifier; and d) detecting an emotion according to the emotion signal and according to at least one personality trait level predicted from the personality signal of the individual.

Description

METHOD AND SYSTEM FOR PERSONALITY-WEIGHTED EMOTION ANALYSIS
Field of the Invention
The present invention relates to the field of emotion recognition systems. More particularly, the invention relates to a method and system for estimating emotion from one or more emotion-related signals in a manner that is weighted or moderated by an underlying personality.
Background of the invention
The emotional state of an individual affects concentration, task solving and decision making skills, as well as behavior, reaction to situations, messages and external stimuli. Human-to-human and human-to-machine interactions are also affected by the emotional state of an individual.
Emotion recognition can increase the efficiency of medical treatment, rehabilitation, and counseling. It can help patients deal with stress, anxiety and depression. In the field of E-learning, one can adjust the presentation style of an online tutor. Monitoring the emotional state and detecting anger can prevent violence and improve call-center management. In the field of entertainment, emotion recognition can adjust content (e.g. music, games) to the mood of the users, and in the field of marketing it can increase the impact of advertisements by increasing attention and engagement.
Emotion recognition technology
Many applications known in the art of emotion recognition technology are facilitated by automatic recognition of emotion, sometimes referred to as expression recognition. Emotion recognition algorithms typically perform facial detection and semantic analysis to interpret mood from images, videos, text, and speech. Several facial expression detection tools categorize emotions into Joy, Sadness, Anger, Fear, Surprise, Contempt, and Disgust.
With facial emotion detection, algorithms detect faces within an image or video, and sense expressions by analyzing the relation between points on the face of an individual, based on databases compiled in academic environments.
Facial Emotion Databases
Media recording of facial behavior is essential for training, testing, and validation of algorithms for the development of emotion recognition systems. Emotion definition can be recognized in discrete emotion labels or on a continuous scale. Many databases are based on the basic emotions theory, introduced by Paul Ekman, which assumes the existence of six discrete basic emotions: Anger, Fear, Disgust, Surprise, Joy and Sadness.
For example, the Extended Cohn-Kanade Dataset (CK+) [Kanade, T., Cohn, J. F., & Tian, Y. (2000). Comprehensive Database for Facial Expression Analysis. Paper presented at the Fourth IEEE International Conference on Automatic Face and Gesture Recognition] comprises 593 posed image sequences (327 of which have discrete emotion labels), for 123 subjects, covering the 6 basic emotions plus neutral emotion. Fig. 1 depicts a sample of the CK+ database, showing 7 basic emotions of 5 subjects.
Facial Emotion Detection
Using facial emotion databases, several methods for recognition of facial expression / emotion are proposed in prior art. For example, Shan et al. [Shan, C., Gong, S., & McOwan, P. W. (2009). Facial expression recognition based on local binary patterns: A comprehensive study. Image and Vision Computing] investigate various combinations of face image descriptors with machine learning classifiers to identify emotion from 6- and 7-class (including neutral) expressions. Shan et al. use Local Binary Patterns (LBP) as a basic descriptor, further suggesting partitioning of the face image into multiple windows, and selecting the most discriminative feature for each emotion with a boosting method. Shan shows that the best results for the CK+ dataset are achieved when using a Support Vector Machine (SVM) classifier, which represents the examples as points in space, mapped such that the examples of the separate categories are divided by a clear gap that is as wide as possible. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall.
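As an illustration of this kind of pipeline, the following is a minimal sketch of windowed-LBP features feeding an SVM, assuming scikit-image and scikit-learn are available; the variables `images` and `labels` (grayscale face crops and their emotion tags) are hypothetical placeholders, and the window grid and SVM settings are illustrative rather than those of the cited study:

```python
# Sketch of LBP-based expression classification (assumes scikit-image / scikit-learn).
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def lbp_histogram(face, grid=(7, 6), P=8, R=1):
    """Uniform LBP histograms computed per window, concatenated into one descriptor."""
    lbp = local_binary_pattern(face, P, R, method="uniform")
    n_bins = P + 2                                   # uniform patterns + one "non-uniform" bin
    feats = []
    for row in np.array_split(lbp, grid[0], axis=0):
        for win in np.array_split(row, grid[1], axis=1):
            hist, _ = np.histogram(win, bins=n_bins, range=(0, n_bins), density=True)
            feats.append(hist)
    return np.concatenate(feats)

# `images` is a list of grayscale face crops, `labels` their emotion tags (placeholders).
X = np.array([lbp_histogram(img) for img in images])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
print("expression accuracy:", clf.score(X_te, y_te))
```

In the cited study the most discriminative windows are selected by boosting, whereas the sketch above simply concatenates a fixed grid of window histograms.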
Emotion detection can be assisted by video image sequences, where face image dynamics help to increase accuracy. Zhao et al. [Facial Expression Recognition from Video Sequences Based on Spatial-Temporal Motion Local Binary Pattern and Gabor Multiorientation Fusion Histogram, Lei Zhao et al., Hindawi Mathematical Problems in Engineering, Volume 2017 (2017), Article ID 7206041, 12 pages] propose extracting a spatial-temporal motion local binary pattern (LBP) feature and integrating it with a Gabor multi-orientation fusion histogram to give descriptors which reflect static and dynamic texture information of facial expressions. Finally, a 1:1 strategy based multiclass SVM classifier is applied to classify facial expressions.
Fig. 2 (prior art) shows another method for facial emotion detection proposed by Pilla et al. [Pilla Jr., Valfredo et al. (2016). Facial Expression Classification Using Convolutional Neural Network and Support Vector Machine, WVC 2016 - Workshop de Visão Computacional, Campo Grande, Mato Grosso, Brazil]. Input comprises a 156-pixels-wide and 176-pixels-high greyscale image that is derived from the CK+ Dataset (stage a). At stage b, the images are resized to 227*227 pixels and replicated into 2 additional layers (R, G, B) for AlexNet (stage c), which generates features for the SVM classifier (stage d), trained to recognize Aversion, Happiness and Fear (stage e). AlexNet [Alex Krizhevsky et al., ImageNet Classification with Deep Convolutional Neural Networks, in Advances in Neural Information Processing Systems 25 (NIPS 2012)] is a deep convolutional neural network, trained to classify the 1.3 million high-resolution images in the LSVRC-2010 ImageNet training set into 1000 different classes. The network is often used to generate features for image classifiers in a process known as transfer learning.
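A sketch of this transfer-learning arrangement, assuming a recent PyTorch/torchvision installation: the pretrained AlexNet is truncated before its final layer so that its 4096-dimensional activations feed a linear SVM. The dataset variables `train_images` and `train_emotions` are hypothetical placeholders, and the preprocessing values are the usual ImageNet ones rather than anything specified by the patent:

```python
# Sketch: pretrained AlexNet as a fixed feature extractor feeding an SVM (assumes torchvision >= 0.13).
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import LinearSVC

alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()
# Keep the classifier head up to (but excluding) the final 1000-way layer -> 4096-d features.
feature_head = torch.nn.Sequential(*list(alexnet.classifier.children())[:-1])

prep = T.Compose([
    T.Grayscale(num_output_channels=3),   # replicate the single grey channel to R, G, B
    T.Resize((227, 227)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def alexnet_features(pil_image):
    with torch.no_grad():
        x = prep(pil_image).unsqueeze(0)
        x = alexnet.features(x)
        x = alexnet.avgpool(x).flatten(1)
        return feature_head(x).squeeze(0).numpy()

# `train_images`, `train_emotions` are hypothetical CK+-style face crops and emotion labels.
X = [alexnet_features(im) for im in train_images]
svm = LinearSVC(C=1.0).fit(X, train_emotions)
```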
Other methods of emotion detection suggest detecting emotion in the written word. Sentiment analysis processing software analyzes text to conclude if a statement is generally positive or negative based on keywords and their valence index.
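For the text route, a minimal keyword-valence sketch follows; the lexicon below is a tiny illustrative stand-in, not an actual sentiment resource:

```python
# Sketch: keyword/valence sentiment scoring over text (toy lexicon, for illustration only).
VALENCE = {"great": 1.0, "happy": 0.8, "fine": 0.3, "slow": -0.4, "broken": -0.8, "angry": -0.9}

def sentiment(text: str) -> float:
    words = text.lower().split()
    hits = [VALENCE[w] for w in words if w in VALENCE]
    return sum(hits) / len(hits) if hits else 0.0   # > 0 positive, < 0 negative

print(sentiment("The delivered product arrived broken and support was slow"))  # negative score
```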
Finally, sonic algorithms have been produced that analyze recorded speech for both tone and word content.
US patent no. 8965762, for instance, proposes combining voice analysis and image analysis to provide an improved measure of emotion.
Although generally efficient in detecting an emotion or expression, the abovementioned methods lack accuracy, inasmuch as accurate estimation of emotion type and magnitude requires knowledge of the subject individual's personality. For example, an introvert or extrovert individual may respond differently to a certain stimulus, and therefore the interpretation of such a response that considers the individual's personality structure would be far more accurate than the prior art capabilities.
In an exemplary application, a customer relation representative is answering a video call from a customer complaining about the quality of a service or a delivered product. Sometime during the call, the customer may be expressing anger or frustration. Clearly, such emotions are easier to detect if the customer is an extrovert. However, an anger classifier trained and applied equally on all customers may fail to detect the anger of an introvert customer.
It is therefore an object of the present invention to provide a system and method to detect emotions expressed by an individual, while considering and taking into account the personality structure of the subject individual.
Other objects and advantages of the invention will become apparent as the description proceeds.
Summary of the Invention
The present invention relates to a method for detecting emotions of individuals according to the personality of the individuals, from one or more emotion-related signals. The method comprises: a) training a personality-weighted emotion classifier;
b) obtaining an emotion signal of an individual and at least one personality signal thereof; c) inputting the emotion signal and the at least one personality signal to the classifier; and d) detecting an emotion according to the emotion signal and according to at least one personality trait level predicted from the personality signal of the individual.
According to an embodiment of the invention, the classifier is trained by:
a) obtaining a dataset comprising a plurality of emotion-tagged training signals of a plurality of subjects;
b) predicting at least one personality trait of each subject from the training signals; c) partitioning the dataset to subsets of emotion-tagged training signals, according to said predicted personality traits;
d) extracting features and/or descriptors for each subset; and
e) training a classifier for each personality-tagged emotion according to said features and/or descriptors.
According to another embodiment of the invention, the emotion signal and the at least one personality signal are selected from:
• one or more images;
• one or more voice signals;
• typing patterns / touch patterns;
• one or more text signals; or
• a combination of two or more of the above.
According to an embodiment of the invention, the at least one personality trait is selected from the big-5 personality traits.
According to an embodiment of the invention, the personality traits are predicted by a) performing face detection, alignment and cropping on one or more digital face images of an individual; b) performing feature extraction from the individual's digital face images as to provide image descriptors of the individual; and c) applying a classifier model obtained in a learning stage on the individual's image descriptors as to predict a personality trait of the individual.
According to an embodiment of the invention, the emotion is detected by an emotion detector that corresponds to the predicted personality trait level. According to another embodiment of the invention, the personality trait level is predicted in real-time. According to yet another embodiment of the invention, the personality trait level is predicted from previously gathered user data of the individual. According to still another embodiment of the invention, the emotion is detected by an unweighted emotion detector, and is multiplied by an emotion gain factor that corresponds to the predicted personality trait level. In another aspect, the present invention relates to a system for detecting emotions according to the personality of the individual, comprising:
a) at least one processor; and
b) a memory comprising computer-readable instructions which when executed by the at least one processor causes the processor to execute a personality-weighted emotions detector, wherein the personality-weighted emotions detector:
trains a personality-weighted emotion classifier;
obtains an emotion signal of an individual and at least one personality signal thereof; inputs the emotion signal and the at least one personality signal to the classifier; and detects an emotion according to the emotion signal and according to at least one personality trait level predicted from the personality signal of the individual.
In another aspect, the present invention relates a non-transitory computer-readable medium, comprising instructions which when executed by at least one processor causes the processor to detect emotions according to the personality of the individual.
In yet another aspect, the present invention relates to a personality-weighted emotions detector, comprising:
a) a personality-weighted emotions classifier configured to receive as input an emotion signal and to output an emotion detected in the emotion signal;
b) a dataset acquiring module configured to obtain a dataset comprising a plurality of emotion- tagged training signals of a plurality of subjects;
c) a personality trait prediction module configured to predict at least one personality trait from input signals of subjects and the level of each said personality trait;
d) a processor configured to:
feed said training signals to said predictor;
partition said dataset to subsets according to personality trait predictions of the predictor;
extract features and/or descriptors for each subset; and
train said classifier according to said extracted features and/or descriptors.
Brief Description of the Drawings
In the drawings:
Fig. 1 (prior art) shows a sample of the Extended Cohn-Kanade Dataset;
Fig. 2 (prior art) shows a method for facial emotion detection;
Fig. 3 (prior art) is a flowchart describing a method of predicting personality traits from face images;
Fig. 4 schematically illustrates a method for creating emotion training data for different levels of extroversion, according to an embodiment of the present invention;
Fig. 5 is a flow chart describing the process of training emotion detection classifiers, according to an embodiment of the present invention;
Fig. 6 shows a diagram describing a personality weighted emotion analyzer, according to an embodiment of the present invention;
Fig. 7 shows a diagram describing a personality-weighted or moderated emotion prediction, according to an embodiment of the invention; and
Fig. 8 shows a diagram describing a method for personality-weighted emotion detection according to an embodiment of the invention.
Detailed Description of the Invention
The present invention provides a method for estimating emotions of individuals from one or more emotion-related signals in a manner that is weighted or moderated by one or more personality traits of each subject individual. It is noted that, in contrast to emotions such as anger, sadness or joy, which are temporary states, human personality is for all practical purposes fixed. An introvert individual, for instance, will very probably still be an introvert tomorrow and next month.
A common way of characterizing personality is based on several key personality traits and quantitative measures of each trait. For example, the big 5 personality traits ("big-5") include openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism. Personality may be assessed by various methods including interviews and questionnaires answered by the individual or the individual's friends. Various methods exist for detecting / predicting personality traits. US Patent Application no. 2015/0242707 A1 discloses a method for analyzing one or more images of an individual's face so as to predict traits of the individual's personality (such as the big-5) as well as specific behaviors and capabilities. Fig. 3 is a flowchart describing the method of predicting personality traits from face images according to US 2015/0242707 A1. As indicated in this figure, the method includes a learning stage and a prediction stage.
The learning stage may involve the following procedure:
performing face detection, alignment and cropping on provided digital face images (310). The provided digital images comprise personality metadata and are used as training data for the learning stage;
performing feature extraction (320) from the detected face as to provide image descriptors;
providing class training sets by grouping the provided image descriptors according to the personality metadata (330); and
applying a machine learning algorithm (340) on the class training sets as to obtain a classifier model.
The prediction stage may involve the following procedure:
performing face detection, alignment and cropping on one or more digital face images of an individual (310);
performing feature extraction (320) from the individual's digital face images as to provide image descriptors of the individual; and
applying the classifier model obtained in the learning stage (350) on the individual's image descriptors as to predict the personality trait of the individual.
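The two stages above can be sketched as follows. The face detector and image descriptor used here are deliberately simple stand-ins for blocks 310-350 of the cited method (a real system would use proper face detection/alignment and LBP or CNN descriptors), and the trait labels are illustrative:

```python
# Sketch of the learning / prediction stages of Fig. 3 (detector and descriptor are stand-ins).
import numpy as np
from sklearn.svm import SVC

def detect_align_crop(image: np.ndarray) -> np.ndarray:
    """Block 310 stand-in: centre-crop in place of real detection, alignment and cropping."""
    h, w = image.shape[:2]
    s = min(h, w)
    return image[(h - s) // 2:(h + s) // 2, (w - s) // 2:(w + s) // 2]

def extract_descriptor(face: np.ndarray) -> np.ndarray:
    """Block 320 stand-in: a grey-level histogram in place of LBP/CNN image descriptors."""
    hist, _ = np.histogram(face, bins=64, range=(0, 255), density=True)
    return hist

def train_personality_classifier(images, trait_labels) -> SVC:
    """Blocks 330-340: group descriptors by personality metadata and fit a classifier model."""
    X = [extract_descriptor(detect_align_crop(im)) for im in images]
    return SVC(probability=True).fit(X, trait_labels)

def predict_trait(model: SVC, image: np.ndarray) -> str:
    """Prediction stage (350): apply the learned model to a new individual's descriptor."""
    x = extract_descriptor(detect_align_crop(image))
    return model.predict([x])[0]   # e.g. "introvert" / "balanced" / "extrovert"
```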
The present invention provides methods for detecting emotion or emotional state of an individual from one or more emotion-related signals in a manner that is weighted or moderated by the individual's personality. As an exemplary case, a specific personality trait is selected - introversion / extroversion. It is clear that the present invention is not limited to the introversion / extroversion exemplary case, and can be applied to other traits from the big-5, to other personality traits, or to a combination of traits.
According to an embodiment of the present invention, digital image and video analysis can be used as input for predicting personality and emotion of an individual. Other embodiments of the present invention may use input from other sources and/or types. For instance, subject personality can be predicted from a subject individual's digital media (e.g., from social media such as Facebook), followed by personality-weighted emotion analysis based on image / video / audio signals / typing patterns or touch patterns (e.g., the behavioral biometric of keystroke dynamics that uses the manner and rhythm in which an individual types characters on a keyboard or keypad, or the manner in which an individual interacts with a touchscreen or similar touch-based input devices).
As an extension to the exemplary case, a 3-level personality trait prediction is considered: introvert / extrovert / balanced. It is clear that this method can be further generalized, for instance, to a 5-level prediction model or, given enough training data, to a continuous measure of introversion.
Fig. 4 schematically illustrates a method for creating emotion training data for different levels of extroversion, using a facial emotion database. A 3-level extroversion classifier 420, such as depicted in Fig. 3, is applied to the expression image of each test subject. If, for instance, a subject is classified as extrovert by classifier 420, then all of the subject's facial images / video image sequences tagged as happy are stored in the extrovert happiness image store. Once all subjects are classified as extroverted / balanced / introverted, the whole database has been partitioned into emotion-tagged facial images for introvert, extrovert and balanced individuals.
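A sketch of this partitioning step, assuming `extroversion_classifier` plays the role of classifier 420 and the database is a list of (subject_id, image, emotion_tag) records; all names are hypothetical:

```python
# Sketch of Fig. 4: split an emotion-tagged face database by predicted extroversion level.
from collections import defaultdict

LEVELS = ("introvert", "balanced", "extrovert")

def partition_by_personality(records, extroversion_classifier):
    """records: iterable of (subject_id, face_image, emotion_tag) tuples."""
    # Classify each subject once, from the first image seen for that subject.
    subject_level = {}
    for subject_id, image, _ in records:
        if subject_id not in subject_level:
            subject_level[subject_id] = extroversion_classifier(image)   # one of LEVELS

    # Route every tagged image into the store for its subject's level,
    # e.g. the "extrovert happiness" image store of the text.
    stores = {level: defaultdict(list) for level in LEVELS}
    for subject_id, image, emotion in records:
        stores[subject_level[subject_id]][emotion].append(image)
    return stores   # stores["extrovert"]["happy"] -> list of images
```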
After data has been partitioned, emotion detection classifiers are trained for each emotion and for each introversion value. For example, referring to Fig. 2, image features are extracted using AlexNet for all extrovert images. The features are grouped into training sets according to the emotion type, and then an SVM Multi-Class Classifier is trained to predict emotion / expression from images of extroverts. The training process is then repeated for introverts and for subjects with balanced extroversion. Fig. 5 is a flow chart describing the process of training emotion detection classifiers, according to an embodiment of the present invention, relying on personality-based partitioning of emotion training facial images, e.g. as shown in Fig. 4. In the first step 501, a dataset of emotion and personality signals is obtained, for instance by a dataset acquiring module adapted to obtain a dataset comprising a plurality of emotion-tagged training signals of a plurality of subjects. According to an embodiment of the invention, the signals are facial images tagged by the emotion state expressed therein. In the next step 502, a personality classifier predicts the level of a personality trait of each subject individual in the dataset (e.g. introvert / balanced / extrovert), and the dataset is partitioned into subsets according to the personality trait level of each subject individual. In the next step 503, for each personality trait level subset the signals are further partitioned into class images according to the emotion type. In the next step 504, features and descriptors of the signals of each emotion of each personality trait level subset are extracted, and in step 505 a classifier is trained for each personality trait level tagged emotion.
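Steps 503-505 can be sketched on top of the stores produced by the partitioning sketch above, with a hypothetical `extract_features` standing in for step 504 (AlexNet features in the example of Fig. 2):

```python
# Sketch of Fig. 5: one multi-class emotion classifier per personality-trait level (steps 503-505).
from sklearn.svm import SVC

def train_personality_weighted_classifiers(stores, extract_features):
    """stores: output of partition_by_personality(); returns {level: fitted classifier}."""
    classifiers = {}
    for level, emotion_images in stores.items():            # step 503: per-level subsets
        X, y = [], []
        for emotion, images in emotion_images.items():
            for image in images:
                X.append(extract_features(image))           # step 504: features / descriptors
                y.append(emotion)
        classifiers[level] = SVC().fit(X, y)                # step 505: per-level multi-class SVM
    return classifiers
```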
Fig. 6 shows a flowchart describing a personality-weighted emotion analyzer, according to an embodiment of the present invention. Personality-related signals 600 are processed by the Introvert / Extrovert Signal Classifier 620 to determine the introversion level of the subject. That level serves as a selector for the specific emotion detector used to detect an emotion from emotion-related signals 610, which may include audio signals, face image / video signals, typing speed and pressure and so on. Emotion-related signals are processed by Emotion Feature Vector Extraction 615, which represents the emotion-related variation of the signal in a reduced form that can be classified. For a subject classified as an introvert, Emotion Detector 630 for high introversion is used to detect that subject's emotion. For a subject classified as balanced between introversion and extroversion, emotion detector 640 for balanced introversion is used to detect that subject's emotion. Finally, for a subject classified as extrovert, emotion detector 650 for high extroversion is used to detect that subject's emotion. In a specific example, in case the subject is classified as extrovert, emotion detector 650 (i.e., high extroversion) is activated and outputs the corresponding predicted emotional state (as indicated by solid lines between classifier 620 and detector 650).
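The selector logic of Fig. 6 can be sketched as follows, with the predicted level choosing which of the detectors 630/640/650 processes the emotion feature vector; the function and argument names are illustrative assumptions:

```python
# Sketch of Fig. 6: the predicted personality level selects which emotion detector is applied.
def personality_weighted_detect(personality_signal, emotion_signal,
                                personality_classifier,       # role of classifier 620
                                emotion_feature_extractor,    # role of block 615
                                detectors):                   # {"introvert": 630, "balanced": 640, "extrovert": 650}
    level = personality_classifier(personality_signal)        # "introvert" / "balanced" / "extrovert"
    features = emotion_feature_extractor(emotion_signal)
    detector = detectors[level]                               # per-level classifier trained as in Fig. 5
    return detector.predict([features])[0]                    # predicted emotional state
```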
The embodiment of Fig. 6 may be used to predict personality, and subsequently emotion, in real time. For a random / anonymous subject, about whom we have no prior knowledge, both need to be predicted during the interaction. This requires predicting personality in a manner that is not affected by emotion. In a real-time application, personality-related signals 600 may be provided to a temporal selector (not shown) that would, for example, identify (for a video-based classifier) frames of least facial activity and use only those frames to predict personality as described in Fig. 3.
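One way to realise such a temporal selector, under the assumption that frame-to-frame intensity change in the aligned face region is a usable proxy for facial activity, is to keep only the least-changing frames:

```python
# Sketch of a temporal selector: keep the frames with least facial activity (smallest motion).
import numpy as np

def least_active_frames(face_frames, keep=5):
    """face_frames: aligned grayscale face crops of equal size, in temporal order."""
    frames = [f.astype(np.float32) for f in face_frames]
    if len(frames) < 2:
        return list(face_frames)
    # Motion proxy: mean absolute difference between consecutive frames.
    activity = [np.abs(frames[i] - frames[i - 1]).mean() for i in range(1, len(frames))]
    activity = [activity[0]] + activity           # pad so every frame gets a score
    order = np.argsort(activity)                  # least active (closest to neutral) first
    return [face_frames[i] for i in order[:keep]]
```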
Fig. 7 describes another embodiment of personality-weighted or moderated emotion prediction. Note that personality traits such as introversion / extroversion are fixed and do not change (at least during any relevant time frame). In many real-world applications, the identity of the subject is known, which allows access to previously gathered user data such as Facebook profile images, social media communications, and prior communications with the service provider. Therefore personality-related signals, or even previously identified personality traits, can be retrieved for that specific user. If a user ID cannot be accessed as simply as a caller ID, the user may be identified by signal-based identification methods as known in prior art, for example by face recognition.
In Fig. 7, personality profiles for the user base are stored in the User Profile Data Store 700. After identifying the user by caller ID or other means, the user personality profile 710 is sent to the Introvert / Extrovert metadata classifier 720, which operates on relevant data collected and optionally analyzed before the real-time interaction. Then personality-weighted or moderated prediction of the emotional state proceeds as described hereinabove with respect to Fig. 6.
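A sketch of this offline path, with hypothetical names for the store (700), the profile (710) and the metadata classifier (720); the fallback level used when no profile exists is an assumption, not something specified in the text:

```python
# Sketch of Fig. 7: personality level from previously gathered user data instead of real-time signals.
def personality_level_for_caller(caller_id, profile_store, metadata_classifier, identify_by_face=None):
    profile = profile_store.get(caller_id)                  # User Profile Data Store (700)
    if profile is None and identify_by_face is not None:
        caller_id = identify_by_face()                      # signal-based identification, e.g. face recognition
        profile = profile_store.get(caller_id)
    if profile is None:
        return "balanced"                                   # no prior data: assumed neutral fallback
    return metadata_classifier(profile)                     # Introvert / Extrovert metadata classifier (720)
```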
Fig. 8 shows a diagram describing a method for personality-weighted emotion detection according to another embodiment of the invention, that is based on the assumption that emotion signals (such as a happy or angry face) are of similar nature among individuals of different personalities, but the magnitude of said emotion signals depends on the subject's personality. For example, happiness / anger will be expressed more strongly by extrovert individuals than by introvert individuals.
The embodiment is also based on the general structure of a classifier (such as a Support Vector Machine) that produces a score (a number) that is later subjected to a threshold. A large number of classification algorithms can be phrased in terms of a linear function that assigns a score to each possible category k by combining the feature vector of an instance with a vector of weights, using a dot product. Usually, the scores are normalized so that the SVM gap is centered around zero, so that a positive score indicates detection (of anger, for example) and a negative score indicates its absence (non-anger).
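In the linear case this scoring step is simply a dot product per category, with the detection threshold at zero after normalization; a minimal sketch:

```python
# Sketch: linear classifier score as a dot product, with a zero-centred detection threshold.
import numpy as np

def category_scores(x, W, b):
    """score_k = w_k . x + b_k for each category k (w_k are the rows of W)."""
    return W @ x + b

def detect(x, W, b, categories):
    scores = category_scores(x, W, b)
    k = int(np.argmax(scores))
    detected = scores[k] > 0.0            # positive score => detection, negative => absence
    return (categories[k], scores[k]) if detected else (None, scores[k])
```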
In Fig. 8, emotion-related signals 800 are processed by unit 805 to generate emotion feature vectors, which are then passed through a classifier function 810 to generate emotion classifier score 815. In prior art, that score would be compared with a decision function (a threshold, for example) to detect an emotion state or its absence.
According to the present invention, the emotion classifier score 815 is moderated by the subject personality as follows:
Personality-related signals 820 are passed through a personality classifier (extroversion in this embodiment) 830 to predict the extroversion level (one of 3 or 5 levels, or a continuous number if the classifier 830 is a regressor). The extroversion level is passed to the emotion gain calculator 840, which produces an emotion gain factor 845 that is multiplied by the emotion classifier score 815 via multiplier 850.
The resulting Personality-Weighted Emotion Score 860 is utilized by the Emotion Decision Layer (870) to detect the emotion. Additionally, Emotion Magnitude Predictor 880 predicts the magnitude of the emotion.
In this embodiment, the extroversion level moderates the estimated emotional state, so that anger will be better detected for introvert subjects, and minor expressions will not be classified as anger for an extrovert person. Besides a threshold (yes / no) decision, the magnitude of the emotion can be predicted as well, after such moderation.
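A sketch of this moderation path, with a heuristic gain table standing in for the Emotion Gain Calculator 840; the specific gain values and the use of the positive part of the score as the magnitude are illustrative assumptions, not values taken from the patent:

```python
# Sketch of Fig. 8: personality-dependent gain applied to a raw (unweighted) emotion classifier score.
EMOTION_GAIN = {"introvert": 1.5, "balanced": 1.0, "extrovert": 0.7}   # illustrative values only

def personality_weighted_score(raw_score, extroversion_level, threshold=0.0):
    gain = EMOTION_GAIN[extroversion_level]              # role of Emotion Gain Calculator (840)
    weighted = gain * raw_score                          # multiplier (850) -> weighted score (860)
    detected = weighted > threshold                      # Emotion Decision Layer (870)
    magnitude = max(weighted, 0.0)                       # Emotion Magnitude Predictor (880)
    return detected, magnitude
```

With a gain above 1 for introverts, a weak anger score is amplified past the threshold, while a gain below 1 for extroverts keeps minor expressions below it, matching the behaviour described above.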
According to an embodiment of the invention, the emotion gain parameter is derived by heuristic / rule-based considerations. According to another embodiment, the emotion gain parameter is derived in a training process. In such an embodiment, emotion signals are collected from training sets of personality traits (e.g. introvert / balanced / extrovert). The personality-weighted emotion analysis described hereinabove may be performed by executable code and instructions stored in computer readable medium and running on one or more processor-based systems.
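When the gain is learned rather than hand-set, one simple derivation (an assumption for illustration, not a procedure specified in the text) is to equalise the average unweighted score magnitude across the personality-level training sets, relative to a reference level:

```python
# Sketch: derive per-level gains so average emotion-score magnitudes match a reference level.
import numpy as np

def fit_emotion_gains(scores_by_level, reference="balanced"):
    """scores_by_level: {level: array of unweighted classifier scores on emotion-positive examples}."""
    ref = np.mean(np.abs(scores_by_level[reference]))
    return {level: ref / np.mean(np.abs(s)) for level, s in scores_by_level.items()}
```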
Embodiments of the invention may be implemented as a computer process or a computing system, such as a computer program product or computer readable media. The computer program product may be a computer storage media readable by a computer system and encoding a computer program of instructions for executing a computer process. The computer program product may also be a propagated signal on a carrier readable by a computing system and encoding a computer program of instructions for executing a computer process.
Moreover, those skilled in the art will appreciate that the invention may be practiced with other computer system configurations, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and the like. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
Although embodiments of the invention have been described by way of illustration, it will be understood that the invention may be carried out with many variations, modifications, and adaptations, without exceeding the scope of the claims.

Claims

1. A method for personality-weighted detection of emotions of individuals from one or more emotion-related signals, comprising:
a) training a personality-weighted emotion classifier;
b) obtaining an emotion signal of an individual and at least one personality signal thereof; c) inputting the emotion signal and the at least one personality signal to the classifier; and d) detecting an emotion according to the emotion signal and according to at least one personality trait level predicted from the personality signal of the individual.
2. A method according to claim 1, wherein the classifier is trained by:
a) obtaining a dataset comprising a plurality of emotion-tagged training signals of a plurality of subjects;
b) predicting at least one personality trait of each subject from the training signals;
c) partitioning the dataset to subsets of emotion-tagged training signals, according to said predicted personality traits;
d) extracting features and/or descriptors for each subset; and
e) training a classifier for each personality-tagged emotion according to said features and/or descriptors.
3. A method according to claim 1, wherein the emotion signal and the at least one personality signal are selected from:
• one or more images;
• one or more voice signals;
• typing patterns / touch patterns;
• one or more text signals; or
• a combination of two or more of the above.
4. A method according to claim 1, wherein the at least one personality trait is selected from the big-5 personality traits.
5. A method according to claim 1, wherein the at least one personality trait is predicted by a) performing face detection, alignment and cropping on one or more digital face images of an individual; b) performing feature extraction from the individual's digital face images as to provide image descriptors of the individual; and c) applying a classifier model obtained in a learning stage on the individual's image descriptors as to predict the personality trait of the individual.
6. A method according to claim 2, wherein the at least one personality trait is predicted by a) performing face detection, alignment and cropping on one or more digital face images of an individual; b) performing feature extraction from the individual's digital face images as to provide image descriptors of the individual; and c) applying a classifier model obtained in a learning stage on the individual's image descriptors as to predict the personality trait of the individual.
7. A method according to claim 1, wherein the emotion is detected by an emotion detector that corresponds to the predicted personality trait level.
8. A method according to claim 7, wherein the personality trait level is predicted in real-time.
9. A method according to claim 7, wherein the personality trait level is predicted from previously gathered user data of the individual.
10. A method according to claim 1, wherein the emotion is detected by an unweighted emotion detector, and is multiplied by an emotion gain factor that corresponds to the predicted personality trait level.
11. A system, comprising:
a) at least one processor; and
b) a memory comprising computer-readable instructions which when executed by the at least one processor causes the processor to execute a personality-weighted emotions detector, wherein the personality-weighted emotions detector:
trains a personality-weighted emotion classifier;
obtains an emotion signal of an individual and at least one personality signal thereof; inputs the emotion signal and the at least one personality signal to the classifier; and detects an emotion according to the emotion signal and according to at least one personality trait level predicted from the personality signal of the individual.
12. A non-transitory computer-readable medium, comprising instructions which, when executed by at least one processor, cause the processor to perform the method of claim 1.
13. A personality-weighted emotions detector, comprising:
a) a personality-weighted emotions classifier configured to receive as input an emotion signal and to output an emotion detected in the emotion signal;
b) a dataset acquiring module configured to obtain a dataset comprising a plurality of emotion-tagged training signals of a plurality of subjects;
c) a personality trait predictor configured to predict, from input signals of subjects, at least one personality trait and the level of each said personality trait;
d) a processor configured to:
feed said training signals to said predictor;
partition said dataset into subsets according to personality trait predictions of the predictor;
extract features and/or descriptors for each subset; and
train said classifier according to said extracted features and/or descriptors.
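A schematic Python skeleton of the detector of claim 13; the module boundaries mirror the claim elements, but the internals (trait predictor, per-subset models, partitioning rule) are placeholders, not the claimed implementation:

    class PersonalityWeightedEmotionDetector:
        def __init__(self, trait_predictor, base_classifier_factory):
            self.trait_predictor = trait_predictor          # c) trait predictor
            self.make_classifier = base_classifier_factory  # e.g. lambda: SVC()
            self.classifiers = {}                           # a) trained per-trait classifiers

        def train(self, signals, emotion_labels):
            # b) acquire the emotion-tagged dataset, then, per d): predict traits,
            # partition the dataset, and train one classifier per subset.
            levels = [self.trait_predictor(s) for s in signals]
            for level in set(levels):
                subset = [i for i, lvl in enumerate(levels) if lvl == level]
                clf = self.make_classifier()
                clf.fit([signals[i] for i in subset],
                        [emotion_labels[i] for i in subset])
                self.classifiers[level] = clf

        def detect(self, emotion_signal, personality_signal):
            # Route the emotion signal to the classifier matching the predicted trait level.
            level = self.trait_predictor(personality_signal)
            return self.classifiers[level].predict([emotion_signal])[0]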
PCT/IL2017/051083 2016-09-27 2017-09-27 Method and system for personality-weighted emotion analysis WO2018060993A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201662400279P 2016-09-27 2016-09-27
US62/400,279 2016-09-27

Publications (1)

Publication Number Publication Date
WO2018060993A1 true WO2018060993A1 (en) 2018-04-05

Family

ID=61759307

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2017/051083 WO2018060993A1 (en) 2016-09-27 2017-09-27 Method and system for personality-weighted emotion analysis

Country Status (1)

Country Link
WO (1) WO2018060993A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120002848A1 (en) * 2009-04-16 2012-01-05 Hill Daniel A Method of assessing people's self-presentation and actions to evaluate personality type, behavioral tendencies, credibility, motivations and other insights through facial muscle activity and expressions
US20130018837A1 (en) * 2011-07-14 2013-01-17 Samsung Electronics Co., Ltd. Emotion recognition apparatus and method
US20160163332A1 (en) * 2014-12-04 2016-06-09 Microsoft Technology Licensing, Llc Emotion type classification for interactive dialog system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Faception App Will Instantly Tell if You're a Good Person", TECHCO, July 2016 (2016-07-01), XP055497503, Retrieved from the Internet <URL:https://tech.co/faception-facial-recognition-app-personality-2016-07> [retrieved on 20160731] *

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109583325A (en) * 2018-11-12 2019-04-05 平安科技(深圳)有限公司 Face samples pictures mask method, device, computer equipment and storage medium
CN109583325B (en) * 2018-11-12 2023-06-27 平安科技(深圳)有限公司 Face sample picture labeling method and device, computer equipment and storage medium
CN109829154B (en) * 2019-01-16 2023-04-28 中南民族大学 Personality prediction method based on semantics, user equipment, storage medium and device
CN109829154A (en) * 2019-01-16 2019-05-31 中南民族大学 Semantic-based personality prediction technique, user equipment, storage medium and device
CN110166836B (en) * 2019-04-12 2022-08-02 深圳壹账通智能科技有限公司 A TV program switching method, device, readable storage medium and terminal device
CN110166836A (en) * 2019-04-12 2019-08-23 深圳壹账通智能科技有限公司 A kind of TV program switching method, device, readable storage medium storing program for executing and terminal device
WO2020216064A1 (en) * 2019-04-24 2020-10-29 京东方科技集团股份有限公司 Speech emotion recognition method, semantic recognition method, question-answering method, computer device and computer-readable storage medium
CN111985243A (en) * 2019-05-23 2020-11-24 中移(苏州)软件技术有限公司 Emotion model training method, emotion analysis method, device and storage medium
CN111985243B (en) * 2019-05-23 2023-09-08 中移(苏州)软件技术有限公司 Emotion model training method, emotion analysis method, device and storage medium
CN110390254A (en) * 2019-05-24 2019-10-29 平安科技(深圳)有限公司 Character analysis method, apparatus, computer equipment and storage medium based on face
CN110390254B (en) * 2019-05-24 2023-09-05 平安科技(深圳)有限公司 Character analysis method and device based on human face, computer equipment and storage medium
CN110796020A (en) * 2019-09-30 2020-02-14 深圳云天励飞技术有限公司 Mood index analysis method and related device
CN110796020B (en) * 2019-09-30 2022-03-25 深圳云天励飞技术有限公司 Mood index analysis method and related device
KR20210041757A (en) * 2019-10-08 2021-04-16 삼성전자주식회사 Electronic apparatus and control method thereof
KR102783867B1 (en) 2019-10-08 2025-03-21 삼성전자주식회사 Electronic apparatus and control method thereof
US11676419B2 (en) 2019-10-08 2023-06-13 Samsung Electronics Co., Ltd. Electronic apparatus and control method thereof
WO2021071155A1 (en) 2019-10-08 2021-04-15 Samsung Electronics Co., Ltd. Electronic apparatus and control method thereof
EP4004696A4 (en) * 2019-10-08 2022-09-07 Samsung Electronics Co., Ltd. ELECTRONIC DEVICE AND ITS CONTROL METHOD
US11538278B2 (en) 2019-10-08 2022-12-27 Samsung Electronics Co., Ltd. Electronic apparatus and control method thereof
CN110889454A (en) * 2019-11-29 2020-03-17 上海能塔智能科技有限公司 Model training method and device, emotion recognition method and device, equipment and medium
CN110991344B (en) * 2019-12-04 2023-02-24 陕西科技大学 An Emotional Relief System Based on Deep Learning
CN110991344A (en) * 2019-12-04 2020-04-10 陕西科技大学 A deep learning-based emotional soothing system
CN113361307A (en) * 2020-03-06 2021-09-07 上海卓繁信息技术股份有限公司 Facial expression classification method and device and storage equipment
CN114203202A (en) * 2020-09-17 2022-03-18 北京有限元科技有限公司 Conversation scene voice emotion recognition method and device and computing device
CN113591525A (en) * 2020-10-27 2021-11-02 蓝海(福建)信息科技有限公司 Driver road rage recognition method with deep fusion of facial expressions and voice
CN113591525B (en) * 2020-10-27 2024-03-01 蓝海(福建)信息科技有限公司 Driver road anger recognition method by deeply fusing facial expression and voice
CN113221560B (en) * 2021-05-31 2023-04-18 平安科技(深圳)有限公司 Personality trait and emotion prediction method, personality trait and emotion prediction device, computer device, and medium
CN113221560A (en) * 2021-05-31 2021-08-06 平安科技(深圳)有限公司 Personality trait and emotion prediction method, personality trait and emotion prediction device, computer device, and medium
CN114241546A (en) * 2021-11-18 2022-03-25 南通大学 Face recognition method based on multi-direction local binary pattern

Similar Documents

Publication Publication Date Title
WO2018060993A1 (en) Method and system for personality-weighted emotion analysis
CN113094578B (en) Deep learning-based content recommendation method, device, equipment and storage medium
US20210118424A1 (en) Predicting personality traits based on text-speech hybrid data
Dupré et al. Accuracy of three commercial automatic emotion recognition systems across different individuals and their facial expressions
Kumar et al. Personality detection using kernel-based ensemble model for leveraging social psychology in online networks
Pathirana et al. A Reinforcement Learning-Based Approach for Promoting Mental Health Using Multimodal Emotion Recognition
Alipour et al. Ai-driven marketing personalization: Deploying convolutional neural networks to decode consumer behavior
Singh et al. A Survey on: Personality Prediction from Multimedia through Machine Learning
Tiwari et al. A novel approach for detecting emotion in text
Rodriguez-Meza et al. Recurrent neural networks for deception detection in videos
Sharma et al. Depression detection using multimodal analysis with chatbot support
CN117668758A (en) Dialog intention recognition method and device, electronic equipment and storage medium
Singh et al. Facial Emotion Detection Using CNN-Based Neural Network
Dodd et al. Facial emotion recognition and the future of work
Nimitha et al. Deep learning based sentiment analysis on images
Thakkar et al. Automatic assessment of communication skill in real-world job interviews: A comparative study using deep learning and domain adaptation.
Wahid et al. Emotion detection using unobtrusive methods: An integrated approach
Goulart et al. Mapping Personality Traits through Keystroke Analysis.
Kanchana et al. Analysis of social media images to predict user personality assessment
Marry et al. Interview Brilliance: Harnessing AI for Confidence-Driven Evaluation
Rohilla et al. Comparative Analysis of Machine Learning and Deep Learning Techniques for Social Media-Based Stress Detection
US20240177523A1 (en) System and Methods of Predicting Personality and Morals through Facial Emotion Recognition
Rathor et al. The art of context classification and recognition of text conversation using CNN
Anbazhagan et al. Twitter Based Emotion Recognition Using Bi-LSTM
Krishna et al. Analyzing Personality Insights Through Machine Learning

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17855159

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17855159

Country of ref document: EP

Kind code of ref document: A1
