
WO2002039371A2 - Estimation of facial expression intensity using a bidirectional star topology hidden Markov model - Google Patents

Estimation of facial expression intensity using a bidirectional star topology hidden Markov model Download PDF

Info

Publication number
WO2002039371A2
Authority
WO
WIPO (PCT)
Prior art keywords
expression
state
facial
paths
facial expression
Prior art date
Application number
PCT/EP2001/012346
Other languages
English (en)
Other versions
WO2002039371A3 (fr)
Inventor
Antonio J. Colmenarez
Srinivas Gutta
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V.
Priority to JP2002541616A (published as JP2004513462A)
Priority to EP01993900A (published as EP1342206A2)
Publication of WO2002039371A2
Publication of WO2002039371A3

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/29Graphical models, e.g. Bayesian networks
    • G06F18/295Markov models or related models, e.g. semi-Markov models; Markov random fields; Networks embedding Markov models

Definitions

  • The present invention relates generally to the field of image signal processing, and more particularly to techniques for estimating facial expression in a video signal or other type of image signal.
  • Facial expressions have been widely studied from psychological and computer vision points of view. Such expressions provide a mechanism to show emotions, which are crucial in inter-personal communications, relationships, and many other contexts.
  • A number of different types of facial expressions have been determined to be consistent across most races and cultures. For example, certain distinct facial expressions are associated with emotional states such as neutral, happiness, sadness, anger and fear. Other facial expressions are associated with reactions such as disgust and surprise.
  • Facial expressions are complex spatio-temporal motion patterns.
  • The movements associated with a given facial expression are generally divided into three periods: (i) onset, (ii) apex, and (iii) offset. These periods correspond to the transition towards the facial expression, the period sustaining the peak in expressiveness, and the transition back from the expression, respectively.
  • The rate of change during the onset period, as well as the duration of the apex period, is often related to the intensity of the underlying emotion associated with the facial expression.
  • Differences in speed during the onset and offset periods can be used to discriminate between spontaneous and faked facial expressions.
  • A Bayesian framework for embedded face and facial expression recognition which overcomes the above-noted problems is described in A. Colmenarez et al., "A Probabilistic Framework for Embedded Face and Facial Expression Recognition," International Conference on Computer Vision and Pattern Recognition (CVPR), 1999; A. Colmenarez et al., "Embedded Face and Facial Expression Recognition," International Conference on Image Processing (ICIP), 1999; A. Colmenarez et al., "Detection and Tracking of Faces and Facial Features," International Conference on Image Processing (ICIP), 1999; and A. Colmenarez, "Facial Analysis from Continuous Video with Application to Human-Computer Interface," Ph.D. dissertation, University of Illinois at Urbana-Champaign.
  • The above-noted Bayesian framework is able to achieve both face recognition and facial expression recognition.
  • However, the framework generally does not take into account the dynamics of facial expressions. For example, this approach generally assumes that the image frames of a given video signal to be analyzed are independent of one another, and therefore analyzes them one frame at a time.
  • The invention provides methods and apparatus for processing a video signal or other type of image signal in order to estimate facial expression intensity or other characteristics associated with facial expression dynamics, using a bidirectional star topology hidden Markov model (HMM).
  • The HMM has at least one neutral expression state and a plurality of expression paths emanating from the neutral expression state.
  • Each of the expression paths includes a number of states associated with a corresponding facial expression, such as sad, happy, anger, fear, disgust and surprise.
  • Control of one or more actions in the image processing system may be based at least in part on which of the facial expressions supported by the model is determined to be present in the sequence of images and/or the intensity or other characteristic of that expression.
  • A given expression path of the HMM may include an initial state coupled to the neutral state and a final state associated with an apex of the corresponding expression.
  • The given path further includes a forward path from the initial state to the final state, and a return path from the final state to the initial state.
  • The forward path is associated with an onset of the expression, and the return path is associated with an offset of the expression.
  • The forward and return paths of a given expression path may each include separate states, or may share a number of states.
  • Each of at least a subset of the states of a given expression path may be interconnected in the HMM with at least one state of at least one other expression path, by an interconnection which does not pass through the neutral state.
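
The transition structure implied by this topology can be made concrete with a small sketch. The Python code below builds the allowed-transition matrix for a bidirectional star topology HMM of the kind described above, in the variant where the forward and return paths share states; the function name and the probability values are illustrative assumptions, not part of the patent.

```python
import numpy as np

def build_star_hmm_transitions(expressions, n_states, p_stay=0.6, p_move=0.2):
    """Transition matrix of a bidirectional star topology HMM (a sketch).

    State 0 is the single neutral expression state.  Each expression
    contributes a chain of n_states states that is traversed forward
    during the onset of the expression and backward during its offset
    (the shared-state variant, as in FIG. 4)."""
    n_total = 1 + len(expressions) * n_states
    A = np.zeros((n_total, n_total))

    def idx(e, k):                       # state k (0-based) of expression e
        return 1 + e * n_states + k

    A[0, 0] = p_stay                     # dwell in the neutral state
    for e in range(len(expressions)):
        A[0, idx(e, 0)] = (1 - p_stay) / len(expressions)  # neutral -> path
        A[idx(e, 0), 0] = p_move                           # path -> neutral
        for k in range(n_states):
            A[idx(e, k), idx(e, k)] = p_stay               # dwell in a state
            if k + 1 < n_states:
                A[idx(e, k), idx(e, k + 1)] = p_move       # forward (onset)
                A[idx(e, k + 1), idx(e, k)] = p_move       # backward (offset)
    return A / A.sum(axis=1, keepdims=True)                # row-stochastic

A = build_star_hmm_transitions(
    ["happy", "sad", "anger", "fear", "disgust", "surprise"], n_states=4)
```
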
  • The invention provides significantly improved estimation of facial expression relative to the conventional techniques described previously.
  • The invention allows one to determine not only the particular facial expression present within a given image, but also the intensity or other relevant characteristics of that facial expression.
  • The techniques of the invention can be used in a wide variety of image processing applications, including video-camera-based systems such as video conferencing systems, video surveillance and monitoring systems, and human-machine interfaces.
  • FIG. 1 is a block diagram of an image processing system in which the present invention may be implemented.
  • FIG. 2 shows an example of a model of facial features and regions that may be used in conjunction with estimation of facial expression in an illustrative embodiment of the invention.
  • FIG. 3 shows an example of a Bayesian network for observation distribution suitable for use in the illustrative embodiment of the invention.
  • FIG. 4 shows an example of a bidirectional star topology hidden Markov model (HMM) configured in accordance with the invention.
  • FIGS. 5 and 6 show alternative configurations for the bidirectional star topology HMM of FIG. 4 in accordance with the invention.
  • FIG. 1 shows an image processing system 10 in which facial expression estimation techniques in accordance with the invention may be implemented.
  • The system 10 includes a processor 12, a memory 14, an input/output (I/O) device 15 and a controller 16, all of which are connected to communicate over a set 17 of one or more system buses or other types of interconnection.
  • The system 10 further includes a camera 18 that is coupled to the controller 16 as shown.
  • The camera 18 may be, e.g., a mechanical pan-tilt-zoom (PTZ) camera, a wide-angle electronic zoom camera, or any other suitable type of image capture device.
  • The system 10 may be adapted for use in any of a number of different image processing applications, including, e.g., video conferencing, video surveillance, human-machine interfaces, etc. More generally, the system 10 can be used in any application that can benefit from the improved facial expression estimation capabilities provided by the present invention.
  • In operation, the image processing system 10 generates a video signal or other type of sequence of images of a person 20.
  • The camera 18 may be adjusted such that a head 24 of the person 20 comes within a field of view 22 of the camera 18.
  • A video signal corresponding to a sequence of images generated by the camera 18 and including a face of the person 20 is then processed in system 10 using the facial expression estimation techniques of the invention, as will be described in greater detail below.
  • The sequence of images may be processed so as to determine a particular expression that is on the face of the person 20 within the images, based at least in part on an estimation of the intensity or other characteristic of the expression as determined using a bidirectional star topology hidden Markov model (HMM).
  • An output of the system may then be adjusted based on the determined expression.
  • A human-machine interface or other type of system application may generate a query or other output, or take another type of action, based on the determined expression or a characteristic thereof.
  • Any other type of control of an action of the system may be based at least in part on the determined expression and/or a particular characteristic thereof, such as intensity.
  • Elements or groups of elements of the system 10 may represent corresponding elements of an otherwise conventional desktop or portable computer, as well as portions or combinations of these and other processing devices. Moreover, in other embodiments of the invention, some or all of the functions of the processor 12, memory 14, controller 16 and/or other elements of the system 10 may be combined into a single device. For example, one or more of the elements of system 10 may be implemented as an application specific integrated circuit (ASIC) or circuit card to be incorporated into a computer, television, set-top box or other processing device.
  • The term "processor" as used herein is intended to include a microprocessor, central processing unit (CPU), microcontroller, digital signal processor (DSP) or any other data processing element that may be utilized in a given data processing device.
  • Similarly, memory 14 may represent an electronic memory, an optical or magnetic disk-based memory, a tape-based memory, as well as combinations or portions of these and other types of storage devices.
  • The present invention in an illustrative embodiment provides techniques for estimating facial expression in an image signal, and for characterizing dynamic aspects of facial expression using an HMM.
  • The invention in the illustrative embodiment models transitions between different facial expressions as well as transitions between multiple states within each facial expression. More particularly, each expression is modeled as multiple states along a path in a multi-dimensional space of facial appearance. This path for a given expression goes from a point corresponding to a neutral expression to that of an apex of the expression and back to the neutral expression.
  • The invention thus allows one to determine not only the particular facial expression present within a given image, but also the intensity or other relevant characteristic of that facial expression.
  • The former may be obtained using maximum likelihood classification among a set of different facial expression models.
  • The latter may be estimated by determining how far along the above-noted path of the corresponding facial expression the observation reaches.
  • Transition probability matrices $P(s_t \mid s_{t-1}, p)$ capture the statistics of the facial expression dynamics for a given person $p$. The state likelihood $V_t(s_t \mid f_1, \ldots, f_t, p)$, given the observed frames $f_1, \ldots, f_t$, may be computed recursively from
$$V_t(s_t \mid f_1, \ldots, f_t, p) = P(f_t \mid s_t, p) \sum_{s_{t-1}} P(s_t \mid s_{t-1}, p)\, V_{t-1}(s_{t-1} \mid f_1, \ldots, f_{t-1}, p).$$
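
A minimal log-space implementation of this recursion, assuming precomputed per-frame observation log-likelihoods, might look as follows; maximum-likelihood classification among expression models then amounts to picking the model with the largest sequence likelihood. The function name and data layout are illustrative assumptions.

```python
import numpy as np
from scipy.special import logsumexp

def forward_log_likelihood(log_A, log_pi, log_obs):
    """Forward recursion in log space (a sketch).

    log_A   : (S, S) log transition matrix of one person's HMM
    log_pi  : (S,)   log initial state distribution
    log_obs : (T, S) log P(f_t | state, person) for each frame t
    Returns log P(f_1..f_T | person) and the (T, S) array of forward
    log-probabilities, from which the visited states can be read off."""
    T, S = log_obs.shape
    alpha = np.empty((T, S))
    alpha[0] = log_pi + log_obs[0]
    for t in range(1, T):
        # sum over previous states s' of V_{t-1}(s') * P(s | s'), in log space
        alpha[t] = log_obs[t] + logsumexp(alpha[t - 1][:, None] + log_A, axis=0)
    return logsumexp(alpha[-1]), alpha

# Maximum-likelihood classification among a set of expression models:
# best = max(models, key=lambda m: forward_log_likelihood(*m)[0])
```
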
  • FIG. 2 shows a model of facial features and regions that may be used in conjunction with the estimation of facial expression in the illustrative embodiment of the invention. It should be understood that this model is provided by way of example only, and should not be construed as limiting the scope of the invention in any way. As will be apparent to those skilled in the art, the invention can be implemented using a wide variety of other facial models.
  • In the facial model of FIG. 2, the face of a person in an image 40 is modeled as a set of four feature regions and nine facial features. The position of each of the facial features in this example is denoted by an X.
  • The four feature regions include a right eyebrow region 50-1, a left eyebrow region 50-2, an eyes and nose region 50-3, and a mouth region 50-4.
  • The appearance of each facial feature is provided by a corresponding feature image sub-window located around its position. More particularly, the right eyebrow region 50-1 includes image sub-windows 52 and 54, the left eyebrow region 50-2 includes image sub-windows 62 and 64, the eyes and nose region 50-3 includes image sub-windows 72, 74 and 76, and the mouth region 50-4 includes image sub-windows 82 and 84. Facial geometry is given by the facial feature positions, which may be normalized with respect to the position of, and distance between, the outer eye corners.
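
As one hypothetical illustration of this normalization step, feature positions can be mapped to a coordinate frame anchored at the outer eye corners; the function and the choice of a similarity normalization are assumptions of the sketch.

```python
import numpy as np

def normalize_geometry(positions, left_outer_eye, right_outer_eye):
    """Normalize facial feature positions with respect to the position of,
    and distance between, the outer eye corners (a sketch).

    positions : (F, 2) array of facial feature coordinates."""
    origin = (left_outer_eye + right_outer_eye) / 2.0      # inter-eye midpoint
    scale = np.linalg.norm(right_outer_eye - left_outer_eye)
    return (positions - origin) / scale                    # scale-free geometry
```
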
  • The observation likelihood of a given feature region for a person $p$ in expression state $s$ may be computed using the positions $x_k$ and appearances $v_k$ of its $E$ features as
$$P(x_1, \ldots, x_E, v_1, \ldots, v_E \mid s, p) = P(x_1, \ldots, x_E \mid s, p)\, P(v_1, \ldots, v_E \mid s, p). \qquad (4)$$
  • The positions of the facial features in a region, $(x_1, \ldots, x_E)$, may be modeled jointly with a multi-dimensional Gaussian distribution having a full-covariance matrix.
  • The appearance of each facial feature in a region may be modeled independently, so that Equation (4) becomes
$$P(x_1, \ldots, x_E, v_1, \ldots, v_E \mid s, p) = P(x_1, \ldots, x_E \mid s, p) \prod_{k=1}^{E} P(v_k \mid s, p). \qquad (5)$$
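
A sketch of the resulting region likelihood, combining a joint full-covariance Gaussian over the feature positions with independent per-feature appearance terms, is shown below; the model containers and function signature are assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

def region_log_likelihood(positions, appearances, geom_model, app_models):
    """Log-likelihood of one feature region for a given (person, state),
    following Equations (4)-(5): positions are modeled jointly by a
    full-covariance Gaussian, appearances independently (a sketch).

    positions   : (E, 2) feature positions in the region
    appearances : list of E image sub-windows, one per feature
    geom_model  : dict with 'mean' and 'cov' of the joint position Gaussian
    app_models  : list of E callables, each returning log P(v | class)."""
    log_p = multivariate_normal.logpdf(
        positions.ravel(), mean=geom_model["mean"], cov=geom_model["cov"])
    for v, model in zip(appearances, app_models):
        log_p += model(v.ravel())        # independent appearance terms
    return log_p
```
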
  • FIG. 3 shows an example of a Bayesian network that may be used to model facial appearance and geometry observations in the illustrative embodiment of the invention.
  • Associated with each state k of a given observed facial expression are a set of feature regions 50-1, 50-2, 50-3 and 50-4 and their corresponding set of feature positions 100 and feature image sub-windows 110.
  • Each facial feature for a given person and expression state may be modeled with a multi-dimensional Gaussian distribution applied over the $p$ principal components and the distance from this sub-space. Note that this approach differs from a conventional eigenfeatures approach, in which Principal Component Analysis (PCA) is used to find the sub-space in which all object classes span the most: here, PCA is applied to each class independently in order to construct simple observation models that handle the high dimensionality of the observations.
  • Let $v \in \mathbb{R}^d$ be a d-dimensional random vector with some distribution that is to be modeled, i.e., an image sub-window around the corresponding facial feature position.
  • A set of training samples of a class is used to estimate the mean $\bar{v}$ and the covariance matrix $\Sigma$ of the observations of that class.
  • Using singular value decomposition, one can obtain the diagonal matrix $\Lambda$ corresponding to the $p$ largest eigenvalues of $\Sigma$, and the transformation matrix $T$ containing the corresponding eigenvectors. The conditional probability of $v$ for a given class is then computed from
$$P(v \mid \text{class}) = \frac{\exp\!\left(-\tfrac{1}{2}\,\tilde{v}^{\top}\Lambda^{-1}\tilde{v} - \tfrac{\epsilon^2(v)}{2\rho}\right)}{(2\pi)^{d/2}\,|\Lambda|^{1/2}\,\rho^{(d-p)/2}}, \qquad \tilde{v} = T^{\top}(v - \bar{v}), \qquad (6)$$
where $\epsilon^2(v)$ is the squared distance of $v$ from the sub-space spanned by the $p$ eigenvectors and $\rho$ is the variance assigned to that residual.
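
The per-class PCA model and the corresponding likelihood can be sketched as follows; treating the residual variance $\rho$ as the mean of the discarded eigenvalues is an assumption, as are the function names.

```python
import numpy as np

def fit_class_model(V, p):
    """Fit the per-class PCA observation model described above (a sketch).

    V : (n, d) matrix of training sub-windows for one class, one per row."""
    mean = V.mean(axis=0)
    _, s, Vh = np.linalg.svd(V - mean, full_matrices=False)
    eigvals = (s ** 2) / len(V)              # eigenvalues of the covariance
    T = Vh[:p].T                             # (d, p) top-p eigenvectors
    lam = eigvals[:p]                        # diagonal of Lambda
    rho = eigvals[p:].mean() if len(eigvals) > p else 1e-8  # residual variance
    return {"mean": mean, "T": T, "lam": lam, "rho": max(rho, 1e-8)}

def class_log_likelihood(v, m):
    """log P(v | class): Gaussian over the p principal components plus a
    Gaussian penalty on the distance from the sub-space."""
    centered = v - m["mean"]
    y = m["T"].T @ centered                  # coordinates in the sub-space
    resid = centered - m["T"] @ y            # component outside the sub-space
    eps2 = resid @ resid                     # squared distance from sub-space
    d, p = len(v), len(m["lam"])
    return (-0.5 * np.sum(y ** 2 / m["lam"]) - 0.5 * eps2 / m["rho"]
            - 0.5 * np.sum(np.log(m["lam"])) - 0.5 * (d - p) * np.log(m["rho"])
            - 0.5 * d * np.log(2 * np.pi))
```
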
  • The above-described learning procedure is supervised. In other words, it assumes that the class of each training sample is known, so that the statistics for each class can easily be computed.
  • For example, video segments may be labeled so that the facial expression is known for each image frame.
  • A conventional Expectation-Maximization (EM) algorithm may then be used to estimate the parameters of the observation model as set forth in Equations (3) to (6).
  • The EM algorithm is described in greater detail in, e.g., B.J. Frey, "Graphical Models for Machine Learning and Digital Communication," MIT Press, Cambridge, MA, 1998, which is incorporated by reference herein.
  • FIG. 4 shows an example of a set of the above-noted paths in the form of a bidirectional star topology HMM 120 having a single neutral expression state 122. From the neutral expression state 122 there are a set of six different paths, each corresponding to a particular facial expression and each including a total of N states.
  • The facial expressions modeled are happy, sad, anger, fear, disgust and surprise.
  • Other embodiments could of course use other numbers, types and arrangements of facial expressions.
  • Each expression is modeled with a path of multiple interconnected states.
  • The neutral expression is modeled with a single state, the neutral expression state 122.
  • The neutral state 122 is connected to the first state of each facial expression path.
  • As a given expression runs its course, all of the states of the corresponding path are visited, first in forward order and then in backward order, returning to the neutral expression state 122.
  • The bidirectional star topology HMM of FIG. 4 captures the evolution of facial expressions at all levels of intensity.
  • Each state represents one step towards the maximum level of expressiveness associated with the last state in the corresponding facial expression path.
  • An observation does not necessarily have to reach the last state in the path; therefore, one can measure the intensity of the observed facial expression using the highest state visited in the path, as well as the duration of the visit to that state.
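
A hypothetical readout of these two quantities from a decoded state sequence (for example a Viterbi path) is sketched below; how the peak state and dwell time are combined into a single intensity score is left application-dependent.

```python
def expression_intensity(state_path, path_states):
    """Estimate expression intensity from a decoded state sequence: the
    highest state reached along the expression path, plus the number of
    frames spent at that state (a sketch).

    state_path  : list of HMM state ids, one per frame
    path_states : ordered state ids of one expression path, neutral -> apex"""
    rank = {s: i for i, s in enumerate(path_states)}
    visited = [rank[s] for s in state_path if s in rank]
    if not visited:
        return 0, 0                      # the expression path was never entered
    peak = max(visited)                  # highest state reached (0-based rank)
    dwell = visited.count(peak)          # frames spent at the peak state
    return peak, dwell
```
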
  • The separation between two consecutive states in the HMM of FIG. 4 can be determined using the well-known Kullback-Leibler divergence of the corresponding observation probability distributions, computed along the line that connects the observation points with maximum likelihood for each state. That is,
$$d(s_1, s_2) = \int P(v \mid s_1)\, \log \frac{P(v \mid s_1)}{P(v \mid s_2)}\, dv,$$
evaluated over points $v$ along the line connecting $v_1$ and $v_2$, where $v_1$ and $v_2$ are the mean vectors in the case of Gaussian models.
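
For Gaussian state models there is a convenient closed form for the Kullback-Leibler divergence, which can serve as an illustrative stand-in for the along-the-line evaluation described above (the closed form integrates over all of $\mathbb{R}^d$ rather than just the connecting line, so this is an assumption, not the patent's exact computation).

```python
import numpy as np

def gaussian_kl(mu1, cov1, mu2, cov2):
    """Closed-form Kullback-Leibler divergence D(N1 || N2) between two
    Gaussian state models (a sketch)."""
    d = len(mu1)
    inv2 = np.linalg.inv(cov2)
    diff = mu2 - mu1
    return 0.5 * (np.trace(inv2 @ cov1) + diff @ inv2 @ diff - d
                  + np.log(np.linalg.det(cov2) / np.linalg.det(cov1)))
```
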
  • Each path is first trained by assuming a default number of states and measuring the average separation between the states. Then, the number of states is iteratively increased or reduced, and the HMM path is retrained, until the average separation is within a predefined range. Additional details on techniques for determining the appropriate number of states in a given path of the HMM 120 may be found in U.S. Patent Application Attorney Docket No. 701255, entitled "Method and Apparatus for Determining a Number of States for a Hidden Markov Model in a Signal Processing System," filed concurrently herewith in the name of inventors A. Colmenarez and S. Gutta, which application is incorporated by reference herein.
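
The retraining loop can be sketched as follows, with `train_fn` and `sep_fn` standing in (as assumed helpers) for path retraining and average-separation measurement, and with illustrative bounds on the acceptable separation.

```python
def tune_path_length(train_fn, sep_fn, n_init=5, sep_lo=1.0, sep_hi=3.0,
                     n_min=2, n_max=12):
    """Adjust the number of states in one expression path until the average
    separation between consecutive states lies in [sep_lo, sep_hi] (a sketch).

    train_fn(n) retrains the path with n states and returns the model;
    sep_fn(model) returns the average Kullback-Leibler separation."""
    n = n_init
    model = train_fn(n)
    while n_min <= n <= n_max:
        sep = sep_fn(model)
        if sep < sep_lo:
            n -= 1              # states too close together: use fewer states
        elif sep > sep_hi:
            n += 1              # states too far apart: use more states
        else:
            return model, n     # separation within the predefined range
        if n_min <= n <= n_max:
            model = train_fn(n)
    return model, n             # hit a bound without converging
```
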
  • Although each path in the FIG. 4 HMM is shown as including the same number of states N, this is by way of example and not limitation. In other embodiments, each of the paths may include a different number of states, with the above-described training procedure used to determine the appropriate number of states for a given expression path of the HMM.
  • The term "bidirectional star topology HMM" as used herein is intended to include any HMM having one or more bidirectional paths emanating outward from at least one neutral state, and thus includes the alternative arrangements described below as well as numerous other arrangements.
  • FIG. 5 shows a portion of an alternative bidirectional star topology HMM 120'. The portion shown includes the expression path for the facial expression of surprise.
  • As in FIG. 4, the surprise facial expression in HMM 120' starts from the neutral expression state 122.
  • Unlike in FIG. 4, however, the surprise facial expression path in HMM 120' includes two separate paths for modeling the transition from the neutral state to the expression apex and the transition from that apex back to the neutral state. Such an arrangement can be used to provide hysteresis in the state transition process. Although only a single expression is shown in FIG. 5, similar separate paths may be provided for each of the expressions in the HMM.
  • FIG. 6 shows another alternative configuration of a bidirectional star topology HMM in accordance with the invention.
  • In this configuration, an HMM 120'' includes the neutral state 122 and the expression paths for the same six expressions as the HMM 120 of FIG. 4. Only a portion of each of the expression paths is shown for simplicity of illustration.
  • The first states of each of the different expression paths are interconnected with the first states of one or more other expression paths as shown. For example, the state Happy 1 is interconnected with the states Disgust 1, Surprise 1 and Sad 1; the state Surprise 1 is interconnected with the states Happy 1, Anger 1 and Fear 1; and so on.
  • This interconnection of some or all of the first few states of each expression path allows transitions from one expression to another without going through the neutral state 122.
  • The interconnections may link adjacent paths in the star topology, such as the happy and surprise paths in the figure, as well as non-adjacent paths, such as the surprise and fear, disgust and happy, and sad and happy paths.
  • The particular states, paths and interconnections shown in FIG. 6 are only an example, and numerous other configurations are possible.
  • The above-described embodiments of the invention are intended to be illustrative only.
  • For example, the invention can be implemented using a bidirectional star topology HMM having any number or arrangement of expression paths, states and interconnections.
  • The invention can be used to provide facial expression estimation in a wide variety of applications, including video conferencing systems, video surveillance systems, and other camera-based systems.
  • In addition, the invention can be implemented at least in part in the form of one or more software programs which are stored on an electronic, magnetic or optical storage medium and executed by a processing device, e.g., by the processor 12 of system 10.


Abstract

An image processing system processes a sequence of images using a bidirectional star topology hidden Markov model in order to estimate facial expression intensity or other characteristics. The hidden Markov model has at least one neutral expression state and a plurality of expression paths emanating from the neutral expression state. Each of the expression paths includes a number of states associated with the corresponding facial expression, such as sadness, happiness, anger, disgust and surprise. A given expression path may include an initial state coupled to the neutral state and a final state associated with an apex of the corresponding expression. The expression path may further include a forward path from the initial state to the final state, associated with the onset of the expression, and a return path from the final state to the initial state, associated with the offset of the expression. Control of one or more actions in the image processing system may be based, at least in part, on which of the facial expressions supported by the model is determined to be present in the sequence of images and/or on the intensity or other characteristics of that expression.
PCT/EP2001/012346 2000-11-03 2001-10-23 Estimation of facial expression intensity using a bidirectional star topology hidden Markov model WO2002039371A2 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2002541616A JP2004513462A (ja) 2000-11-03 2001-10-23 Method and apparatus for estimating facial expression intensity using a bidirectional star topology hidden Markov model
EP01993900A EP1342206A2 (fr) 2000-11-03 2001-10-23 Estimation of facial expression intensity using a bidirectional star topology hidden Markov model

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US70566600A 2000-11-03 2000-11-03
US09/705,666 2000-11-03

Publications (2)

Publication Number Publication Date
WO2002039371A2 (fr) 2002-05-16
WO2002039371A3 (fr) 2002-08-01

Family

ID=24834448

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2001/012346 WO2002039371A2 (fr) 2000-11-03 2001-10-23 Estimation of facial expression intensity using a bidirectional star topology hidden Markov model

Country Status (3)

Country Link
EP (1) EP1342206A2 (fr)
JP (1) JP2004513462A (fr)
WO (1) WO2002039371A2 (fr)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006505875A (ja) * 2002-11-07 2006-02-16 Honda Motor Co., Ltd. Video-based face recognition using probabilistic appearance ensembles
US20070026254A1 (en) * 2003-06-11 2007-02-01 Mohamed Ben-Malek Method for processing surfaces of aluminium alloy sheets and strips
JP2007521550A (ja) * 2003-06-30 2007-08-02 Honda Motor Co., Ltd. Face recognition system and method
EP1934677A1 (fr) * 2005-09-12 2008-06-25 Emotiv Systems Pty Ltd. Methode et systeme de detection et de classification de mouvements musculaires faciaux
US7607097B2 (en) 2003-09-25 2009-10-20 International Business Machines Corporation Translating emotion to braille, emoticons and other special symbols
US7734071B2 (en) 2003-06-30 2010-06-08 Honda Motor Co., Ltd. Systems and methods for training component-based object identification systems
US20130108123A1 (en) * 2011-11-01 2013-05-02 Samsung Electronics Co., Ltd. Face recognition apparatus and method for controlling the same
US8478711B2 (en) 2011-02-18 2013-07-02 Larus Technologies Corporation System and method for data fusion with adaptive learning
CN103971131A (zh) * 2014-05-13 2014-08-06 Huawei Technologies Co., Ltd. Preset expression recognition method and apparatus
TWI457872B (zh) * 2011-11-15 2014-10-21 Univ Nat Taiwan Normal Testing system and method assisted by facial expression recognition
US9269374B1 (en) 2014-10-27 2016-02-23 Mattersight Corporation Predictive video analytics system and methods
EP3537368A4 (fr) * 2017-02-01 2019-11-20 Samsung Electronics Co., Ltd. Dispositif et procédé de recommandation de produits
CN112862936A (zh) * 2021-03-16 2021-05-28 NetEase (Hangzhou) Network Co., Ltd. Expression model processing method and apparatus, electronic device, and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101309954B1 (ko) 2012-07-20 2013-09-17 Korea University Industry-Academic Cooperation Foundation Interactive facial expression recognition apparatus and method, and recording medium therefor

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CUZZOLIN, F. ET AL: "Towards Unsupervised Detection of Actions in Clutter" TECH. REPORT UCLA, [Online] 4 December 2000 (2000-12-04), XP002198616 Retrieved from the Internet: <URL:http://www.vision.cs.ucla.edu/papers/action.pdf> [retrieved on 2002-05-10] *
IRA COHEN: "Automatic facial expression recognition from video sequences using temporal information" MASTER THESIS, [Online] May 2000 (2000-05), XP002198614 Urbana, USA Retrieved from the Internet: <URL:http://www.ifp.uiuc.edu/~iracohen/publications/thesismain.pdf> [retrieved on 2002-05-10] *
MATTHEW BRAND: "Learning Concise Models of Human Activity from Ambient Video via a Structure-Inducing M-Step Estimator" MITSUBISHI ELECTRIC RESEARCH LABORATORY TECHNICAL REPORT, [Online] November 1997 (1997-11), XP002198615 Cambridge, MA, USA Retrieved from the Internet: <URL:http://www.merl.com/papers/docs/TR97-25.pdf> [retrieved on 2002-05-10] *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006505875A (ja) * 2002-11-07 2006-02-16 Honda Motor Co., Ltd. Video-based face recognition using probabilistic appearance ensembles
US20070026254A1 (en) * 2003-06-11 2007-02-01 Mohamed Ben-Malek Method for processing surfaces of aluminium alloy sheets and strips
JP2007521550A (ja) * 2003-06-30 2007-08-02 Honda Motor Co., Ltd. Face recognition system and method
US7734071B2 (en) 2003-06-30 2010-06-08 Honda Motor Co., Ltd. Systems and methods for training component-based object identification systems
US7783082B2 (en) 2003-06-30 2010-08-24 Honda Motor Co., Ltd. System and method for face recognition
US7607097B2 (en) 2003-09-25 2009-10-20 International Business Machines Corporation Translating emotion to braille, emoticons and other special symbols
EP1934677A1 (fr) * 2005-09-12 2008-06-25 Emotiv Systems Pty Ltd. Methode et systeme de detection et de classification de mouvements musculaires faciaux
EP1934677A4 (fr) * 2005-09-12 2009-12-09 Emotiv Systems Pty Ltd Methode et systeme de detection et de classification de mouvements musculaires faciaux
US8478711B2 (en) 2011-02-18 2013-07-02 Larus Technologies Corporation System and method for data fusion with adaptive learning
US20130108123A1 (en) * 2011-11-01 2013-05-02 Samsung Electronics Co., Ltd. Face recognition apparatus and method for controlling the same
US8861805B2 (en) * 2011-11-01 2014-10-14 Samsung Electronics Co., Ltd. Face recognition apparatus and method for controlling the same
TWI457872B (zh) * 2011-11-15 2014-10-21 Univ Nat Taiwan Normal Testing system and method assisted by facial expression recognition
CN103971131A (zh) * 2014-05-13 2014-08-06 Huawei Technologies Co., Ltd. Preset expression recognition method and apparatus
US9269374B1 (en) 2014-10-27 2016-02-23 Mattersight Corporation Predictive video analytics system and methods
US9437215B2 (en) 2014-10-27 2016-09-06 Mattersight Corporation Predictive video analytics system and methods
US10262195B2 (en) 2014-10-27 2019-04-16 Mattersight Corporation Predictive and responsive video analytics system and methods
EP3537368A4 (fr) * 2017-02-01 2019-11-20 Samsung Electronics Co., Ltd. Dispositif et procédé de recommandation de produits
US11151453B2 (en) 2017-02-01 2021-10-19 Samsung Electronics Co., Ltd. Device and method for recommending product
CN112862936A (zh) * 2021-03-16 2021-05-28 NetEase (Hangzhou) Network Co., Ltd. Expression model processing method and apparatus, electronic device, and storage medium
CN112862936B (zh) 2021-03-16 2023-08-08 NetEase (Hangzhou) Network Co., Ltd. Expression model processing method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
WO2002039371A3 (fr) 2002-08-01
JP2004513462A (ja) 2004-04-30
EP1342206A2 (fr) 2003-09-10

Similar Documents

Publication Publication Date Title
Pantic et al. Automatic analysis of facial expressions: The state of the art
US7031499B2 (en) Object recognition system
Mitra et al. Gesture recognition: A survey
Sung et al. Example-based learning for view-based human face detection
Shih et al. Face detection using discriminating feature analysis and support vector machine
Gutta et al. Mixture of experts for classification of gender, ethnic origin, and pose of human faces
Borkar et al. Real-time implementation of face recognition system
Yang Recent advances in face detection
Sing et al. Face recognition using point symmetry distance-based RBF network
WO2002039371A2 (fr) Estimation of facial expression intensity using a bidirectional star topology hidden Markov model
Haber et al. A practical approach to real-time neutral feature subtraction for facial expression recognition
Sabri et al. A comparison of face detection classifier using facial geometry distance measure
Cohen et al. Vision-based overhead view person recognition
Littlewort et al. Analysis of machine learning methods for real-time recognition of facial expressions from video
Khuwaja An adaptive combined classifier system for invariant face recognition
Sharma et al. Face recognition using neural network and eigenvalues with distinct block processing
Nayak et al. A versatile online system for person-specific facial expression recognition
Howell et al. RBF network methods for face detection and attentional frames
Ge et al. Active affective facial analysis for human-robot interaction
Colmenarez Facial analysis from continuous video with application to human-computer interface
Lucey et al. Improved facial-feature detection for AVSP via unsupervised clustering and discriminant analysis
Ren et al. Real-time head pose estimation on mobile platforms
Lu et al. Head gesture recognition based on bayesian network
Moore et al. Automatic facial expression recognition using boosted discriminatory classifiers
Wu Human action recognition using deep probabilistic graphical models

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): JP

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR

ENP Entry into the national phase

Ref country code: JP

Ref document number: 2002 541616

Kind code of ref document: A

Format of ref document f/p: F

AK Designated states

Kind code of ref document: A3

Designated state(s): JP

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR

WWE Wipo information: entry into national phase

Ref document number: 2001993900

Country of ref document: EP

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWP Wipo information: published in national office

Ref document number: 2001993900

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 2001993900

Country of ref document: EP
