WO2019033570A1 - Lip movement analysis method, apparatus and storage medium - Google Patents
Lip movement analysis method, apparatus and storage medium
- Publication number
- WO2019033570A1 (PCT/CN2017/108749)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- lip
- real
- feature points
- image
- lips
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
Definitions
- the present application relates to the field of computer vision processing technologies, and in particular, to a lip motion analysis method, apparatus, and computer readable storage medium.
- Lip motion capture is a biometric recognition technique that performs user lip motion recognition based on human facial feature information.
- Lip motion capture is widely applied and plays an important role in many fields, such as access-control attendance and identity recognition, bringing great convenience to people's lives.
- To capture lip movements, the common product approach is to use deep learning to train a classification model of lip features, and then use that classification model to judge the state of the lips.
- With this approach, the lip states that can be recognized depend entirely on the types of lip samples collected. For example, to judge mouth opening and closing, a large number of open-mouth and closed-mouth samples must be collected; to additionally judge pouting, a large number of pouting samples must be collected and the model retrained. This is not only time-consuming but also makes real-time capture impossible.
- In addition, judging lip features with such a classification model cannot determine whether the recognized lip region is actually a human lip region.
- The present application provides a lip motion analysis method, device and computer readable storage medium, whose main purpose is to calculate the motion information of the lips in a real-time facial image from the coordinates of lip feature points, thereby realizing analysis of the lip region and real-time capture of lip movements.
- the present application provides an electronic device, including: a memory, a processor, and an imaging device, wherein the memory includes a lip motion analysis program, and the lip motion analysis program is executed by the processor to implement the following step:
- a real-time facial image acquisition step acquiring a real-time image captured by the camera device, and extracting a real-time facial image from the real-time image by using a face recognition algorithm;
- a feature point recognition step inputting the real-time facial image into a pre-trained lip average model, and using the lip average model to identify t lip feature points representing the position of the lips in the real-time facial image;
- a lip region recognizing step determining a lip region according to the t lip feature points, inputting the lip region into a pre-trained lip classification model, and determining whether the lip region is a human lip region;
- A lip movement judging step: if the lip region is a human lip region, calculating the moving direction and moving distance of the lips in the real-time facial image according to the x and y coordinates of the t lip feature points in the real-time facial image.
- Preferably, when the lip motion analysis program is executed by the processor, the following steps are further implemented:
- A prompting step: when the lip classification model judges that the lip region is not a human lip region, prompting that no human lip region was detected in the current real-time image and that lip motion cannot be judged, and returning to the real-time facial image acquisition step.
- the lip movement determining step comprises:
- Calculating the distance between the central feature point on the inner side of the upper lip and the central feature point on the inner side of the lower lip in the real-time facial image, to judge the degree of opening of the lips;
- Connecting the left outer lip corner feature point with the feature points closest to it on the outer contour lines of the upper and lower lips to form two vectors, and calculating the angle between these vectors to obtain the degree of leftward pull of the lips; and
- Connecting the right outer lip corner feature point with the feature points closest to it on the outer contour lines of the upper and lower lips to form two vectors, and calculating the angle between these vectors to obtain the degree of rightward pull of the lips.
- the present application further provides a lip motion analysis method, the method comprising:
- a real-time facial image acquisition step acquiring a real-time image captured by the camera device, and extracting a real-time facial image from the real-time image by using a face recognition algorithm;
- a feature point recognition step inputting the real-time facial image into a pre-trained lip average model, and using the lip average model to identify t lip feature points representing the position of the lips in the real-time facial image;
- a lip region recognizing step determining a lip region according to the t lip feature points, inputting the lip region into a pre-trained lip classification model, and determining whether the lip region is a human lip region;
- Lip motion judging step If the lip region is a human lip region, the moving direction and the moving distance of the lip in the real-time facial image are calculated according to the x and y coordinates of the t lip feature points in the real-time facial image.
- Preferably, when the lip motion analysis program is executed by the processor, the following steps are further implemented:
- A prompting step: when the lip classification model judges that the lip region is not a human lip region, prompting that no human lip region was detected in the current real-time image and that lip motion cannot be judged, and returning to the real-time facial image acquisition step.
- the lip movement determining step comprises:
- Calculating the distance between the central feature point on the inner side of the upper lip and the central feature point on the inner side of the lower lip in the real-time facial image, to judge the degree of opening of the lips;
- Connecting the left outer lip corner feature point with the feature points closest to it on the outer contour lines of the upper and lower lips to form two vectors, and calculating the angle between these vectors to obtain the degree of leftward pull of the lips; and
- Connecting the right outer lip corner feature point with the feature points closest to it on the outer contour lines of the upper and lower lips to form two vectors, and calculating the angle between these vectors to obtain the degree of rightward pull of the lips.
- In addition, the present application further provides a computer readable storage medium including a lip motion analysis program which, when executed by a processor, implements any of the steps of the lip motion analysis method described above.
- The lip motion analysis method, apparatus and computer readable storage medium proposed by the present application recognize lip feature points in a real-time facial image and judge whether the region formed by those feature points is a human lip region; if so, the motion information of the lips is calculated from the coordinates of the lip feature points. Deep learning on samples of every possible lip movement is not required, so analysis of the lip region and real-time capture of lip movements can both be achieved.
- FIG. 1 is a schematic diagram of a preferred embodiment of an electronic device of the present application.
- FIG. 2 is a block diagram of the lip motion analysis program of FIG. 1;
- FIG. 3 is a flow chart of a preferred embodiment of a lip motion analysis method of the present application.
- FIG. 4 is a schematic diagram showing the refinement of the step S40 of the lip motion analysis method of the present application.
- The present application provides an electronic device 1.
- Referring to FIG. 1, there is shown a schematic diagram of a preferred embodiment of the electronic device 1 of the present application.
- the electronic device 1 may be a terminal device having a computing function, such as a server, a smart phone, a tablet computer, a portable computer, or a desktop computer.
- the electronic device 1 includes a processor 12, a memory 11, an imaging device 13, a network interface 14, and a communication bus 15.
- The camera device 13 is installed in a specific place, such as an office or a monitored area, captures real-time images of targets entering that place, and transmits the captured real-time images to the processor 12 through the network.
- Network interface 14 may optionally include a standard wired interface, a wireless interface (such as a WI-FI interface).
- Communication bus 15 is used to implement connection communication between these components.
- the memory 11 includes at least one type of readable storage medium.
- the at least one type of readable storage medium may be a non-volatile storage medium such as a flash memory, a hard disk, a multimedia card, a card type memory, or the like.
- In some embodiments, the readable storage medium may be an internal storage unit of the electronic device 1, for example, the hard disk of the electronic device 1.
- the readable storage medium may also be an external memory of the electronic device 1, such as a plug-in hard disk equipped on the electronic device 1, a smart memory card (SMC), Secure Digital (SD) card, Flash Card, etc.
- In this embodiment, the readable storage medium of the memory 11 is generally used to store the lip motion analysis program 10 installed on the electronic device 1, the face image sample library, the human lip sample library, and the constructed and trained lip average model and lip classification model.
- the memory 11 can also be used to temporarily store data that has been output or is about to be output.
- In some embodiments, the processor 12 may be a Central Processing Unit (CPU), microprocessor or other data processing chip, which runs the program code stored in the memory 11 or processes data, for example, executes the lip motion analysis program 10.
- FIG. 1 shows only the electronic device 1 having the components 11-15 and the lip motion analysis program 10, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead.
- Optionally, the electronic device 1 may further include a user interface.
- The user interface may include an input unit such as a keyboard, a voice input device such as a microphone or another device with a voice recognition function, and a voice output device such as a speaker or headphones.
- the user interface may also include a standard wired interface and a wireless interface.
- Optionally, the electronic device 1 may further include a display, which may also be referred to as a display screen or a display unit.
- In some embodiments, the display may be an LED display, a liquid crystal display, a touch liquid crystal display, an Organic Light-Emitting Diode (OLED) touch display, or the like.
- the display is used to display information processed in the electronic device 1 and a user interface for displaying visualizations.
- the electronic device 1 further comprises a touch sensor.
- the area provided by the touch sensor for the user to perform a touch operation is referred to as a touch area.
- the touch sensor described herein may be a resistive touch sensor, a capacitive touch sensor, or the like.
- the touch sensor includes not only a contact type touch sensor but also a proximity type touch sensor or the like.
- the touch sensor may be a single sensor or a plurality of sensors arranged, for example, in an array.
- the area of the display of the electronic device 1 may be the same as or different from the area of the touch sensor.
- Optionally, the display is stacked with the touch sensor to form a touch display screen, and the device detects user-triggered touch operations based on the touch display screen.
- the electronic device 1 may further include a radio frequency (RF) circuit, a sensor, an audio circuit, and the like, and details are not described herein.
- The memory 11, as a computer storage medium, may include an operating system and the lip motion analysis program 10; when the processor 12 executes the lip motion analysis program 10 stored in the memory 11, the following steps are implemented:
- the real-time facial image acquisition step acquiring a real-time image captured by the camera device, and extracting a real-time facial image from the real-time image by using a face recognition algorithm.
- When the camera device 13 captures a real-time image, it transmits the real-time image to the processor 12.
- When the processor 12 receives the real-time image, it first obtains the size of the image and creates a grayscale image of the same size; converts the acquired color image into the grayscale image and allocates memory for it; equalizes the grayscale histogram to reduce the amount of image information and speed up detection; then loads the training library, detects the face in the image, returns an object containing the face information, obtains the data of the face position and records the number of faces; finally it obtains and saves the face region, completing one real-time facial image extraction.
- The face recognition algorithm used for extracting the real-time facial image from the real-time image may be a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, a neural network method, or the like.
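By way of illustration only, the acquisition step described above could be sketched as follows, assuming OpenCV and its bundled Haar cascade face detector; the patent does not prescribe a particular library, and the file and variable names here are hypothetical.

```python
# Illustrative sketch: acquire a frame, convert to grayscale, equalize the
# histogram, detect a face, and save the face region.
import cv2

# Assumed cascade file shipped with OpenCV; any face detector could be substituted.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_realtime_face(frame):
    """Return the cropped region of the largest detected face, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # color image -> grayscale image
    gray = cv2.equalizeHist(gray)                    # histogram equalization speeds up detection
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda r: r[2] * r[3])  # keep the largest face
    return frame[y:y + h, x:x + w]                      # save the face region

cap = cv2.VideoCapture(0)            # stand-in for the camera device
ok, frame = cap.read()
face_img = extract_realtime_face(frame) if ok else None
```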
- Feature point recognition step input the real-time facial image into a pre-trained lip average model, and use the lip average model to identify t lip feature points representing the position of the lips in the real-time facial image.
- the face feature recognition model is trained using the face image marking the lip feature point to obtain a lip average model for the face.
- The face feature recognition model is an Ensemble of Regression Trees (ERT) algorithm. The ERT algorithm is expressed as:
- S(t+1) = S(t) + τt(I, S(t))
- where t represents the cascade level and τt(·, ·) represents the regressor of the current stage. Each regressor is composed of a number of regression trees, and the purpose of training is to obtain these regression trees.
- Each regressor τt(·, ·) predicts an increment from the input image I and the current shape estimate S(t); this increment is added to the current shape estimate to refine the model. Each level of the cascade makes its prediction on the basis of the current feature points.
- The training data set is (I1, S1), ..., (In, Sn), where Ii is an input sample image and Si is the shape feature vector composed of the lip feature points marked in that sample image.
- During training, 15 of the 20 marked lip feature points are randomly selected from each sample image as the partial feature points.
- The first regression tree is trained; the residual between the predicted value of the first tree and the weighted average of the true values of the partial feature points (the 15 points taken from each sample image) is used to train the second tree, and so on, until the predicted value of the Nth tree is close to the true values of the partial feature points (the residual approaches 0). All the regression trees of the ERT algorithm are thereby obtained.
- From these regression trees the lip average model of the face is obtained, and the model file and the sample library are saved in the memory 11. Since the sample images used to train the model are marked with 20 lip feature points, the trained lip average model can be used to identify 20 lip feature points in a face image.
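As an illustrative sketch only, the dlib library provides an implementation of this ERT cascade of regression trees; the training-set XML file, output file name and parameter values below are assumptions rather than values specified by the patent.

```python
# Illustrative sketch: train an ERT landmark model with dlib.
# "lip_training.xml" (face images annotated with 20 lip points) is a hypothetical file.
import dlib

options = dlib.shape_predictor_training_options()
options.cascade_depth = 10              # number of cascaded regression stages t
options.num_trees_per_cascade_level = 500
options.tree_depth = 4
options.oversampling_amount = 20        # random initial shapes per training sample
options.nu = 0.1                        # learning rate (shrinkage)

# Trains all regression trees and writes the "lip average model" file.
dlib.train_shape_predictor("lip_training.xml", "lip_average_model.dat", options)
```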
- the real-time facial image is aligned with the lip average model, and then the feature extraction algorithm is used to search the real-time facial image to match the 20 lip feature points of the lip average model.
- It is assumed that the 20 lip feature points recognized from the real-time facial image are still recorded as P1 to P20, and their coordinates are (x1, y1), (x2, y2), (x3, y3), ..., (x20, y20).
- The upper lip and the lower lip each have eight feature points (labeled P1 to P8 and P9 to P16 respectively), and the left and right lip corners each have two feature points (labeled P17 to P18 and P19 to P20 respectively).
- Of the 8 feature points of the upper lip, 5 are located on the outer contour line of the upper lip (P1 to P5) and 3 are located on the inner contour line of the upper lip (P6 to P8, with P7 being the central feature point on the inner side of the upper lip); of the 8 feature points of the lower lip, 5 are located on the outer contour line of the lower lip (P9 to P13) and 3 are located on the inner contour line of the lower lip (P14 to P16, with P15 being the central feature point on the inner side of the lower lip).
- In this embodiment, the feature extraction algorithm is the SIFT (Scale-Invariant Feature Transform) algorithm.
- The SIFT algorithm extracts the local features of each lip feature point from the lip average model of the face, selects a lip feature point as the reference feature point, and searches the real-time facial image for feature points whose local features are the same as or similar to those of the reference feature point.
- In other embodiments, the feature extraction algorithm may also be a SURF (Speeded Up Robust Features) algorithm, an LBP (Local Binary Patterns) algorithm, a HOG (Histogram of Oriented Gradients) algorithm, or the like.
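A minimal sketch of the feature point recognition step is shown below, assuming the model trained in the previous sketch; note that dlib locates the landmarks by cascaded regression rather than by explicit SIFT matching, so this is only a stand-in for the matching described here, and the model file name is hypothetical.

```python
# Illustrative sketch: run the trained lip average model on the real-time facial
# image and read out the lip feature point coordinates.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("lip_average_model.dat")  # hypothetical model file

def lip_feature_points(face_bgr):
    """Return the lip feature points of the first detected face as (x, y) tuples."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    rects = detector(gray, 1)
    if not rects:
        return None
    shape = predictor(gray, rects[0])
    # A custom 20-point lip model yields indices 0-19; with dlib's stock 68-point
    # face model the 20 mouth landmarks would occupy indices 48-67 instead.
    return [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
```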
- Lip region recognizing step determining a lip region based on the t lip feature points, inputting the lip region into a pre-trained lip classification model, and determining whether the lip region is a human lip region.
- m positive lip sample images and k negative lip sample images are collected to form a second sample library.
- A positive lip sample image refers to an image containing human lips; lip regions can be extracted from the face image sample library and used as positive lip sample images.
- A negative lip sample image refers to an image in which the person's lip region is defective, or in which the lips in the image are not human lips (for example, animal lips). The positive and negative lip sample images together form the second sample library.
- Next, the local features of each positive and negative lip sample image are extracted.
- A feature extraction algorithm is used to extract the Histogram of Oriented Gradients (HOG) feature of each lip sample image. Since the color information in a lip sample image contributes little, the image is usually converted to grayscale and the whole image is normalized. The gradients in the horizontal and vertical directions of the image are then computed, and the gradient direction value of each pixel position is calculated from them, which captures contours, silhouettes and some texture information while further weakening the influence of illumination. The whole image is then divided into cells, and a histogram of gradient directions is built for each cell to count the local image gradient information; quantizing it yields the feature description vector of the local image region.
- A Support Vector Machine (SVM) classifier is then trained, using the positive lip sample images, the negative lip sample images and the extracted HOG features, to obtain the lip classification model of the face.
- After the 20 lip feature points have been identified in the real-time facial image, a lip region can be determined from them; the determined lip region is then input into the trained lip classification model, and whether it is a human lip region is judged according to the model's output.
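The classification step could be sketched as follows, assuming scikit-image for the HOG descriptor and scikit-learn for the linear SVM; the sample lists, image size and HOG parameters are illustrative assumptions, not values taken from the patent.

```python
# Illustrative sketch: HOG features + a linear SVM as the lip classification model.
import numpy as np
import cv2
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_feature(lip_bgr, size=(64, 32)):
    gray = cv2.cvtColor(lip_bgr, cv2.COLOR_BGR2GRAY)   # color carries little information
    gray = cv2.resize(gray, size)                       # normalize the whole image
    return hog(gray, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

def train_lip_classifier(pos_images, neg_images):
    """pos_images / neg_images: assumed lists of positive / negative lip sample images."""
    X = np.array([hog_feature(img) for img in pos_images + neg_images])
    y = np.array([1] * len(pos_images) + [0] * len(neg_images))
    clf = LinearSVC(C=1.0)
    clf.fit(X, y)
    return clf

def is_human_lip(clf, lip_region):
    return clf.predict([hog_feature(lip_region)])[0] == 1
```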
- Lip motion judging step If the lip region is a human lip region, the moving direction and the moving distance of the lip in the real-time facial image are calculated according to the x and y coordinates of the t lip feature points in the real-time facial image.
- the lip motion determining step includes:
- Calculating the distance between the central feature point on the inner side of the upper lip and the central feature point on the inner side of the lower lip in the real-time facial image, to judge the degree of opening of the lips;
- Connecting the left outer lip corner feature point with the feature points closest to it on the outer contour lines of the upper and lower lips to form two vectors, and calculating the angle between these vectors to obtain the degree of leftward pull of the lips; and
- Connecting the right outer lip corner feature point with the feature points closest to it on the outer contour lines of the upper and lower lips to form two vectors, and calculating the angle between these vectors to obtain the degree of rightward pull of the lips.
- The coordinates of the central feature point P7 on the inner side of the upper lip are (x7, y7), and the coordinates of the central feature point P15 on the inner side of the lower lip are (x15, y15). When the lip region is a human lip region, the degree of opening of the lips is judged by the distance between these two points, calculated as:
- d = √((x7 - x15)² + (y7 - y15)²)
- The coordinates of the left outer lip corner feature point P18 are (x18, y18), and the coordinates of the feature points P1 and P9, which are the points closest to P18 on the outer contour lines of the upper and lower lips, are (x1, y1) and (x9, y9) respectively. Connecting P18 with P1 and with P9 forms the two vectors V1 = (x1 - x18, y1 - y18) and V2 = (x9 - x18, y9 - y18), and the angle α between them is calculated as:
- cos α = (V1 · V2) / (|V1| |V2|)
- where α is the angle between the two vectors at the left lip corner. The degree of leftward pull of the lips can be judged from this angle: the smaller the angle, the greater the degree of leftward pull.
- Similarly, the coordinates of the right outer lip corner feature point P20 are (x20, y20), and the coordinates of the feature points P5 and P13, which are the points closest to P20 on the outer contour lines of the upper and lower lips, are (x5, y5) and (x13, y13) respectively. Connecting P20 with P5 and with P13 forms the two vectors V3 = (x5 - x20, y5 - y20) and V4 = (x13 - x20, y13 - y20), and the angle β between them is calculated as:
- cos β = (V3 · V4) / (|V3| |V4|)
- where β is the angle between the two vectors at the right lip corner. The degree of rightward pull of the lips can be judged from this angle: the smaller the angle, the greater the degree of rightward pull.
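For illustration, the opening distance and the two corner angles described above can be computed directly from the point coordinates; `points` is assumed to be the list of 20 (x, y) lip feature points in P1-P20 order, so the index mapping (P7 -> points[6], etc.) is an assumption of this sketch.

```python
# Illustrative sketch of the motion measurements: opening distance P7-P15 and the
# angles at the outer lip corners P18 (left) and P20 (right).
import math

def distance(p, q):
    """Euclidean distance between two feature points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def corner_angle(corner, upper, lower):
    """Angle (degrees) between the vectors corner->upper and corner->lower."""
    v1 = (upper[0] - corner[0], upper[1] - corner[1])
    v2 = (lower[0] - corner[0], lower[1] - corner[1])
    cos_a = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))

# points: 20 (x, y) lip feature points in P1-P20 order (assumed variable).
opening = distance(points[6], points[14])                      # P7-P15: opening degree
left_angle = corner_angle(points[17], points[0], points[8])    # P18 with P1 and P9
right_angle = corner_angle(points[19], points[4], points[12])  # P20 with P5 and P13
# Smaller corner angles correspond to a stronger leftward / rightward pull of the lips.
```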
- Prompting step: when the lip classification model determines that the lip region is not a human lip region, prompt that no human lip region was detected in the current real-time image and that lip motion cannot be judged, and return to the real-time image acquisition step to capture the next real-time image. In other words, after the lip region determined by the 20 lip feature points has been input into the lip classification model, if the model output indicates that the region is not a human lip region, no human lip region has been recognized and the subsequent lip motion judging step cannot be performed; the real-time image captured by the camera device 13 is re-acquired and the subsequent steps are performed again.
- the electronic device 1 of the present embodiment extracts a real-time facial image from a real-time image, recognizes a lip feature point in the real-time facial image by using a lip average model, and analyzes a lip region determined by a lip feature point using a lip classification model. If the lip region is a human lip region, the motion information of the lip in the real-time facial image is calculated according to the coordinates of the lip feature point, and the analysis of the lip region and the real-time capture of the lip motion are realized.
- the lip motion analysis program 10 can also be partitioned into one or more modules, one or more modules being stored in the memory 11 and executed by the processor 12 to complete the application.
- a module as referred to in this application refers to a series of computer program instructions that are capable of performing a particular function.
- FIG. 2 it is a block diagram of the lip motion analysis program 10 of FIG.
- the lip motion analysis program 10 can be divided into: an acquisition module 110, an identification module 120, a determination module 130, a calculation module 140, and a prompt module 150.
- The functions or operational steps implemented by the modules 110-150 are similar to those described above and are not repeated in detail here; by way of example:
- the acquiring module 110 is configured to acquire a real-time image captured by the camera device 13 and extract a real-time face image from the real-time image by using a face recognition algorithm;
- the recognition module 120 is configured to input the real-time facial image into a pre-trained lip average model, and use the lip average model to identify t lip feature points representing the lip position in the real-time facial image;
- the determining module 130 is configured to determine a lip region according to the t lip feature points, input the lip region into a pre-trained lip classification model, and determine whether the lip region is a human lip region;
- the calculating module 140 is configured to: when the lip region is a human lip region, calculate the moving direction and moving distance of the lips in the real-time facial image according to the x and y coordinates of the t lip feature points in the real-time facial image; and
- the prompting module 150 is configured to: when the lip classification model determines that the lip region is not a human lip region, prompt that no human lip region was detected in the current real-time image and that lip motion cannot be judged, and return the flow to the real-time image acquisition step to capture the next real-time image.
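The modules 110-150 could be chained per frame roughly as in the sketch below; it simply composes the hypothetical helper functions from the earlier sketches (extract_realtime_face, lip_feature_points, is_human_lip, distance, corner_angle) and is not a complete implementation of the program.

```python
def analyse_frame(frame, lip_clf):
    """One pass of acquisition (110), recognition (120), determination (130),
    calculation (140) and prompting (150), using the earlier illustrative helpers."""
    face = extract_realtime_face(frame)            # acquisition module 110
    if face is None:
        return None
    points = lip_feature_points(face)              # recognition module 120
    if points is None:
        return None
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    lip_region = face[min(ys):max(ys), min(xs):max(xs)]   # lip region from the 20 points
    if not is_human_lip(lip_clf, lip_region):      # determination module 130
        print("No human lip region detected; lip motion cannot be judged")  # prompt 150
        return None
    return (distance(points[6], points[14]),                   # opening degree
            corner_angle(points[17], points[0], points[8]),    # left-corner angle
            corner_angle(points[19], points[4], points[12]))   # right-corner angle
```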
- the present application also provides a lip motion analysis method.
- a flow chart of a preferred embodiment of the lip motion analysis method of the present application is shown. The method can be performed by a device that can be implemented by software and/or hardware.
- the lip motion analysis method includes steps S10 to S50.
- Step S10 Acquire a real-time image captured by the camera device, and extract a real-time face image from the real-time image by using a face recognition algorithm.
- the camera transmits the real-time image to the processor.
- When the processor receives the real-time image, it first obtains the size of the image and creates a grayscale image of the same size; converts the acquired color image into the grayscale image and allocates memory for it; equalizes the grayscale histogram to reduce the amount of image information and speed up detection; then loads the training library, detects the face in the image, returns an object containing the face information, obtains the data of the face position and records the number of faces; finally it obtains and saves the face region, completing one real-time facial image extraction.
- The face recognition algorithm used for extracting the real-time facial image from the real-time image may be a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, a neural network method, or the like.
- step S20 the real-time facial image is input into the pre-trained lip average model, and the lip average model is used to identify t lip feature points representing the lip position in the real-time facial image.
- the face feature recognition model is trained using the face image marking the lip feature point to obtain a lip average model for the face.
- the face feature recognition model is an ERT algorithm.
- The ERT algorithm is expressed as follows:
- S(t+1) = S(t) + τt(I, S(t))
- where t represents the cascade level and τt(·, ·) represents the regressor of the current stage. Each regressor is composed of a number of regression trees, and the purpose of training is to obtain these regression trees.
- Each regressor τt(·, ·) predicts an increment from the input image I and the current shape estimate S(t); this increment is added to the current shape estimate to refine the model. Each level of the cascade makes its prediction on the basis of the current feature points.
- The training data set is (I1, S1), ..., (In, Sn), where Ii is an input sample image and Si is the shape feature vector composed of the lip feature points marked in that sample image.
- During training, 15 of the 20 marked lip feature points are randomly selected from each sample image as the partial feature points.
- The first regression tree is trained; the residual between the predicted value of the first tree and the weighted average of the true values of the partial feature points (the 15 points taken from each sample image) is used to train the second tree, and so on, until the predicted value of the Nth tree is close to the true values of the partial feature points (the residual approaches 0). All the regression trees of the ERT algorithm are thereby obtained.
- The lip average model of the face is obtained from these regression trees, and the model file and the sample library are saved to the memory. Since the sample images used to train the model are marked with 20 lip feature points, the trained lip average model can be used to identify 20 lip feature points in a face image.
- the real-time facial image is aligned with the lip average model, and then the feature extraction algorithm is used to search the real-time facial image for matching the 20 lip feature points of the lip average model.
- It is assumed that the 20 lip feature points recognized from the real-time facial image are still recorded as P1 to P20, and their coordinates are (x1, y1), (x2, y2), (x3, y3), ..., (x20, y20).
- The upper lip and the lower lip each have eight feature points (labeled P1 to P8 and P9 to P16 respectively), and the left and right lip corners each have two feature points (labeled P17 to P18 and P19 to P20 respectively).
- Of the 8 feature points of the upper lip, 5 are located on the outer contour line of the upper lip (P1 to P5) and 3 are located on the inner contour line of the upper lip (P6 to P8, with P7 being the central feature point on the inner side of the upper lip); of the 8 feature points of the lower lip, 5 are located on the outer contour line of the lower lip (P9 to P13) and 3 are located on the inner contour line of the lower lip (P14 to P16, with P15 being the central feature point on the inner side of the lower lip).
- the feature extraction algorithm may also be a SIFT algorithm, a SURF algorithm, an LBP algorithm, an HOG algorithm, or the like.
- Step S30 determining a lip region according to the t lip feature points, inputting the lip region into a pre-trained lip classification model, and determining whether the lip region is a human lip region.
- m positive lip sample images and k negative lip sample images are collected to form a second sample library.
- A positive lip sample image refers to an image containing human lips; lip regions can be extracted from the face image sample library and used as positive lip sample images.
- A negative lip sample image refers to an image in which the person's lip region is defective, or in which the lips in the image are not human lips (for example, animal lips). The positive and negative lip sample images together form the second sample library.
- Next, the local features of each positive and negative lip sample image are extracted.
- A feature extraction algorithm is used to extract the Histogram of Oriented Gradients (HOG) feature of each lip sample image. Since the color information in a lip sample image contributes little, the image is usually converted to grayscale and the whole image is normalized. The gradients in the horizontal and vertical directions of the image are then computed, and the gradient direction value of each pixel position is calculated from them, which captures contours, silhouettes and some texture information while further weakening the influence of illumination. The whole image is then divided into cells, and a histogram of gradient directions is built for each cell to count the local image gradient information; quantizing it yields the feature description vector of the local image region. The cells are then combined into larger blocks.
- the support vector machine classifier is trained by using the positive sample image of the lips, the negative sample image of the lips, and the extracted HOG feature to obtain a lip classification model of the face.
- After the 20 lip feature points have been identified in the real-time facial image, a lip region can be determined from them; the determined lip region is then input into the trained lip classification model, and whether it is a human lip region is judged according to the model's output.
- Step S40 if the lip region is a human lip region, the moving direction and the moving distance of the lip in the real-time facial image are calculated according to the x and y coordinates of the t lip feature points in the real-time facial image.
- step S40 includes:
- Step S41 calculating a distance between a central feature point of the inner side of the upper lip and a central feature point of the inner side of the lower lip in the real-time facial image, and determining the degree of opening of the lip;
- Step S42: connect the left outer lip corner feature point with the feature points closest to it on the outer contour lines of the upper and lower lips to form two vectors, and calculate the angle between these vectors to obtain the degree of leftward pull of the lips; and
- Step S43: connect the right outer lip corner feature point with the feature points closest to it on the outer contour lines of the upper and lower lips to form two vectors, and calculate the angle between these vectors to obtain the degree of rightward pull of the lips.
- The coordinates of the central feature point P7 on the inner side of the upper lip are (x7, y7), and the coordinates of the central feature point P15 on the inner side of the lower lip are (x15, y15). When the lip region is a human lip region, the degree of opening of the lips is judged by the distance between these two points, calculated as:
- d = √((x7 - x15)² + (y7 - y15)²)
- The coordinates of the left outer lip corner feature point P18 are (x18, y18), and the coordinates of the feature points P1 and P9, which are the points closest to P18 on the outer contour lines of the upper and lower lips, are (x1, y1) and (x9, y9) respectively. Connecting P18 with P1 and with P9 forms the two vectors V1 = (x1 - x18, y1 - y18) and V2 = (x9 - x18, y9 - y18), and the angle α between them is calculated as:
- cos α = (V1 · V2) / (|V1| |V2|)
- where α is the angle between the two vectors at the left lip corner. The degree of leftward pull of the lips can be judged from this angle: the smaller the angle, the greater the degree of leftward pull.
- Similarly, the coordinates of the right outer lip corner feature point P20 are (x20, y20), and the coordinates of the feature points P5 and P13, which are the points closest to P20 on the outer contour lines of the upper and lower lips, are (x5, y5) and (x13, y13) respectively. Connecting P20 with P5 and with P13 forms the two vectors V3 = (x5 - x20, y5 - y20) and V4 = (x13 - x20, y13 - y20), and the angle β between them is calculated as:
- cos β = (V3 · V4) / (|V3| |V4|)
- where β is the angle between the two vectors at the right lip corner. The degree of rightward pull of the lips can be judged from this angle: the smaller the angle, the greater the degree of rightward pull.
- Step S50: when the lip classification model determines that the lip region is not a human lip region, prompt that no human lip region was detected in the current real-time image and that lip motion cannot be judged, and return to the real-time image acquisition step to capture the next real-time image.
- That is, after the lip region determined by the 20 lip feature points has been input into the lip classification model, if the model output indicates that the region is not a human lip region, no human lip region has been recognized and the subsequent lip motion judging step cannot be performed; the real-time image captured by the camera is re-acquired and the subsequent steps are performed again.
- the lip motion analysis method of the present embodiment uses the lip average model to identify the lip feature points in the real-time facial image, and uses the lip classification model to analyze the lip region determined by the lip feature point, if the lip region is a human lip region Then, according to the coordinates of the lip feature points, the motion information of the lips in the real-time facial image is calculated, and the analysis of the lip region and the real-time capture of the lip motion are realized.
- the embodiment of the present application further provides a computer readable storage medium, where the computer readable storage medium includes a lip motion analysis program, and when the lip motion analysis program is executed by the processor, the following operations are implemented:
- A model construction step: constructing and training a facial feature recognition model to obtain the lip average model of the face, and training an SVM with lip sample images to obtain the lip classification model;
- a real-time facial image acquisition step acquiring a real-time image captured by the camera device, and extracting a real-time facial image from the real-time image by using a face recognition algorithm;
- a feature point recognition step inputting the real-time facial image into a pre-trained lip average model, and using the lip average model to identify t lip feature points representing the position of the lips in the real-time facial image;
- a lip region recognizing step determining a lip region according to the t lip feature points, inputting the lip region into a pre-trained lip classification model, and determining whether the lip region is a human lip region;
- Lip motion judging step If the lip region is a human lip region, the moving direction and the moving distance of the lip in the real-time facial image are calculated according to the x and y coordinates of the t lip feature points in the real-time facial image.
- Preferably, when the lip motion analysis program is executed by the processor, the following operations are further implemented:
- A prompting step: when the lip classification model judges that the lip region is not a human lip region, prompting that no human lip region was detected in the current real-time image and that lip motion cannot be judged, and returning to the real-time facial image acquisition step.
- the lip motion determining step includes:
- Calculating the distance between the central feature point on the inner side of the upper lip and the central feature point on the inner side of the lower lip in the real-time facial image, to judge the degree of opening of the lips;
- Connecting the left outer lip corner feature point with the feature points closest to it on the outer contour lines of the upper and lower lips to form two vectors, and calculating the angle between these vectors to obtain the degree of leftward pull of the lips; and
- Connecting the right outer lip corner feature point with the feature points closest to it on the outer contour lines of the upper and lower lips to form two vectors, and calculating the angle between these vectors to obtain the degree of rightward pull of the lips.
- a disk including a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the methods described in the various embodiments of the present application.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Image Analysis (AREA)
Abstract
A lip movement analysis method, an apparatus and a storage medium. The method comprises: obtaining a real-time image captured by an imaging apparatus, and extracting a real-time facial image from the real-time image (S10); inputting the real-time facial image to a pre-trained lip average model, and identifying t lip feature points representing lip positions in the real-time facial image (S20); determining a lip area according to the t lip feature points, inputting the lip area to a pre-trained lip classification model, and determining whether the lip area is a lip area of a person (S30); if yes, calculating a lip movement direction and movement distance in the real-time facial image according to x and y coordinates of the t lip feature points in the real-time facial image (S40). Lip movement information in the real-time facial image is calculated according to the coordinates of the lip feature points, so as to implement lip area analysis and real-time lip movement capturing.
Description
Priority claim
This application claims priority under the Paris Convention to the Chinese patent application No. CN 201710708364.9, entitled "Lip movement analysis method, apparatus and storage medium", filed on August 17, 2017, the entire contents of which are incorporated herein by reference.
The implementation, functional features and advantages of the present application will be further described with reference to the accompanying drawings and the embodiments.
It should be understood that the specific embodiments described herein are merely illustrative of the application and are not intended to limit it.
本申请提供一种电子装置1。参照图1所示,为本申请电子装置1较佳实施例的示意图。The application provides an electronic device 1 . Referring to FIG. 1 , it is a schematic diagram of a preferred embodiment of the electronic device 1 of the present application.
在本实施例中,电子装置1可以是服务器、智能手机、平板电脑、便携计算机、桌上型计算机等具有运算功能的终端设备。In this embodiment, the electronic device 1 may be a terminal device having a computing function, such as a server, a smart phone, a tablet computer, a portable computer, or a desktop computer.
该电子装置1包括:处理器12、存储器11、摄像装置13、网络接口14及通信总线15。其中,摄像装置13安装于特定场所,如办公场所、监控区域,对进入该特定场所的目标实时拍摄得到实时图像,通过网络将拍摄得到的实时图像传输至处理器12。网络接口14可选地可以包括标准的有线接口、无线接口(如WI-FI接口)。通信总线15用于实现这些组件之间的连接通信。The electronic device 1 includes a processor 12, a memory 11, an imaging device 13, a network interface 14, and a communication bus 15. The camera device 13 is installed in a specific place, such as an office place and a monitoring area, and real-time images are taken in real time for the target entering the specific place, and the captured real-time image is transmitted to the processor 12 through the network. Network interface 14 may optionally include a standard wired interface, a wireless interface (such as a WI-FI interface). Communication bus 15 is used to implement connection communication between these components.
存储器11包括至少一种类型的可读存储介质。所述至少一种类型的可读存储介质可为如闪存、硬盘、多媒体卡、卡型存储器等的非易失性存储介质。在一些实施例中,所述可读存储介质可以是所述电子装置1的内部存储单元,
例如该电子装置1的硬盘。在另一些实施例中,所述可读存储介质也可以是所述电子装置1的外部存储器,例如所述电子装置1上配备的插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)等。The memory 11 includes at least one type of readable storage medium. The at least one type of readable storage medium may be a non-volatile storage medium such as a flash memory, a hard disk, a multimedia card, a card type memory, or the like. In some embodiments, the readable storage medium may be an internal storage unit of the electronic device 1,
For example, the hard disk of the electronic device 1. In other embodiments, the readable storage medium may also be an external memory of the electronic device 1, such as a plug-in hard disk equipped on the electronic device 1, a smart memory card (SMC), Secure Digital (SD) card, Flash Card, etc.
In this embodiment, the readable storage medium of the memory 11 is generally used to store the lip motion analysis program 10 installed on the electronic device 1, the face image sample library, the human lip sample library, and the constructed and trained lip average model and lip classification model. The memory 11 may also be used to temporarily store data that has been output or is about to be output.
In some embodiments, the processor 12 may be a Central Processing Unit (CPU), a microprocessor, or another data processing chip, and is used to run the program code or process the data stored in the memory 11, for example to execute the lip motion analysis program 10.
FIG. 1 shows only the electronic device 1 having the components 11-15 and the lip motion analysis program 10, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead.
Optionally, the electronic device 1 may further include a user interface. The user interface may include an input unit such as a keyboard, a voice input device such as a microphone or another device with a voice recognition function, and a voice output device such as a speaker or headphones. Optionally, the user interface may also include a standard wired interface and a wireless interface.
Optionally, the electronic device 1 may further include a display, which may also be referred to as a display screen or a display unit. In some embodiments, it may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an Organic Light-Emitting Diode (OLED) touch display, or the like. The display is used to show the information processed in the electronic device 1 and to display a visual user interface.
Optionally, the electronic device 1 further includes a touch sensor. The area provided by the touch sensor for the user's touch operation is referred to as the touch area. The touch sensor described here may be a resistive touch sensor, a capacitive touch sensor, or the like. Moreover, the touch sensor includes not only contact-type touch sensors but also proximity-type touch sensors. Furthermore, the touch sensor may be a single sensor or a plurality of sensors arranged, for example, in an array.
In addition, the area of the display of the electronic device 1 may be the same as or different from the area of the touch sensor. Optionally, the display is stacked with the touch sensor to form a touch display screen, and the device detects user-triggered touch operations based on the touch display screen.
Optionally, the electronic device 1 may further include a radio frequency (RF) circuit, other sensors, an audio circuit, and the like, which are not described here.
In the apparatus embodiment shown in FIG. 1, the memory 11, which serves as a computer storage medium, may include an operating system and the lip motion analysis program 10; when the processor 12 executes the lip motion analysis program 10 stored in the memory 11, the following steps are implemented:
Real-time facial image acquisition step: acquire a real-time image captured by the imaging device, and extract a real-time facial image from the real-time image by using a face recognition algorithm.
When the imaging device 13 captures a real-time image, it sends the image to the processor 12. After receiving the real-time image, the processor 12 first obtains the size of the picture and creates a grayscale image of the same size; it converts the acquired color image into a grayscale image and allocates a memory space; it equalizes the histogram of the grayscale image to reduce the amount of grayscale image information and speed up detection; it then loads the training library, detects the faces in the picture, returns an object containing face information, obtains the data of the face locations, and records their number; finally, the face region is obtained and saved, which completes one round of real-time facial image extraction.
Specifically, the face recognition algorithm used to extract the real-time facial image from the real-time image may be a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, a neural network method, or the like.
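A minimal sketch of the acquisition step described above, using OpenCV's Haar cascade face detector as one possible realization of the "training library" mentioned here; the cascade file name and the camera index are assumptions, not details taken from the application:

```python
import cv2

def acquire_face_region(frame, cascade_path="haarcascade_frontalface_default.xml"):
    """Extract one real-time facial image from a captured frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # color image -> grayscale of the same size
    gray = cv2.equalizeHist(gray)                    # histogram equalization to speed up detection
    detector = cv2.CascadeClassifier(cascade_path)   # load the (assumed) trained cascade
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                                  # no face found in this frame
    x, y, w, h = faces[0]                            # keep the first detected face region
    return frame[y:y + h, x:x + w]

# Usage: read one frame from the camera and extract the face region.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
face_img = acquire_face_region(frame) if ok else None
```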
Feature point recognition step: input the real-time facial image into a pre-trained lip average model, and use the lip average model to identify the t lip feature points representing the position of the lips in the real-time facial image.
A first sample library containing n face images is established, and t feature points are manually marked on the lip portion of each face image in the first sample library; the t feature points are evenly distributed over the upper lip, the lower lip, and the left and right lip corners.
The face feature recognition model is trained with the face images marked with lip feature points to obtain a lip average model of the human face. The face feature recognition model is an Ensemble of Regression Trees (ERT) algorithm, which can be expressed by the following formula:

S(t+1) = S(t) + τt(I, S(t))

where t denotes the cascade index and τt(·,·) denotes the regressor of the current stage. Each regressor is composed of many regression trees, and the purpose of training is to obtain these regression trees.
Here S(t) is the shape estimate of the current model; each regressor τt(·,·) predicts an increment from the input image I and S(t), and this increment is added to the current shape estimate to improve the current model. Each stage of the regressor makes its prediction from the feature points. The training data set is (I1, S1), ..., (In, Sn), where I is an input sample image and S is the shape feature vector composed of the feature points in that sample image.
During model training, the number of face images in the sample library is n. Assume t = 20, i.e., each sample picture has 20 feature points. A subset of the feature points of all sample pictures (for example, 15 feature points randomly chosen from the 20 feature points of each sample picture) is taken to train the first regression tree; the residual between the predicted value of the first regression tree and the true value of the partial feature points (the weighted average of the 15 feature points taken from each sample picture) is used to train the second tree, and so on, until the predicted value of the N-th tree is close to the true value of the partial feature points (the residual approaches 0). All the regression trees of the ERT algorithm are thus obtained, the lip average model of the human face is derived from these regression trees, and the model file and the sample library are saved in the memory 11. Because the sample images used to train the model are marked with 20 lip feature points, the trained lip average model of the face can be used to identify 20 lip feature points in a face image.
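For illustration only, dlib's shape-predictor trainer implements this kind of ERT cascade of regression trees; below is a hedged sketch of training a 20-point lip model with it. The XML file name, the output file name, and the parameter values are assumptions, and the annotation file would have to follow dlib's training XML format:

```python
import dlib

# Training options for the ERT cascade of regression trees.
options = dlib.shape_predictor_training_options()
options.cascade_depth = 10                 # number of cascaded regressors τ_t
options.tree_depth = 4                     # depth of each regression tree
options.num_trees_per_cascade_level = 500  # trees per cascade level
options.oversampling_amount = 20           # random perturbations of the initial shape

# "lips_training.xml" is an assumed annotation file listing the n sample
# face images, each marked with t = 20 lip feature points.
dlib.train_shape_predictor("lips_training.xml", "lip_average_model.dat", options)

# The resulting model can later be loaded to locate the 20 lip points.
predictor = dlib.shape_predictor("lip_average_model.dat")
```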
After the trained lip average model is loaded from the memory 11, the real-time facial image is aligned with the lip average model, and a feature extraction algorithm is then used to search the real-time facial image for 20 lip feature points that match the 20 lip feature points of the lip average model. Assume that the 20 lip feature points identified in the real-time facial image are still denoted P1 to P20, with coordinates (x1, y1), (x2, y2), (x3, y3), ..., (x20, y20).
As shown in FIG. 2, the upper and lower lips each have 8 feature points (denoted P1-P8 and P9-P16 respectively), and the left and right lip corners each have 2 feature points (denoted P17-P18 and P19-P20 respectively). Of the 8 feature points of the upper lip, 5 lie on the outer contour of the upper lip (P1-P5) and 3 lie on the inner contour of the upper lip (P6-P8, with P7 being the central feature point on the inner side of the upper lip); of the 8 feature points of the lower lip, 5 lie on the outer contour of the lower lip (P9-P13) and 3 lie on the inner contour of the lower lip (P14-P16, with P15 being the central feature point on the inner side of the lower lip). Of the 2 feature points at each lip corner, one lies on the outer lip contour (for example P18 and P20, hereinafter referred to as the outer lip-corner feature points) and one lies on the inner lip contour (for example P17 and P19, hereinafter referred to as the inner lip-corner feature points). In this embodiment, the feature extraction algorithm is the SIFT (scale-invariant feature transform) algorithm. The SIFT algorithm extracts the local feature of each lip feature point from the lip average model of the face, selects one lip feature point as a reference feature point, and searches the real-time facial image for a feature point whose local feature is the same as or similar to that of the reference feature point (for example, the difference between the local features of the two feature points is within a preset range); this is repeated until all lip feature points have been found in the real-time facial image. In other embodiments, the feature extraction algorithm may also be the SURF (Speeded Up Robust Features) algorithm, the LBP (Local Binary Patterns) algorithm, the HOG (Histogram of Oriented Gradients) algorithm, or the like.
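A rough sketch of the SIFT-based matching idea with OpenCV: compute descriptors at the known landmark locations of the mean model and match them against descriptors found in the live face image. The keypoint size and the matching strategy are assumptions, and this is only one way to realize the local-feature matching described above:

```python
import cv2
import numpy as np

sift = cv2.SIFT_create()

def descriptors_at(gray, points, size=16):
    """Compute SIFT descriptors at given (x, y) landmark locations."""
    kps = [cv2.KeyPoint(float(x), float(y), size) for x, y in points]
    kps, desc = sift.compute(gray, kps)
    return kps, desc

def match_lip_points(mean_gray, mean_points, live_gray):
    """mean_gray / mean_points: mean-model image and its 20 lip landmarks.
    live_gray: the aligned real-time facial image (grayscale)."""
    _, ref_desc = descriptors_at(mean_gray, mean_points)
    live_kps, live_desc = sift.detectAndCompute(live_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matched = []
    for d in ref_desc:
        # nearest live keypoint whose local feature is closest to the reference point
        m = matcher.match(d.reshape(1, -1), live_desc)[0]
        matched.append(live_kps[m.trainIdx].pt)
    return np.array(matched)   # 20 (x, y) lip feature points in the live image
```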
Lip region recognition step: determine a lip region from the t lip feature points, input the lip region into a pre-trained lip classification model, and judge whether the lip region is a human lip region.
m positive lip sample images and k negative lip sample images are collected to form a second sample library. A positive lip sample image is an image containing human lips; the lip portion may be cropped from the face image sample library and used as a positive lip sample image. A negative lip sample image is an image in which the person's lip region is incomplete, or in which the lips are not human lips (for example, animal lips). The positive and negative lip sample images together form the second sample library.
The local features of each positive lip sample image and each negative lip sample image are extracted. A feature extraction algorithm is used to extract the Histogram of Oriented Gradients (HOG) feature of each lip sample image. Since the color information in a lip sample image contributes little, the image is usually converted into a grayscale image and normalized as a whole; the gradients along the horizontal and vertical directions of the image are calculated, and the gradient orientation at each pixel position is derived from them, so as to capture contours, silhouettes, and some texture information while further weakening the influence of illumination. The whole image is then divided into cells, and a histogram of gradient orientations is built for each cell to collect and quantize the local image gradient information, yielding the feature description vector of each local image region. The cells are then grouped into larger blocks; because local illumination and foreground-background contrast vary considerably, the gradient magnitudes vary over a wide range, so they are normalized within each block, which further compresses lighting, shadows, and edges. Finally, the HOG descriptors of all blocks are concatenated to form the final HOG feature description vector.
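A small sketch of this HOG extraction using scikit-image; the target size, cell size, block size, and number of orientation bins are assumed values, not parameters stated in the application:

```python
import cv2
from skimage.feature import hog

def lip_hog_vector(lip_bgr, size=(64, 32)):
    """Return the concatenated HOG descriptor of one lip sample image."""
    gray = cv2.cvtColor(lip_bgr, cv2.COLOR_BGR2GRAY)   # color contributes little, use grayscale
    gray = cv2.resize(gray, size)                      # normalize the image size
    return hog(gray,
               orientations=9,                         # gradient-orientation bins per cell
               pixels_per_cell=(8, 8),                 # cell size
               cells_per_block=(2, 2),                 # cells grouped into one block
               block_norm="L2-Hys")                    # per-block normalization of gradient magnitudes
```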
A Support Vector Machine (SVM) classifier is trained with the positive lip sample images, the negative lip sample images, and the extracted HOG features, yielding the lip classification model of the human face.
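A hedged sketch of this training step with scikit-learn; `positive_imgs` and `negative_imgs` are assumed lists of lip sample images, and `lip_hog_vector` is the HOG helper sketched above:

```python
import numpy as np
from sklearn.svm import LinearSVC

# HOG features: label 1 for positive lip samples, 0 for negative lip samples.
X = np.array([lip_hog_vector(img) for img in positive_imgs + negative_imgs])
y = np.array([1] * len(positive_imgs) + [0] * len(negative_imgs))

lip_classifier = LinearSVC(C=1.0)   # the lip classification model
lip_classifier.fit(X, y)

# Later: is_human_lip = lip_classifier.predict([lip_hog_vector(candidate_region)])[0] == 1
```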
After 20 lip feature points have been identified in the real-time facial image, a lip region can be determined from these 20 lip feature points; the determined lip region is then input into the trained lip classification model, and whether the determined lip region is a human lip region is judged from the result returned by the model.
Lip motion judgment step: if the lip region is a human lip region, calculate the motion direction and motion distance of the lips in the real-time facial image from the x and y coordinates of the t lip feature points in the real-time facial image.
Specifically, the lip motion judgment step includes:
calculating the distance between the central feature point on the inner side of the upper lip and the central feature point on the inner side of the lower lip in the real-time facial image, to judge how far the lips are open;
connecting the left outer lip-corner feature point with the feature points on the outer contours of the upper and lower lips that are closest to it, thereby forming two vectors, and calculating the angle between these two vectors to obtain the degree to which the lips are skewed to the left; and
connecting the right outer lip-corner feature point with the feature points on the outer contours of the upper and lower lips that are closest to it, thereby forming two vectors, and calculating the angle between these two vectors to obtain the degree to which the lips are skewed to the right.
In the real-time facial image, the coordinates of the central feature point P7 on the inner side of the upper lip are (x7, y7), the coordinates of the central feature point P15 on the inner side of the lower lip are (x15, y15), and the lip region is a human lip region. The distance between the two points is then:

d = √((x15 − x7)² + (y15 − y7)²)

If d = 0, points P7 and P15 coincide, that is, the lips are closed; if d > 0, how far the lips are open is judged from the magnitude of d, and the larger d is, the wider the lips are open.
The coordinates of the left outer lip-corner feature point P18 are (x18, y18), and the coordinates of the feature points P1 and P9 on the outer contours of the upper and lower lips that are closest to P18 are (x1, y1) and (x9, y9) respectively. Connecting P18 with P1 and with P9 forms the vectors V1 = (x1 − x18, y1 − y18) and V2 = (x9 − x18, y9 − y18), and the angle α between them is calculated as:

α = arccos( (V1 · V2) / (|V1| |V2|) )

where α denotes the angle between the two vectors. By calculating this angle, the degree to which the lips are skewed to the left can be judged; the smaller the angle, the more the lips are skewed to the left.
Similarly, the coordinates of the right outer lip-corner feature point P20 are (x20, y20), and the coordinates of the feature points P5 and P13 on the outer contours of the upper and lower lips that are closest to P20 are (x5, y5) and (x13, y13) respectively. Connecting P20 with P5 and with P13 forms the vectors V3 = (x5 − x20, y5 − y20) and V4 = (x13 − x20, y13 − y20), and the angle β between them is calculated as:

β = arccos( (V3 · V4) / (|V3| |V4|) )

where β denotes the angle between the two vectors. By calculating this angle, the degree to which the lips are skewed to the right can be judged; the smaller the angle, the more the lips are skewed to the right.
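The three quantities above reduce to a few lines of vector arithmetic; a sketch with NumPy, where `pts` is assumed to be a 20×2 array of lip feature point coordinates indexed so that pts[0] is P1, pts[6] is P7, and so on:

```python
import numpy as np

def lip_motion_metrics(pts):
    """Return mouth opening d and lip-corner angles (alpha, beta) in radians."""
    def angle(corner, a, b):
        v1, v2 = a - corner, b - corner
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    d = np.linalg.norm(pts[14] - pts[6])       # distance P7-P15: how far the lips are open
    alpha = angle(pts[17], pts[0], pts[8])     # P18 with P1 and P9: left skew
    beta = angle(pts[19], pts[4], pts[12])     # P20 with P5 and P13: right skew
    return d, alpha, beta
```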
Prompt step: when the lip classification model judges that the lip region is not a human lip region, a prompt is given that no human lip region has been detected in the current real-time image and that lip motion cannot be judged, and the flow returns to the real-time image capture step to capture the next real-time image. After the lip region determined by the 20 lip feature points is input into the lip classification model, if the model result indicates that the lip region is not a human lip region, it is prompted that no human lip region has been recognized and the next lip motion judgment step cannot be performed; at the same time, a real-time image captured by the imaging device 13 is acquired again and the subsequent steps are performed.
The electronic device 1 proposed in this embodiment extracts a real-time facial image from a real-time image, identifies the lip feature points in the real-time facial image with the lip average model, and analyzes the lip region determined by the lip feature points with the lip classification model. If the lip region is a human lip region, the motion information of the lips in the real-time facial image is calculated from the coordinates of the lip feature points, achieving analysis of the lip region and real-time capture of lip motion.
In other embodiments, the lip motion analysis program 10 may also be divided into one or more modules, which are stored in the memory 11 and executed by the processor 12 to complete the present application. A module as referred to in this application is a series of computer program instruction segments capable of performing a particular function. Referring to FIG. 2, it is a block diagram of the lip motion analysis program 10 of FIG. 1. In this embodiment, the lip motion analysis program 10 may be divided into an acquisition module 110, a recognition module 120, a judgment module 130, a calculation module 140, and a prompt module 150. The functions or operation steps implemented by the modules 110-150 are similar to those described above and are not detailed again here; by way of example:
the acquisition module 110 is configured to acquire a real-time image captured by the imaging device 13 and to extract a real-time facial image from the real-time image by using a face recognition algorithm;
the recognition module 120 is configured to input the real-time facial image into the pre-trained lip average model and to use the lip average model to identify the t lip feature points representing the position of the lips in the real-time facial image;
the judgment module 130 is configured to determine a lip region from the t lip feature points, to input the lip region into the pre-trained lip classification model, and to judge whether the lip region is a human lip region;
the calculation module 140 is configured to, if the lip region is a human lip region, calculate the motion direction and motion distance of the lips in the real-time facial image from the x and y coordinates of the t lip feature points in the real-time facial image; and
the prompt module 150 is configured to, when the lip classification model judges that the lip region is not a human lip region, prompt that no human lip region has been detected in the current real-time image and that lip motion cannot be judged, after which the flow returns to the real-time image capture step to capture the next real-time image.
In addition, the present application also provides a lip motion analysis method. Referring to FIG. 3, it is a flow chart of a preferred embodiment of the lip motion analysis method of the present application. The method may be performed by an apparatus, and the apparatus may be implemented by software and/or hardware.
In this embodiment, the lip motion analysis method includes steps S10 to S50.
Step S10: acquire a real-time image captured by the imaging device, and extract a real-time facial image from the real-time image by using a face recognition algorithm.
When the imaging device captures a real-time image, it sends the image to the processor. After receiving the real-time image, the processor first obtains the size of the picture and creates a grayscale image of the same size; it converts the acquired color image into a grayscale image and allocates a memory space; it equalizes the histogram of the grayscale image to reduce the amount of grayscale image information and speed up detection; it then loads the training library, detects the faces in the picture, returns an object containing face information, obtains the data of the face locations, and records their number; finally, the face region is obtained and saved, which completes one round of real-time facial image extraction. Specifically, the face recognition algorithm used to extract the real-time facial image from the real-time image may also be a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, a neural network method, or the like.
Step S20: input the real-time facial image into the pre-trained lip average model, and use the lip average model to identify the t lip feature points representing the position of the lips in the real-time facial image.
A first sample library containing n face images is established, and t feature points are manually marked on the lip portion of each face image in the first sample library; the t feature points are evenly distributed over the upper lip, the lower lip, and the left and right lip corners.
The face feature recognition model is trained with the face images marked with lip feature points to obtain a lip average model of the human face. The face feature recognition model is the ERT algorithm, which can be expressed by the following formula:

S(t+1) = S(t) + τt(I, S(t))

where t denotes the cascade index and τt(·,·) denotes the regressor of the current stage. Each regressor is composed of many regression trees, and the purpose of training is to obtain these regression trees.
Here S(t) is the shape estimate of the current model; each regressor τt(·,·) predicts an increment from the input image I and S(t), and this increment is added to the current shape estimate to improve the current model. Each stage of the regressor makes its prediction from the feature points. The training data set is (I1, S1), ..., (In, Sn), where I is an input sample image and S is the shape feature vector composed of the feature points in that sample image.
During model training, the number of face images in the sample library is n. Assume t = 20, i.e., each sample picture has 20 feature points. A subset of the feature points of all sample pictures (for example, 15 feature points randomly chosen from the 20 feature points of each sample picture) is taken to train the first regression tree; the residual between the predicted value of the first regression tree and the true value of the partial feature points (the weighted average of the 15 feature points taken from each sample picture) is used to train the second tree, and so on, until the predicted value of the N-th tree is close to the true value of the partial feature points (the residual approaches 0). All the regression trees of the ERT algorithm are thus obtained, the lip average model of the human face is derived from these regression trees, and the model file and the sample library are saved in the memory. Because the sample images used to train the model are marked with 20 lip feature points, the trained lip average model of the face can be used to identify 20 lip feature points in a face image.
After the trained lip average model is loaded from the memory, the real-time facial image is aligned with the lip average model, and a feature extraction algorithm is then used to search the real-time facial image for 20 lip feature points that match the 20 lip feature points of the lip average model. Assume that the 20 lip feature points identified in the real-time facial image are still denoted P1 to P20, with coordinates (x1, y1), (x2, y2), (x3, y3), ..., (x20, y20).
As shown in FIG. 2, the upper and lower lips each have 8 feature points (denoted P1-P8 and P9-P16 respectively), and the left and right lip corners each have 2 feature points (denoted P17-P18 and P19-P20 respectively). Of the 8 feature points of the upper lip, 5 lie on the outer contour of the upper lip (P1-P5) and 3 lie on the inner contour of the upper lip (P6-P8, with P7 being the central feature point on the inner side of the upper lip); of the 8 feature points of the lower lip, 5 lie on the outer contour of the lower lip (P9-P13) and 3 lie on the inner contour of the lower lip (P14-P16, with P15 being the central feature point on the inner side of the lower lip). Of the 2 feature points at each lip corner, one lies on the outer lip contour (for example P18 and P20, hereinafter referred to as the outer lip-corner feature points) and one lies on the inner lip contour (for example P17 and P19, hereinafter referred to as the inner lip-corner feature points).
Specifically, the feature extraction algorithm may also be the SIFT algorithm, the SURF algorithm, the LBP algorithm, the HOG algorithm, or the like.
Step S30: determine a lip region from the t lip feature points, input the lip region into the pre-trained lip classification model, and judge whether the lip region is a human lip region.
m positive lip sample images and k negative lip sample images are collected to form a second sample library. A positive lip sample image is an image containing human lips; the lip portion may be cropped from the face image sample library and used as a positive lip sample image. A negative lip sample image is an image in which the person's lip region is incomplete, or in which the lips are not human lips (for example, animal lips). The positive and negative lip sample images together form the second sample library.
The local features of each positive lip sample image and each negative lip sample image are extracted. A feature extraction algorithm is used to extract the Histogram of Oriented Gradients (HOG) feature of each lip sample image. Since the color information in a lip sample image contributes little, the image is usually converted into a grayscale image and normalized as a whole; the gradients along the horizontal and vertical directions of the image are calculated, and the gradient orientation at each pixel position is derived from them, so as to capture contours, silhouettes, and some texture information while further weakening the influence of illumination. The whole image is then divided into cells, and a histogram of gradient orientations is built for each cell to collect and quantize the local image gradient information, yielding the feature description vector of each local image region. The cells are then grouped into larger blocks; because local illumination and foreground-background contrast vary considerably, the gradient magnitudes vary over a wide range, so they are normalized within each block, which further compresses lighting, shadows, and edges. Finally, the HOG descriptors of all blocks are concatenated to form the final HOG feature description vector.
A support vector machine classifier is trained with the positive lip sample images, the negative lip sample images, and the extracted HOG features, yielding the lip classification model of the human face.
After 20 lip feature points have been identified in the real-time facial image, a lip region can be determined from these 20 lip feature points; the determined lip region is then input into the trained lip classification model, and whether the determined lip region is a human lip region is judged from the result returned by the model.
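For step S30, one way to go from the 20 identified points to a classifier decision is to crop the bounding box spanned by the points and reuse the HOG and SVM sketches above; the margin value and the helper names are assumptions for illustration:

```python
import numpy as np

def is_human_lip_region(face_img, pts, classifier, margin=5):
    """Crop the region spanned by the 20 lip points and classify it."""
    x0, y0 = np.min(pts, axis=0).astype(int) - margin
    x1, y1 = np.max(pts, axis=0).astype(int) + margin
    region = face_img[max(y0, 0):y1, max(x0, 0):x1]          # candidate lip region
    return classifier.predict([lip_hog_vector(region)])[0] == 1
```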
Step S40: if the lip region is a human lip region, calculate the motion direction and motion distance of the lips in the real-time facial image from the x and y coordinates of the t lip feature points in the real-time facial image.
Referring to FIG. 4, it is a detailed flow chart of step S40 in the lip motion analysis method of the present application. Specifically, step S40 includes:
Step S41: calculate the distance between the central feature point on the inner side of the upper lip and the central feature point on the inner side of the lower lip in the real-time facial image, to judge how far the lips are open;
Step S42: connect the left outer lip-corner feature point with the feature points on the outer contours of the upper and lower lips that are closest to it, thereby forming two vectors, and calculate the angle between these two vectors to obtain the degree to which the lips are skewed to the left; and
Step S43: connect the right outer lip-corner feature point with the feature points on the outer contours of the upper and lower lips that are closest to it, thereby forming two vectors, and calculate the angle between these two vectors to obtain the degree to which the lips are skewed to the right.
In the real-time facial image, the coordinates of the central feature point P7 on the inner side of the upper lip are (x7, y7), the coordinates of the central feature point P15 on the inner side of the lower lip are (x15, y15), and the lip region is a human lip region. The distance between the two points is then:

d = √((x15 − x7)² + (y15 − y7)²)

If d = 0, points P7 and P15 coincide, that is, the lips are closed; if d > 0, how far the lips are open is judged from the magnitude of d, and the larger d is, the wider the lips are open.
The coordinates of the left outer lip-corner feature point P18 are (x18, y18), and the coordinates of the feature points P1 and P9 on the outer contours of the upper and lower lips that are closest to P18 are (x1, y1) and (x9, y9) respectively. Connecting P18 with P1 and with P9 forms the vectors V1 = (x1 − x18, y1 − y18) and V2 = (x9 − x18, y9 − y18), and the angle α between them is calculated as:

α = arccos( (V1 · V2) / (|V1| |V2|) )

where α denotes the angle between the two vectors. By calculating this angle, the degree to which the lips are skewed to the left can be judged; the smaller the angle, the more the lips are skewed to the left.
Similarly, the coordinates of the right outer lip-corner feature point P20 are (x20, y20), and the coordinates of the feature points P5 and P13 on the outer contours of the upper and lower lips that are closest to P20 are (x5, y5) and (x13, y13) respectively. Connecting P20 with P5 and with P13 forms the vectors V3 = (x5 − x20, y5 − y20) and V4 = (x13 − x20, y13 − y20), and the angle β between them is calculated as:

β = arccos( (V3 · V4) / (|V3| |V4|) )

where β denotes the angle between the two vectors. By calculating this angle, the degree to which the lips are skewed to the right can be judged; the smaller the angle, the more the lips are skewed to the right.
Step S50: when the lip classification model judges that the lip region is not a human lip region, a prompt is given that no human lip region has been detected in the current real-time image and that lip motion cannot be judged, and the flow returns to the real-time image capture step to capture the next real-time image. After the lip region determined by the 20 lip feature points is input into the lip classification model, if the model result indicates that the lip region is not a human lip region, it is prompted that no human lip region has been recognized and the next lip motion judgment step cannot be performed; at the same time, a real-time image captured by the imaging device is acquired again and the subsequent steps are performed.
The lip motion analysis method proposed in this embodiment identifies the lip feature points in the real-time facial image with the lip average model and analyzes the lip region determined by the lip feature points with the lip classification model. If the lip region is a human lip region, the motion information of the lips in the real-time facial image is calculated from the coordinates of the lip feature points, achieving analysis of the lip region and real-time capture of lip motion.
In addition, an embodiment of the present application further provides a computer readable storage medium. The computer readable storage medium includes a lip motion analysis program, and when the lip motion analysis program is executed by a processor, the following operations are implemented:
model building step: build and train a face feature recognition model to obtain a lip average model of the human face, and train an SVM with lip sample images to obtain a lip classification model;
real-time facial image acquisition step: acquire a real-time image captured by the imaging device, and extract a real-time facial image from the real-time image by using a face recognition algorithm;
feature point recognition step: input the real-time facial image into the pre-trained lip average model, and use the lip average model to identify the t lip feature points representing the position of the lips in the real-time facial image;
lip region recognition step: determine a lip region from the t lip feature points, input the lip region into the pre-trained lip classification model, and judge whether the lip region is a human lip region; and
lip motion judgment step: if the lip region is a human lip region, calculate the motion direction and motion distance of the lips in the real-time facial image from the x and y coordinates of the t lip feature points in the real-time facial image.
Optionally, when the lip motion analysis program is executed by the processor, the following operation is also implemented:
prompt step: when the lip classification model judges that the lip region is not a human lip region, prompt that no human lip region has been detected in the current real-time image and that lip motion cannot be judged, and return to the real-time facial image acquisition step.
Optionally, the lip motion judgment step includes:
calculating the distance between the central feature point on the inner side of the upper lip and the central feature point on the inner side of the lower lip in the real-time facial image, to judge how far the lips are open;
connecting the left outer lip-corner feature point with the feature points on the outer contours of the upper and lower lips that are closest to it, thereby forming two vectors, and calculating the angle between these two vectors to obtain the degree to which the lips are skewed to the left; and
connecting the right outer lip-corner feature point with the feature points on the outer contours of the upper and lower lips that are closest to it, thereby forming two vectors, and calculating the angle between these two vectors to obtain the degree to which the lips are skewed to the right.
The specific implementation of the computer readable storage medium of the present application is substantially the same as the specific implementation of the lip motion analysis method described above, and is not described again here.
It should be noted that, in this document, the terms "comprise", "include", or any other variants thereof are intended to cover a non-exclusive inclusion, so that a process, apparatus, article, or method that comprises a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, apparatus, article, or method that comprises that element.
The serial numbers of the above embodiments of the present application are for description only and do not represent the advantages or disadvantages of the embodiments. Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, although in many cases the former is the better implementation. Based on such an understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium as described above (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the various embodiments of the present application.
The above are only preferred embodiments of the present application and do not thereby limit the scope of the patent of the present application. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present application, or any direct or indirect application in other related technical fields, is likewise included within the scope of patent protection of the present application.
Claims (20)
- An electronic device, characterized in that the device comprises a memory, a processor, and an imaging device, the memory including a lip motion analysis program which, when executed by the processor, implements the following steps:
a real-time facial image acquisition step: acquiring a real-time image captured by the imaging device, and extracting a real-time facial image from the real-time image by using a face recognition algorithm;
a feature point recognition step: inputting the real-time facial image into a pre-trained lip average model, and using the lip average model to identify t lip feature points representing the position of the lips in the real-time facial image;
a lip region recognition step: determining a lip region from the t lip feature points, inputting the lip region into a pre-trained lip classification model, and judging whether the lip region is a human lip region; and
a lip motion judgment step: if the lip region is a human lip region, calculating the motion direction and motion distance of the lips in the real-time facial image from the x and y coordinates of the t lip feature points in the real-time facial image.
- The electronic device according to claim 1, characterized in that, when the lip motion analysis program is executed by the processor, the following step is also implemented:
a prompt step: when the lip classification model judges that the lip region is not a human lip region, prompting that no human lip region has been detected in the current real-time image and that lip motion cannot be judged, and returning to the real-time facial image acquisition step.
- The electronic device according to claim 2, characterized in that the lip motion judgment step comprises:
calculating the distance between the central feature point on the inner side of the upper lip and the central feature point on the inner side of the lower lip in the real-time facial image, to judge how far the lips are open;
connecting the left outer lip-corner feature point with the feature points on the outer contours of the upper and lower lips that are closest to it, thereby forming two vectors, and calculating the angle between these two vectors to obtain the degree to which the lips are skewed to the left; and
connecting the right outer lip-corner feature point with the feature points on the outer contours of the upper and lower lips that are closest to it, thereby forming two vectors, and calculating the angle between these two vectors to obtain the degree to which the lips are skewed to the right.
- The electronic device according to claim 2, characterized in that the training of the lip classification model comprises:
collecting m positive lip sample images and k negative lip sample images to form a second sample library;
extracting the local features of each positive lip sample image and each negative lip sample image; and
training a support vector machine classifier with the positive lip sample images, the negative lip sample images, and their local features to obtain the lip classification model of the human face.
- The electronic device according to claim 1, characterized in that the training of the lip average model comprises:
establishing a first sample library containing n face images, and marking t feature points on the lip portion of each face image in the first sample library, the t feature points being evenly distributed over the upper lip, the lower lip, and the left and right lip corners; and
training a face feature recognition model with the face images marked with lip feature points to obtain the lip average model of the human face.
- The electronic device according to claim 5, characterized in that the face feature recognition model is the ERT algorithm, expressed by the following formula:
S(t+1) = S(t) + τt(I, S(t))
where t denotes the cascade index, τt(·,·) denotes the regressor of the current stage, S(t) is the shape estimate of the current model, and each regressor τt(·,·) predicts an increment from the input current image I and S(t); during model training, a subset of the feature points of all sample pictures is taken to train the first regression tree, the residual between the predicted value of the first regression tree and the true value of the partial feature points is used to train the second tree, and so on, until the predicted value of the N-th tree is close to the true value of the partial feature points, yielding all the regression trees of the ERT algorithm, from which the average mouth model of the human face is obtained.
- The electronic device according to claim 1, characterized in that the face recognition algorithm comprises a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, and a neural network method.
- A lip motion analysis method applied to an electronic device, characterized in that the method comprises:
a real-time facial image acquisition step: acquiring a real-time image captured by the imaging device, and extracting a real-time facial image from the real-time image by using a face recognition algorithm;
a feature point recognition step: inputting the real-time facial image into a pre-trained lip average model, and using the lip average model to identify t lip feature points representing the position of the lips in the real-time facial image;
a lip region recognition step: determining a lip region from the t lip feature points, inputting the lip region into a pre-trained lip classification model, and judging whether the lip region is a human lip region; and
a lip motion judgment step: if the lip region is a human lip region, calculating the motion direction and motion distance of the lips in the real-time facial image from the x and y coordinates of the t lip feature points in the real-time facial image.
- The lip motion analysis method according to claim 8, characterized in that the method further comprises:
a prompt step: when the lip classification model judges that the lip region is not a human lip region, prompting that no human lip region has been detected in the current real-time image and that lip motion cannot be judged, and returning to the real-time facial image acquisition step.
- The lip motion analysis method according to claim 9, characterized in that the lip motion judgment step comprises:
calculating the distance between the central feature point on the inner side of the upper lip and the central feature point on the inner side of the lower lip in the real-time facial image, to judge how far the lips are open;
connecting the left outer lip-corner feature point with the feature points on the outer contours of the upper and lower lips that are closest to it, thereby forming two vectors, and calculating the angle between these two vectors to obtain the degree to which the lips are skewed to the left; and
connecting the right outer lip-corner feature point with the feature points on the outer contours of the upper and lower lips that are closest to it, thereby forming two vectors, and calculating the angle between these two vectors to obtain the degree to which the lips are skewed to the right.
- The lip motion analysis method according to claim 9, characterized in that the training of the lip classification model comprises:
collecting m positive lip sample images and k negative lip sample images to form a second sample library;
extracting the local features of each positive lip sample image and each negative lip sample image; and
training a support vector machine classifier with the positive lip sample images, the negative lip sample images, and their local features to obtain the lip classification model of the human face.
- The lip motion analysis method according to claim 8, characterized in that the training of the lip average model comprises:
establishing a first sample library containing n face images, and marking t feature points on the lip portion of each face image in the first sample library, the t feature points being evenly distributed over the upper lip, the lower lip, and the left and right lip corners; and
training a face feature recognition model with the face images marked with lip feature points to obtain the lip average model of the human face.
- The lip motion analysis method according to claim 12, characterized in that the face feature recognition model is the ERT algorithm, expressed by the following formula:
S(t+1) = S(t) + τt(I, S(t))
where t denotes the cascade index, τt(·,·) denotes the regressor of the current stage, S(t) is the shape estimate of the current model, and each regressor τt(·,·) predicts an increment from the input current image I and S(t); during model training, a subset of the feature points of all sample pictures is taken to train the first regression tree, the residual between the predicted value of the first regression tree and the true value of the partial feature points is used to train the second tree, and so on, until the predicted value of the N-th tree is close to the true value of the partial feature points, yielding all the regression trees of the ERT algorithm, from which the average mouth model of the human face is obtained.
- The lip motion analysis method according to claim 8, wherein the face recognition algorithm comprises: a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, and a neural network method.
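Of the listed options, the eigenface method is the easiest to show compactly. The sketch below, using scikit-learn's PCA on the Olivetti faces dataset, illustrates only that one alternative and is not the algorithm the claim mandates.

```python
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

faces = fetch_olivetti_faces()                   # 400 face images of 40 people
X, y = faces.data, faces.target                  # each row is a flattened image

pca = PCA(n_components=100, whiten=True).fit(X)  # eigenfaces = principal components
X_proj = pca.transform(X)                        # project faces into eigenface space

# Recognise a face by nearest neighbour in eigenface space.
clf = KNeighborsClassifier(n_neighbors=1).fit(X_proj, y)
print("training accuracy:", clf.score(X_proj, y))
```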
- A computer readable storage medium, wherein the computer readable storage medium comprises a lip motion analysis program, and when the lip motion analysis program is executed by a processor, the following steps are implemented: a real-time facial image acquisition step: acquiring a real-time image captured by an imaging device, and extracting a real-time facial image from the real-time image by using a face recognition algorithm; a feature point recognition step: inputting the real-time facial image into a pre-trained lip average model, and using the lip average model to identify t lip feature points representing the position of the lips in the real-time facial image; a lip region recognition step: determining a lip region from the t lip feature points, inputting the lip region into a pre-trained lip classification model, and determining whether the lip region is a human lip region; and a lip motion determining step: if the lip region is a human lip region, calculating the motion direction and motion distance of the lips in the real-time facial image from the x and y coordinates of the t lip feature points in the real-time facial image.
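A minimal end-to-end sketch of the acquisition and feature-point steps, using OpenCV for the camera feed and a dlib frontal-face detector together with a lip landmark model such as the one trained above; the model file name is a placeholder and the detector choice is an assumption, not a requirement of the claim.

```python
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("lip_average_model.dat")  # hypothetical model file

cap = cv2.VideoCapture(0)                  # real-time images from the camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):            # real-time facial image acquisition
        shape = predictor(gray, face)      # t lip feature points
        lip_pts = [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
        # ... determine the lip region, classify it, then compute motion metrics ...
    cv2.imshow("frame", frame)
    if cv2.waitKey(1) == 27:               # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```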
- The computer readable storage medium according to claim 15, wherein when the lip motion analysis program is executed by the processor, the following step is further implemented: a prompting step: when the lip classification model determines that the lip region is not a human lip region, prompting that no human lip region has been detected in the current real-time image and that lip motion cannot be determined, and returning to the real-time facial image acquisition step.
- The computer readable storage medium according to claim 16, wherein the lip motion determining step comprises: calculating the distance between the center feature point on the inner edge of the upper lip and the center feature point on the inner edge of the lower lip in the real-time facial image to determine the degree of opening of the lips; connecting the left outer lip-corner feature point respectively to the feature points nearest to it on the outer contour lines of the upper and lower lips to form two vectors, and calculating the angle between the vectors to obtain the degree to which the lips are skewed to the left; and connecting the right outer lip-corner feature point respectively to the feature points nearest to it on the outer contour lines of the upper and lower lips to form two vectors, and calculating the angle between the vectors to obtain the degree to which the lips are skewed to the right.
- The computer readable storage medium according to claim 16, wherein the step of training the lip classification model comprises: collecting m positive lip sample images and k negative lip sample images to form a second sample library; extracting local features from each positive lip sample image and each negative lip sample image; and training a support vector machine (SVM) classifier with the positive lip sample images, the negative lip sample images and their local features to obtain the lip classification model for the human face.
- The computer readable storage medium according to claim 15, wherein the step of training the lip average model comprises: building a first sample library of n face images and marking t feature points on the lip region of each face image in the first sample library, the t feature points being evenly distributed over the upper and lower lips and the left and right lip corners; and training a facial feature recognition model with the face images marked with the lip feature points to obtain the lip average model for the human face.
- The computer readable storage medium according to claim 19, wherein the facial feature recognition model is an ERT (ensemble of regression trees) algorithm, expressed by the formula:

  S^(t+1) = S^(t) + τ_t(I, S^(t))

  where t denotes the cascade index, τ_t(·,·) denotes the regressor of the current stage, and S^(t) is the shape estimate of the current model; each regressor τ_t(·,·) predicts a shape increment from the input current image I and S^(t). During model training, a subset of the feature points of all sample images is used to train the first regression tree; the residuals between the predictions of the first regression tree and the true values of those feature points are used to train the second tree, and so on, until the residuals between the predictions of the N-th tree and the true values of those feature points are close to 0. All regression trees of the ERT algorithm are thereby obtained, and the mouth average model of the face is derived from these regression trees.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710708364.9 | 2017-08-17 | ||
CN201710708364.9A CN107633205B (en) | 2017-08-17 | 2017-08-17 | lip motion analysis method, device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019033570A1 true WO2019033570A1 (en) | 2019-02-21 |
Family
ID=61099627
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/108749 WO2019033570A1 (en) | 2017-08-17 | 2017-10-31 | Lip movement analysis method, apparatus and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107633205B (en) |
WO (1) | WO2019033570A1 (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108710836B (en) * | 2018-05-04 | 2020-10-09 | 南京邮电大学 | A lip detection and reading method based on cascade feature extraction |
CN108763897A (en) * | 2018-05-22 | 2018-11-06 | 平安科技(深圳)有限公司 | Method of calibration, terminal device and the medium of identity legitimacy |
CN108874145B (en) * | 2018-07-04 | 2022-03-18 | 深圳美图创新科技有限公司 | Image processing method, computing device and storage medium |
CN110223322B (en) * | 2019-05-31 | 2021-12-14 | 腾讯科技(深圳)有限公司 | Image recognition method and device, computer equipment and storage medium |
CN111241922B (en) * | 2019-12-28 | 2024-04-26 | 深圳市优必选科技股份有限公司 | Robot, control method thereof and computer readable storage medium |
CN111259875B (en) * | 2020-05-06 | 2020-07-31 | 中国人民解放军国防科技大学 | Lip reading method based on self-adaptive semantic space-time diagram convolutional network |
CN116405635A (en) * | 2023-06-02 | 2023-07-07 | 山东正中信息技术股份有限公司 | Multi-mode conference recording method and system based on edge calculation |
CN119383471A (en) * | 2024-12-25 | 2025-01-28 | 深圳市维海德技术股份有限公司 | Target positioning method, device, video conferencing equipment and storage medium |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101702199B (en) * | 2009-11-13 | 2012-04-04 | 华为终端有限公司 | Smiling face detection method and device and mobile terminal |
CN104951730B (en) * | 2014-03-26 | 2018-08-31 | 联想(北京)有限公司 | A kind of lip moves detection method, device and electronic equipment |
CN106529379A (en) * | 2015-09-15 | 2017-03-22 | 阿里巴巴集团控股有限公司 | Method and device for recognizing living body |
CN106997451A (en) * | 2016-01-26 | 2017-08-01 | 北方工业大学 | Lip contour positioning method |
CN105975935B (en) * | 2016-05-04 | 2019-06-25 | 腾讯科技(深圳)有限公司 | A kind of face image processing process and device |
-
2017
- 2017-08-17 CN CN201710708364.9A patent/CN107633205B/en active Active
- 2017-10-31 WO PCT/CN2017/108749 patent/WO2019033570A1/en active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070071289A1 (en) * | 2005-09-29 | 2007-03-29 | Kabushiki Kaisha Toshiba | Feature point detection apparatus and method |
CN104616438A (en) * | 2015-03-02 | 2015-05-13 | 重庆市科学技术研究院 | Yawning action detection method for detecting fatigue driving |
CN105139503A (en) * | 2015-10-12 | 2015-12-09 | 北京航空航天大学 | Lip moving mouth shape recognition access control system and recognition method |
CN106250815A (en) * | 2016-07-05 | 2016-12-21 | 上海引波信息技术有限公司 | A kind of quick expression recognition method based on mouth feature |
CN106485214A (en) * | 2016-09-28 | 2017-03-08 | 天津工业大学 | A kind of eyes based on convolutional neural networks and mouth state identification method |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110738126A (en) * | 2019-09-19 | 2020-01-31 | 平安科技(深圳)有限公司 | Lip shearing method, device and equipment based on coordinate transformation and storage medium |
WO2021224669A1 (en) * | 2020-05-05 | 2021-11-11 | Ravindra Kumar Tarigoppula | System and method for controlling viewing of multimedia based on behavioural aspects of a user |
CN113095146A (en) * | 2021-03-16 | 2021-07-09 | 深圳市雄帝科技股份有限公司 | Mouth state classification method, device, equipment and medium based on deep learning |
Also Published As
Publication number | Publication date |
---|---|
CN107633205B (en) | 2019-01-18 |
CN107633205A (en) | 2018-01-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10534957B2 (en) | Eyeball movement analysis method and device, and storage medium | |
WO2019033570A1 (en) | Lip movement analysis method, apparatus and storage medium | |
WO2019033572A1 (en) | Method for detecting whether face is blocked, device and storage medium | |
US10445562B2 (en) | AU feature recognition method and device, and storage medium | |
WO2019033568A1 (en) | Lip movement capturing method, apparatus and storage medium | |
CN109961009B (en) | Pedestrian detection method, system, device and storage medium based on deep learning | |
WO2019033571A1 (en) | Facial feature point detection method, apparatus and storage medium | |
CN111989689B (en) | Method for identifying an object in an image and mobile device for executing the method | |
US8792722B2 (en) | Hand gesture detection | |
US8750573B2 (en) | Hand gesture detection | |
US10635946B2 (en) | Eyeglass positioning method, apparatus and storage medium | |
US10650234B2 (en) | Eyeball movement capturing method and device, and storage medium | |
US8965117B1 (en) | Image pre-processing for reducing consumption of resources | |
WO2019041519A1 (en) | Target tracking device and method, and computer-readable storage medium | |
WO2019033573A1 (en) | Facial emotion identification method, apparatus and storage medium | |
WO2019071664A1 (en) | Human face recognition method and apparatus combined with depth information, and storage medium | |
WO2016150240A1 (en) | Identity authentication method and apparatus | |
JP6351243B2 (en) | Image processing apparatus and image processing method | |
JP5361524B2 (en) | Pattern recognition system and pattern recognition method | |
Vazquez-Fernandez et al. | Built-in face recognition for smart photo sharing in mobile devices | |
CN108304789A (en) | Face recognition method and device | |
JP2021503139A (en) | Image processing equipment, image processing method and image processing program | |
JP2013206458A (en) | Object classification based on external appearance and context in image | |
Lahiani et al. | Hand pose estimation system based on Viola-Jones algorithm for android devices | |
CN109409322B (en) | Living body detection method and device, face recognition method and face detection system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17921689 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 23.09.2020) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 17921689 Country of ref document: EP Kind code of ref document: A1 |