CN107784294A - Face detection and tracking method based on deep learning - Google Patents
Face detection and tracking method based on deep learning
- Publication number
- CN107784294A CN107784294A CN201711128363.3A CN201711128363A CN107784294A CN 107784294 A CN107784294 A CN 107784294A CN 201711128363 A CN201711128363 A CN 201711128363A CN 107784294 A CN107784294 A CN 107784294A
- Authority
- CN
- China
- Prior art keywords
- face
- target
- tracking
- detected
- kcf
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a face detection and tracking method based on deep learning, relating to the technical field of video image processing. The method is: 1. face detection: the face detection task is performed on each frame of image data and, for each detected target, a queue of target information is output, including the face bounding rectangle, the face class and the facial landmark points; 2. face tracking: the result queue obtained by face detection is matched and updated to complete the face tracking task; 3. face judgment: a frontal-face judgment is performed on the most recently tracked faces in the tracking list, and a face that meets the given conditions is output as a detected high-quality face. The invention comprehensively solves the problem of tracking failure caused by face occlusion and rotation and reduces duplicate reporting of face image data; a method for judging whether a face is frontal is designed and trained so that only suitable faces are uploaded, achieving the expected effect; reliable, information-rich face images are provided to downstream algorithms such as face comparison.
Description
Technical field
The present invention relates to the technical field of video image processing, and more particularly to a face detection and tracking method based on deep learning.
Background technology
Traditional face detection methods work well only on frontal faces; results for profile or partially occluded faces are poor, and performance is affected by illumination. During face tracking, a face is easily lost when it is occluded or the head turns away, which ends the track; when the face is detected again it is tracked as a new target, so the same face is captured repeatedly.
Face comparison in a face recognition system is constrained in that it works well only when the captured face picture is frontal. However, in the prior art, rotating the face after facial landmark extraction distorts the information in the resulting face image, causing incorrect face comparison.
The content of the invention
The purpose of the present invention is to overcome the shortcomings and defects of the prior art and to provide a face detection and tracking method based on deep learning.
The object of the present invention is achieved as follows: using face and non-face samples, features are extracted and a model is trained; the face images produced during face detection and tracking are then judged, and the highest-quality face is selected for output.
The technical problems to be solved by the invention are:
1) how to detect faces so that no face is lost during face detection;
2) how to prevent a track from being lost when the face is occluded or the head turns away, which causes the same target to be captured repeatedly;
3) how to judge face orientation, i.e. which face pictures are suitable for output.
Specifically, the method comprises the following steps:
1. Face detection
The face detection task is performed on each frame of image data and, for each detected target, a queue of target information is output, including the face bounding rectangle, the face class and the facial landmark points;
2. Face tracking
The result queue obtained by face detection is matched and updated to complete the face tracking task;
3. Face judgment
A frontal-face judgment is performed on the most recently tracked faces in the tracking list, and a face that meets the given conditions is output as a detected high-quality face.
The present invention has the following advantages and positive effects:
1. the problem of tracking failure caused by face occlusion and rotation is comprehensively solved, and duplicate reporting of face image data is reduced;
2. a method for judging whether a face is frontal is designed and trained so that only suitable faces are uploaded, achieving the expected effect;
3. reliable, information-rich face images are provided to downstream algorithms such as face comparison.
Brief description of the drawings
Fig. 1 is the overall flow chart of the method;
Fig. 2 is the flow chart of the face detection algorithm;
Fig. 3 is the flow chart of the face tracking algorithm;
Fig. 4 is the flow chart of the face judgment algorithm;
Fig. 5 is the facial landmark coordinate diagram;
Fig. 6 is the system diagram of the face big-data operational system.
Glossary:
1. MTCNN: multi-task convolutional neural network, an existing face detection network structure;
2. P-net: proposal network, the candidate-box network stage;
3. R-net: refine network, the candidate-box refinement stage;
4. O-net: output network, the candidate-box output stage;
5. IoU: intersection over union;
6. KCF: kernelized correlation filter, an existing tracking algorithm;
7. SVM: support vector machine.
Embodiment
The embodiment is described in detail below with reference to the accompanying drawings:
I. Method
As in Fig. 1, the method comprises the following steps:
1. Face detection (101)
The face detection task is performed on each frame of image data and, for each detected target, a queue of target information is output, including the face bounding rectangle, the face class and the facial landmark points;
2. Face tracking (102)
The result queue obtained by face detection is matched and updated to complete the face tracking task;
3. Face judgment (103)
A frontal-face judgment is performed on the most recently tracked faces in the tracking list, and a face that meets the given conditions is output as a detected high-quality face.
The specific implementation flow of each module is described below:
1. Flow of step 1
As in Fig. 2, the flow of step 1 is as follows:
A. Detect using the off-line-trained P-net model (201)
In the P-net network structure the output number num_output of the conv4-1 layer is changed from 2 to 3, i.e. a back-of-head class label is added; training and validation samples are added and off-line training yields the P-net model. Detection with this model produces candidate targets; redundant target rectangles are deleted and merged with the NMS method, giving the target detection rectangles and class labels;
B. Determine whether a target is detected (202); if so, enter step C, otherwise jump to step F.
A target means a detected face or back of head;
C. Detect using the off-line-trained R-net model (203)
In the R-net network structure the output number num_output of the conv5-1 layer is changed from 2 to 3, i.e. a back-of-head class label is added; training and validation samples are added and off-line training yields the R-net model. Detection with this model produces candidate targets; redundant target rectangles are deleted and merged with the NMS method, giving the target detection rectangles and class labels;
D. Determine whether a target is detected (204); if so, enter step E, otherwise jump to step F.
A target means a detected face or back of head;
E. Detect using the off-line-trained O-net model (205)
In the O-net network structure the output number num_output of the conv6-1 layer is changed from 2 to 3, i.e. a back-of-head class label is added; training and validation samples are added and off-line training yields the O-net model. Detection with this model produces candidate targets; redundant target rectangles are deleted and merged with the NMS method, giving the target detection rectangles, class labels and five facial landmark points;
F. Output the detected target information to face tracking (206)
The output target information includes the target detection rectangle, the class label (face or back of head) and the five facial landmark points (left eye center, right eye center, nose tip, left mouth corner and right mouth corner).
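The "deletion and merging of redundant target rectangles" performed after each stage (201, 203, 205) is standard non-maximum suppression (NMS). A minimal sketch, assuming corner-format boxes; the function name and threshold value are illustrative, not taken from the patent:

```python
def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression.

    boxes:  list of (x1, y1, x2, y2) corner-format rectangles
    scores: detection confidences, one per box
    Returns the indices of the boxes that survive suppression.
    """
    # Process boxes from highest to lowest confidence.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        remaining = []
        for i in order:
            # Intersection rectangle of the two boxes.
            x1 = max(boxes[best][0], boxes[i][0])
            y1 = max(boxes[best][1], boxes[i][1])
            x2 = min(boxes[best][2], boxes[i][2])
            y2 = min(boxes[best][3], boxes[i][3])
            inter = max(0, x2 - x1) * max(0, y2 - y1)
            area_b = (boxes[best][2] - boxes[best][0]) * (boxes[best][3] - boxes[best][1])
            area_i = (boxes[i][2] - boxes[i][0]) * (boxes[i][3] - boxes[i][1])
            iou = inter / (area_b + area_i - inter)
            if iou <= iou_threshold:  # keep only boxes that overlap little
                remaining.append(i)
        order = remaining
    return keep
```

In the cascade, P-net candidates are suppressed this way before being refined by R-net, and again after R-net and O-net.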
2. Flow of step 2
As in Fig. 3, the flow of step 2 is as follows:
a. Push the detected target information into the to-be-matched target queue (301);
b. Compute the IoU (intersection over union) matching results between the to-be-matched target queue and the tracked target queue (302);
c. Judge whether the to-be-matched target is a new target (303); if so, enter step d, otherwise jump to step e.
Using the results of step b, a to-be-matched target has matched the tracked target queue when its IoU with some tracked target reaches a threshold; if every such IoU is below the threshold, the match fails, the target is a new target and the flow enters step d, otherwise it jumps to step e;
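The matching score of step b is a plain intersection over union of two rectangles. A minimal sketch, assuming boxes are given as (x, y, width, height); the function name is illustrative:

```python
def iou(box_a, box_b):
    """Intersection over union of two (x, y, w, h) rectangles."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Overlap extent along each axis (zero when the boxes are disjoint).
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0
```

A to-be-matched detection whose IoU against every tracked box falls below the threshold is treated as a new target and handed to step d.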
d. Create a new KCF object from the to-be-matched target information and initialize it (304)
A new KCF tracker is created for the new target and initialized with the new target's information; the flow then enters step j;
e. Judge whether the tracked target finds a corresponding to-be-detected target (305); if so, enter step i, otherwise jump to step f.
Using the results of step b, judge whether the tracked target finds a matching target in the to-be-detected queue; if so, enter step i, otherwise jump to step f;
f. Increment the KCF object's tracking-loss count by 1 (306);
g. Judge whether the tracked target's loss count exceeds the threshold (307); if so, enter step h, otherwise jump to step j;
h. Delete the KCF tracked object (308);
i. Update the KCF tracking box with the to-be-detected target information (309)
The KCF online model is updated with the to-be-detected target's rectangle regardless of whether the target is a face or the back of the head, so that tracking is not interrupted when the head rotates;
j. Update the KCF tracked-object queue (310).
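Steps a through j amount to IoU-gated bookkeeping over a list of KCF trackers. The sketch below models only that bookkeeping: `KcfTrack` is a hypothetical stand-in for a real KCF tracker, and the 0.3 gate and lost-count limit of 5 are illustrative values, not taken from the patent.

```python
def iou(a, b):
    """Intersection over union of two (x, y, w, h) rectangles."""
    ix = max(0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

class KcfTrack:
    """Stand-in for a KCF tracker: holds the box and a lost counter."""
    def __init__(self, box):
        self.box, self.lost = box, 0

def update_tracks(tracks, detections, iou_gate=0.3, max_lost=5):
    """One tracking step: match detections to tracks by IoU, update
    matched tracks, create tracks for new targets, and drop tracks
    lost for more than max_lost consecutive frames."""
    matched = set()
    for track in tracks:
        best = max(range(len(detections)),
                   key=lambda i: iou(track.box, detections[i]),
                   default=None)
        if best is not None and best not in matched and \
                iou(track.box, detections[best]) >= iou_gate:
            track.box, track.lost = detections[best], 0   # step i
            matched.add(best)
        else:
            track.lost += 1                               # step f
    tracks = [t for t in tracks if t.lost <= max_lost]    # steps g, h
    for i, det in enumerate(detections):                  # step d
        if i not in matched:
            tracks.append(KcfTrack(det))
    return tracks                                         # step j
```

Because the update in step i accepts both face and back-of-head rectangles, a track survives head rotation and the same person is not re-captured as a new target.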
3. Flow of step 3
As in Fig. 4, the flow of step 3 is as follows:
I. Extract the target information from the tracked-object queue (401)
The target information includes the target detection rectangle, the class label and the five facial landmark points;
II. Judge from the tracked target's label whether the target is a face (402); if so, enter step III, otherwise jump to step VII;
III. Compute the 10-dimensional feature vector (403)
As shown in Fig. 5, from the five facial landmark points (point A is the left eye center, B the right eye center, C the nose tip, D the left mouth corner, E the right mouth corner), the face picture height H and width W (O is the coordinate origin, X the horizontal axis, Y the vertical axis; X* denotes the abscissa of point * and Y* its ordinate), the 10-dimensional feature vector is extracted. The feature values are computed as follows:
1) absolute value of [left eye margin (XA) − right eye margin (W−XB)] divided by [left eye margin (XA) + right eye margin (W−XB)];
2) absolute value of [upper eye margin (YA+YB) − lower mouth margin (2H−YD−YE)] divided by [upper eye margin (YA+YB) + lower mouth margin (2H−YD−YE)];
3) [pupil distance (XB−XA) + mouth width (XE−XD)] divided by [vertical distance from left eye to left mouth corner (YD−YA) + vertical distance from right eye to right mouth corner (YE−YB)];
4) pupil distance (XB−XA) divided by width W;
5) mouth width (XE−XD) divided by width W;
6) vertical distance from left eye to left mouth corner (YD−YA) divided by height H;
7) vertical distance from right eye to right mouth corner (YE−YB) divided by height H;
8) [nose ordinate YC − pupil vertical center (YA+YB)/2] divided by [vertical distance from the mouth-corner vertical center to the nose ((YD+YE)/2 − YC)];
9) [horizontal distance from the nose to the midpoint of the left eye and left mouth corner (XC − (XA+XD)/2)] divided by [horizontal distance from the midpoint of the right eye and right mouth corner to the nose ((XB+XE)/2 − XC)];
10) width W divided by height H.
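The ten ratios above follow directly from the landmark coordinates. A sketch using the point names of Fig. 5, each landmark given as an (x, y) tuple; the function name is illustrative:

```python
def face_features(A, B, C, D, E, W, H):
    """10-dimensional frontal-face feature vector from the five
    landmarks: A/B eye centers, C nose tip, D/E mouth corners,
    for a face picture of width W and height H."""
    XA, YA = A; XB, YB = B; XC, YC = C; XD, YD = D; XE, YE = E
    left, right = XA, W - XB                 # eye margins to the image edges
    top, bottom = YA + YB, 2 * H - YD - YE   # upper-eye / lower-mouth margins
    return [
        abs(left - right) / (left + right),                  # 1) horizontal symmetry
        abs(top - bottom) / (top + bottom),                  # 2) vertical symmetry
        ((XB - XA) + (XE - XD)) / ((YD - YA) + (YE - YB)),   # 3) landmark-box aspect
        (XB - XA) / W,                                       # 4) pupil distance
        (XE - XD) / W,                                       # 5) mouth width
        (YD - YA) / H,                                       # 6) left eye to left mouth corner
        (YE - YB) / H,                                       # 7) right eye to right mouth corner
        (YC - (YA + YB) / 2) / ((YD + YE) / 2 - YC),         # 8) nose between eyes and mouth
        (XC - (XA + XD) / 2) / ((XB + XE) / 2 - XC),         # 9) nose between face halves
        W / H,                                               # 10) picture aspect ratio
    ]
```

On a perfectly symmetric frontal face, feature 1) is 0 and features 8) and 9) are close to 1, which is what makes the vector a usable frontal/non-frontal discriminator.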
IV. Compute using the off-line-trained model (404)
The off-line model is trained by computing the 10-dimensional feature vectors of positive samples (frontal faces) and negative samples (non-frontal faces) and using them to train an SVM (support vector machine). At run time, the feature vector computed for the current face is input to the trained frontal-face judgment model to obtain the result;
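The patent delegates the frontal-face decision to an SVM trained off line on these 10-dimensional vectors. As a self-contained stand-in (a production system would use a library implementation such as libsvm), a minimal linear SVM trained by sub-gradient descent on the hinge loss:

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Tiny linear SVM trained with Pegasos-style sub-gradient descent
    on the hinge loss. X: feature vectors, y: labels in {-1, +1}
    (+1 = frontal face). The bias term is omitted for brevity; features
    can be mean-centered beforehand. Returns the weight vector w."""
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(X)), len(X)):   # shuffled pass
            t += 1
            eta = 1.0 / (lam * t)                     # decaying step size
            score = sum(wj * xj for wj, xj in zip(w, X[i]))
            w = [(1.0 - eta * lam) * wj for wj in w]  # regularization shrink
            if y[i] * score < 1:                      # hinge loss is active
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
    return w

def is_frontal(w, features):
    """Positive decision value means the model judges the face frontal."""
    return sum(wj * xj for wj, xj in zip(w, features)) > 0
```

The decision rule is the usual sign(w · x); only faces judged frontal continue to step VI for output.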
V. Judge whether the face is a frontal face (405); if so, enter step VI, otherwise jump to step VII;
VI. Output the face (406)
The face picture is encoded and then output together with the relevant information;
VII. Continue target tracking (407).
II. System
1. Overview
As in Fig. 6, the face big-data operational system A includes the face detection server 503, the face recognition server 504, the storage server 505, the database 506 and the management server 507;
the monitoring platform B includes the management server 502 and the media server 501;
the client 508 provides the user experience;
the algorithm of this patent is implemented in the face detection server 503.
The connection relations are:
(1) the management server 502 and the media server 501 are interconnected and supply the video data to be analyzed to the face big-data operational system A;
(2) the face detection server 503, the face recognition server 504, the storage server 505, the database 506, the management server 507 and the client 508 are connected in sequence, realizing user command issuing, video face analysis management, video face detection, face image recognition and storage.
2. Operating principle
The client 508 issues a command to the face big-data operational system A; the management server 507 sends the instruction to the monitoring platform B to obtain video data, and the media server 501 transmits the video data to the face detection server 503 in the face big-data operational system A, which performs the face detection task. The face detection server 503 runs the embodiment of the present invention to extract face image data and sends the face image data to the face recognition server 504 to obtain face information data; the face image data is stored in the storage server 505 and the face information data in the database 506; the analyzed face information data is then sent through the management server 507 to the client 508, achieving the intended user experience.
Claims (4)
- 1. A face detection and tracking method based on deep learning, characterized in that:
1. face detection (101): the face detection task is performed on each frame of image data and, for each detected target, a queue of target information is output, including the face bounding rectangle, the face class and the facial landmark points;
2. face tracking (102): the result queue obtained by face detection is matched and updated to complete the face tracking task;
3. face judgment (103): a frontal-face judgment is performed on the most recently tracked faces in the tracking list, and a face that meets the given conditions is output as a detected high-quality face.
- 2. The face detection and tracking method based on deep learning of claim 1, characterized in that the flow of step 1 is as follows:
A. detect using the off-line-trained P-net model (201);
B. determine whether a target is detected (202); if so, enter step C, otherwise jump to step F;
C. detect using the off-line-trained R-net model (203);
D. determine whether a target is detected (204); if so, enter step E, otherwise jump to step F; a target means a detected face or back of head;
E. detect using the off-line-trained O-net model (205);
F. output the detected target information to face tracking (206).
- 3. The face detection and tracking method based on deep learning of claim 1, characterized in that the flow of step 2 is as follows:
a. push the detected target information into the to-be-matched target queue (301);
b. compute the IoU matching results between the to-be-matched target queue and the tracked target queue (302);
c. judge whether the to-be-matched target is a new target (303); if so, enter step d, otherwise jump to step e;
d. create a new KCF object from the to-be-matched target information and initialize it (304): a new KCF tracker is created for the new target and initialized with the new target's information; enter step j;
e. judge whether the tracked target finds a corresponding to-be-detected target (305); if so, enter step i, otherwise jump to step f;
f. increment the KCF object's tracking-loss count by 1 (306);
g. judge whether the tracked target's loss count exceeds the threshold (307); if so, enter step h, otherwise jump to step j;
h. delete the KCF tracked object (308);
i. update the KCF tracking box with the to-be-detected target information (309);
j. update the KCF tracked-object queue (310).
- 4. The face detection and tracking method based on deep learning of claim 1, characterized in that the flow of step 3 is as follows:
I. extract the target information from the tracked-object queue (401);
II. judge from the tracked target's label whether the target is a face (402); if so, enter step III, otherwise jump to step VII;
III. compute the 10-dimensional feature vector (403);
V. judge whether the face is a frontal face (405); if so, enter step VI, otherwise jump to step VII;
VI. output the face (406);
VII. continue target tracking (407).
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201711128363.3A CN107784294B (en) | 2017-11-15 | 2017-11-15 | Face detection and tracking method based on deep learning |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN107784294A true CN107784294A (en) | 2018-03-09 |
| CN107784294B CN107784294B (en) | 2021-06-11 |
Family
ID=61433064
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201711128363.3A Active CN107784294B (en) | 2017-11-15 | 2017-11-15 | Face detection and tracking method based on deep learning |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN107784294B (en) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105488478A (en) * | 2015-12-02 | 2016-04-13 | 深圳市商汤科技有限公司 | Face recognition system and method |
| US9430697B1 (en) * | 2015-07-03 | 2016-08-30 | TCL Research America Inc. | Method and system for face recognition using deep collaborative representation-based classification |
| CN106874868A (en) * | 2017-02-14 | 2017-06-20 | 北京飞搜科技有限公司 | A kind of method for detecting human face and system based on three-level convolutional neural networks |
| CN107316317A (en) * | 2017-05-23 | 2017-11-03 | 深圳市深网视界科技有限公司 | A kind of pedestrian's multi-object tracking method and device |
Non-Patent Citations (2)
| Title |
|---|
| YANCHENG BAI ET AL.: "Multi-scale Fully Convolutional Network for Face Detection in the Wild", 《2017 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW)》 * |
| 何玉冰: "基于视频的人脸检测与跟踪算法研究", 《中国优秀硕士学位论文全文数据库 信息科技辑》 * |
Cited By (17)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109271942A (en) * | 2018-09-26 | 2019-01-25 | 上海七牛信息技术有限公司 | A kind of stream of people's statistical method and system |
| CN109359603A (en) * | 2018-10-22 | 2019-02-19 | 东南大学 | A vehicle driver face detection method based on cascaded convolutional neural network |
| CN109635693B (en) * | 2018-12-03 | 2023-03-31 | 武汉烽火众智数字技术有限责任公司 | Front face image detection method and device |
| CN109635693A (en) * | 2018-12-03 | 2019-04-16 | 武汉烽火众智数字技术有限责任公司 | A kind of face image detection method and device |
| CN109635749B (en) * | 2018-12-14 | 2021-03-16 | 网易(杭州)网络有限公司 | Image processing method and device based on video stream |
| CN109635749A (en) * | 2018-12-14 | 2019-04-16 | 网易(杭州)网络有限公司 | Image processing method and device based on video flowing |
| CN109871760A (en) * | 2019-01-15 | 2019-06-11 | 北京奇艺世纪科技有限公司 | A kind of Face detection method, apparatus, terminal device and storage medium |
| CN109977811A (en) * | 2019-03-12 | 2019-07-05 | 四川长虹电器股份有限公司 | The system and method for exempting from voice wake-up is realized based on the detection of mouth key position feature |
| CN110688930A (en) * | 2019-09-20 | 2020-01-14 | Oppo广东移动通信有限公司 | Face detection method, device, mobile terminal and storage medium |
| CN112861576A (en) * | 2019-11-27 | 2021-05-28 | 顺丰科技有限公司 | Employee image detection method and device, computer equipment and storage medium |
| CN112861576B (en) * | 2019-11-27 | 2024-09-27 | 顺丰科技有限公司 | Employee image detection method, device, computer equipment and storage medium |
| CN111178218A (en) * | 2019-12-23 | 2020-05-19 | 北京中广上洋科技股份有限公司 | Multi-feature combined video tracking method and system based on face recognition |
| CN111178218B (en) * | 2019-12-23 | 2023-07-04 | 北京中广上洋科技股份有限公司 | Multi-feature joint video tracking method and system based on face recognition |
| CN111209818A (en) * | 2019-12-30 | 2020-05-29 | 新大陆数字技术股份有限公司 | Video individual identification method, system, equipment and readable storage medium |
| CN113283305B (en) * | 2021-04-29 | 2024-03-26 | 百度在线网络技术(北京)有限公司 | Face recognition method, device, electronic equipment and computer readable storage medium |
| CN113283305A (en) * | 2021-04-29 | 2021-08-20 | 百度在线网络技术(北京)有限公司 | Face recognition method and device, electronic equipment and computer readable storage medium |
| CN113449677A (en) * | 2021-07-14 | 2021-09-28 | 上海骏聿数码科技有限公司 | MTCNN-based face detection improvement method and device |
Also Published As
| Publication number | Publication date |
|---|---|
| CN107784294B (en) | 2021-06-11 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN107784294A (en) | Face detection and tracking method based on deep learning | |
| CN106815566B (en) | Face retrieval method based on multitask convolutional neural network | |
| CN101271514B (en) | Image detection method and device for fast object detection and objective output | |
| US11804071B2 (en) | Method for selecting images in video of faces in the wild | |
| CN107133612A (en) | Based on image procossing and the intelligent ward of speech recognition technology and its operation method | |
| CN113553979A (en) | A security clothing detection method and system based on improved YOLO V5 | |
| CN110425005A (en) | The monitoring of transportation of belt below mine personnel's human-computer interaction behavior safety and method for early warning | |
| CN106257489A (en) | Expression recognition method and system | |
| CN105469105A (en) | Cigarette smoke detection method based on video monitoring | |
| JP2007128513A (en) | Scene analysis | |
| CN109754478A (en) | A kind of face intelligent Checking on Work Attendance method of low user's fitness | |
| CN104036237A (en) | Detection method of rotating human face based on online prediction | |
| CN111091057A (en) | Information processing method and device and computer readable storage medium | |
| CN102184016B (en) | Noncontact type mouse control method based on video sequence recognition | |
| CN116895090A (en) | A method and system for detecting facial features status based on machine vision | |
| CN111881775B (en) | Real-time face recognition method and device | |
| Harini et al. | A novel static and dynamic hand gesture recognition using self organizing map with deep convolutional neural network | |
| JP5552946B2 (en) | Face image sample collection device, face image sample collection method, program | |
| CN115424311A (en) | Interview monitoring method, device, equipment and storage medium | |
| CN109711232A (en) | Deep learning pedestrian recognition methods again based on multiple objective function | |
| CN112926518A (en) | Gesture password track restoration system based on video in complex scene | |
| CN111860165A (en) | Dynamic face recognition method and device based on video stream | |
| CN114399789B (en) | Mechanical arm remote control method based on static gesture recognition | |
| CN113505729B (en) | Interview cheating detection method and system based on human facial motion unit | |
| CN105184244B (en) | Video human face detection method and device |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |
| SE01 | Entry into force of request for substantive examination | | |
| GR01 | Patent grant | | |