CN107729947A - Face detection model training method, device and medium
- Publication number
- CN107729947A (application number CN201711032994.5A)
- Authority
- CN
- China
- Prior art keywords
- classifier
- training
- strong classifier
- sample
- strong
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/2155—Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention discloses a face detection model training method, device and medium, intended to reduce the training time required for a face detection model and improve the efficiency of face detection model training. The face detection model training method includes: for a given training sample set, repeatedly training with a preset algorithm to obtain a strong classifier, and storing the classification result set of the weak classifiers it contains; according to the stored classification result set, if it is judged that the strong classifier does not satisfy a first preset condition, increasing the number of weak classifiers used to train the strong classifier and retraining with the training samples to obtain a new strong classifier, until the resulting strong classifier satisfies the first preset condition; updating a cascade classifier with the resulting strong classifier; ending training if the false detection rate of the cascade classifier is not greater than a first preset threshold, and otherwise updating the training samples and retraining, until the false detection rate of the cascade classifier is not greater than the first preset threshold.
Description
Technical field
The present invention relates to the technical field of face recognition, and more particularly to a face detection model training method, device and medium.
Background art
This section is intended to provide background or context for the embodiments of the present invention recited in the claims. The description herein is not admitted to be prior art by its inclusion in this section.
With the popularization of smart cameras into ordinary households, users expect a camera to be more than a device that records video; they want it to offer intelligent functions as well. The most common intelligent function, and the one users care about most, is the detection and recognition of faces, and face detection is the first step of face recognition. A face detection model generally needs to be updated and iterated continuously so that it adapts to different application scenarios.
The Haar+AdaBoost algorithm is a lightweight face detection algorithm with many practical applications. In practice, however, efforts to improve detection accuracy have focused mainly on the detection side of the algorithm, for example by using more complex features or by improving the AdaBoost algorithm itself.
These improved algorithms raise the accuracy of face detection to some extent, but none of them optimizes the training process of the face detection model. As face detection scenarios diversify and grow more complex, the model frequently has to be trained or updated to adapt to them. Because the sample set used for training is very large, each training run is time-consuming, and after every run the model must be tested to evaluate the training result. If every training run consumes a large amount of time, the iterative development of the face detection algorithm clearly suffers.
Summary of the invention
Embodiments of the present invention provide a face detection model training method, device and medium that improve the training process, so as to reduce the training time required for a face detection model and raise the efficiency of face detection model training.
In a first aspect, a face detection model training method is provided, including:
for a given training sample set, repeatedly training with a preset algorithm to obtain a strong classifier, and storing the classification result set of the weak classifiers it contains, wherein the weight of each sample in the training sample set is updated before each round of training;
according to the stored classification result set, if it is judged that the strong classifier does not satisfy a first preset condition, increasing the number of weak classifiers used to train the strong classifier and retraining with the training samples to obtain a new strong classifier, until the resulting strong classifier satisfies the first preset condition;
updating a cascade classifier with the resulting strong classifier;
ending training if the false detection rate of the cascade classifier is not greater than a first preset threshold; otherwise, updating the training samples and retraining, until the false detection rate of the resulting cascade classifier is not greater than the first preset threshold.
Repeatedly training with a preset algorithm to obtain a strong classifier and storing the classification result set of its weak classifiers specifically includes:
for the given training samples, extracting N Haar features of the training samples and obtaining N weak classifiers, one corresponding to each Haar feature, where N is an integer greater than or equal to 1;
classifying the training samples with each weak classifier to obtain the corresponding classification results;
determining the optimal weak classifier of this round of training according to the classification results;
adding the resulting optimal weak classifier to the strong classifier, and adding the classification result of the optimal weak classifier obtained in this round to the classification result set of the strong classifier;
if the number of optimal weak classifiers contained in the resulting strong classifier is less than a second preset threshold, updating the weight of each training sample according to the classification result of the optimal weak classifier, retraining to obtain a new optimal weak classifier and storing its classification result, until the number of optimal weak classifiers contained in the strong classifier reaches the second preset threshold.
Optionally, whether the strong classifier satisfies the first preset condition is judged according to the following flow:
determining the classification threshold of the strong classifier according to the classification result set of the strong classifier;
determining the false detection rate of the strong classifier according to the classification threshold;
if the false detection rate is not greater than a preset false detection rate threshold, determining that the strong classifier satisfies the first preset condition; if the false detection rate is greater than the preset false detection rate threshold, determining that the strong classifier does not satisfy the first preset condition.
Optionally, for each sample, the Haar features of the sample are extracted as follows:
F = (Sum_b − Sum_w) / (Sum_b + Sum_w), where:
F represents any Haar feature of the sample;
Sum_b represents the sum of the pixels in the first region;
Sum_w represents the sum of the pixels in the second region.
In a second aspect, a face detection model training device is provided, including:
a first training unit, configured to, for a given training sample set, repeatedly train with a preset algorithm to obtain a strong classifier and store the classification result set of the weak classifiers it contains, wherein the weight of each sample in the training sample set is updated before each round of training;
a second training unit, configured to, according to the stored classification result set, if it is judged that the strong classifier does not satisfy a first preset condition, increase the number of weak classifiers used to train the strong classifier and retrain with the training samples to obtain a new strong classifier, until the resulting strong classifier satisfies the first preset condition;
an updating unit, configured to update a cascade classifier with the resulting strong classifier;
a third training unit, configured to end training if the false detection rate of the cascade classifier is not greater than a first preset threshold, and otherwise update the training samples and retrain, until the false detection rate of the resulting cascade classifier is not greater than the first preset threshold.
Optionally, the first training unit includes:
an extraction subunit, configured to, for the given training samples, extract N Haar features of the training samples and obtain N weak classifiers, one corresponding to each Haar feature, where N is an integer greater than or equal to 1;
a classification subunit, configured to classify the training samples with each weak classifier to obtain the corresponding classification results;
a determination subunit, configured to determine the optimal weak classifier of this round of training according to the classification results;
an adding subunit, configured to add the resulting optimal weak classifier to the strong classifier, and add the classification result of the optimal weak classifier obtained in this round to the classification result set of the strong classifier;
a training subunit, configured to, if the number of optimal weak classifiers contained in the resulting strong classifier is less than a second preset threshold, update the weight of each training sample according to the classification result of the optimal weak classifier, retrain to obtain a new optimal weak classifier and store its classification result, until the number of optimal weak classifiers contained in the strong classifier reaches the second preset threshold.
Optionally, the face detection model training device further includes:
a first determining unit, configured to determine the classification threshold of the strong classifier according to the classification result set of the strong classifier;
a second determining unit, configured to determine the false detection rate of the strong classifier according to the classification threshold;
a third determining unit, configured to determine that the strong classifier satisfies the first preset condition if the false detection rate is not greater than a preset false detection rate threshold, and that the strong classifier does not satisfy the first preset condition if the false detection rate is greater than the preset false detection rate threshold.
Optionally, the extraction subunit is specifically configured to extract, for each sample, the Haar features of the sample as follows: F = (Sum_b − Sum_w) / (Sum_b + Sum_w), where:
F represents any Haar feature of the sample;
Sum_b represents the sum of the pixels in the first region;
Sum_w represents the sum of the pixels in the second region.
In a third aspect, a computing device is provided, including at least one processing unit and at least one storage unit, wherein the storage unit stores a computer program which, when executed by the processing unit, causes the processing unit to perform any step of the above face detection model training method.
In a fourth aspect, a computer-readable medium is provided, storing a computer program executable by a computing device; when the program is run on the computing device, it causes the computing device to perform any step of the above face detection model training method.
With the face detection model training method, device and medium provided by the embodiments of the present invention, the classification result of the optimal weak classifier obtained in each round is stored during training, so that the classification results of the same optimal weak classifiers need not be recomputed in every round while training the strong classifier. This reduces the training time of the face detection model and improves the efficiency of face detection model training.
Other features and advantages of the present invention will be set forth in the following description, and will in part become apparent from the description or be understood by practicing the invention. The objects and other advantages of the invention can be realized and obtained by the structures particularly pointed out in the written description, the claims and the accompanying drawings.
Brief description of the drawings
The accompanying drawings described herein provide a further understanding of the present invention and form a part of it. The schematic embodiments of the invention and their description explain the invention and do not unduly limit it. In the drawings:
Fig. 1a is a schematic diagram of a two-rectangle feature in an embodiment of the present invention;
Fig. 1b is a schematic diagram of a three-rectangle feature in an embodiment of the present invention;
Fig. 1c is a schematic diagram of a four-rectangle feature in an embodiment of the present invention;
Fig. 2 is a flow diagram of the face detection model training method according to an embodiment of the present invention;
Fig. 3 is a flow diagram of obtaining the classification result set of optimal weak classifiers according to an embodiment of the present invention;
Fig. 4 is a flow diagram of the face detection model training method according to another embodiment of the present invention;
Fig. 5 is a schematic structural diagram of the face detection model training device according to an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of the computing device according to an embodiment of the present invention.
Embodiment
To reduce the time required for face detection model training and improve the efficiency of face detection model training, embodiments of the present invention provide a face detection model training method, device and associated medium.
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood that the preferred embodiments described here only illustrate and explain the present invention and do not limit it, and that, where no conflict arises, the embodiments and the features in the embodiments may be combined with one another.
Herein, the following terms should be understood:
Detection rate and false detection rate of a strong classifier: both can be specified in advance as needed. The classification threshold of the strong classifier can be determined from the detection rate, and the false detection rate of the strong classifier is then determined from that classification threshold. To decide whether a strong classifier meets the requirement, it therefore suffices to compare its false detection rate with the pre-specified false detection rate threshold.
The classifier corresponding to a Haar feature is a weak classifier. The classifier trained with the AdaBoost algorithm (a set of several optimal weak classifiers) can be called a strong classifier or an AdaBoost classifier. Strong and weak classifiers correspond to each other: a weak classifier has weak classification ability, typically only slightly better than random guessing (a correct classification rate of 0.5), while a strong classifier has much stronger classification ability, well above random guessing. Cascade classifier: composed of strong classifiers.
Assuming the optimal weak classifiers are h1, h2, …, hm, the output of the strong classifier is R = a1·h1 + a2·h2 + … + am·hm, where a1, a2, …, am are the weights of the weak classifiers, which are related to the classification rate of each weak classifier.
Assume the specified detection rate and false detection rate threshold of a strong classifier are fd and fa respectively. The classification threshold T1 of the strong classifier is determined from fd, and the false detection rate fa_ of the strong classifier is obtained from T1 and the negative samples. If fa_ < fa, the strong classifier meets the requirement; otherwise the strong classifier does not meet the requirement, the number of weak classifiers must be increased and training repeated to obtain a new strong classifier, and the new strong classifier must likewise pass the above judgement.
Cascade classifier: the generalization ability of a single strong classifier is limited, so several strong classifiers are usually joined together to form a cascade classifier.
In addition, any number of elements in the drawings is illustrative rather than limiting, and any naming is used only for distinction and carries no limiting meaning.
The inventor found that in existing Haar+AdaBoost face detection methods, Haar features have been extended to make them more representative: the original 5 classes of Haar features were extended to 14 classes, so that Haar features characterize faces better under multiple view angles, and Haar+AdaBoost face detection has been generalized from frontal faces to multiple view angles and the like. Improved algorithms that address the training process of the face detection model, however, are very rare.
In view of this, embodiments of the present invention provide a face detection model training method that reduces the time consumed by face detection model training and thereby improves the efficiency of face detection model training.
AdaBoost is an iterative algorithm. For a training set, it changes the distribution probability of each sample to obtain different training sets Si, trains on each Si to obtain a weak classifier Hi, and then combines these classifiers with different weights to obtain a strong classifier.
In the first round, the samples in the training set are uniformly distributed, and a classifier H0 is obtained by training. In that training set, the distribution probability of correctly classified samples is reduced and that of misclassified samples is increased, so the new training set S1 obtained this way focuses on the samples that were classified poorly. Training on S1 yields classifier H1, and the iteration continues in turn; with T iterations, T classifiers are obtained. As for the weight of each classifier, the higher its classification accuracy, the higher its weight.
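For illustration, the iteration described above can be sketched as follows. This is a toy Python implementation using simple decision-stump weak classifiers; the function names and data layout are ours and are not part of the patent.

```python
import math

def train_adaboost(X, y, n_rounds):
    """X: list of feature vectors, y: labels in {-1, +1}."""
    n = len(X)
    w = [1.0 / n] * n                       # uniform initial distribution
    classifiers = []                        # (feature, threshold, polarity, alpha)
    for _ in range(n_rounds):
        best = None
        # pick the decision stump with the lowest weighted error
        for f in range(len(X[0])):
            for thresh in sorted({x[f] for x in X}):
                for pol in (+1, -1):
                    err = sum(w[i] for i in range(n)
                              if pol * (1 if X[i][f] >= thresh else -1) != y[i])
                    if best is None or err < best[0]:
                        best = (err, f, thresh, pol)
        err, f, thresh, pol = best
        err = max(err, 1e-10)               # avoid division by zero
        alpha = 0.5 * math.log((1 - err) / err)
        classifiers.append((f, thresh, pol, alpha))
        # lower the weight of correctly classified samples, raise the rest
        for i in range(n):
            pred = pol * (1 if X[i][f] >= thresh else -1)
            w[i] *= math.exp(-alpha * y[i] * pred)
        total = sum(w)
        w = [wi / total for wi in w]        # renormalise to a distribution
    return classifiers

def strong_classify(classifiers, x):
    """Weighted vote R = a1*h1 + a2*h2 + ... + am*hm, thresholded at 0."""
    score = sum(a * p * (1 if x[f] >= t else -1)
                for f, t, p, a in classifiers)
    return 1 if score >= 0 else -1
```

With each round the misclassified samples gain relative weight, so later stumps concentrate on the hard samples, exactly as the paragraph above describes.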
Haar features are templates of rectangle features; Fig. 1a, Fig. 1b and Fig. 1c are schematic diagrams of two-rectangle, three-rectangle and four-rectangle features respectively. For a given 24×24 window, varying the positions and scales of these templates produces more than 160,000 features. A weak classifier, in effect, selects one feature from these 160,000+ features and uses it to distinguish faces from non-faces with a relatively low error rate.
Based on this, an embodiment of the present invention provides a face detection model training method which, as shown in Fig. 2, may include the following steps:
S21: for a given training sample set, repeatedly train with a preset algorithm to obtain a strong classifier and store the classification result set of the weak classifiers it contains, updating the weight of each training sample before each round of training.
S22: according to the stored classification result set, if it is judged that the strong classifier does not satisfy the first preset condition, increase the number of weak classifiers used to train the strong classifier and retrain with the training samples to obtain a new strong classifier, until the resulting strong classifier satisfies the first preset condition.
S23: update the cascade classifier with the resulting strong classifier.
S24: end training if the false detection rate of the cascade classifier is not greater than the first preset threshold; otherwise, update the training samples and retrain, until the false detection rate of the resulting cascade classifier is not greater than the first preset threshold.
In step S21, for the given training samples, multiple rounds of training can be carried out according to the flow shown in Fig. 3 to obtain the strong classifier and store the classification results of the weak classifiers it contains:
S31: obtain the given training samples.
The training sample set may contain a number of face samples (positive samples) and a number of non-face samples (negative samples), for example 2000 positive samples and 4000 negative samples.
S32: extract N Haar features of the training samples and obtain N weak classifiers, one corresponding to each Haar feature.
In this step, the N Haar features of the given training samples are first computed, yielding N weak classifiers.
In a specific implementation, the Haar features of a sample can be computed according to the following formula: F = (Sum_b − Sum_w) / (Sum_b + Sum_w), where:
F represents any Haar feature of the sample;
Sum_b represents the sum of the pixels in the first region;
Sum_w represents the sum of the pixels in the second region.
Taking the two-rectangle feature as an example, suppose the black and white rectangles shown in Fig. 1a each contain n pixels. From the pixel values of the pixels contained in each rectangle, the sum of pixel values in each rectangular region is computed: suppose the sum of all pixel values in the black region is Sum_b and the sum of all pixel values in the white region is Sum_w; the Haar feature can then be obtained from the above formula. With different rectangle positions and scales, N Haar features can be obtained, where N is a positive integer greater than or equal to 1, and correspondingly N weak classifiers can be obtained.
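In practice the rectangle sums Sum_b and Sum_w are typically computed with an integral image (summed-area table), so that each rectangle sum costs only four lookups regardless of rectangle size. The following sketch illustrates the formula F = (Sum_b − Sum_w)/(Sum_b + Sum_w) for a two-rectangle feature; the window layout and all names are our own assumptions, not the patent's.

```python
def integral_image(img):
    """img: 2-D list of pixel values; returns a summed-area table of the same size."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y][x] = row + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle with top-left (x, y), width w, height h."""
    a = ii[y + h - 1][x + w - 1]
    b = ii[y - 1][x + w - 1] if y > 0 else 0
    c = ii[y + h - 1][x - 1] if x > 0 else 0
    d = ii[y - 1][x - 1] if x > 0 and y > 0 else 0
    return a - b - c + d

def haar_two_rect(ii, x, y, w, h):
    """Two-rectangle feature: left half as the black region, right half as white."""
    sum_b = rect_sum(ii, x, y, w // 2, h)
    sum_w = rect_sum(ii, x + w // 2, y, w // 2, h)
    return (sum_b - sum_w) / (sum_b + sum_w) if (sum_b + sum_w) else 0.0
```

Enumerating different positions and scales of such templates inside a 24×24 window is what yields the 160,000+ candidate features mentioned above.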
S33: classify the training samples with each weak classifier to obtain the corresponding classification results.
In this step, the classifiers obtained in step S32 each classify the training samples, yielding the classification result of each weak classifier.
S34: determine the optimal weak classifier of this round of training according to the classification results.
In this step, the weak classifier with the highest accuracy can be selected as the optimal weak classifier, i.e. the optimal weak classifier is the weak classifier with the highest correct classification rate.
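Step S34 can be sketched as follows: among the stored classification results, choose the weak classifier with the smallest weighted error (equivalently, the highest weighted accuracy). This is an illustrative helper; the data layout and names are assumptions of ours.

```python
def pick_optimal(results, weights):
    """results: one list per weak classifier of e_i flags
    (0 = sample classified correctly, 1 = misclassified);
    weights: current sample weights. Returns (index, weighted error)."""
    best_idx, best_err = -1, float("inf")
    for idx, errs in enumerate(results):
        # weighted error = total weight of the misclassified samples
        err = sum(w for w, e in zip(weights, errs) if e == 1)
        if err < best_err:
            best_idx, best_err = idx, err
    return best_idx, best_err
```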
S35: add the resulting optimal weak classifier to the strong classifier, and add the classification result of the optimal weak classifier obtained in this round to the classification result set of the strong classifier.
S36: judge whether the number of optimal weak classifiers contained in the resulting strong classifier is less than the second preset threshold; if so, perform step S37; if not, the flow ends.
S37: update the weight of each sample in the training sample set according to the classification result of the optimal weak classifier obtained in this round, and return to step S32.
In step S37, for each sample in the training sample set, the weight of the sample can be determined according to the following formula: w_{t+1,i} = w_{t,i} · β_t^(1−e_i), with β_t = ε_t / (1 − ε_t), where: w_{t,i} is the weight of the i-th sample in round t; when sample x_i is correctly classified, e_i = 0, otherwise e_i = 1; ε_t is the misclassification rate of the optimal weak classifier in round t, i.e. the ratio of the number of misclassified samples to the total number of samples.
If the number of optimal weak classifiers contained in the strong classifier is less than the second preset threshold, the weight of each sample is updated and step S32 is performed again, until the number of optimal weak classifiers contained in the resulting strong classifier reaches the second preset threshold. In a specific implementation, the second preset threshold can be set from experience; the embodiments of the present invention do not limit it.
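One round of the sample-weight update can be sketched as follows, assuming the Viola-Jones-style rule w_{t+1,i} = w_{t,i}·β_t^(1−e_i) with β_t = ε_t/(1−ε_t); the function and variable names are ours. Correctly classified samples are multiplied by β_t < 1, so after renormalisation the misclassified samples carry a larger share of the distribution.

```python
def update_weights(weights, errors, eps):
    """weights: current w_t,i; errors: e_i flags (0 correct, 1 wrong);
    eps: weighted misclassification rate of this round's optimal weak classifier."""
    beta = eps / (1.0 - eps)
    new_w = [w * beta ** (1 - e) for w, e in zip(weights, errors)]
    total = sum(new_w)
    return [w / total for w in new_w]    # renormalise to a distribution
```

For example, with four equally weighted samples and one misclassified (ε = 0.25), the misclassified sample ends up holding half of the total weight after the update.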
In this way, after multiple rounds of training, the strong classifier and the classification result set of each optimal weak classifier it contains are obtained: the strong classifier contains the optimal weak classifier trained in each round, and the classification result set of the strong classifier contains the classification result obtained in each round.
After the strong classifier and its corresponding classification result set have been obtained, it is further necessary to judge whether the strong classifier satisfies the first preset condition. Specifically, the classification threshold of the strong classifier is determined from the classification result set of the weak classifiers it contains; the false detection rate of the strong classifier is determined from the classification threshold; if the false detection rate is not greater than the preset false detection rate threshold, the strong classifier is determined to satisfy the first preset condition, and if the false detection rate is greater than the preset false detection rate threshold, the strong classifier is determined not to satisfy the first preset condition.
In a specific implementation, the classification threshold of the resulting strong classifier can be determined as follows. The classification threshold is computed from the given detection rate: suppose the given detection rate is d and the number of positive samples in the training set is n. Passing each positive sample through the strong classifier gives n values; these values are sorted in descending order, and the ⌈n·d⌉-th value is taken as the classification threshold T, where ⌈r⌉ denotes the smallest integer greater than r.
In a specific implementation, the false detection rate of the resulting strong classifier can be determined as follows. Using the classification threshold T obtained from the detection rate, pass the m negative samples through the strong classifier in turn; a resulting value greater than the classification threshold T means that sample is falsely detected. Suppose there are m_f such samples; the false detection rate is then R_fa = m_f / m.
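The threshold selection and false-detection-rate computation described above can be sketched as follows. This is an illustrative sketch; the rounding follows the text's "smallest integer greater than n·d", and all names are assumptions of ours.

```python
import math

def classification_threshold(pos_scores, d):
    """Sort the n positive-sample scores descending and take the k-th as T,
    with k the smallest integer greater than n*d (clamped to n)."""
    ranked = sorted(pos_scores, reverse=True)
    k = min(math.floor(len(ranked) * d) + 1, len(ranked))
    return ranked[k - 1]

def false_detection_rate(neg_scores, T):
    """R_fa = m_f / m: fraction of negative samples scoring above the threshold."""
    return sum(1 for s in neg_scores if s > T) / len(neg_scores)
```

With five positive scores and d = 0.75, the threshold is the 4th-highest score, and any negative sample scoring above it counts as a false detection.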
Whether the resulting false detection rate is greater than the preset false detection rate threshold is then judged. If the false detection rate is greater than the preset false detection rate threshold, the resulting strong classifier does not satisfy the first preset condition; in that case the number of weak classifiers must be increased and the strong classifier trained again, until the strong classifier satisfies the first preset condition. If the false detection rate is not greater than the preset false detection rate threshold, the resulting strong classifier satisfies the first preset condition, and the cascade classifier is updated with the strong classifier obtained in this round.
In a specific implementation, the strong classifiers can be combined into a cascade classifier with different weight values, where the weight value of each strong classifier is related to its classification result: the higher the accuracy of its classification result, the higher its corresponding weight value.
It should be noted that if the strong classifier of the first round already satisfies the first preset condition, the cascade classifier obtained from that strong classifier can simply be output directly; the cascade classifier then contains only one strong classifier. Whether the false detection rate of the cascade classifier meets the requirement is then judged; if it does not, the samples must be updated and training repeated to obtain a new strong classifier, which is combined, with a weight determined by its classification result, with the previously trained strong classifiers into a new cascade classifier, and so on, until the false detection rate of the resulting cascade classifier meets the requirement.
After the cascade classifier has been updated, it must further be tested whether its false detection rate is no greater than the first preset threshold. If so, training can end and the resulting cascade classifier is output as the face detection model; otherwise the samples are updated and training is repeated until the false detection rate of the trained cascade classifier is no greater than the first preset threshold.
In a specific implementation, when the cascade classifier is updated, the set of optimal weak classifiers obtained in the current round can be appended after the cascade classifier obtained in the previous round. In this way, when the resulting cascade classifier is tested, a sample is passed to the newly added optimal weak classifiers for detection only after it has passed the earlier weak classifiers.
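The stage-by-stage evaluation described above — a sample reaches the newly appended classifiers only after being accepted by all earlier stages — can be sketched as follows, with all names assumed for illustration:

```python
def cascade_predict(stages, sample):
    """stages: list of (classify, threshold) pairs appended in training
    order, where classify is any callable returning a score.  A sample
    rejected by an earlier stage never reaches the stages added later."""
    for classify, threshold in stages:
        if classify(sample) <= threshold:
            return False      # rejected early
    return True               # accepted by every stage
```

This early rejection is what makes appending a new stage cheap to test: only samples surviving the earlier stages ever exercise the new classifiers.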
For a better understanding of the embodiments of the present invention, the implementation process is described in detail below with reference to the flow of an embodiment of the present invention. As shown in Fig. 4, the process may comprise the following steps:
S41: Obtain the given training samples.
S42: Extract M Haar features from the samples to obtain M weak classifiers.
S43: Classify the training samples with each weak classifier to obtain the corresponding classification results.
In this step, M classification results can be output, where the classification result of a weak classifier can include the accuracy of that weak classifier's classification.
S44: Select the optimal weak classifier according to the classification results.
In a specific implementation, the weak classifier with the highest classification-result accuracy can be selected as the optimal weak classifier of this training round.
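Step S44 — picking the weak classifier whose results have the highest accuracy — might look like this minimal sketch (all names are illustrative assumptions):

```python
def select_optimal(weak_classifiers, samples, labels):
    """Return the weak classifier whose predictions agree with the
    labels most often, i.e. the highest classification accuracy."""
    def accuracy(clf):
        return sum(clf(x) == y for x, y in zip(samples, labels)) / len(labels)
    return max(weak_classifiers, key=accuracy)
```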
S45: Add the obtained optimal weak classifier to the strong classifier, and add the classification results of the obtained optimal weak classifier to the classification-result set corresponding to the strong classifier for storage.
S46: Judge whether the number of optimal weak classifiers contained in the obtained strong classifier reaches the second preset threshold; if so, perform step S47, otherwise perform step S48.
S47: Judge whether the obtained strong classifier satisfies the first preset condition; if so, perform step S49, otherwise perform step S412.
S48: Update the weight of each sample contained in the training samples according to the classification results of the obtained optimal weak classifier, and perform step S42.
S49: Update the cascade classifier with the obtained strong classifier.
S410: Judge whether the false detection rate of the obtained cascade classifier is no greater than the preset false-detection-rate threshold; if so, the flow ends, otherwise perform step S411.
S411: Update the training samples, and perform step S42.
S412: Increase the number of weak classifiers, and perform step S43.
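The core of the S41–S412 flow can be sketched roughly as below. This is a heavily simplified illustration with assumed names, decision-stump weak classifiers, and a majority-vote strong classifier; the sample-weight update (S48) and sample update (S411) are omitted, and the cached result store stands in for the stored classification-result sets that give the patent's stated speed-up:

```python
def train_strong(samples, labels, stumps, n_weak, results_cache):
    """Greedily pick n_weak stumps by highest accuracy (S42-S46),
    caching each chosen stump's per-sample results (S45) so they are
    never recomputed for the same stump."""
    strong = []
    for _ in range(n_weak):
        best = max(stumps,
                   key=lambda c: sum(c(x) == y for x, y in zip(samples, labels)))
        if best not in results_cache:          # reuse stored results
            results_cache[best] = tuple(best(x) for x in samples)
        strong.append(best)
    return strong

def false_alarm(strong, negatives):
    """Fraction of negatives accepted by a majority vote of the strong
    classifier's stumps (the S47/S410 false-detection-rate check)."""
    votes = lambda x: sum(c(x) for c in strong)
    hits = sum(1 for x in negatives if votes(x) > len(strong) / 2)
    return hits / len(negatives)
```

In the real flow, a strong classifier failing the S47 check would be retrained with `n_weak` increased (S412), and a cascade failing S410 would trigger a sample update (S411).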
In the face detection model training method provided by the embodiments of the present invention, the classification results of the optimal weak classifier obtained in each training round are stored during training, so the classification results of the same optimal weak classifier need not be recomputed in every round. This reduces the training time of the face detection model and improves its training efficiency.
In the embodiments of the present invention, what is trained first is the optimal weak classifier: from the numerous Haar features (each feature corresponding to one weak classifier), the optimal one, namely the weak classifier with the lowest misclassification rate, is selected. Several optimal weak classifiers form a strong classifier (a set of optimal weak classifiers); the number of optimal weak classifiers contained in a strong classifier can be specified in advance. At the start of training this number can be specified empirically, and if the false detection rate of the trained strong classifier does not meet the requirement, the number of weak classifiers in the strong classifier is increased, since a strong classifier is composed of several weak classifiers.
Based on the same inventive concept, the embodiments of the present invention further provide a face detection model training apparatus. Since the principle by which the apparatus solves the problem is similar to that of the face detection model training method, the implementation of the apparatus may refer to the implementation of the method, and repeated parts are not described again.
As shown in Fig. 5, which is a schematic structural diagram of a face detection model training apparatus provided by an embodiment of the present invention, the apparatus may include:
a first training unit 51, configured to, for given training samples, perform multiple rounds of training using a preset algorithm to obtain a strong classifier, and store the classification-result set of the weak classifiers it contains, wherein the weight of each sample in the training samples is updated before each round of training;
a second training unit 52, configured to, according to the stored classification-result set, if it is judged that the strong classifier does not satisfy the first preset condition, increase the number of weak classifiers used to train the strong classifier and then retrain on the training samples to obtain a new strong classifier, until the obtained strong classifier satisfies the first preset condition;
an updating unit 53, configured to update the cascade classifier with the obtained strong classifier;
a third training unit 54, configured to end training if the false detection rate of the cascade classifier is no greater than the first preset threshold, and otherwise to update the training samples and retrain until the false detection rate of the obtained cascade classifier is no greater than the first preset threshold.
Optionally, the first training unit 51 includes:
an extraction subunit, configured to, for given training samples, extract the N Haar features corresponding to the training samples to obtain N weak classifiers, one corresponding to each Haar feature, where N is an integer greater than or equal to 1;
a classification subunit, configured to classify the training samples with each weak classifier to obtain the corresponding classification results;
a determination subunit, configured to determine the optimal weak classifier of this training round according to the classification results;
an adding subunit, configured to add the obtained optimal weak classifier to the strong classifier, and to add the classification results of the optimal weak classifier obtained in this round to the classification-result set of the strong classifier;
a training subunit, configured to, if the number of optimal weak classifiers contained in the obtained strong classifier is less than the second preset threshold, update the weight of each sample in the training samples according to the classification results of the optimal weak classifier, and retrain to obtain a new optimal weak classifier and store its classification results, until the number of optimal weak classifiers contained in the strong classifier reaches the second preset threshold.
Optionally, the face detection model training apparatus further includes:
a first determining unit, configured to determine the classification threshold corresponding to the strong classifier according to the classification-result set corresponding to the strong classifier;
a second determining unit, configured to determine the false detection rate corresponding to the strong classifier according to the classification threshold;
a third determining unit, configured to determine that the strong classifier satisfies the first preset condition if the false detection rate is no greater than the preset false-detection-rate threshold, and that the strong classifier does not satisfy the first preset condition if the false detection rate is greater than the preset false-detection-rate threshold.
Optionally, the extraction subunit is specifically configured to, for each sample, extract the Haar features corresponding to the sample as follows: f = (Sum_b − Sum_w)/(Sum_b + Sum_w), where:
f denotes any Haar feature corresponding to the sample;
Sum_b denotes the pixel sum over the first region;
Sum_w denotes the pixel sum over the second region.
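The normalized feature f = (Sum_b − Sum_w)/(Sum_b + Sum_w) can be evaluated with an integral image so that each region sum costs a constant number of lookups, as is standard for Haar features. The sketch below is an illustration only; the rectangle-coordinate convention and helper names are assumptions, with the first region taken as the "black" rectangle and the second as the "white" one:

```python
def integral_image(img):
    """ii[r][c] = sum of img[0..r][0..c] (inclusive)."""
    rows, cols = len(img), len(img[0])
    ii = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        run = 0.0
        for c in range(cols):
            run += img[r][c]
            ii[r][c] = run + (ii[r - 1][c] if r else 0.0)
    return ii

def region_sum(ii, top, left, bottom, right):
    """Pixel sum over rows top..bottom-1, cols left..right-1 (half-open),
    using four integral-image lookups."""
    total = ii[bottom - 1][right - 1]
    if top > 0:
        total -= ii[top - 1][right - 1]
    if left > 0:
        total -= ii[bottom - 1][left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1][left - 1]
    return total

def haar_feature(img, black, white):
    """black/white: (top, left, bottom, right) rectangles.
    Returns f = (Sum_b - Sum_w) / (Sum_b + Sum_w)."""
    ii = integral_image(img)
    sum_b = region_sum(ii, *black)
    sum_w = region_sum(ii, *white)
    return (sum_b - sum_w) / (sum_b + sum_w)
```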
For convenience of description, each part of the above apparatus is divided by function into modules (or units) and described separately. Of course, when implementing the present invention, the functions of the modules (or units) may be realized in one or more pieces of software or hardware.
Having described the face detection model training method and apparatus of the exemplary embodiments of the present invention, a computing device according to another exemplary embodiment of the present invention is introduced next.
Those skilled in the art will appreciate that aspects of the present invention may be implemented as a system, a method, or a program product. Therefore, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may collectively be referred to herein as a "circuit", "module", or "system".
In some possible embodiments, a computing device according to the present invention may include at least one processing unit and at least one storage unit, where the storage unit stores program code which, when executed by the processing unit, causes the processing unit to perform the steps of the face detection model training method according to the various exemplary embodiments of the present invention described above in this specification. For example, the processing unit may perform, as shown in Fig. 2, step S21: for given training samples, perform multiple rounds of training using a preset algorithm to obtain a strong classifier and store the classification-result set of the weak classifiers it contains, wherein the weight of each sample in the training samples is updated before each round of training; step S22: according to the stored classification-result set, if it is judged that the strong classifier does not satisfy the first preset condition, increase the number of weak classifiers used to train the strong classifier and then retrain on the training samples to obtain a new strong classifier, until the obtained strong classifier satisfies the first preset condition; step S23: update the cascade classifier with the obtained strong classifier; and step S24: end training if the false detection rate of the cascade classifier is no greater than the first preset threshold, and otherwise update the training samples and retrain, until the false detection rate of the obtained cascade classifier is no greater than the first preset threshold.
A computing device 60 according to this embodiment of the present invention is described below with reference to Fig. 6. The computing device 60 shown in Fig. 6 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present invention.
As shown in Fig. 6, the computing device 60 takes the form of a general-purpose computing device. The components of the computing device 60 may include, but are not limited to: the above at least one processing unit 61, the above at least one storage unit 62, and a bus 63 connecting the different system components (including the storage unit 62 and the processing unit 61).
The bus 63 represents one or more of several classes of bus structures, including a memory bus or memory controller, a peripheral bus, a processor, or a local bus using any of a variety of bus structures.
The storage unit 62 may include computer-readable media in the form of volatile memory, such as random-access memory (RAM) 621 and/or cache memory 622, and may further include read-only memory (ROM) 623.
The storage unit 62 may also include a program/utility 625 having a set of (at least one) program modules 624; such program modules 624 include, but are not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
The computing device 60 may also communicate with one or more external devices 64 (such as a keyboard, pointing device, etc.), with one or more devices that enable a user to interact with the computing device 60, and/or with any device (such as a router, modem, etc.) that enables the computing device 60 to communicate with one or more other computing devices. Such communication may take place through input/output (I/O) interfaces 65. Moreover, the computing device 60 may also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 66. As shown, the network adapter 66 communicates with the other modules of the computing device 60 through the bus 63. It should be understood that, although not shown in the figure, other hardware and/or software modules may be used in conjunction with the computing device 60, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
In some possible embodiments, aspects of the face detection model training method provided by the present invention may also be implemented in the form of a program product, which includes program code; when the program product runs on a computer device, the program code causes the computer device to perform the steps of the face detection model training method according to the various exemplary embodiments of the present invention described above in this specification. For example, the computer device may perform, as shown in Fig. 2, step S21: for given training samples, perform multiple rounds of training using a preset algorithm to obtain a strong classifier and store the classification-result set of the weak classifiers it contains, wherein the weight of each sample in the training samples is updated before each round of training; step S22: according to the stored classification-result set, if it is judged that the strong classifier does not satisfy the first preset condition, increase the number of weak classifiers used to train the strong classifier and then retrain on the training samples to obtain a new strong classifier, until the obtained strong classifier satisfies the first preset condition; step S23: update the cascade classifier with the obtained strong classifier; and step S24: end training if the false detection rate of the cascade classifier is no greater than the first preset threshold, and otherwise update the training samples and retrain, until the false detection rate of the obtained cascade classifier is no greater than the first preset threshold.
The program product may employ any combination of one or more readable media. A readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, random-access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
The program product for face detection model training of the embodiments of the present invention may employ a portable compact disc read-only memory (CD-ROM), include program code, and be runnable on a computing device. However, the program product of the present invention is not limited thereto; in this document, a readable storage medium may be any tangible medium that contains or stores a program which can be used by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries readable program code. Such a propagated data signal may take a variety of forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination of the above. A readable signal medium may also be any readable medium other than a readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
The program code contained on a readable medium may be transmitted using any appropriate medium, including, but not limited to, wireless, wired, optical cable, RF, etc., or any suitable combination of the above.
Program code for carrying out the operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In scenarios involving a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
It should be noted that although several units or subunits of the apparatus are mentioned in the detailed description above, this division is merely exemplary and not mandatory. In fact, according to the embodiments of the present invention, the features and functions of two or more units described above may be embodied in one unit. Conversely, the features and functions of one unit described above may be further divided and embodied by multiple units.
In addition, although the operations of the method of the present invention are described in a particular order in the drawings, this does not require or imply that these operations must be performed in that particular order, or that all of the operations shown must be performed to achieve the desired result. Additionally or alternatively, some steps may be omitted, multiple steps may be combined into one step for execution, and/or one step may be decomposed into multiple steps for execution.
Those skilled in the art should understand that embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, those skilled in the art, once aware of the basic inventive concept, can make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications that fall within the scope of the present invention.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from the spirit and scope of the present invention. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to encompass these changes and modifications.
Claims (10)
- 1. A face detection model training method, characterized by comprising: for given training samples, performing multiple rounds of training using a preset algorithm to obtain a strong classifier, and storing the classification-result set of the weak classifiers it contains, wherein the weight of each sample in the training samples is updated before each round of training; according to the stored classification-result set, if it is judged that the strong classifier does not satisfy a first preset condition, increasing the number of weak classifiers used to train the strong classifier and then retraining on the training samples to obtain a new strong classifier, until the obtained strong classifier satisfies the first preset condition; updating a cascade classifier with the obtained strong classifier; and ending training if the false detection rate of the cascade classifier is no greater than a first preset threshold, and otherwise updating the training samples and retraining, until the false detection rate of the obtained cascade classifier is no greater than the first preset threshold.
- 2. The method of claim 1, characterized in that, for given training samples, performing multiple rounds of training using a preset algorithm to obtain a strong classifier and storing the classification-result set of the weak classifiers it contains specifically comprises: for given training samples, extracting the N Haar features corresponding to the training samples to obtain N weak classifiers, one corresponding to each Haar feature, where N is an integer greater than or equal to 1; classifying the training samples with each weak classifier to obtain the corresponding classification results; determining the optimal weak classifier of this training round according to the classification results; adding the obtained optimal weak classifier to the strong classifier, and adding the classification results of the optimal weak classifier obtained in this round to the classification-result set of the strong classifier; and, if the number of optimal weak classifiers contained in the obtained strong classifier is less than a second preset threshold, updating the weight of each sample in the training samples according to the classification results of the optimal weak classifier, and retraining to obtain a new optimal weak classifier and storing its classification results, until the number of optimal weak classifiers contained in the strong classifier reaches the second preset threshold.
- 3. The method of claim 1, characterized in that whether the strong classifier satisfies the first preset condition is judged according to the following procedure: determining the classification threshold corresponding to the strong classifier according to the classification-result set corresponding to the strong classifier; determining the false detection rate corresponding to the strong classifier according to the classification threshold; and, if the false detection rate is no greater than a preset false-detection-rate threshold, determining that the strong classifier satisfies the first preset condition, and if the false detection rate is greater than the preset false-detection-rate threshold, determining that the strong classifier does not satisfy the first preset condition.
- 4. The method of claim 2, characterized in that, for each sample, the Haar features corresponding to the sample are extracted as follows: f = (Sum_b − Sum_w)/(Sum_b + Sum_w), where: f denotes any Haar feature corresponding to the sample; Sum_b denotes the pixel sum over the first region; Sum_w denotes the pixel sum over the second region.
- 5. A face detection model training apparatus, characterized by comprising: a first training unit, configured to, for given training samples, perform multiple rounds of training using a preset algorithm to obtain a strong classifier, and store the classification-result set of the weak classifiers it contains, wherein the weight of each sample in the training samples is updated before each round of training; a second training unit, configured to, according to the stored classification-result set, if it is judged that the strong classifier does not satisfy a first preset condition, increase the number of weak classifiers used to train the strong classifier and then retrain on the training samples to obtain a new strong classifier, until the obtained strong classifier satisfies the first preset condition; an updating unit, configured to update a cascade classifier with the obtained strong classifier; and a third training unit, configured to end training if the false detection rate of the cascade classifier is no greater than a first preset threshold, and otherwise to update the training samples and retrain, until the false detection rate of the obtained cascade classifier is no greater than the first preset threshold.
- 6. The apparatus of claim 5, characterized in that the first training unit includes: an extraction subunit, configured to, for given training samples, extract the N Haar features corresponding to the training samples to obtain N weak classifiers, one corresponding to each Haar feature, where N is an integer greater than or equal to 1; a classification subunit, configured to classify the training samples with each weak classifier to obtain the corresponding classification results; a determination subunit, configured to determine the optimal weak classifier of this training round according to the classification results; an adding subunit, configured to add the obtained optimal weak classifier to the strong classifier, and to add the classification results of the optimal weak classifier obtained in this round to the classification-result set of the strong classifier; and a training subunit, configured to, if the number of optimal weak classifiers contained in the obtained strong classifier is less than a second preset threshold, update the weight of each sample in the training samples according to the classification results of the optimal weak classifier, and retrain to obtain a new optimal weak classifier and store its classification results, until the number of optimal weak classifiers contained in the strong classifier reaches the second preset threshold.
- 7. The apparatus of claim 5, characterized by further comprising: a first determining unit, configured to determine the classification threshold corresponding to the strong classifier according to the classification-result set corresponding to the strong classifier; a second determining unit, configured to determine the false detection rate corresponding to the strong classifier according to the classification threshold; and a third determining unit, configured to determine that the strong classifier satisfies the first preset condition if the false detection rate is no greater than a preset false-detection-rate threshold, and that the strong classifier does not satisfy the first preset condition if the false detection rate is greater than the preset false-detection-rate threshold.
- 8. The apparatus of claim 6, characterized in that the extraction subunit is configured to, for each sample, extract the Haar features corresponding to the sample as follows: f = (Sum_b − Sum_w)/(Sum_b + Sum_w), where: f denotes any Haar feature corresponding to the sample; Sum_b denotes the pixel sum over the first region; Sum_w denotes the pixel sum over the second region.
- 9. A computing device, characterized by comprising at least one processing unit and at least one storage unit, wherein the storage unit stores a computer program which, when executed by the processing unit, causes the processing unit to perform the steps of the method of any one of claims 1 to 4.
- 10. A computer-readable medium, characterized in that it stores a computer program executable by a computing device which, when run on the computing device, causes the computing device to perform the steps of the method of any one of claims 1 to 4.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201711032994.5A CN107729947A (en) | 2017-10-30 | 2017-10-30 | A kind of Face datection model training method, device and medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN107729947A (en) | 2018-02-23 |
Family
ID=61203198
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201711032994.5A Pending CN107729947A (en) | Face detection model training method, device and medium | 2017-10-30 | 2017-10-30 |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN107729947A (en) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8385609B2 (en) * | 2008-10-21 | 2013-02-26 | Flashfoto, Inc. | Image segmentation |
| CN104680120A (en) * | 2013-12-02 | 2015-06-03 | Huawei Technologies Co., Ltd. | Method and device for generating a strong classifier for face detection |
| CN105404901A (en) * | 2015-12-24 | 2016-03-16 | Shanghai Weizhou Microelectronics Technology Co., Ltd. | Classifier training method, image detection method and corresponding systems |
| CN106874835A (en) * | 2016-12-28 | 2017-06-20 | Shenzhen Intellifusion Technologies Co., Ltd. | Image processing method and device |
Non-Patent Citations (1)
| Title |
|---|
| Li Ruiqi (李瑞淇): "Research on Face Detection Algorithm Based on Skin Color and Improved AdaBoost", China Master's Theses Full-text Database (Information Science and Technology) * |
Cited By (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109583904A (en) * | 2018-11-30 | 2019-04-05 | Shenzhen Tencent Computer Systems Co., Ltd. | Training method of abnormal operation detection model, abnormal operation detection method and device |
| CN109583904B (en) * | 2018-11-30 | 2023-04-07 | 深圳市腾讯计算机系统有限公司 | Training method of abnormal operation detection model, abnormal operation detection method and device |
| CN109766919A (en) * | 2018-12-18 | 2019-05-17 | CRSC Communication & Information Group Co., Ltd. | Gradual change type classification loss calculation method and system in cascade target detection system |
| CN109766919B (en) * | 2018-12-18 | 2020-11-10 | 通号通信信息集团有限公司 | Gradual change type classification loss calculation method and system in cascade target detection system |
| CN112669276A (en) * | 2020-12-24 | 2021-04-16 | Suzhou HYC Technology Co., Ltd. | Screen detection positioning method and device, electronic equipment and storage medium |
| WO2022199148A1 (en) * | 2021-03-23 | 2022-09-29 | Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences | Classification model training method, image classification method, electronic device and storage medium |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20230119593A1 (en) | Method and apparatus for training facial feature extraction model, method and apparatus for extracting facial features, device, and storage medium | |
| CN112508044A (en) | Artificial intelligence AI model evaluation method, system and equipment | |
| CN110633745A | Image classification training method, device and storage medium based on artificial intelligence | |
| CN108229267A (en) | Object properties detection, neural metwork training, method for detecting area and device | |
| CN106874826A (en) | Face key point-tracking method and device | |
| CN110717554A (en) | Image recognition method, electronic device, and storage medium | |
| CN106599789A (en) | Video class identification method and device, data processing device and electronic device | |
| CN107729947A | Face detection model training method, device and medium | |
| CN112712068B (en) | Key point detection method and device, electronic equipment and storage medium | |
| CN112183672B (en) | Image classification method, feature extraction network training method and device | |
| CN106484837A | Similar video file detection method and device | |
| JP6897749B2 (en) | Learning methods, learning systems, and learning programs | |
| CN109345553A | Palm and palm key point detection method, apparatus and terminal device | |
| CN107729928A (en) | Information acquisition method and device | |
| CN117746015A (en) | Small target detection model training method, small target detection method and related equipment | |
| CN113610064B (en) | Handwriting recognition method and device | |
| CN113887673B (en) | Image aesthetic quality evaluation method, device, electronic device and storage medium | |
| CN116934385A (en) | Construction method of user loss prediction model, user loss prediction method and device | |
| CN115578739A (en) | Training method and device for realizing IA classification model by combining RPA and AI | |
| CN113033444A (en) | Age estimation method and device and electronic equipment | |
| CN112861601B (en) | Method and related device for generating adversarial samples | |
| CN117892142A (en) | Method, device and storage medium for judging source of generated content | |
| CN113255701A (en) | Small sample learning method and system based on absolute-relative learning framework | |
| CN119888870A (en) | Face-changing detection method, device and equipment based on multi-granularity feature fusion | |
| CN116777814A (en) | Image processing method, apparatus, computer device, storage medium, and program product |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 2018-02-23 |