US20090210362A1 - Object detector trained using a working set of training data - Google Patents
- Publication number
- US20090210362A1 (application US12/030,876)
- Authority
- US
- United States
- Prior art keywords
- samples
- weak classifier
- sample
- working set
- subset
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
Description
- Automated object detection and/or recognition (ODR) can be used to detect types or classes of physical objects—from simple objects such as geometric shapes to more complex objects such as geographic features and faces—in raw image data (still or video). ODR can also be used to detect audio objects such as songs or voices in raw audio data. A myriad of different techniques have been developed for ODR.
- Face detection in particular has attracted much attention due to the potential value of its applications as well as its theoretical challenges. Techniques known by names such as boost cascade and boosting have been somewhat successful for face detection. Still, robust detection is challenging because of variations in illumination and expressions.
- A boost cascade detector uses a number of “weak” classifiers that are unified to produce a “strong” classifier. A large set of training data can be used to train the weak classifiers to recognize the possible variations in the features of the object to be detected. However, the computational costs and memory demands of training a detector on a large set of training data are unacceptably high. To put this in perspective, weeks have been spent to train a detector with 4297 features on a training set of 4916 faces. To date, the largest known set of positive samples used for training contains 20,000 face samples.
- An object detector that includes a number of weak classifiers can be trained using a subset (a "working set") of training data instead of all of the training data. The working set can be updated so that, for example, it remains representative of the training data. A decision to update the working set may be made based on the false positive sample rate: if that rate falls below a threshold value, an update of the working set can be triggered.
- This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
- The accompanying drawings, which are incorporated in and form a part of this specification, illustrate embodiments and, together with the description, serve to explain the principles of the embodiments:
- FIG. 1 is a block diagram showing one embodiment of a computer system environment.
- FIG. 2 is a block diagram showing inputs and outputs for training an object detector in one embodiment.
- FIG. 3 is a flowchart of one embodiment of a computer-implemented method for training an object detector.
- FIGS. 4 and 5 are flowcharts of embodiments of computer-implemented methods for updating a working set of training data.
- FIG. 6 is a flowchart of one embodiment of a method for classifying a sample using an object detector.
- FIG. 7 is a flowchart of a method for building a Bayesian Stump.
- Some portions of the detailed descriptions which follow are presented in terms of procedures, logic blocks, processing and other symbolic representations of operations on data bits within a computer memory. These descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. In the present application, a procedure, logic block, process, or the like, is conceived to be a self-consistent sequence of steps or instructions leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, although not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated in a computer system.
- It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the present application, discussions utilizing the terms such as “accessing,” “training,” “updating,” “initializing,” “determining,” “reducing,” “selecting,” “adjusting,” “removing,” “adding,” “bootstrapping,” “calculating” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
- Embodiments described herein may be discussed in the general context of computer-executable instructions residing on some form of computer-usable medium, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or distributed as desired in various embodiments.
- By way of example, and not limitation, computer-usable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, random access memory (RAM), read only memory (ROM), electrically erasable programmable ROM (EEPROM), flash memory or other memory technology, compact disk ROM (CD-ROM), digital versatile disks (DVDs) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information.
- Communication media can embody computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency (RF), infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media.
- FIG. 1 is a block diagram showing elements of a computer system environment in one embodiment. FIG. 1 shows a block diagram of one embodiment of an exemplary computer system 100 upon which embodiments described herein may be implemented.
- In its most basic configuration, the system 100 includes at least one processing unit 102 and a memory 104. Depending on the exact configuration and type of computing device, the memory 104 may be volatile (such as RAM), non-volatile (such as ROM, flash memory, etc.) or some combination of the two. This most basic configuration is illustrated in FIG. 1 by dashed line 106. The system 100 may also have additional features/functionality. For example, the system 100 may also include additional storage (removable and/or non-removable) including, but not limited to, magnetic or optical disks or tape. Such additional storage is illustrated in FIG. 1 by removable storage 108 and non-removable storage 110.
- The system 100 may also have input device(s) 114 such as a keyboard, mouse, pen, voice input device, touch input device, etc. Output device(s) 116 such as a display, speakers, printer, etc. may also be included. All these devices are well known in the art and need not be discussed at length here.
- The system 100 may operate in a networked environment using logical connections to one or more remote servers, which may instead be a personal computer (PC), a router, a network PC, a peer device or other common network node, and which may include many or all of the elements described above relative to the system 100. The logical connections may include a local area network (LAN) and a wide area network (WAN), but may also include other networks. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet. When used in a networking environment, the system 100 can be connected to the network through communication connection(s) 112.
- In the example of FIG. 1, the memory 104 includes computer-readable instructions, data structures, program modules and the like associated with an operating system 150, such as but not limited to the WINDOWS™ brand operating system. In the present embodiment, the operating system 150 forms a platform for executing software such as an object detection and/or recognition (ODR) classifier (or detector) 156 and an ODR trainer 160. However, the detector 156 and trainer 160 need not both be implemented on a single computer system. For example, the trainer 160 may be made available to the detector 156 as a suitable distributed function, perhaps distributed over several computer systems. Similarly, the training data 162 need not reside in a single database, solely on a single computer system, or on the same computer system(s) as the detector 156 and the trainer 160.
- The training data 162 may include a set S of classified samples {xi, yi}, i=1, . . . , n, where xi ∈ X; xi is a sample unit (e.g., a file or record) of raw data that either includes or does not include a particular class (type) of object; yi is an object classification label corresponding to a classification of the sample xi; and n is any suitable number of samples. On the space X, a set of feature extractors Φ={φj}, j=1, . . . , m, is defined to map X→R, where R is the one-dimensional space of real numbers and m is any suitable number. The features φj can be linear (e.g., wavelet transforms) or nonlinear (e.g., local binary pattern filters). The object classification labels yi may be taken from a set of object classification labels {ωc}, c=1, . . . , C, where C may be any suitable number. The training data 162 may be utilized by the ODR trainer 160 to build an ODR detector 156 capable of detecting object classes {ωc}.
- As an example, the raw samples xi may be static (still) images. In one embodiment, a training sample xi is a 24×24 image that is cropped from a larger image. In one embodiment, a training sample is labeled as a positive sample if it contains one human face. If a training sample does not include a face or includes two or more faces, relatively small faces, or a portion of a face, then the sample is labeled as a negative sample. The set of object classification labels may be {ω1, ω2}, where ω1 (yi=1) corresponds to a positive classification and ω2 (yi=−1) corresponds to a negative classification.
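- To make the preceding notation concrete, the following is a minimal Python sketch of this data model. It is illustrative only: the Sample type, the example feature, and the use of callables for the feature extractors Φ are assumptions of the sketch, not structures defined by this disclosure.

```python
from dataclasses import dataclass
from typing import Callable, List
import numpy as np

@dataclass
class Sample:
    x: np.ndarray  # raw data, e.g., a 24x24 grayscale image patch
    y: int         # +1 for a positive sample (one face), -1 for a negative sample

# A feature extractor maps a raw sample to a single real number (X -> R).
FeatureExtractor = Callable[[np.ndarray], float]

def mean_intensity(x: np.ndarray) -> float:
    """A trivial stand-in for a linear feature such as a Haar-like filter."""
    return float(x.mean())

features: List[FeatureExtractor] = [mean_intensity]  # in practice m is large
```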
- FIG. 2 is a block diagram showing inputs and outputs useful for training an object detector 156 in one embodiment. In the present embodiment, the inputs include the training data 162, a target detection rate Dtarget, a target number T of weak classifiers, and a dynamic working set update rate fu. The outputs include the weak classifiers ht (t=1, 2, . . . , T) and a rejection threshold rt for each of the weak classifiers.
- The training data 162 may include billions of samples. As will be elaborated on, even for such a massive set of training data consisting of large numbers of both positive and negative samples, the ODR classifier 156 can be trained efficiently and accurately without overwhelming computational resources, and in far less time than that needed for conventional training methods. Instead of training using the entire set of training data 162, a relatively small "dynamic working set" (Sw) 210 is used. The dynamic working set consists of a subset of positive samples Pw and another subset of negative samples Qw.
- The positive and negative samples in the training data 162 are properly labeled as such. The positive samples in the dynamic working set 210 are randomly selected from the positive samples in the training data 162, and the negative samples in the dynamic working set are randomly selected from the negative samples in the training data. The remaining negative samples in the training data 162 are reserved for subsequent negative sample bootstrapping. A set of validation samples Pv is also randomly selected from the positive samples in the training data 162. As will be seen, the dynamic working set 210 can be updated with new samples if/when its distribution becomes less representative of the entire set of training data 162.
- FIG. 3 is a flowchart 300 of one embodiment of a computer-implemented method for training an object detector. FIGS. 4 and 5 are flowcharts 400 and 500, respectively, of embodiments of computer-implemented methods for updating a working set of training data. FIG. 6 is a flowchart 600 of one embodiment of a method for classifying a sample using an object detector. FIG. 7 is a flowchart 700 of a method for building a Bayesian Stump. Although specific steps are disclosed in the flowcharts, the steps may be performed in an order different than presented. Furthermore, the features of the various embodiments described by the flowcharts 300-700 can be used alone or in combination with each other. In one embodiment, the flowcharts 300-700 can be implemented by the system 100 (FIG. 1) as computer-readable program instructions stored in a memory unit and executed by a processor.
- With reference first to FIG. 3, in block 310, the positive samples in the training data 162 are randomly sampled to form the positive working set Pw, and the negative samples in the training data are also randomly sampled to form the negative working set Qw, where Np is the number of samples in the positive working set and Nq is the number of samples in the negative working set. While the training data 162 may include billions of samples, the dynamic working set 210 may include on the order of tens of thousands of samples. The target detection rate Dtarget, the target number T of weak classifiers, and the dynamic working set update rate fu are user-specified values.
- In block 320, a set of classifier weights is initialized. More specifically, the weight wi of each sample xi in the dynamic working set 210 is initialized to a value of one (1). Also, a detection rate D* is initialized to a value of 1.
- Blocks 330 through 390 are performed for each weak classifier t=1, 2, . . . , T. In block 330, the weight wt,i for each sample xi in the dynamic working set 210 is normalized to guarantee an initial distribution of weights while satisfying the following condition:
Σxi∈Pw wt,i = Σxi∈Qw wt,i = 0.5, (1)
- Thus, initially, the sum of the weights of the positive samples and the sum of the weights of the negative samples in the dynamic working set 210 are each equal to 0.5.
- In block 340, the detection rate D* for the current training stage t is updated as follows:
Dt* = Dt−1* − vt; (2)
vt = 1 − k·e^(−αt/T); (3)
- where vt is the false negative rate, k is a normalization factor that satisfies the target detection rate, and α is a free parameter that can be used to trade between speed and accuracy. The smaller the value of α, the faster the detector. As noted above, D0* = 1. Thus, instead of tuning the detection rates one-by-one, the false negative rate vt is assumed to change exponentially in each stage t.
- In block 350, a weak classifier is trained on the dynamic working set 210 for each feature φj. For face detection, the features φj may be, for example, Haar-like features, Gabor wavelet features, and/or EOH (edge orientation histogram) features. In one embodiment, multiple levels of feature sets are used based on the dynamic cascade structure, with the less computationally intensive features applied first. Thus, for example, during training, Haar-like features are used first; then, when the false positive rate drops below a predefined threshold, Gabor and EOH features can be used to further improve the detection rate. Consequently, computational cost can be reduced and detection accuracy improved, because most negative samples are rejected in the evaluation of the Haar-like features. Furthermore, in comparison to post-filtering methods, a global confidence output is provided, and combining classification results from multiple models is avoided, resulting in a simpler and more robust detector.
- In block 360, for weak classifier t, the "best" feature φt is chosen and added to the detector 156. In one embodiment, the best feature φt is the feature with the lowest Bayesian error in comparison to the other features. In one such embodiment, Bayesian error is determined using a technique referred to herein as Bayesian Stump, which is described below in conjunction with FIG. 7.
- Continuing with reference to FIG. 3, in block 370, the validation set Pv is used to adjust the rejection threshold rt for the feature selected in block 360, under the constraint that the detection rate does not fall below the detection rate D* determined in block 340. In other words, a rejection threshold is determined for each weak classifier, by adjusting the rejection threshold rt for that classifier to achieve the adjusted target detection rate D*.
- In contrast to conventional methods, the false negative samples rejected in stage t are removed from the dynamic working set 210. Experimental results show that this improves the training convergence rate and results in a detector with fewer features.
- In block 380, the weights of the samples in the dynamic working set 210 are adjusted as follows:
wt+1,i = e^(−yi·Ht(xi)); (4)
- where Ht(xi) is a value that is determined as described in conjunction with FIG. 6, below.
- In block 390 of FIG. 3, the dynamic working set 210 is updated if the false positive rate falls below the update rate fu. A continuous update strategy is implemented if fu=1, and the update process is disabled if fu=0. A value of 0.6 for fu provides a good trade-off between detector performance and the computational cost of training. The flowchart 300 then returns to block 330, to repeat the process for the next training stage, until a T-stage detector 156 is trained.
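- Taken together, blocks 320 through 390 amount to the following Python skeleton of flowchart 300. It is a control-flow sketch, not the claimed method: the injected callables (train_weak, pick_threshold, reweight, fp_rate, refresh) are placeholders for the operations described above and for FIGS. 4 and 5, not APIs defined by this disclosure.

```python
from typing import Callable, List, Tuple
import numpy as np

def normalize_weights(w: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Block 330 / equation (1) (sketch): scale each class to a 0.5 weight sum."""
    w = w.copy()
    w[y == +1] *= 0.5 / w[y == +1].sum()
    w[y == -1] *= 0.5 / w[y == -1].sum()
    return w

def train_detector(X: np.ndarray, y: np.ndarray, schedule: np.ndarray,
                   f_u: float,
                   train_weak: Callable, pick_threshold: Callable,
                   reweight: Callable, fp_rate: Callable,
                   refresh: Callable) -> List[Tuple[Callable, float]]:
    """Control-flow sketch of flowchart 300 over working set (X, y)."""
    w = np.ones(len(y))                      # block 320: every w_i starts at 1
    detector: List[Tuple[Callable, float]] = []
    for t in range(1, len(schedule) + 1):    # one stage per weak classifier
        w = normalize_weights(w, y)                           # block 330
        h_t = train_weak(X, y, w)                             # blocks 350-360
        r_t = pick_threshold(detector, h_t, schedule[t - 1])  # block 370
        detector.append((h_t, r_t))
        w = reweight(detector, X, y)                          # block 380, eq. (4)
        if fp_rate(detector, X, y) < f_u:                     # block 390
            X, y, w = refresh(X, y, w)                        # FIGS. 4 and 5
    return detector
```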
- FIG. 4 is a flowchart 400 of an embodiment of a method for updating the positive working set Pw. As described above, in block 380 of FIG. 3, the weights of the samples in the dynamic working set 210 are adjusted. In block 410 of FIG. 4, the samples with the smallest weights are then removed from the positive working set. In one embodiment, ten percent of the total weight sum is removed.
- In block 420, as mentioned above, false negative samples have also been removed from the positive working set. In block 430, a new positive working set of Np samples is produced by randomly selecting additional positive samples from the training data 162.
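- Block 410 (and block 510 of FIG. 5, below) prunes low-weight samples. A sketch of one way to remove roughly ten percent of the total weight sum; the exact pruning rule is this sketch's assumption:

```python
import numpy as np

def trim_lowest_weight(w: np.ndarray, fraction: float = 0.10) -> np.ndarray:
    """Blocks 410/510 (sketch): boolean mask keeping samples after dropping
    the lightest ones until about `fraction` of the total weight is gone."""
    order = np.argsort(w)             # lightest samples first
    removed, cutoff = 0.0, fraction * w.sum()
    keep = np.ones(len(w), dtype=bool)
    for i in order:
        if removed + w[i] > cutoff:
            break
        removed += w[i]
        keep[i] = False
    return keep
```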
- FIG. 5 is a flowchart 500 of an embodiment of a method for updating the negative working set Qw. As described above, in block 380 of FIG. 3, the weights of the samples in the dynamic working set 210 are adjusted. In block 510 of FIG. 5, the samples with the smallest weights are then removed from the negative working set. In one embodiment, ten percent of the total weight sum is removed.
- In block 520, the weak classifier ht is used to bootstrap false positive samples from the negative set of the training data 162, until the negative working set has Nq samples. Thus, in contrast to conventional methods that bootstrap continuously at high computational cost, bootstrapping is performed relatively infrequently.
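- One hypothetical realization of block 520 follows; it assumes "bootstrapping" means scanning the reserved negatives and keeping those the current partial detector still accepts (its false positives), with classify() being the cascade evaluation of FIG. 6, sketched below.

```python
def bootstrap_negatives(detector, reserved_negatives, negative_set, N_q):
    """Block 520 (sketch): refill the negative working set with hard
    negatives, i.e., reserved negatives the detector wrongly accepts."""
    for x in reserved_negatives:
        if len(negative_set) >= N_q:
            break
        if classify(detector, x) == +1:   # a false positive: keep it
            negative_set.append(x)
    return negative_set
```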
- FIG. 6 is a flowchart 600 of one embodiment of a method for classifying a sample using an object detector. The detector 156 can be implemented as follows, for t=1, . . . , T:
H1(x) = h1(x); (5a)
H2(x) = H1(x) + h2(x) = h1(x) + h2(x); (5b)
Ht(x) = Ht−1(x) + ht(x) = h1(x) + h2(x) + . . . + ht(x); and (5c)
HT(x) = HT−1(x) + hT(x) = h1(x) + h2(x) + . . . + hT(x). (5d)
- Using equation (5a), in
block 610, a first result H1(xi) is determined using a first weak classifier h1(xi). Also, a rejection rate r1 associated with the first weak classifier is determined as described in conjunction withFIG. 3 above. If H1(xi) is less than r1, then the detection procedure is halted for the current sample and the sample is classified as negative (block 640). Otherwise, theflowchart 600 proceeds to block 620. - In
block 620 ofFIG. 6 , based on equation (5b), a second result H2(xi) is determined using a second weak classifier h1(xi) and the result H1(xi) from the first weak classifier. Also, a rejection rate r2 associated with the second weak classifier is determined. If H2(xi) is less than r2, then the detection procedure is halted for the current sample and the sample is classified as negative (block 640). - The process described in
block 620 is repeated for each subsequent value of t until t=T. In general, a result Ht(xi) is determined using a tth weak classifier ht(xi) and the result Ht−1(xi) from the preceding (t−1) weak classifier. Also, a rejection rate rt associated with the tth weak classifier is determined. If Ht(xi) is less than rt, then the detection procedure is halted for the current sample and the sample is classified as negative. - In
block 630, the sample xi is classified as positive because all t of the results Ht(xi) are greater than or equal to the respective value of rt. Because most samples are negative, computational costs are significantly reduced, because the evaluation of many samples will not proceed through all T stages of the object detector. The steps in theflowchart 600 can be repeated for each sample to be classified. -
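- The rejection logic of FIG. 6 reduces to a short loop over equations (5a)-(5d). A minimal Python sketch, assuming each weak classifier is a callable returning a real-valued score:

```python
def classify(detector, x) -> int:
    """FIG. 6 (sketch): accumulate H_t(x) = H_{t-1}(x) + h_t(x) and reject
    as soon as the running sum falls below that stage's threshold r_t.
    `detector` is a list of (h_t, r_t) pairs."""
    H = 0.0
    for h_t, r_t in detector:
        H += h_t(x)
        if H < r_t:
            return -1        # block 640: early rejection, negative sample
    return +1                # block 630: survived all T stages, positive
```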
- FIG. 7 is a flowchart 700 of a method for building a Bayesian Stump. In a two-category classification problem such as face detection, the probability of error (Bayesian error, BE) is:
BE = ∫ min[P(ω1, x), P(ω2, x)] dx, (6)
-
- where wi is the weight distribution of xi.
- Using these histograms, a conventional decision stump classifier can be extended to a single-node, multi-way split decision tree. In each interval σk, a decision is made that minimizes the Bayesian error {circumflex over (B)}Êk(φj), where:
-
{circumflex over (B)}Ê k(φj)=min[p(k,ω 1), p(k,ω 2)]. (8) - The overall Bayesian error of Bayesian Stump is:
-
- In order to further analyze the relationship between {circumflex over (B)}Ê(φj) and the expected Bayesian error BE(φj), the following definition is introduced: suppose σ is an interval, and f and g are two functions defined on σ. If for any xr,xsεσ, (f(xr)−g(xr))*(f(xs)−g(xs))≧0, then the region σ is defined to be the consistent region of functions f and g. From this definition, if ∀rk, subject to −∞=r0< . . . <rk=∞, then:
-
{circumflex over (B)}Ê(φj)≧BE(φj), (10) - with equality if and only if ∀k, σk is the consistent region of f(x)=p(ω1,φj(x)) and g(x)=p(ω2,φj(x)). Equation (10) shows that {circumflex over (B)}Ê(φj) is an upper bound of the overall Bayesian error BE(φj). Therefore, the histogram classifier can be optimized mathematically by finding a set of thresholds {rk} that minimize BE(φj).
- With reference to
FIG. 7 , a K-bin Bayesian Stump is built. Compared to a binary-split decision stump, Bayesian Stump significantly reduces the Bayesian error and results in a boost classifier with fewer features. Moreover, Bayesian Stump can easily be extended to a lookup table (LUT) type of weak classifier by using log-likelihood output instead of binary output in every interval. - In the discussion of
FIG. 7 , a training set S={xi′, yi}, xi′=φ(xi), and a weight set W={wi}, i=1, . . . , n are used. Inblock 710, to estimate p(x′,ω1) and p(x′,ω2), histograms are built by quantizing feature x′ into L bins, where L>>K. - In
block 720, adjacent consistent intervals σl and σl+1 are merged to produce a set of consistent regions {σ′l}, l=1, . . . , L′. More specifically, while L′>K, adjacent intervals σ′l*−1, σ′l* and σ′l*+1, where l*=argminl|p(l,ω1)−p(l,ω2)|. While L′>K, the interval σ′l* is iteratively split by maximizing mutual information gain, l*=argmaxl|min(p(l,ω1),p(l,ω2))|. - In
block 730, on each interval σk, the following decision rule is used to give the binary prediction output: -
P(error|x)=min[P(ω1 |x),P(ω2 |x)]. (11) - In this manner, an LUT function h(k) is built on intervals σk, k=1, . . . , K.
- In summary, a robust and stable detector can be trained on a massive data set. The distribution of the massive data set can be estimated by sampling a smaller subset of the data (a working set). The detector is fully automatic without using training parameters for classifiers in each stage. The detector can be trained on large amounts of both positive and negative samples while providing a good trade-off between detection speed and accuracy. Training using multiple feature sets is supported. Also, efficient parallel distributed learning is enabled. The use of Bayesian Stump further improves efficiency.
- The working set can be updated continuously, for example, after each new weak classifier is trained. Alternatively, the working set can be updated based on a threshold as described herein. Based on experimental results, performance using either of these strategies is similar. However, the use of a threshold is more computationally efficient since updates are performed less frequently (not continuously).
- As a demonstration of efficiency, over 531,000 positive samples were generated and almost ten billion negative samples were collected. These samples were used to generate a dynamic working set and a validation set. Using about 30 desktop computers, it took less than seven hours to train a detector with 700 features on this massive data set.
- In the foregoing specification, embodiments have been described with reference to numerous specific details that may vary from implementation to implementation. Thus, the sole and exclusive indicator of what is, and is intended by the applicant to be, the invention is the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction. Hence, no limitation, element, property, feature, advantage, or attribute that is not expressly recited in a claim should limit the scope of such claim in any way. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/030,876 US8099373B2 (en) | 2008-02-14 | 2008-02-14 | Object detector trained using a working set of training data |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/030,876 US8099373B2 (en) | 2008-02-14 | 2008-02-14 | Object detector trained using a working set of training data |
Publications (2)
Publication Number | Publication Date |
---|---|
US20090210362A1 (en) | 2009-08-20 |
US8099373B2 US8099373B2 (en) | 2012-01-17 |
Family
ID=40955998
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/030,876 Active 2030-09-17 US8099373B2 (en) | 2008-02-14 | 2008-02-14 | Object detector trained using a working set of training data |
Country Status (1)
Country | Link |
---|---|
US (1) | US8099373B2 (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8605998B2 (en) | 2011-05-06 | 2013-12-10 | Toyota Motor Engineering & Manufacturing North America, Inc. | Real-time 3D point cloud obstacle discriminator apparatus and associated methodology for training a classifier via bootstrapping |
US8799201B2 (en) | 2011-07-25 | 2014-08-05 | Toyota Motor Engineering & Manufacturing North America, Inc. | Method and system for tracking objects |
US9207760B1 (en) * | 2012-09-28 | 2015-12-08 | Google Inc. | Input detection |
US20160110882A1 (en) * | 2013-06-25 | 2016-04-21 | Chung-Ang University Industry-Academy Cooperation Foundation | Apparatus and method for detecting multiple objects using adaptive block partitioning |
CN106650782A (en) * | 2015-11-04 | 2017-05-10 | 豪威科技股份有限公司 | System and method for evaluating a classifier implemented within an image signal processor |
CN108764030A (en) * | 2018-04-17 | 2018-11-06 | 中国地质大学(武汉) | A kind of Falls in Old People detection method, equipment and storage device |
US10552299B1 (en) | 2019-08-14 | 2020-02-04 | Appvance Inc. | Method and apparatus for AI-driven automatic test script generation |
US10628630B1 (en) | 2019-08-14 | 2020-04-21 | Appvance Inc. | Method and apparatus for generating a state machine model of an application using models of GUI objects and scanning modes |
US11200454B1 (en) * | 2018-10-17 | 2021-12-14 | Objectvideo Labs, Llc | People selection for training set |
CN114090601A (en) * | 2021-11-23 | 2022-02-25 | 北京百度网讯科技有限公司 | Data screening method, device, equipment and storage medium |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112699285B (en) * | 2021-03-24 | 2021-06-18 | 平安科技(深圳)有限公司 | Data classification method and device, computer equipment and storage medium |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040179719A1 (en) * | 2003-03-12 | 2004-09-16 | Eastman Kodak Company | Method and system for face detection in digital images |
US20050013479A1 (en) * | 2003-07-16 | 2005-01-20 | Rong Xiao | Robust multi-view face detection methods and apparatuses |
US20050102246A1 (en) * | 2003-07-24 | 2005-05-12 | Movellan Javier R. | Weak hypothesis generation apparatus and method, learning apparatus and method, detection apparatus and method, facial expression learning apparatus and method, facial expression recognition apparatus and method, and robot apparatus |
US20060045337A1 (en) * | 2004-08-26 | 2006-03-02 | Microsoft Corporation | Spatial recognition and grouping of text and graphics |
US20060062451A1 (en) * | 2001-12-08 | 2006-03-23 | Microsoft Corporation | Method for boosting the performance of machine-learning classifiers |
US20060088207A1 (en) * | 2004-10-22 | 2006-04-27 | Henry Schneiderman | Object recognizer and detector for two-dimensional images using bayesian network based classifier |
US20070047822A1 (en) * | 2005-08-31 | 2007-03-01 | Fuji Photo Film Co., Ltd. | Learning method for classifiers, apparatus, and program for discriminating targets |
US20070086660A1 (en) * | 2005-10-09 | 2007-04-19 | Haizhou Ai | Apparatus and method for detecting a particular subject |
US20070223790A1 (en) * | 2006-03-21 | 2007-09-27 | Microsoft Corporation | Joint boosting feature selection for robust face recognition |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TW200539046A (en) | 2004-02-02 | 2005-12-01 | Koninkl Philips Electronics Nv | Continuous face recognition with online learning |
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060062451A1 (en) * | 2001-12-08 | 2006-03-23 | Microsoft Corporation | Method for boosting the performance of machine-learning classifiers |
US20040179719A1 (en) * | 2003-03-12 | 2004-09-16 | Eastman Kodak Company | Method and system for face detection in digital images |
US20050013479A1 (en) * | 2003-07-16 | 2005-01-20 | Rong Xiao | Robust multi-view face detection methods and apparatuses |
US20050102246A1 (en) * | 2003-07-24 | 2005-05-12 | Movellan Javier R. | Weak hypothesis generation apparatus and method, learning apparatus and method, detection apparatus and method, facial expression learning apparatus and method, facial expression recognition apparatus and method, and robot apparatus |
US20060045337A1 (en) * | 2004-08-26 | 2006-03-02 | Microsoft Corporation | Spatial recognition and grouping of text and graphics |
US20060088207A1 (en) * | 2004-10-22 | 2006-04-27 | Henry Schneiderman | Object recognizer and detector for two-dimensional images using bayesian network based classifier |
US20070047822A1 (en) * | 2005-08-31 | 2007-03-01 | Fuji Photo Film Co., Ltd. | Learning method for classifiers, apparatus, and program for discriminating targets |
US20070086660A1 (en) * | 2005-10-09 | 2007-04-19 | Haizhou Ai | Apparatus and method for detecting a particular subject |
US20070223790A1 (en) * | 2006-03-21 | 2007-09-27 | Microsoft Corporation | Joint boosting feature selection for robust face recognition |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8605998B2 (en) | 2011-05-06 | 2013-12-10 | Toyota Motor Engineering & Manufacturing North America, Inc. | Real-time 3D point cloud obstacle discriminator apparatus and associated methodology for training a classifier via bootstrapping |
US8799201B2 (en) | 2011-07-25 | 2014-08-05 | Toyota Motor Engineering & Manufacturing North America, Inc. | Method and system for tracking objects |
US9207760B1 (en) * | 2012-09-28 | 2015-12-08 | Google Inc. | Input detection |
US9836851B2 (en) * | 2013-06-25 | 2017-12-05 | Chung-Ang University Industry-Academy Cooperation Foundation | Apparatus and method for detecting multiple objects using adaptive block partitioning |
US20160110882A1 (en) * | 2013-06-25 | 2016-04-21 | Chung-Ang University Industry-Academy Cooperation Foundation | Apparatus and method for detecting multiple objects using adaptive block partitioning |
US9842280B2 (en) * | 2015-11-04 | 2017-12-12 | Omnivision Technologies, Inc. | System and method for evaluating a classifier implemented within an image signal processor |
CN106650782A (en) * | 2015-11-04 | 2017-05-10 | 豪威科技股份有限公司 | System and method for evaluating a classifier implemented within an image signal processor |
TWI615809B (en) * | 2015-11-04 | 2018-02-21 | 豪威科技股份有限公司 | System and method for evaluating a classifier implemented within an image signal processor |
CN108764030A (en) * | 2018-04-17 | 2018-11-06 | 中国地质大学(武汉) | A kind of Falls in Old People detection method, equipment and storage device |
US11200454B1 (en) * | 2018-10-17 | 2021-12-14 | Objectvideo Labs, Llc | People selection for training set |
US10552299B1 (en) | 2019-08-14 | 2020-02-04 | Appvance Inc. | Method and apparatus for AI-driven automatic test script generation |
US10628630B1 (en) | 2019-08-14 | 2020-04-21 | Appvance Inc. | Method and apparatus for generating a state machine model of an application using models of GUI objects and scanning modes |
CN114090601A (en) * | 2021-11-23 | 2022-02-25 | 北京百度网讯科技有限公司 | Data screening method, device, equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
US8099373B2 (en) | 2012-01-17 |
Similar Documents
Publication | Title | Publication Date |
---|---|---|
US8099373B2 (en) | Object detector trained using a working set of training data | |
US20230108692A1 (en) | Semi-Supervised Person Re-Identification Using Multi-View Clustering | |
CN109783582B (en) | Knowledge base alignment method, device, computer equipment and storage medium | |
US11620578B2 (en) | Unsupervised anomaly detection via supervised methods | |
US7296018B2 (en) | Resource-light method and apparatus for outlier detection | |
US9202255B2 (en) | Identifying multimedia objects based on multimedia fingerprint | |
US20130173648A1 (en) | Software Application Recognition | |
US20230009121A1 (en) | Data Object Classification Using an Optimized Neural Network | |
CN110796154A (en) | Method, device and equipment for training object detection model | |
Sikder et al. | Application of rough set and decision tree for characterization of premonitory factors of low seismic activity | |
US20220253725A1 (en) | Machine learning model for entity resolution | |
US12271429B2 (en) | Quantifying and improving the performance of computation-based classifiers | |
US20220309292A1 (en) | Growing labels from semi-supervised learning | |
CN110781293B (en) | Validating training data of a classifier | |
US8301584B2 (en) | System and method for adaptive pruning | |
US12217490B2 (en) | Image processing apparatus, image processing method and non-transitory computer readable medium | |
CN111444816A (en) | Multi-scale dense pedestrian detection method based on fast RCNN | |
AU2021251463B2 (en) | Generating performance predictions with uncertainty intervals | |
US8108325B2 (en) | Method and system for classifying data in system with limited memory | |
US11645587B2 (en) | Quantizing training data sets using ML model metadata | |
US7949621B2 (en) | Object detection and recognition with bayesian boosting | |
CN112434730A (en) | GoogleNet-based video image quality abnormity classification method | |
CN102308307B (en) | Method for pattern discovery and recognition | |
Ali et al. | A review of calibration methods for biometric systems in forensic applications | |
US11100108B2 (en) | Inflationary segment approach to temporal data partitioning for optimized model scoring |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: MICROSOFT CORPORATION, WASHINGTON Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:XIAO, RONG;TANG, XIAO-OU;REEL/FRAME:021343/0616;SIGNING DATES FROM 20080211 TO 20080213 Owner name: MICROSOFT CORPORATION, WASHINGTON Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:XIAO, RONG;TANG, XIAO-OU;SIGNING DATES FROM 20080211 TO 20080213;REEL/FRAME:021343/0616 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
AS | Assignment |
Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034542/0001 Effective date: 20141014 |
|
FPAY | Fee payment |
Year of fee payment: 4 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 12 |