WO2002009024A1 - Identity Systems (Systemes d'identite)
- Publication number
- WO2002009024A1 (PCT/GB2001/003322)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- facial
- subject
- facial identification
- identification matrix
- biometric
Classifications
- H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
  - H04N13/204—Image signal generators using stereoscopic image cameras
  - H04N13/221—using a single 2D image sensor using the relative movement between cameras and objects
  - H04N13/239—using two 2D image sensors having a relative position equal to or related to the interocular distance
  - H04N13/243—using three or more 2D image sensors
  - H04N13/246—Calibration of cameras
  - H04N13/286—Image signal generators having separate monoscopic and stereoscopic modes
  - H04N2013/0081—Depth or disparity estimation from stereoscopic image signals
- G—PHYSICS; G06—COMPUTING; G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING; G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
  - G06V40/16—Human faces, e.g. facial parts, sketches or expressions
  - G06V40/161—Detection; Localisation; Normalisation
Definitions
- FIM Facial Identification Matrix
- Biometrics is the science of automatically identifying individuals based on their unique physiological and/or behavioural characteristics. Biometric information based on the unique characteristics of a person's face, iris, voice, fingerprints, signature, palm prints, hand prints, or DNA can all be used to authenticate a person's identity or establish an identity from a database.
- Facial biometric information has a number of advantages. Existing methods have a good level of reliability, and the information can be obtained quickly using non-intrusive techniques. Furthermore, the data is not sensitive to superficial facial features such as sunglasses and beards, primarily because the system can undertake training of the data whilst processing the image.
- One of the major methods of the first generation of facial biometric software, based on a "neural net", creates a template from a two-dimensional (2D) image of the subject's face; although this can be very accurate, both the angle at which the image is captured and the nature of the lighting are critical. Problems also arise with individuals having low skin contrast levels (e.g. people of Afro-Caribbean origin).
- The invention allows captured images to be rendered into 3D models and, depending on the number of cameras used, the model can be accurately rotated through 360 degrees.
- Our invention makes use of a 3D Biometric Engine. [The 3D Biometric Engine includes the following modules: a 3D Patch Engine, 3D Graph Engine, 3D Feature Selector Engine, 3D Indexing Engine, and a control module which controls these modules and the input and output functions of the system.]
- This takes data from a 3D camera system, which is passed to both a 2D and a separate 3D Biometric Engine; the 3D engine enables a correlated index key to be created.
- The present invention provides a process for producing a FIM [a FIM contains both 2D and 3D templates, indexing data and, optionally, a copy of the original captured image] which is more accurate than any currently existing biometric identifier.
- The present invention proposes a methodology for producing a FIM from a subject, which includes the features set out below.
- The invention provides the ability to detect motion at minute levels, down to 3 to 4 pixels, for example the partial blink of an eyelid.
- The invention is more accommodating of poor lighting conditions and weak skin contrast levels, and the range of angles through which a successful image can be captured is increased through the ability to correct angular displacement.
- The use of the two different kinds of biometric data significantly increases the overall accuracy of identification when measured against using either of the biometric methods individually.
- The accuracy of the matrix-generating process can be improved by obtaining a plurality (a "verification set") of facial images from the subject, generating the two kinds of biometric information for each successive image, and comparing each new set of information with the information previously generated until a predetermined correlation level is achieved.
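The "verification set" idea above can be sketched as a capture loop: keep acquiring images and comparing each new template with the previous one until a preset correlation level is reached. The similarity function, the 0.95 level and the frame data are illustrative assumptions, not values from the patent.

```python
# Toy sketch of the verification-set loop (assumed similarity metric).
def correlation(t1, t2):
    """Toy similarity between two fixed-length templates (1.0 = identical)."""
    diffs = sum(abs(a - b) for a, b in zip(t1, t2))
    return 1.0 / (1.0 + diffs)

def enrol(capture, required_level=0.95, max_images=10):
    """Capture templates until two successive ones correlate well enough."""
    previous = capture()
    for _ in range(max_images - 1):
        current = capture()
        if correlation(previous, current) >= required_level:
            return current          # stable template achieved
        previous = current
    return None                     # never converged

# Simulated capture source that settles on a stable template.
frames = iter([[1, 2, 9], [1, 2, 4], [1, 2, 3], [1, 2, 3]])
result = enrol(lambda: next(frames))
```

A real system would replace `capture` with the camera/biometric-engine pipeline; the loop structure is the point here.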
- The FIM may be stored together with the facial images which were used to generate it.
- The invention further provides a FIM which contains both a 3D facial biometric template and a 2D biometric template for a subject.
- Using the FIM, the invention further provides a process for determining the identity of a subject or for verifying the subject against a known FIM.
- The process compares the matrix obtained through the imaging process with a stored FIM, which contains 3D facial biometric data and a 2D biometric template derived therefrom, to determine whether the two matrices match.
- The identification matrix is small enough to be written to portable identification cards, added to a database along with numerous similar matrices, or transmitted electronically.
- The stored matrix may be included on a portable data carrier such as an identity card, a 2D barcode printout, a smart card or any portable medium capable of storing data.
- The verification process can be used to verify that the data carrier belongs to the individual presenting it.
- The accuracy of the verification process can be improved by repeatedly generating identification matrices from the subject and comparing them with the stored matrix until a predetermined correlation level is achieved.
- The stored FIM can be found in a database of similar matrices using the index key value of the FIM.
- The identity of an unknown individual can be ascertained by first finding all the stored index values which match that of the newly generated FIM and then comparing each of the stored matrices until a best match is found.
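The key-then-compare lookup above can be sketched as follows: fetch every stored FIM sharing the probe's index key (duplicates are allowed, e.g. identical twins), then compare templates to pick the best match. The record layout and the `match_score` function are illustrative assumptions.

```python
# Sketch of identification via the index key (assumed data layout).
def match_score(template_a, template_b):
    """Toy template comparison; higher means more similar."""
    return -sum(abs(a - b) for a, b in zip(template_a, template_b))

def identify(probe_key, probe_template, database):
    """Return the best-matching record among all FIMs with the same key."""
    candidates = [rec for rec in database if rec["key"] == probe_key]
    if not candidates:
        return None
    return max(candidates,
               key=lambda rec: match_score(rec["template"], probe_template))

db = [
    {"name": "twin_a", "key": "205b204", "template": [10, 11, 12]},
    {"name": "twin_b", "key": "205b204", "template": [10, 11, 19]},
    {"name": "other",  "key": "330b101", "template": [50, 51, 52]},
]
best = identify("205b204", [10, 11, 13], db)
```

Both twins share the key, so both are retrieved; only the template comparison separates them.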
- The invention allows for duplicate index key values.
- Figure 0: A high-level overview of the main modules forming the core of the invention.
- Figure 1: A high-level diagrammatic representation of the process for producing a FIM in accordance with the invention.
- Figure 2: A high-level diagrammatic representation of the process for verifying the identity of a subject using the matrix.
- Figure 3: A high-level diagrammatic representation of the process for identifying a subject using a FIM recalled from storage.
- A standard PC: 750 MHz processor, 128 MB of RAM, 10 gigabyte hard drive, 15 inch CRT or TFT LCD screen, keyboard, mouse and video grabber card.
- Microsoft Windows 2000 Professional operating system (the system requires a secure O/S).
- An approved video camera; for low-end systems a USB Web camera can be utilised.
- Lighting units as specified. (Basically, the lights illuminate the subject with light which is outside the spectrum visible to a human being.)
- An optional control device such as a touch screen may be utilised. If Process Step 7 (as described below) is implemented then a device such as a smart card reader, or a printer capable of outputting a 2D barcode, would be required.
- The enrolment process involves the following steps:
- Process Step 1: Image capture. This is the act of obtaining a digital image of the subject as a direct result of an input into the system from either a digital video camera or a Web camera. Depending on which methodology is used to obtain the third coordinate (required for the 3D biometric engine), either a single camera or a stereoscopic camera system is used. For details of the two methodologies available, see the entries below on pseudo-3D image calculation and stereoscopic 3D imaging. Whilst the image capture is being processed the system will normally display a copy of the captured image to the subject; however, in certain circumstances this will not be the case. Both the Web camera and the video camera provide a number of frames which are available for capture and utilisation by the system.
- The process monitors the quality of the image and assigns a value which relates to the acceptability of the image quality provided. In the event that the image quality is adversely affected by strong peripheral lighting, the subject is advised of the problem.
- A series of "live" images is preferably obtained from the subject, from which successive pairs of biometric identifiers are created as described. The newly formed identifiers are compared with previously stored identifiers until a perfect match is achieved or the verification level reaches a preset figure. The images can be taken at various angles covering the front of the subject.
- Process Step 2: The 2D and 3D images are created separately in order that they may be passed to the appropriate biometric engine.
- Either the 3D image is calculated from the single-camera system, or the 2D image is calculated from the images provided by the stereoscopic system.
- Process Step 3 The 2D data is passed to the 2D biometric engine which performs a number of tasks and results in the 2D biometric template.
- Process Step 4 The 3D data is passed to the 3D biometric engine which performs a number of tasks and results in the creation of a 3D biometric template.
- Process Step 5: Indexing. At this point data from the 3D biometric engine is used to create an index value which has a specific correlation to the features of the subject involved. [The system allows for duplicated index values; for example, identical twins who are truly identical may well produce a calculated value which is itself identical.] The two sets of data (the 2D and 3D templates) and the linking index values are then encapsulated together; this data set is known as a FIM.
- Process Step 6 The FIM produced is appended to a database.
- Process Step 7. (Optional) If required the FIM calculated can be output directly to a portable data storage product.
- The two best biometric identifiers are then converted into a binary format which is suitable for storage. These biometric identifiers are then stored as a FIM, along with the "best match" image of the subject and any other desired data. Since the size of the matrix is remarkably small, it can be recorded on a plastic or paper identity card, e.g. an airline boarding pass or a company ID card, a smart card or any suitable carrier.
- The matrix could, for example, be incorporated into a barcode in PDF417 format (capacity about 1200 bytes), a USB token, a memory stick or similar binary recording means. In the case of a barcode, the information can be printed using a laser printer for paper/cardboard or a commercial 150 dpi thermal printer for plastic cards. Other biometric, image or reference data may be included on the card as desired.
- The FIM may be transmitted rapidly by any communication means, for example Bluetooth, fax or email, to enable the FIM to be used at a remote location.
- The verification process involves the following steps:
- Process Step 1: The FIM or unique ID of the person to be verified is entered into the system. The method and type of input device by which data is entered can vary considerably, for example: PDF417 (2D barcode), smart cards, smart keys, proximity cards, keyboard entry, or even a floppy disk could be the means of triggering the system to verify the subject.
- The FIM does not actually have to be entered into the system; the input only needs to identify the location of the FIM, which must be available to the system. If a touch screen facility is provided, the image may be displayed on the touch screen so that the required facial image can be selected from a group of people by touching the appropriate face.
- Process Step 2 The 2D and 3D images are acquired as per step 1 of the enrolment. Depending on the application either a single or multi camera system would be utilised.
- Process Step 3: Image separation.
- The 2D and 3D images are created separately in order that they may be passed to the appropriate biometric engine.
- Either the 3D data is calculated from the single image (provided by a one-camera system), or the 2D data is calculated from the images provided by the stereoscopic system.
- The 2D data is passed to the 2D biometric engine, which performs a number of tasks resulting in the creation of a 2D biometric template.
- The 3D data is passed to the 3D biometric engine, which performs a number of tasks resulting in the creation of a 3D biometric template.
- Process Step 4 The 2D and 3D biometric engines are used to compare the data held in the FIM against the newly acquired image.
- The user, as part of the system setup, can set a predefined threshold for the accuracy of the system; this value is used in determining whether there is an acceptable match between the two sets of data.
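The decision in Process Step 4 can be sketched as a score fusion against a user-set threshold: each engine yields a similarity score, the scores are combined, and the result is compared with the threshold. The equal-weight fusion and the 0.8 default are assumptions; the patent does not specify how the two scores are combined.

```python
# Minimal sketch of the verification decision (assumed fusion rule).
def verify(score_2d, score_3d, threshold=0.8, weight_2d=0.5):
    """Fuse the 2D and 3D engine scores and compare against the threshold."""
    fused = weight_2d * score_2d + (1.0 - weight_2d) * score_3d
    return fused >= threshold

accepted = verify(0.9, 0.85)   # strong match on both engines
rejected = verify(0.9, 0.40)   # 3D engine disagrees, fused score too low
```

The threshold corresponds to the user-configurable accuracy setting described above.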
- Process Step 5 Dependent on the results of Process Step 4, an action or series of actions could be implemented, for example a trigger signal to open a door release.
- Process Step 6 In most circumstances a message will be output indicating the results of the verification. This might be through the means of a message displayed on a computer screen, an audible response or the illumination of a sign.
- The system can record the details of the verification process which has just been enacted in the form of an audit trail. This ability enables the verification process to form a major part of an automated time-and-attendance system.
- The system can also run a training module which updates the FIM with the latest acquired image of the subject. In the case of a failed verification, or where the input device failed a particular test (e.g. a valid-date check), the system may withhold the return of the input device or overwrite the device, effectively cancelling its operational use.
- Process Step 1 The 2D and 3D images are acquired as per step 1 of the enrolment. Depending on the application either a single or multi camera system would be utilised.
- Process Step 2: Image separation. At this point the 2D and 3D images are created separately in order that they may be passed to the appropriate biometric engine.
- Either the 3D data is calculated from the single image (provided by a one-camera system), or the 2D data is calculated from the images provided by the stereoscopic system.
- The 2D data is passed to the 2D biometric engine, which performs a number of tasks resulting in the 2D biometric template.
- The 3D data is passed to the 3D biometric engine, which performs a number of tasks resulting in the creation of a 3D biometric template.
- Process Step 4: Using the key value generated at Process Step 3, the system recovers all FIMs with that specific key value.
- The 2D and 3D biometric engines are used to compare the data held in the returned FIMs against the FIM of the newly acquired image.
- The user, as part of the system setup, can set a predefined threshold for the accuracy of the system; this value is used in determining whether there is an acceptable match between the two sets of data.
- Process Step 5: Dependent on the results of Process Step 4, an action or series of actions may be implemented, for example a trigger signal to open a door release. In most circumstances a message will be output indicating the results of the verification. This might be by means of a message displayed on a computer screen, an audible response or the illumination of a sign.
- The system can record the details of the identification process which has just been enacted to form an audit trail.
- The system can also run a training module which updates the FIM with the latest acquired image of the subject, thus helping to maintain an accurate collection of images of the subject.
- The system may withhold the return of the input device or overwrite the device, effectively cancelling its operational use.
- The enrolment and verification processes are not restricted to live CCTV for image capture.
- Digital images can be acquired by means of digital still cameras, laptop computers with camera attachments (i.e. a Web camera), a PDA hand-held barcode scanner with an image capture head, scans of printed documents or photographs, camera-enabled WAP phones, etc. Images acquired by these means would normally result in only the 2D biometric process being applied.
- Uses of the identification method include travel documents, banking, healthcare, social security, passports and immigration, education, the prison service, ATM machines, retail, secure access, document security, internet services and identification of football fans.
- Figure 4a shows a single-camera system where a projector is used to illuminate the subject with a calibrated grid; the frequency of illumination is outside that which a person is able to register.
- The subject is further illuminated by a balanced light source (i.e. the light falling on each side of the subject's face is balanced to reflect an equal level of light) providing light of a wavelength outside the visible spectrum.
- The camera is capable of switching, via software control, between colour and the wavelength(s) of the emitters used in the projector and lamps.
- Figure 4b uses two cameras; the angle between the cameras is known and fixed, and the lighting characteristics are the same as for figure 4a. The images from both cameras are fed simultaneously to the computer system.
- In figure 4c multiple cameras are used. At least one pair of cameras is used as described for figure 4b; the additional camera or cameras will normally be used to take colour images of the subject, in addition to providing extra calibration information for the stereoscopic imaging process.
- Alternatively, a single camera is used, meeting the range of wavelength sensitivity for the lighting and camera conditions as at figure 4a; the subject is partially surrounded by tracking which enables the camera to move at high speed from one side of the subject to the other, taking a number of images at predetermined points along the track.
- The lighting fix does not have to be applied in every situation; if a constant level of light is available then it may not need to be applied.
- Image capture is the process of obtaining a digital image of the subject, normally as a direct result of an input into the system from either a digital video camera or a Web camera. Either a single camera or a stereoscopic/multi-camera system is used (applying the appropriate methodology) to obtain the third coordinate required for the 3D biometric engine.
- A framing box is positioned around the face; in order to achieve the maximum clarity of the image, either a telescopic lens is used to zoom in on the subject or the image is digitally enlarged to fill the framing box.
- Process Step 3: The main process (see steps 3a and 3b below), like a software "read ahead" operation, sets an initial value which the main process can use as a comparator.
- Process Step 3a The captured image is passed for processing by the 2D biometric engine; the first template created is a master.
- Process Step 3b The template is passed back to the application including an assessment score and size of the template.
- Process Step 3 (repeated): A further n images of the subject are taken (the actual number is determined in the program setup; for example, 10 images may be taken of which only 5 are used in the process). The images are passed to the 2D biometric engine in order to determine that they are (a) valid template material, (b) of an acceptable score, and (c) of an acceptable template size.
- Process Step 4 The templates are sorted in order, based on their quality and their size; a predetermined number of templates are selected to be encapsulated into the FIM.
- Process Step 5 The selected templates are used to create a 2D person template which is then written to the application database.
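Process Steps 3 to 4 above amount to a filter-sort-select loop over candidate templates. The sketch below assumes illustrative acceptance thresholds and a "best score first, then smallest size" ordering; the patent leaves these concrete values to the program setup.

```python
# Sketch of template filtering and selection (assumed thresholds).
def select_templates(candidates, min_score=0.7, max_size=2048, keep=5):
    """Filter out unacceptable templates, then keep the best `keep`."""
    acceptable = [t for t in candidates
                  if t["valid"] and t["score"] >= min_score and t["size"] <= max_size]
    # Best score first; among equal scores prefer the smaller template.
    acceptable.sort(key=lambda t: (-t["score"], t["size"]))
    return acceptable[:keep]

candidates = [
    {"id": 1, "valid": True,  "score": 0.9,  "size": 900},
    {"id": 2, "valid": True,  "score": 0.6,  "size": 800},   # score too low
    {"id": 3, "valid": False, "score": 0.95, "size": 700},   # invalid material
    {"id": 4, "valid": True,  "score": 0.9,  "size": 850},
]
chosen = select_templates(candidates, keep=2)
```

The `chosen` templates would then be encapsulated into the FIM as in Process Step 4.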
- Process Step 1 Projection using non-visible light of a calibration grid
- The target subject is illuminated using a wavelength of the spectrum which is not visible to the naked eye. Included within the projected light is a calibration grid (horizontal and vertical lines at a known spacing) which, through the use of filters and an appropriate camera device which can detect the grid, is captured along with the image.
- The camera used must have both a high-resolution lens and a high-resolution image capture element (Charge Coupled Device (CCD) or low-noise CMOS), and must also be able to see the wavelength of light projected.
- CCD Charge Coupled Device
- CMOS Complementary Metal-Oxide-Semiconductor (low-noise variant)
- Process Step 2: The captured image contains the target subject overlaid with the lines of the grid. Where the horizontal and vertical lines of the calibration grid are overlaid on the contours of the subject's surface, the lines are deformed (appear to bend).
- Process Step 3 Preparation of image data.
- Two images are required: a standard image incorporating colour information, and a calibrated image which incorporates the standard image overlaid with the calibration grid.
- Process Step 4 Linear scan of image data.
- The standard image overlaid with the information from the generated calibration grid is then placed in 3D space (R3); this gives the effect of a plane in 3D space (with all z co-ordinates set to zero). Note that the generated calibration grid has the same resolution as the projected calibration grid.
- A linear scan of the image can now be performed. The process involves scanning along each of the horizontal calibration-grid lines, from left to right and top to bottom. For each pass of the scan the generated calibration grid is adjusted: at each intersecting point on the grid the corresponding picture element is checked in the image which incorporates both the image and the calibration grid. If the pixel information reveals a colour which is part of the calibration grid itself, the next scan point is processed; otherwise the referenced point is adjusted, until all checks have been processed.
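The effect of the linear scan can be illustrated in a much-simplified form: where the observed grid line is displaced from its expected (flat-plane) position, the displacement is converted into a z value for that point. The depth-per-pixel constant is an assumed calibration value standing in for the full adjustment procedure described above.

```python
# Simplified sketch of grid-displacement-to-depth (assumed calibration).
def scan_grid(expected_points, observed_points, depth_per_pixel=0.5):
    """Return (x, y, z) points; z derived from the grid-line displacement."""
    surface = []
    for (ex, ey), (ox, oy) in zip(expected_points, observed_points):
        displacement = oy - ey           # vertical bend of the grid line
        surface.append((ex, ey, displacement * depth_per_pixel))
    return surface

expected = [(0, 10), (1, 10), (2, 10)]   # flat projected grid line
observed = [(0, 10), (1, 14), (2, 10)]   # line bends where it crosses the face
points = scan_grid(expected, observed)
```

Flat regions yield z = 0, matching the initial plane with all z co-ordinates set to zero.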
- Process Step 5 Applying image texture data.
- A specific area of interest (nose, mouth, etc.) in the standard image can be used to map across the surface area of the patch; normally, however, the area of the face from the forehead down to the bottom of the chin would be used.
- Values for u and v now range from 0.0 through to 1.0. These values can be scaled to the device co-ordinates; however, most 3D texture-mapping hardware uses unit values ranging from 0.0 to 1.0 directly.
- Figure 7 shows the layout of a pair of cameras for use in stereoscopic image capture.
- The principles of how this process functions are well known (see Mori, K., Kidode, M. & Asada, H. (1973). An iterative prediction and correction method for automatic stereo comparison. Comp. Graph. Image Process).
- The figure indicates how two cameras can be used to determine depth information for a subject. By taking an image point from camera 1 (the point x1,y1), searching for the same point in the image captured by camera 2 (the point x2,y2) and using image block cross-correlation, it is possible to calculate the depth of this point in the real world.
- The stereo search has two dimensions (x2,y2). Whilst there are different methods for calculating depth information, the block cross-correlation method is preferred (even though costly in processor time) as it provides a more reliable way of obtaining the z1 values.
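The block-matching idea can be sketched as follows: for a block around (x1, y) in the left image, search along the scanline of the right image for the best-matching block, then convert the disparity to depth via z = f * B / d. This uses a sum-of-squared-differences block score rather than full normalised cross-correlation, and the focal length and baseline are assumed calibration values.

```python
# Hedged sketch of block-matching stereo depth (SSD stand-in for correlation).
def block_ssd(left, right, x1, y, x2, half=1):
    """Sum of squared differences between two blocks (lower = better match)."""
    total = 0
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            diff = left[y + dy][x1 + dx] - right[y + dy][x2 + dx]
            total += diff * diff
    return total

def depth_at(left, right, x1, y, focal=500.0, baseline=0.06, half=1, max_disp=4):
    """Search along the scanline for the best match; depth = f * B / disparity."""
    best_x2 = min(range(max(half, x1 - max_disp), x1 + 1),
                  key=lambda x2: block_ssd(left, right, x1, y, x2, half))
    disparity = x1 - best_x2
    return focal * baseline / disparity if disparity else float("inf")

# Tiny synthetic pair: the right image is the left image shifted by 2 pixels,
# i.e. a uniform disparity of 2 everywhere.
left = [[col for col in range(8)] for _ in range(5)]
right = [[col + 2 for col in range(8)] for _ in range(5)]
depth = depth_at(left, right, 5, 2)
```

Repeating this over every pixel yields the x,y,z data set passed to the other modules.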
- The information obtained is used to build up a data set of the x,y,z coordinates for use by other modules within the application.
- Process Step 1 Obtain Working Resolution.
- The size of the patch array (its resolution) is obtained directly from the resolution provided by the image capture device; effectively this establishes the working parameters of the patch.
- The unit scale of the patch is 2 by 2 (represented as a true mathematical model).
- A scaling factor can be applied to create a true (real-world) scale.
- The minimum working resolution is 90 units; thus the scale is 90/2, which is 45 units.
- Process Step 2: Initialise patch variables. Two variables are required in order to specify the points in the patch; these are commonly known as u and v. The variables u and v are the working variables required for the generation of the 3D co-ordinates. At initialisation both u and v are set to minus one, representing the top-left co-ordinate of the patch. The top-left coordinate serves as the starting point for the linear scan process. Further variables are defined in order for the process to be completed.
- Process Step 3: Generation of patch constraint points.
- This process generates 3D point components, where the x and y components are calculated and the third component (z) is set to the value zero.
- the patch at this stage represents a flat plane in 3D space, standing upright spanning across the Y axis, with a width that spans across the X axis.
- This process also serves as an initialisation process for the patch array.
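The initialisation described in Process Steps 2 and 3 can be sketched as generating a flat plane of (x, y, 0) points, with u and v running from -1 to +1 across the patch. The 5 by 5 resolution is illustrative; the working resolution would come from the capture device as described above.

```python
# Sketch of flat-patch generation (assumed 5x5 resolution for illustration).
def generate_flat_patch(resolution):
    """Return a resolution x resolution grid of (x, y, 0.0) points on [-1, 1]^2."""
    step = 2.0 / (resolution - 1)       # the patch spans 2 units in u and in v
    patch = []
    for row in range(resolution):
        v = -1.0 + row * step
        patch.append([(-1.0 + col * step, v, 0.0) for col in range(resolution)])
    return patch

patch = generate_flat_patch(5)
```

Every z component starts at zero; the imported constraint data (Process Step 4) then deforms the plane into the facial surface.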
- Process Step 4. Import of constraint data. Constraint information can be imported into the process from the data output by the 3D image capture engine in the form of 3D co-ordinates. The 3D coordinates now act as constraint points to be used in the generation of the curves; this provides the ability to automate the process of patch curve generation.
- Process Step 5 Generation of Patch Curves.
- The generation of patch curves is accomplished by taking a collection of constraint points and calculating each part of the curve.
- The smoothness of the curve is variable, depending on the level of smoothness required. This process is applied across the u array of constraint points, with the results feeding back into the patch array; on completion of the u array being filled, the process is repeated for the v array of constraint points.
- The final result is a grid of curves that breaks the patch down into quadrants, with all curves interconnecting.
- Process Step 6 Storage of 3D template information via a database.
- The final process is the creation of a 3D template, which takes the form of a FIM 3D Class; this is then stored in a database. Information held in the database can be recalled to be rendered on the display device.
- The application module has the ability to rotate the model about a central axis by +/- 15 degrees.
- The graph-matching module has been developed at the University of York (UK); it is an advanced system for finding solutions to constraint-based problems.
- The solution avoids the exponential complexity of the problem by using methods based on those originally proposed by Hancock and Kittler.
- This method was extended and developed as neural relaxation by Austin and Turner; further development introduced correlation matrix memories as a calculation engine for the system.
- This technique and its extensions are embodied in the Advanced Uncertain Reasoning Architecture (AURA) graph matching engine (from the University of York) versions 1.0 onwards.
- AURA Advanced Uncertain Reasoning Architecture
- The engine is used to delimit a face from the surrounding background and to define the key features of the face, for example the eyes, mouth, nose, etc.
- This element of the invention uses a face / facial-characteristics detection algorithm which is able to handle a wide variety of variations in colour images; the engine also makes use of pixel depth data from information supplied by the Patch Engine.
- The use of colour information can simplify the task of finding faces in different environmental lighting conditions.
- The process for locating facial images/facial characteristics is as follows:
- Modelling skin tone colour involves isolating the skin colour space and using the cluster associated with the skin colour in this space.
- ROI Region of Interest
- A 3D depth filter is used to aid the segmentation process. It follows that the reverse is also true: the skin tone information can be mapped to the Patch Engine output as a fourth component.
- The invention also applies the method of taking the colour difference between adjacent pixels, proportional to the colour difference in the original image.
- ROI detection relies on skin tones, which allow areas of interest (eye, nose, etc.) to be processed. This process does not isolate a face in a scene but isolates areas of interest.
- By comparing TRUE positives with FALSE positives and various other percentage results, a probability face graph can be constructed and a decision threshold implemented.
- A neural net is then trained with results taken from the ROI process; both FALSE and TRUE results are used, resulting in face detection in a scene.
- The index module works with information provided from several modules, the major ones being the delimitation of facial features and the 3D depth calculations. Common facial features, for example eyes, eye to ear edge, nose to closed jaw, depth of eyebrows, etc., can be used as elements of a compound index. The element values would be a normalised set of data based on five or more images of the subject.
- The delimitation-of-facial-features module has to determine whether a true element value is used or, if there is a concern regarding the accuracy of the element, whether an average value should be substituted. The more elements used, the greater the ability to separate out individuals and the lower the risk of using average values for some of the elements in the key. Each element has a one-bit Boolean flag to indicate whether a real or substituted value has been supplied for the element.
- For example, with L.Eye (205), R.Eye (204), Nose to Closed-Jaw (650) and Eye to Eye (200), the key would look like 205b204b650B200b, where 650 is a substituted value.
- Using 30 or more elements creates a highly selective and searchable index; identification can be against any part(s) of the key or the whole key.
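The compound-key encoding above can be sketched directly: each element is a normalised measurement followed by its one-bit flag, rendered here as a lower-case "b" for a real value and an upper-case "B" for a substituted average, matching the 205b204b650B200b example. The element names and values are illustrative.

```python
# Sketch of compound index-key construction (illustrative feature values).
def build_index_key(elements):
    """elements: list of (value, substituted) pairs -> e.g. '205b204b650B200b'."""
    return "".join(f"{value}{'B' if substituted else 'b'}"
                   for value, substituted in elements)

key = build_index_key([
    (205, False),   # L.Eye, real measurement
    (204, False),   # R.Eye, real measurement
    (650, True),    # Nose to closed-jaw, substituted average value
    (200, False),   # Eye to eye, real measurement
])
```

Because the key is a plain string of fixed-order elements, partial matching against any slice of it is straightforward, as the text notes.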
- the invention will identify a subject, relative to a specific camera which is mounted at a known position in the environment, with the subjects face being at various angles.
- the subject's head is first aligned to 0 degrees, that is, perpendicular to the camera from which the subject will be tracked.
- the first enrolment image is taken; the subject then rotates their head by a number of degrees and a second enrolment image is taken. This procedure is repeated until the subject's head is at right angles (90 degrees) to the camera.
- once the enrolment procedure is completed, there will be a collection of enrolled images spanning 90 degrees either side of a frontal head view.
- a new template is created holding a number of images representing the subject over an angular range of 180 degrees.
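The enrolment geometry above can be sketched as follows. The 15-degree step and the nearest-angle template lookup are illustrative assumptions; the patent says only that the head rotates "by a number of degrees" between images:

```python
def enrolment_angles(step_degrees=15):
    """Angles at which enrolment images are captured.

    0 degrees is perpendicular to the tracking camera; the head rotates
    in steps to 90 degrees on each side, so the resulting template
    covers an angular range of 180 degrees.
    """
    one_side = list(range(0, 91, step_degrees))   # 0..90 inclusive
    other_side = [-a for a in one_side[1:]]       # mirror, skip the 0 view
    return sorted(other_side) + one_side

def nearest_template(angles, head_pose):
    """Select the enrolled image whose capture angle best matches the
    subject's current head pose relative to the camera."""
    return min(angles, key=lambda a: abs(a - head_pose))
```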
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Collating Specific Patterns (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2001272667A AU2001272667A1 (en) | 2000-07-25 | 2001-07-24 | Identity systems |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB0018161.0 | 2000-07-25 | ||
GBGB0018161.0A GB0018161D0 (en) | 2000-07-25 | 2000-07-25 | Identity systems |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2002009024A1 (fr) | 2002-01-31 |
Family
ID=9896258
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/GB2001/003322 WO2002009024A1 (fr) | 2000-07-25 | 2001-07-24 | Identity systems |
Country Status (3)
Country | Link |
---|---|
AU (1) | AU2001272667A1 (fr) |
GB (2) | GB0018161D0 (fr) |
WO (1) | WO2002009024A1 (fr) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0353385A (ja) * | 1989-07-21 | 1991-03-07 | Nippon Denki Sekiyuritei Syst Kk | 特徴抽出装置 |
US5801763A (en) * | 1995-07-06 | 1998-09-01 | Mitsubishi Denki Kabushiki Kaisha | Face image taking device |
DE19712844A1 (de) * | 1997-03-26 | 1998-10-08 | Siemens Ag | Method for the three-dimensional identification of objects |
2000
- 2000-07-25 GB GBGB0018161.0A patent/GB0018161D0/en not_active Ceased
- 2000-08-10 GB GBGB0019555.2A patent/GB0019555D0/en not_active Ceased

2001
- 2001-07-24 WO PCT/GB2001/003322 patent/WO2002009024A1/fr active Application Filing
- 2001-07-24 AU AU2001272667A patent/AU2001272667A1/en not_active Abandoned
Non-Patent Citations (4)
Title |
---|
LOPEZ R ET AL: "3D HEAD POSE COMPUTATION FROM 2D IMAGES: TEMPLATES VERSUS FEATURES", PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING. (ICIP). WASHINGTON, OCT. 23 - 26, 1995, LOS ALAMITOS, IEEE COMP. SOC. PRESS, US, vol. 2, 23 October 1995 (1995-10-23), pages 599 - 602, XP000624040, ISBN: 0-7803-3122-2 * |
PATENT ABSTRACTS OF JAPAN vol. 015, no. 205 (P - 1206) 27 May 1991 (1991-05-27) * |
TORU ABE ET AL: "AUTOMATIC IDENTIFICATION OF HUMAN FACES BY 3-D SHAPE OF SURFACES-USING VERTICES OF B-SPLINE SURFACE", SYSTEMS & COMPUTERS IN JAPAN, SCRIPTA TECHNICA JOURNALS. NEW YORK, US, vol. 22, no. 7, 1991, pages 96 - 104, XP000259345, ISSN: 0882-1666 * |
YACOOB Y ET AL: "LABELING OF HUMAN FACE COMPONENTS FROM RANGE DATA", CVGIP IMAGE UNDERSTANDING, ACADEMIC PRESS, DULUTH, MA, US, vol. 60, no. 2, 1 September 1994 (1994-09-01), pages 168 - 178, XP000484201, ISSN: 1049-9660 * |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2396002A (en) * | 2002-10-09 | 2004-06-09 | Canon Kk | Gaze tracking system |
GB2396002B (en) * | 2002-10-09 | 2005-10-26 | Canon Kk | Gaze tracking system |
US7158097B2 (en) | 2002-10-09 | 2007-01-02 | Canon Kabushiki Kaisha | Gaze tracking system |
EP1471455A2 (fr) * | 2003-04-15 | 2004-10-27 | Nikon Corporation | Digital camera |
EP1471455A3 (fr) * | 2003-04-15 | 2005-05-25 | Nikon Corporation | Digital camera |
US9147106B2 (en) | 2003-04-15 | 2015-09-29 | Nikon Corporation | Digital camera system |
WO2005008570A3 (fr) * | 2003-05-27 | 2005-05-06 | Honeywell Int Inc | Facial identification verification using three-dimensional modelling |
US7421097B2 (en) | 2003-05-27 | 2008-09-02 | Honeywell International Inc. | Face identification verification using 3 dimensional modeling |
WO2005064525A1 (fr) * | 2003-12-30 | 2005-07-14 | Kield Martin Kieldsen | Method and apparatus for providing information relating to a body part of a person, in particular for identifying the person |
WO2005098744A3 (fr) * | 2004-04-06 | 2007-03-01 | Rf Intelligent Systems Inc | Handheld biometric computing and 2D/3D image-capture device |
WO2005096768A3 (fr) * | 2004-04-06 | 2007-01-04 | Rf Intelligent Systems Inc | Combined 2D/3D facial authentication |
US7599531B2 (en) | 2004-08-24 | 2009-10-06 | Tbs Holding Ag | Method and arrangement for optical recording of biometric finger data |
US7606395B2 (en) | 2004-08-25 | 2009-10-20 | Tbs Holding Ag | Method and arrangement for optical recording of data |
WO2006070099A3 (fr) * | 2004-12-23 | 2006-09-21 | Sagem Defense Securite | Method for identifying an individual from characteristics of the individual, with fraud detection |
FR2880158A1 (fr) * | 2004-12-23 | 2006-06-30 | Sagem | Method for identifying an individual from characteristics of the individual, with fraud detection |
US8823864B2 (en) | 2007-06-28 | 2014-09-02 | Sony Corporation | Image capturing apparatus and associated methodology for auto-focus and facial detection |
US8675926B2 (en) | 2010-06-08 | 2014-03-18 | Microsoft Corporation | Distinguishing live faces from flat surfaces |
WO2015086964A1 (fr) * | 2013-12-12 | 2015-06-18 | Orange | Image-capture installation for playback with an impression of relief |
FR3015161A1 (fr) * | 2013-12-12 | 2015-06-19 | Orange | Image-capture installation for playback with an impression of relief |
WO2017060329A1 (fr) | 2015-10-06 | 2017-04-13 | Janssen Vaccines & Prevention B.V. | Methods for preventing plastic-induced degradation of biologicals |
CN113228037A (zh) * | 2018-12-18 | 2021-08-06 | Advanced New Technologies Co., Ltd. | Creating an iris identifier to reduce the search space of a biometric recognition system |
CN113228037B (zh) * | 2018-12-18 | 2024-03-15 | Advanced New Technologies Co., Ltd. | Creating an iris identifier to reduce the search space of a biometric recognition system |
Also Published As
Publication number | Publication date |
---|---|
AU2001272667A1 (en) | 2002-02-05 |
GB0019555D0 (en) | 2000-09-27 |
GB0018161D0 (en) | 2000-09-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP3271750B2 (ja) | Iris identification code extraction method and apparatus, iris recognition method and apparatus, and data encryption apparatus | |
Tirunagari et al. | Detection of face spoofing using visual dynamics | |
EP3807792B1 (fr) | Authenticating the identity of a person | |
US7715596B2 (en) | Method for controlling photographs of people | |
US7983451B2 (en) | Recognition method using hand biometrics with anti-counterfeiting | |
Kose et al. | Countermeasure for the protection of face recognition systems against mask attacks | |
Miao et al. | A hierarchical multiscale and multiangle system for human face detection in a complex background using gravity-center template | |
Benalcazar et al. | Synthetic id card image generation for improving presentation attack detection | |
EP1629415B1 (fr) | Facial identification verification using three-dimensional modelling | |
Ferrara et al. | Face image conformance to iso/icao standards in machine readable travel documents | |
US20090161925A1 (en) | Method for acquiring the shape of the iris of an eye | |
US20150347833A1 (en) | Noncontact Biometrics with Small Footprint | |
Galbally et al. | A review of iris anti-spoofing | |
EP0731426A2 (fr) | Method for encrypting a fingerprint on an identity card | |
WO2002009024A1 (fr) | Identity systems | |
Benlamoudi | Multi-modal and anti-spoofing person identification | |
Singh et al. | FDSNet: Finger dorsal image spoof detection network using light field camera | |
JPH11283033A (ja) | 画像識別のための特徴量の利用方法およびそのプログラムを格納した記録媒体 | |
CN111429156A (zh) | 一种手机使用的人工智能识别系统及其应用 | |
Rani et al. | Pre filtering techniques for face recognition based on edge detection algorithm | |
Colombo et al. | Recognizing faces in 3d images even in presence of occlusions | |
Grabovskyi et al. | Facial recognition with using of the microsoft face API Service | |
JP4670619B2 (ja) | Biometric information matching system | |
CN111160137 (zh) | Intelligent service processing device based on biological 3D information | |
CN111291586 (zh) | Liveness detection method and apparatus, electronic device, and computer-readable storage medium | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
REG | Reference to national code |
Ref country code: DE Ref legal event code: 8642 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: COMMUNICATION UNDER RULE 69 EPC (EPO FORM 1205A OF 04.06.2003) |
|
122 | Ep: pct application non-entry in european phase | ||
NENP | Non-entry into the national phase |
Ref country code: JP |