US8965069B2 - Three dimensional minutiae extraction in three dimensional scans - Google Patents
- Publication number: US8965069B2 (application US13/631,041, published as US201213631041A)
- Authority: US (United States)
- Prior art keywords: identification, dimensional image, minutiae, dimensional, analyzing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related, expires
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/12—Fingerprints or palmprints
- G06V40/1347—Preprocessing; Feature extraction
- G06V40/1353—Extracting features related to minutiae or pores
- G06K9/00073—
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/98—Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
- G06V10/993—Evaluation of the quality of the acquired pattern
Definitions
- the invention is generally related to analyzing a three dimensional image to identify three dimensional identification minutiae located on the three dimensional image.
- An identification feature (e.g., a fingerprint, palm print, or hand print) may be captured as an image and analyzed.
- An identification feature may be analyzed to determine a person's identity by comparing the captured fingerprint to a database including images of fingerprints, and a person's identity may be confirmed by matching the captured fingerprint to a previously captured image of the person's fingerprint.
- the identification feature includes a plurality of identification minutiae located at unique positions on the identification feature, where the unique locations of the identification minutiae may be identified and compared to identification minutiae on the previously captured identification feature to determine a match (i.e., determine the person's identity and/or confirm the person's identity).
- Common identification minutiae include ridge ends (for example, where a ridge of a fingerprint ends, referred to as a “termination”) and ridge splits (for example, where a fingerprint ridge splits into two ridges, referred to as a “bifurcation”).
- A fingerprint typically comprises a plurality of ridges and a plurality of proximate ravines (i.e., valleys).
- Conventional systems analyze a two dimensional image of the fingerprint to identify the identification minutiae, a process generally referred to as minutiae extraction.
- the two dimensional image of the identification feature may be analyzed to identify identification minutiae.
- the extracted identification minutiae may be compared by the computer to one or more identification minutiae associated with an identity (i.e., a particular person) to determine if the captured fingerprint also corresponds to the identity.
- the extracted identification minutiae are used to determine a person's identity by determining if they match identification minutiae previously associated with the person.
- Conventionally, identification image acquisition was based on contact. For example, a finger would be placed on a fingerprint scanner, and a two dimensional image of the fingerprint would be captured by the fingerprint scanner.
- Placing the identification feature (e.g., a finger, a hand, etc.) on a scanning surface introduces distortions and deformations into the captured two dimensional image. For example, pressing a finger onto a scanning surface may cause the finger to flatten and cause the ridges of the fingerprint to distort and deform in unpredictable ways.
- the distortions and deformations associated with two dimensional capture may lead to problems analyzing the two dimensional image to identify the identification minutiae on the two dimensional image.
- the errors in correctly identifying the identification minutiae may lead to incorrect identification and/or confirmation of a person's identity.
- pressing a finger to a surface of a scanner may cause latent fingerprint problems, where a trace of the fingerprint remains on the surface of the scanner, which may lead to forgery and hygiene problems.
- Other issues, such as degraded or partial images caused by improper fingerprint placement, smearing, or sensor noise from a tear on a surface coating, often occur in conventional two dimensional identification systems. All of these issues may in turn lead to misidentification and/or failure to determine an identity.
- Three dimensional images of identification features have been developed, such that contact is no longer required to capture the image.
- a person may place a finger into a three dimensional scanner and a fingerprint may be captured in a three dimensional image without the person pressing the finger to a surface.
- three dimensional images of fingerprints, palm-prints, hand-prints, and other such images may be captured without significant distortions or deformations of the identification feature being captured.
- three dimensional identification image capture systems address other issues including hygiene, latent fingerprints, etc.
- three dimensional image capturing systems may quickly capture images of large areas.
- the captured three dimensional images are typically projected into a two dimensional image, and the two dimensional image is then analyzed using conventional two dimensional methods to identify identification minutiae.
- While capturing the identification feature in a three dimensional image reduces deformations and distortions associated with capturing an image of the identification feature, projecting the three dimensional image into a two dimensional image introduces deformations and distortions into the two dimensional image. As such, in these conventional systems, errors in identifying the identification minutiae, determining an identification, and/or confirming an identification may still occur.
- the invention addresses these and other problems associated with the prior art by using a system and method that identify three dimensional identification minutiae on a three dimensional image of a biometric identification feature, where it is understood that a three dimensional image stores data representing an object in a three dimensional space, and typically represents locations of the features on the object in a three dimensional space (e.g., using x-y-z coordinates, etc.).
- the invention directly analyzes the three dimensional image without projecting the image to a two dimensional image, thereby reducing distortion and deformations associated with projecting the image.
- the invention projects the three dimensional image to a two dimensional image and adjusts the projected two dimensional image based on one or more determined texture characteristics of the three dimensional image to reduce distortion and deformations associated with projecting the image.
- a computer including a processor and memory may receive a three dimensional image of a fingerprint, and the computer may directly analyze the three dimensional image to identify identification minutiae included in the three dimensional image of the fingerprint. After identifying three dimensional minutiae in a three dimensional image, the three dimensional image and/or extracted identification minutiae may be compared to one or more identification images (i.e., images of previously captured fingerprints and/or extracted identification minutiae associated with a specific person) to determine the identity of the person and/or confirm the identity of the person.
- analysis of the three dimensional image includes analyzing one or more three dimensional characteristics of the identification feature captured in the three dimensional image to identify the identification minutiae.
- a computer may analyze the three dimensional image to determine one or more curvatures of the captured identification feature, one or more vector directions associated with the captured identification feature, one or more depths associated with the captured identification feature, and/or other such three dimensional characteristics.
- the computer may extract a plurality of identification minutiae from the three dimensional image based at least in part on the determined three dimensional characteristics.
- a computer loads a three dimensional image of an identification feature.
- the computer analyzes at least a portion of the three dimensional image of the identification feature to identify ridge point locations on the portion of the three dimensional image.
- the computer analyzes each ridge point location to identify identification minutiae on the portion of the three dimensional image.
- FIG. 1 is a diagrammatic illustration of a computer configured to analyze a three dimensional image of an identification feature consistent with embodiments of the invention to identify identification minutiae on the three dimensional image;
- FIG. 2 is a flowchart illustrating a sequence of operations executable by a processor of the computer of FIG. 1 to thereby cause the processor to perform steps necessary to analyze a three dimensional image of an identification feature to identify a plurality of identification minutiae.
- FIG. 3 is a flowchart illustrating a sequence of operations executable by a processor of the computer of FIG. 1 to thereby cause the processor to perform steps necessary to convert the three dimensional image to a two dimensional image and analyze the two dimensional image to identify identification minutiae thereon.
- FIG. 4 is an example of a three dimensional image of an identification feature in the form of a fingerprint.
- FIG. 5 is an example of the three dimensional image of FIG. 4 after the computer of FIG. 1 performs a smoothing operation thereon according to the operations shown in FIG. 3 .
- FIG. 6 is an example of a two dimensional image generated by the computer of FIG. 1 based on the three dimensional image of FIG. 4 .
- FIG. 7 is an example of an enhanced two dimensional image based on the two dimensional image of FIG. 6 .
- FIG. 8 illustrates identified identification minutiae on the enhanced two dimensional image of FIG. 7 .
- FIG. 9 is a flowchart illustrating a sequence of operations executable by a processor of the computer of FIG. 1 to thereby cause the processor to perform steps necessary to identify identification minutiae on the three dimensional image of FIG. 4 .
- FIG. 10 is a flowchart illustrating a sequence of operations executable by a processor of the computer of FIG. 1 to thereby cause the processor to perform steps necessary to identify identification minutiae on the three dimensional image of FIG. 4 .
- FIG. 11 is an example of the three dimensional image of FIG. 4 after the computer performs one or more filtering operations.
- FIG. 12 is an example of the three dimensional image of FIG. 4 after the computer performs one or more filtering operations.
- FIG. 13 illustrates the maximum principal curvature for vertices of a mesh on the three dimensional image of FIG. 12 .
- FIG. 14 illustrates the minimum principal curvature for vertices of the mesh on the three dimensional image of FIG. 12 .
- FIG. 15 illustrates the maximum principal direction for vertices of the mesh of the three dimensional image of FIG. 12 .
- FIG. 16 illustrates the minimum principal direction for vertices of the mesh of the three dimensional image of FIG. 12 .
- FIG. 17 illustrates determined ridge and ravine vertices of a mesh fitted to the three dimensional image of FIG. 12 .
- FIG. 18 illustrates determined ridge and ravine lines for the three dimensional image of FIG. 4 .
- FIG. 19 illustrates an example of a ridge termination type of identification minutiae.
- FIG. 20 illustrates an example of a ridge bifurcation type of identification minutiae.
- FIG. 21 illustrates an example of identified identification minutiae for the three dimensional image of FIG. 4 .
- FIG. 22 illustrates an example of a ridge termination type identification minutiae and an example of a ravine bifurcation type identification minutiae.
- FIG. 23 illustrates an example of a ridge bifurcation type identification minutiae and an example of a ravine termination type identification minutiae.
- FIG. 24 is a flowchart illustrating a sequence of operations executable by a processor of the computer of FIG. 1 to thereby cause the processor to perform steps necessary to generate a quality map for the three dimensional image of FIG. 4 .
- FIG. 25 is an example of a region map for the three dimensional image of FIG. 4 .
- FIG. 26 is an example of a low depth map for the three dimensional image of FIG. 4 .
- FIG. 27 is an example of a low flow map for the three dimensional image of FIG. 4 .
- FIG. 28 is an example of a quality map for the three dimensional image of FIG. 4 .
- FIG. 29 is an example of the quality map of FIG. 28 applied to the identified identification minutiae shown in FIG. 21 .
- Embodiments consistent with the invention analyze a three dimensional scan (i.e., a three dimensional image) of an identification feature to identify a plurality of identification minutiae. While such three dimensional scans are referred to herein as a three dimensional image, the invention is not so limited. In general, the three dimensional image/scan may be any data that provides a representation of a surface of an identification feature.
- a three dimensional image of an identification feature is converted to a two dimensional image of the identification feature, where the two dimensional image is based at least in part on a texture of the three dimensional image, and identification minutiae are identified on the two dimensional image.
- a three dimensional image is analyzed and identification minutiae are identified on the three dimensional image.
- Turning to FIG. 1 , this figure is a diagrammatic illustration of a computer 10 consistent with embodiments of the invention.
- computer 10 includes a processor 12 and a memory 14 .
- Memory 14 may include application 16 , where application 16 includes one or more instructions configured to be executed by processor 12 to thereby cause processor 12 to perform the steps necessary to execute elements consistent with embodiments of the invention.
- Memory 14 may further include identification database 18 , where identification database 18 includes one or more identification data records 20 .
- Each identification data record 20 corresponds to an individual and generally stores data associated with the identification of the individual, including for example, the individual's personal information (e.g., name, address, citizenship, etc.), one or more three dimensional images of identification features of the individual, and/or data indicating one or more identified identification minutiae of one or more identification features of the individual.
- an identification record 20 may include data indicating identified identification minutiae of a finger of an individual and may further include data indicating an identity associated with the identified identification minutiae.
- an identification data record 20 may include a three dimensional image of a finger of an individual and may further include data indicating an identity associated with the three dimensional image of the finger.
- exposure of an individual's identity and sensitive identification data may be limited by storing only data indicating identified identification minutiae of an identification feature in an identification data record, and not image data of the identification feature.
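The privacy-preserving record described above can be sketched as a small data structure that holds personal information and extracted minutiae but deliberately no image data. The class and field names below are illustrative assumptions, not structures defined by the patent:

```python
from dataclasses import dataclass, field

@dataclass
class IdentificationDataRecord:
    """Sketch of an identification data record (element 20 in the text):
    only the extracted minutiae are stored, not the captured image, to
    limit exposure of sensitive identification data. Field names are
    illustrative, not from the patent."""
    name: str
    # Each minutia stored as an (x, y, theta) tuple; no image data kept.
    minutiae: list = field(default_factory=list)
```

A record is then populated from the minutiae-extraction step rather than from the raw scan, so a database compromise would not leak reproducible fingerprint images.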
- Computer 10 may further include input-output interface (I/O interface) 22 , where I/O interface 22 may be configured to transmit data to and receive data from one or more devices connected to computer 10 .
- Devices connected to computer 10 and communicating through I/O interface 22 may include, for example, a keyboard, a computer mouse, a computer monitor, a printer, a three dimensional scanner, computer speakers, and other such devices.
- computer 10 may include a transceiver (Tx/Rx) 24 , where Tx/Rx may be configured to transmit data to and receive data from a communication network 26 .
- computer 10 may be connected to an identification feature scanner 28 , where the scanner may be configured to scan an identification feature to generate data corresponding to the scanned identification feature. For example the scanner may capture a three dimensional image of at least a portion of an identification feature such as a fingerprint, palm print, etc.
- FIG. 2 is a flowchart 100 illustrating a sequence of operations that may be executed by a computer to cause a processor of the computer to analyze a three dimensional image of an identification feature, identify a plurality of identification minutiae of the identification feature, and confirm an identification and/or determine an identification based on the identified identification minutiae.
- a computer loads a three dimensional image (block 102 ), where the three dimensional image represents at least a portion of an identification feature (e.g., a fingerprint, a palm print, a hand print, a foot print).
- the computer analyzes the three dimensional image to identify a plurality of identification minutiae (block 104 ). In some embodiments, the computer analyzes characteristics of the three dimensional image to identify the plurality of identification minutiae from the three dimensional image. For example, the computer may analyze one or more characteristics of the three dimensional image to identify identification minutiae. In another example the computer may convert the three dimensional image to a two dimensional image, where the computer adjusts the two dimensional image based at least in part on one or more texture characteristics of the three dimensional image, and the computer may analyze the two dimensional image to identify the identification minutiae.
- analyzing the three dimensional image to identify the plurality of identification minutiae may include fitting a three dimensional mesh to at least a portion of the three dimensional image. In some embodiments, analyzing the three dimensional image to identify the plurality of identification minutiae may include determining a curvature of one or more points in the three dimensional image. In some embodiments, analyzing the three dimensional image to identify the plurality of identification minutiae may include determining a direction vector of one or more points in the three dimensional image. In some embodiments, analyzing the three dimensional image to identify the plurality of identification minutiae may include determining a depth associated with one or more points in the three dimensional image.
- analyzing the three dimensional image to identify the plurality of identification minutiae may include determining one or more ridge points in the three dimensional image. In some embodiments, analyzing the three dimensional image to identify the plurality of identification minutiae may include determining one or more ravine points in the three dimensional image. In some embodiments, analyzing the three dimensional image to identify the plurality of identification minutiae may include generating one or more ridge lines on the three dimensional image based at least in part on the determined ridge points. In some embodiments, analyzing the three dimensional image to identify the plurality of identification minutiae may include generating one or more ravine lines on the three dimensional image based at least in part on the determined ravine points.
- analyzing the three dimensional image to identify the plurality of identification minutiae may include generating a quality map based at least in part on the determined depth(s) and/or determined direction vector(s), the quality map including data indicating regions of high quality and regions of low quality. Moreover, in some embodiments, analyzing the three dimensional image may include applying the generated quality map to the three dimensional image, and identifying the plurality of identification minutiae may include identifying identification minutiae in indicated high quality regions.
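A quality map of the kind described above can be sketched as a per-pixel combination of depth and ridge-flow measures. The thresholds and the simple AND rule below are illustrative assumptions; the patent's actual map construction (FIGS. 24-28) is not specified here:

```python
import numpy as np

def quality_map(depth_map, flow_map, depth_thresh=0.2, flow_thresh=0.2):
    """Combine per-pixel depth and ridge-flow measures into a binary
    quality map: a pixel is high quality only where both the depth
    and the flow measures exceed their thresholds. Thresholds and the
    AND rule are illustrative, not values from the patent."""
    high_depth = depth_map >= depth_thresh   # mask of adequate depth
    high_flow = flow_map >= flow_thresh      # mask of coherent ridge flow
    return high_depth & high_flow            # True marks high-quality regions
```

Minutiae detected where the map is False would then be discarded, matching the idea of keeping only minutiae in indicated high quality regions.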
- the computer may generate an identification minutiae dataset for the three dimensional image based on the identified identification minutiae (block 105 ).
- the identification minutiae dataset stores data indicating the identified identification minutiae in a format that may be compared to one or more other identification minutiae datasets to determine and/or confirm an identity.
- the computer may compare the identification minutiae dataset of the three dimensional image to one or more datasets of identification minutiae, where each dataset is associated with an identification (i.e., a person's identity), such that the computer may match the extracted identification minutiae to a previously classified dataset to determine a match (block 106 ).
- the computer determines whether the extracted identification minutiae match a dataset to confirm an identification (block 108 ), i.e., the computer matches identification minutiae to determine if a person is a specific person. In some embodiments, the computer determines which dataset of identification minutiae of a plurality of datasets the extracted identification minutiae match to determine an identification (block 110 ), i.e., the computer scans a database including a plurality of datasets of identification minutiae, where each dataset includes an associated identification, to match the extracted identification minutiae to a respective dataset and thereby determine the identification that corresponds to the extracted identification minutiae based on the respective dataset.
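The verification/identification step above compares one minutiae dataset against stored datasets. A greedy nearest-match count is one simple way to sketch this; the tolerances, the (x, y, theta) representation, and the scoring rule are illustrative assumptions, not the patent's matching method:

```python
import numpy as np

def match_score(probe, gallery, dist_tol=10.0, angle_tol=0.3):
    """Score how well a probe minutiae dataset matches a gallery dataset.
    Each dataset is a sequence of (x, y, theta) minutiae. A probe minutia
    matches an unused gallery minutia if it lies within dist_tol in
    position and angle_tol in orientation. Greedy counting for
    illustration only."""
    gallery = [np.asarray(g, dtype=float) for g in gallery]
    matched = 0
    used = set()
    for m in np.asarray(probe, dtype=float):
        for j, g in enumerate(gallery):
            if j in used:
                continue
            if (np.hypot(*(m[:2] - g[:2])) <= dist_tol
                    and abs(m[2] - g[2]) <= angle_tol):
                matched += 1
                used.add(j)
                break
    # Fraction of probe minutiae that found a gallery match.
    return matched / max(len(probe), 1)
```

For confirmation (block 108), the score for a single claimed identity would be compared against a threshold; for identification (block 110), the gallery dataset with the highest score across the database would be selected.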
- FIG. 3 provides flowchart 120 that illustrates a sequence of operations that may be performed by a computer consistent with embodiments of the invention to determine and extract identification minutiae from a three dimensional image of an identification feature.
- the three dimensional image is input (block 122 ), and the computer extracts a smoothed surface of the three dimensional image (block 124 ).
- the computer may extract the smoothed surface of the three dimensional image using a weighted linear least square algorithm, where the weights may be calculated by a Gaussian function.
- a plane may be fitted to the point under consideration and the points in its neighborhood. An N×N window centered at the point of interest is considered, and the plane is fitted to the points inside the window using a weighted linear least square method. Points close to the point of interest (the center of the window) are given higher weights, and points that are further away are given lower weights.
- N² corresponds to the number of points in the N×N window. The weight of each point determines its influence on the plane fitting.
- a linear system of equations may be obtained by taking partial derivatives with respect to the unknown coefficients a, b, and c.
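The weighted plane fit above can be sketched with numpy. Here the weighted system for the plane z = a·x + b·y + c is solved with `np.linalg.lstsq` rather than by writing out the partial-derivative (normal) equations; the function name, patch layout, and `sigma` default are illustrative assumptions:

```python
import numpy as np

def smooth_point(patch_xyz, sigma=1.0):
    """Estimate a smoothed z-value for the center of an N x N patch by
    fitting a plane z = a*x + b*y + c with Gaussian-weighted least squares.
    `patch_xyz` is an (N*N, 3) array of (x, y, z) points; the point of
    interest is assumed to lie at the patch centroid."""
    x, y, z = patch_xyz[:, 0], patch_xyz[:, 1], patch_xyz[:, 2]
    cx, cy = x.mean(), y.mean()          # center of the window
    # Gaussian weights: points near the center get higher weight.
    d2 = (x - cx) ** 2 + (y - cy) ** 2
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    # Weighted linear least squares: scale each row by sqrt(weight).
    A = np.stack([x, y, np.ones_like(x)], axis=1) * np.sqrt(w)[:, None]
    b = z * np.sqrt(w)
    (a_coef, b_coef, c_coef), *_ = np.linalg.lstsq(A, b, rcond=None)
    # Smoothed z at the point of interest.
    return a_coef * cx + b_coef * cy + c_coef
```

Applying this per window over the whole scan yields a smoothed surface like the one shown in FIG. 5.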
- FIG. 4 illustrates an example of a three dimensional image of a fingerprint.
- FIG. 5 illustrates an example of an extracted smoothed surface of the three dimensional image of FIG. 4 .
- the computer unfolds the smoothed three dimensional surface to a two dimensional surface (i.e., a two dimensional image) (block 126 ).
- the three dimensional smoothed surface may be unfolded/unrolled to generate a two dimensional rolled-equivalent image (e.g., a two dimensional rolled-equivalent fingerprint image).
- Such unfolding may be performed by the computer utilizing a “springs algorithm” as discussed by Atkins et al. in the article “Halftone post-processing for improved rendition of highlights and shadows,” J. Elec.
- a physics based modeling system may be utilized to unfold the smoothed three dimensional surface, such as the physics based modeling system disclosed in “Acquiring a 2-d rolled equivalent fingerprint image from a non-contact 3-d finger scan,” Proceedings of SPIE, the International Society for Optical Engineering, pg. 2020C-1-62020C-8, 2006, by Fatehpuria et al.
- Such methods may include performing halftone post-processing on the smoothed three dimensional surface to rearrange image pixels and thereby generate a smoother rendition.
- the computer may assume virtual springs between a point under consideration and neighboring points. The point may be moved to a different location based on minimizing the energy in the virtual springs.
- a point under consideration and neighboring points may be considered a mechanical system, in which each point has some mass, and each point is connected to neighboring points with virtual springs.
- Each virtual spring may include a relax length and the virtual spring has a minimum energy when the virtual spring is at its relax length.
- An iterative process may be performed by the computer to calculate the displacement of one or more points, such that the virtual springs are stretched/compressed to reach their relax length, thereby achieving minimum energy in the springs.
- Displacement may be applied iteratively to the points under consideration, while neighboring points remain fixed.
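The iterative spring relaxation described above can be sketched as gradient descent on the spring energy, moving one point at a time while its neighbors stay fixed. The function name, step size, and quadratic energy form are illustrative assumptions rather than the exact formulation used for unfolding:

```python
import numpy as np

def relax_springs(points, neighbors, rest_len, iters=100, step=0.1):
    """Iteratively displace 2D points so the virtual springs connecting
    each point to its neighbors approach their relax (rest) length.
    `points` is an (n, 2) array; `neighbors` maps a point index to the
    indices of its spring-connected neighbors; `rest_len` is the common
    relax length. Each update treats the neighbors as fixed."""
    pts = points.astype(float).copy()
    for _ in range(iters):
        for i, nbrs in neighbors.items():
            # Gradient of energy sum 0.5*(|d| - rest_len)^2 w.r.t. pts[i].
            grad = np.zeros(2)
            for j in nbrs:
                d = pts[i] - pts[j]
                dist = np.linalg.norm(d)
                if dist > 1e-12:
                    grad += (dist - rest_len) * d / dist
            # Displace the point opposite the gradient (minimum energy).
            pts[i] -= step * grad
    return pts
```

Run over all mesh points of the unfolded surface, this relaxation distributes the distortion of flattening so that local distances approach their 3D values.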
- the computer may adjust the two dimensional surface based at least in part on texture characteristics of the three dimensional image.
- the computer may analyze the three dimensional image to determine one or more texture characteristics of the three dimensional image (block 128 ).
- analyzing the three dimensional image to determine one or more texture characteristics of the three dimensional image includes identifying ridge points (i.e., pixels that correspond to a ridge of the identification feature represented by the three dimensional image) on the three dimensional image, and the computer may assign a value to a corresponding pixel on the two dimensional image to thereby indicate that the corresponding pixel of the two dimensional image is associated with a ridge point.
- the computer may analyze texture characteristics of the three dimensional image by analyzing curvature of points of the three dimensional image.
- the computer may apply a median filter to the three dimensional image to thereby reduce sharp spikes that may occur when scanning an identification feature.
- the computer may further apply a low pass filter to smooth the surface represented by the three dimensional image, such that ridges and valleys of the identification feature represented in the three dimensional image may be smoothed to reduce noise associated with collecting the data and/or to adjust for pores of the identification feature.
- Such filtering may be performed similar to the methods disclosed in “Face recognition based on 3d ridge images obtained from range data,” Pattern Recognition, 42:445-451, March 2009.
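The spike-reduction step can be sketched with a small median filter implemented directly in numpy (a Gaussian low-pass pass for pore-scale smoothing would follow, as described above). The window size and edge handling are illustrative choices, not the filtering used in the cited work:

```python
import numpy as np

def median_despike(depth, k=1):
    """Remove sharp spikes from a depth map with a (2k+1) x (2k+1)
    median filter; border rows/columns are left unchanged for brevity."""
    out = depth.astype(float).copy()
    h, w = depth.shape
    for r in range(k, h - k):
        for c in range(k, w - k):
            # Median of the local window replaces the center value.
            out[r, c] = np.median(depth[r - k:r + k + 1, c - k:c + k + 1])
    return out
```

An isolated scanning spike is an outlier within its window, so the median suppresses it without blurring ridge edges the way a mean filter would.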
- Embodiments of the invention may identify ridge points of the three dimensional image by utilizing Gaussian and mean curvature analysis.
- a surface may be defined by two partial differential equations, the so-called first and second fundamental form of differential geometry. These fundamental forms determine how to measure the length, area and the angle of the surface, and the normal surface curvature may be calculated from these two fundamental forms.
- the first fundamental form (I) is defined as the inner product of dx with itself, where dx is tangent to the surface in the direction defined by du and dv: I = dx·dx = E du² + 2F du dv + G dv², where E = x_u·x_u, F = x_u·x_v, and G = x_v·x_v are inner products of the partial derivatives of the surface.
- the computer may utilize the Gaussian and mean curvature to determine whether each point (i.e., pixel) of the three dimensional image corresponds to a ridge point or a valley point of the fingerprint. Therefore, in some embodiments, the texture of the three dimensional image may be determined utilizing the Gaussian and mean curvature.
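Treating the scan as a depth map z(x, y) (a Monge patch), the Gaussian curvature K and mean curvature H follow in closed form from the fundamental forms. A minimal numpy sketch, assuming unit grid spacing (the finite-difference scheme is an illustrative choice):

```python
import numpy as np

def gaussian_mean_curvature(z):
    """Gaussian (K) and mean (H) curvature at each pixel of a depth map
    z(x, y), from the standard Monge-patch formulas:
      K = (z_xx z_yy - z_xy^2) / (1 + z_x^2 + z_y^2)^2
      H = ((1+z_y^2) z_xx - 2 z_x z_y z_xy + (1+z_x^2) z_yy)
          / (2 (1 + z_x^2 + z_y^2)^(3/2))
    Grid spacing of 1 is assumed."""
    zy, zx = np.gradient(z)        # np.gradient returns axis0 (y) first
    zxy, zxx = np.gradient(zx)
    zyy, _ = np.gradient(zy)
    denom = 1.0 + zx ** 2 + zy ** 2
    K = (zxx * zyy - zxy ** 2) / denom ** 2
    H = ((1 + zy ** 2) * zxx - 2 * zx * zy * zxy
         + (1 + zx ** 2) * zyy) / (2.0 * denom ** 1.5)
    return K, H
```

The signs of K and H then classify each pixel (e.g., ridge-like versus valley-like surface shape), which is the texture decision described in the text.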
- the computer may apply the texture of the three dimensional image to the two dimensional image (block 130 ).
- the computer may apply the texture of the three dimensional image to the unfolded two dimensional image by setting the color value of each point (i.e., pixel) of the two dimensional image to black if the corresponding point (i.e., voxel) of the three dimensional image is determined to correspond to a ridge point.
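The texture-application step reduces to a simple raster operation once the 3D-to-2D point correspondence from the unfolding step is known. The (row, col) mapping and function name below are illustrative assumptions:

```python
import numpy as np

def apply_texture(shape_2d, ridge_coords):
    """Render the unfolded fingerprint texture: start from a white
    two dimensional image and set each pixel corresponding to a ridge
    point of the three dimensional surface to black."""
    img = np.full(shape_2d, 255, dtype=np.uint8)  # white background
    for r, c in ridge_coords:
        img[r, c] = 0                             # black ridge pixel
    return img
```

The result is a binary rolled-equivalent image like FIG. 6, suitable for the 2D enhancement and minutiae detection steps that follow.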
- FIG. 6 illustrates an example unfolded two dimensional image of a fingerprint with the texture of the three dimensional image applied to the surface represented by the two dimensional image.
- the computer may analyze the two dimensional image to determine one or more identification minutiae of the surface of the identification feature represented by the two dimensional image (block 132 ).
- the computer may filter the two dimensional image to thereby enhance the two dimensional image prior to analyzing the two dimensional image to identify the identification minutiae.
- the computer may enhance the two dimensional image utilizing a block-wise contextual filter method as described in the article “Verifying fingerprint match by local correlation methods,” by J. Li et al., First IEEE International Conference on Biometrics: Theory, Applications, and Systems, pg. 1-5, 2007.
- FIG. 7 illustrates the two dimensional image of FIG. 6 after enhancing the image.
- the computer may analyze the surface of the identification feature represented by the two dimensional image to determine identification minutiae utilizing the Biometric Image Software (BIS) provided by the National Institute of Standards and Technology (NIST) of the United States Department of Commerce and/or other such known two dimensional image analysis software packages such as the NIST Fingerprint Image Software (NFIS) also provided by the NIST.
- FIG. 8 illustrates identified identification minutiae 202 , 204 , 206 , 208 on the two dimensional image.
- the computer may further determine a quality associated with each identified minutiae such that the computer may disregard identification minutiae that are identified due to errors and/or distortion associated with capturing the image of the identification feature.
- a minutiae detection package of the NFIS and BIS determines a quality associated with each identified identification minutiae based on a direction, contrast, flow, and/or high curve associated with pixels of the two dimensional image.
- the identification minutiae 142 , 144 , 146 , 148 are labeled with different colors to indicate a quality associated with such identification minutiae 142 , 144 , 146 , 148 .
- identification minutiae labeled with the red color 142 are generally identified as high quality identification minutiae and identification minutiae labeled with the blue color 144 are identified as low quality minutiae.
- the identification minutiae labeled with the green color 146 and yellow color 148 are identification minutiae between the high quality and low quality identification minutiae 142 , 144 .
- FIG. 9 provides a flowchart 160 illustrating a sequence of operations that may be performed by a computer to identify identification minutiae of an identification feature represented by a three dimensional image.
- the computer may load the three dimensional image (block 162 ), analyze the three dimensional image to determine ridges and ravines of the identification feature represented by the three dimensional image (block 164 ), and the computer may identify one or more identification minutiae of the identification feature on the three dimensional image based at least in part on the determined ridges/ravines of the identification feature (block 166 ).
- the computer may analyze the curvature of one or more three dimensional points (i.e., voxels) of the three dimensional image to determine whether the voxel corresponds to a ridge or a ravine of the identification feature represented by the three dimensional image.
- the computer may determine ridges and ravines of the identification feature represented by the three dimensional image based on the ridge/ravine determination of the one or more voxels, and the computer may identify identification minutiae based on the determined ridges and ravines.
- the computer may store the identified identification minutiae as an identification minutiae dataset, and the identification minutiae dataset extracted from the three dimensional image may be compared to one or more other identification minutiae datasets to determine an identity and/or confirm an identity.
- FIG. 10 provides a flowchart that illustrates a sequence of operations that may be performed by the computer executing an application to identify identification minutiae on an identification feature represented by a three dimensional image.
- the three dimensional image of the identification feature may be captured using a three dimensional scanner, including for example a three dimensional fingerprint scanner or other such device, and the three dimensional image may be received by the computer (block 182 ).
- the computer may analyze one or more voxels of the three dimensional image to identify identification minutiae.
- the computer may fit a mesh including a plurality of vertices to the surface represented by the three dimensional image (block 184 ). In some embodiments of the invention, the vertices of the mesh are in a triangular relationship to one another.
- the computer may smooth the three dimensional mesh to reduce noise and gaps that may be present in the three dimensional image due to capture (i.e., scanning) of the identification feature (block 186 ).
- the computer may smooth the mesh using a centroid smoothing process.
- in centroid smoothing, a smoothed mesh is generated using adjacent triangles: for every vertex in the mesh, a one-ring neighborhood is considered, and the arithmetic mean of the centroids of the triangles adjacent to the considered vertex is taken as the new vertex position. Additional details regarding such centroid smoothing are provided in "Fast and robust detection of crest lines on meshes," by Yoshizawa et al., Symposium on Solid and Physical Modeling, pg. 227-232, 2005.
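One centroid-smoothing pass as described above can be sketched as follows. This is an illustrative pure-Python sketch; the vertex-list/triangle-list mesh representation is an assumption, and a real implementation would precompute vertex-to-triangle adjacency rather than scanning all triangles per vertex.

```python
def centroid(tri, verts):
    """Arithmetic mean of a triangle's three vertex positions."""
    (a, b, c) = (verts[i] for i in tri)
    return tuple((a[k] + b[k] + c[k]) / 3.0 for k in range(3))

def centroid_smooth(verts, tris):
    """For every vertex, average the centroids of its adjacent
    (one-ring) triangles to obtain the new vertex position."""
    smoothed = []
    for vi in range(len(verts)):
        ring = [t for t in tris if vi in t]   # triangles adjacent to vi
        if not ring:                          # isolated vertex: keep as-is
            smoothed.append(verts[vi])
            continue
        cents = [centroid(t, verts) for t in ring]
        n = len(cents)
        smoothed.append(tuple(sum(c[k] for c in cents) / n for k in range(3)))
    return smoothed
```

The pass may be applied repeatedly; each application further reduces high-frequency noise at the cost of some shrinkage of the surface.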
- FIG. 11 illustrates the three dimensional image of FIG. 4 after the centroid smoothing process is performed on the three dimensional image.
- the computer may perform an adaptive smoothing process by smoothing the normals of the mesh using a Gaussian filter and modifying the mesh vertex positions in order to fit the mesh to the field of the smoothed normals. Additional details regarding such adaptive smoothing are provided in "Mesh smoothing by adaptive and anisotropic Gaussian filter applied to mesh normals," by Ohtake et al., Vision, Modeling, and Visualization, pg. 203-210, 2002.
- FIG. 12 illustrates the three dimensional image of FIG. 11 after performing the adaptive smoothing process.
- the computer analyzes the smoothed mesh to determine the curvature and direction of the vertices of the mesh (block 188 ).
- the computer determines a normal vector for each vertex of the mesh. Determining the normal vector for each vertex may include determining an average of face normals for faces adjacent to the vertex, where the computer may also apply different weightings to the face normals.
- the normal vector for each vertex may also be determined as the normal to a plane that best fits the vertex and one or more nearby vertices.
- the normal for each vertex may be determined as a normalized weighted sum of normals of triangles incident to the vertex as described in “Weights for computing vertex normals from facet normals,” by Nelson, Journal of Graphics Tools, vol. 4, pg. 1-6, March 1999.
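One common variant of the weighted face-normal average can be sketched as follows (illustrative; it uses the area weighting that falls out of summing un-normalized cross products, one of several weightings the text permits, and assumes a vertex-list/triangle-list mesh):

```python
import math

def sub(a, b):
    return tuple(a[k] - b[k] for k in range(3))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v) if n else v

def vertex_normals(verts, tris):
    """Per-vertex normals as the normalized sum of incident face
    normals; the un-normalized cross product has length equal to twice
    the triangle area, so the sum is implicitly area-weighted."""
    acc = [[0.0, 0.0, 0.0] for _ in verts]
    for (i, j, k) in tris:
        fn = cross(sub(verts[j], verts[i]), sub(verts[k], verts[i]))
        for vi in (i, j, k):
            for c in range(3):
                acc[vi][c] += fn[c]
    return [normalize(tuple(a)) for a in acc]
```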
- the computer may build a local coordinate system with an origin located at each vertex based at least in part on the determined normal vector and two orthonormal vectors in a plane through the considered vertex.
- the computer may fit a polynomial surface to each vertex using the local coordinate system and including neighboring points and/or vertices such that the polynomial surface may be interrogated to determine a curvature and direction for the vertex.
- the computer may fit the polynomial surface to the vertex by performing a quadratic fitting process. Further details regarding the quadratic fitting process are provided in "Differential geometry for characterizing 3d shape change," by Amini et al., Proceedings of SPIE Conference on Mathematical Methods in Medical Imaging, San Diego, Calif.
- the computer may fit the polynomial surface to the vertex by performing a cubic fitting process. Further details regarding the cubic fitting process are provided in “A novel cubic-order algorithm for approximating principal direction vectors,” by Goldfeather et al., ACM Transactions on Graphics, vol. 23(1), pg. 45-63, New York, N.Y., January 2004.
- a cubic polynomial may be fitted to p (the vertex under consideration) and its neighboring vertices:
- N ⁇ ( x i , y i ) ⁇ ( f x ⁇ ( x , y ) , f y ⁇ ( x , y ) , - 1 ) ⁇ ( Ax i + By i + 3 ⁇ Dx i 2 + 2 ⁇ Ex i ⁇ y i + Fy i 2 , Bx i + Cy i + Ex i 2 + 2 ⁇ Fx i ⁇ y i + 3 ⁇ Gy i 2 - 1 ) ( 11 )
- the real normal vector at each point may also be considered, and set equal to the fitted-surface normal of equation (11)
- the computer may extract the curvature and directions from the local surface fitted to each vertex to determine the curvature and direction of the vertices. If P is a point on the polynomial surface S fitted to the vertex, X(u, v) may be considered a local parameterization of S in a neighborhood of P.
- the partial derivatives of X with respect to u and v may be denoted by x u (P) and x v (P).
- the unit normal vector N(P) to the surface at point P may be computed as:
- N ⁇ ( P ) X u ⁇ ( P ) ⁇ X v ⁇ ( P ) ⁇ X u ⁇ ( P ) ⁇ X v ⁇ ( P ) ⁇ ( 15 )
- X_u(P), X_v(P), N(P) may be considered the local orthogonal coordinate system, and the coefficients of the first fundamental form may be computed as:
- E = X_u(P) · X_u(P) (16)
- F = X_u(P) · X_v(P) (17)
- G = X_v(P) · X_v(P) (18)
- the coefficients of the second fundamental form may be computed as:
- e = N(P) · X_uu(P) (19)
- f = N(P) · X_uv(P) (20)
- g = N(P) · X_vv(P) (21)
- the eigenvalues ⁇ 1 and ⁇ 2 of the matrix W are the maximum and minimum principal curvatures of the surface at point P.
- the eigenvectors v 1 and v 2 are the corresponding maximum and minimum principal directions. Since x u and x v are orthogonal unit vectors, equation (23) may be rewritten as:
- FIG. 13 illustrates the maximum principal curvature for vertices of a mesh on the three dimensional image of FIG. 12 .
- FIG. 14 illustrates the minimum principal curvature for vertices of the mesh on the three dimensional image of FIG. 12 .
- FIG. 15 illustrates the maximum principal direction for vertices of the mesh of the three dimensional image of FIG. 12
- FIG. 16 illustrates the minimum principal direction for vertices of the mesh of the three dimensional image of FIG. 12 .
- the computer analyzes the curvatures and directions associated with each vertex to determine ridge and ravine points of the identification feature represented by the three dimensional image (block 190 ).
- the computer analyzes the points (i.e., vertices) to determine whether each point corresponds to a ridge or ravine point.
- a point may correspond to a ridge point if the maximal principal curvature attains a local positive maximum along its curvature line.
- a point may correspond to a ravine point if the minimal principal curvature attains a local negative minimum along its curvature line.
- ridges and ravines are dual in nature, and the computer may consider either or both in identifying identification minutiae. Further details regarding determining which vertices correspond to a ridge point or a ravine point are provided in "Detection of ridges and ravines on range images and triangular meshes," by Belyaev et al., Vision Geometry IX, SPIE 4117, pg. 146-154, San Diego, Calif., July-August 2000, and in Three Dimensional Computer Vision, ch. 4: Edge Detection, MIT Press, 1993.
- the maximal and minimal curvatures of the vertex are called k max and k min and the maximal and minimal directions are called t max and t min .
- the computer determines the intersection between the normal plane at the vertex under consideration, set as P, and a polygon formed by the first ring of neighboring vertices, where the normal plane may be generated by the normal vector at P and the maximal principal direction at P.
- the surfaces intersect at two points, Q and S, and the curvature values at Q and S may be obtained by linear interpolation of the curvature values of the neighboring vertices.
- if the maximum principal curvature at P is greater than the interpolated curvature values at Q and S, then the maximum principal curvature attains a maximum at P along the normal section curve, and P corresponds to a ridge point.
- a similar comparison may be performed to determine whether vertices correspond to ravine points, where the minimum principal curvature would be less at P than the interpolated curvature values at the comparison points.
- Each vertex that corresponds to a ridge point or a ravine point is marked accordingly. Furthermore, all marked vertices may be checked using the following relationships: Ridge: k_max > |k_min| (25); Ravine: k_min < −|k_max| (26)
- the ridge vertices may be further filtered by requiring each ridge vertex to neighbor at least two other ridge vertices.
- the ravine vertices may be filtered in a similar manner.
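A minimal sketch of the neighbor filter described above, assuming the mesh adjacency is available as a dictionary of neighbor sets (an assumption for the example):

```python
def filter_ridge_vertices(ridge, neighbors):
    """Keep a ridge vertex only if at least two of its mesh neighbors
    are also ridge vertices.

    ridge: set of vertex ids marked as ridge points;
    neighbors: dict mapping vertex id -> set of adjacent vertex ids."""
    return {v for v in ridge
            if len(neighbors.get(v, set()) & ridge) >= 2}
```

The same helper applies unchanged to ravine vertices by passing the ravine set instead.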
- the computer may determine ridge and ravine lines based on the determined ridge and ravine points (block 192 ). In some embodiments, the computer traces the ridge vertices together to generate ridge lines and the ravine vertices together to generate ravine lines for the three dimensional image.
- FIG. 18 illustrates the determined ridge lines 206 (shown in red) and the ravine lines 208 (shown in blue) based on the marked vertices of FIG. 17 . Further details regarding tracing and generating the ridge and ravine lines are provided in "Ridge-valley lines on meshes via implicit surface fitting," by Ohtake et al., ACM Transactions on Graphics, vol. 23(3), pg. 609-612, New York, 2004.
- the computer may analyze curvature principals and derivatives thereof for each vertex to determine whether the vertex corresponds to a ridge point (i.e., a ridge vertex) or a ravine point (i.e., a ravine vertex).
- a non-umbilic point P is called a ridge point if k max attains a local maximum at P along the corresponding principal direction t max .
- a non-umbilic point P is called a ravine point if k_min attains a local minimum at P along the corresponding principal direction t_min.
- Ridges: e_max = 0, ∂e_max/∂t_max < 0, k_max > |k_min| (28)
- Ravines: e_min = 0, ∂e_min/∂t_min > 0, k_min < −|k_max| (29)
- the computer may detect ridge points according to the process described in the previously referenced article “Ridge-valley lines on meshes via implicit surface fitting.” To detect ridge points, edges of the mesh are analyzed, where an edge e is defined by two neighboring vertices v 1 and v 2 .
- the computer analyzes points along the edge e based on the following conditions: k_max(v) > |k_min(v)| for v = v_1, v_2 (31), together with the zero-crossing and maximum conditions of equations (32) and (33)
- if two ridge points are detected on two edges of a triangle of the mesh, the ridge points may be connected by a straight line. If three ridge points are detected on three edges of a triangle of the mesh, the ridge points may be connected through the centroid of the triangle. In this manner, ridge lines may be determined. Ravine points and ravine lines may be detected in the same manner.
- the computer may analyze the ridge and/or ravine lines to identify identification minutiae on the three dimensional image (block 194 ).
- the computer may perform one or more processing procedures on the three dimensional image to remove short ridges, remove short branches, and/or connect broken ridges/ravines.
- short ridges, short branches and/or broken ridges/ravines may be caused on the three dimensional image by noise or other artifacts during scanning of the identification feature.
- Short ridges may be removed by the computer by determining a ridge length for each ridge and removing ridges below a defined threshold.
- FIG. 19 provides an example illustration of a type of ridge identification minutiae, referred to as a ridge termination 210 .
- FIG. 20 provides an example illustration of another type of ridge identification minutiae referred to as a ridge bifurcation 212 .
- Ravines generally include the same types of identification minutiae.
- identifying identification minutiae includes analyzing each vertex of the mesh to determine a degree associated with each vertex. The degree of a vertex may correspond to the number of ridge edges incident to the vertex. For example, a vertex with a degree of two indicates that the vertex is in the middle of a ridge.
- a vertex with a degree of one corresponds to a ridge termination identification minutiae (see e.g., 210 of FIG. 19 ), and a vertex with a degree of three corresponds to a ridge bifurcation identification minutiae (see e.g., 212 of FIG. 20 ). All vertices having a degree of one or three associated therewith are marked as identification minutiae. A similar relationship may be utilized if ravine identification minutiae were to be utilized for identification purposes.
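The degree rule above can be sketched as follows (illustrative; ridge lines are assumed to be given as a list of vertex-pair edges):

```python
from collections import Counter

def classify_minutiae(ridge_edges):
    """Count ridge edges incident to each vertex: degree 1 marks a
    ridge termination, degree 3 a ridge bifurcation, and degree 2 an
    interior ridge point (not a minutia).

    ridge_edges: iterable of (u, v) vertex-id pairs on ridge lines."""
    degree = Counter()
    for u, v in ridge_edges:
        degree[u] += 1
        degree[v] += 1
    terminations = {v for v, d in degree.items() if d == 1}
    bifurcations = {v for v, d in degree.items() if d == 3}
    return terminations, bifurcations
```

The identical computation on ravine edges would yield ravine terminations and bifurcations.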
- FIG. 21 provides an example of marked identification minutiae 214 , 216 (illustrated as red indicators 214 and blue indicators 216 ) on the three dimensional image of FIG. 4 based on the determined ridge lines 206 illustrated in FIG. 18 .
- identifying the identification minutiae on the three dimensional image may include selectively filtering the identified identification minutiae based on a quality associated with each identification minutiae.
- ridge identification minutiae and ravine identification minutiae are dual in nature; i.e., a ridge termination is generally proximate a ravine bifurcation, and similarly, a ridge bifurcation is generally proximate a ravine termination.
- FIGS. 22 and 23 provide examples illustrating this relationship: FIG. 22 illustrates a ridge termination 210 proximate a ravine bifurcation 218 , and FIG. 23 illustrates a ridge bifurcation proximate a ravine termination.
- the computer selectively filters the identified identification minutiae based on this relationship: the computer marks identified ridge identification minutiae that are proximate the corresponding ravine identification minutiae as high quality identification minutiae, and may ignore identified ridge identification minutiae that are not proximate the corresponding ravine identification minutiae, as such ignored ridge identification minutiae are of low quality.
- the identified ridge identification minutiae that are not proximate the corresponding ravine identification minutiae are illustrated with blue indicators 216 and the high quality identification minutiae are illustrated with red indicators 214 .
- the computer may generate an identification minutiae dataset based on the identified identification minutiae and compare and/or determine an identity based on the identified identification minutiae of the identification minutiae dataset by comparing the identification minutiae dataset to other identification minutiae datasets that include a corresponding identity associated therewith.
- the identified identification minutiae extracted from the three dimensional image may be compared to a dataset including previously confirmed identification minutiae associated with the identity. If an identity is to be determined, the computer may compare the identification minutiae dataset to a database of identification minutiae datasets that have been previously associated with an identity to determine if the identification minutiae dataset of the three dimensional image matches any datasets of the database.
- the computer may further selectively filter identified identification minutiae based on a quality associated with different regions of the three dimensional image.
- the computer may determine a quality associated with each region of the three dimensional image based at least in part on depth and/or flow information associated with the regions.
- regions that include low-depth and/or low flow direction may represent unstable areas where identification minutiae detection may be unreliable.
- FIG. 24 provides flowchart 300 that illustrates a sequence of operations that may be performed by the computer to identify any low quality regions on the three dimensional image.
- the computer may receive the three dimensional image including the mesh (block 302 ).
- the computer may divide the three dimensional image into regions of a predefined number of mesh vertices. The number of mesh vertices may be determined based at least in part on a required reliability and/or a required efficiency.
- FIG. 25 illustrates a region map 400 that includes regions 402 composed of approximately thirty vertices. Regions composed of different numbers of vertices may be utilized to adjust for efficiency and/or reliability.
- the computer may analyze each region to determine a depth and flow associated therewith (block 306 ). Depth information for a particular region may be determined based at least in part on one or more curvature tensors associated with vertices of the region, determined ridges and ravines in the region, and/or a depth associated with each vertex of the region.
- the computer may check the curvature tensor for each ridge and ravine vertex using the following conditions: ridge vertex: k_max > T (35); ravine vertex: k_min < −T (36)
- T is a threshold associated with the normal physical ridge depth of an identification feature. If these conditions are met, the vertex is marked as an acceptable vertex; otherwise the vertex is marked as unacceptable.
- the result for the region may be determined by averaging the values over the vertices of the region, such that the average curvature for the region must meet the threshold T; if the average does not meet the threshold condition, the region may be identified as a low-quality region. Each region may thus be identified as a low quality region based on the number of acceptable/unacceptable vertices located in the region.
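A minimal sketch of the averaged threshold test for one region (the input curvature values and the threshold are illustrative; T would be tied to the normal physical ridge depth as described above):

```python
def is_low_quality(ridge_kmax_values, T):
    """ridge_kmax_values: maximal principal curvatures of the ridge
    vertices in one region; returns True if the region fails the
    k_max > T condition on average."""
    if not ridge_kmax_values:
        return True                     # no ridge evidence in the region
    avg = sum(ridge_kmax_values) / len(ridge_kmax_values)
    return not (avg > T)
```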
- FIG. 26 illustrates an example low depth information map 420 based on the regions 402 of the region map 400 of FIG. 25 that may be generated based on the analysis discussed above, where the shaded regions are low-depth regions 422 and the regions illustrated in white are acceptable depth regions 424 .
- Flow information for the region may be determined by analyzing the principal curvature and direction for each vertex of the region. Flow generally refers to the direction and curvature of ridges. Regions where such direction and curvature are low generally correspond to regions with poorly defined ridges, where such regions generally occur due to noise and interference during scanning of the identification feature.
- the computer may determine the direction of ridges and ravines of the region with the following equations:
- aligned vertices and non-aligned vertices may be determined based on the determined direction of each vertex; the average is then taken over the vertices of the region, and the result is assigned to all of the vertices inside the region, where each region having a low flow value assigned thereto is marked as a low-flow region.
- FIG. 27 provides an example of a low-flow map 440 based on the region map 400 of FIG. 25 where the regions shaded in black are low-flow regions 442 and the regions shown in white are acceptable flow regions 444 .
- based on the determined depth and flow of each region, the computer identifies any region having a low depth and/or low flow associated therewith as a low quality region (block 308 ), and the computer may generate a low quality map (block 310 ), where the low quality map may be utilized to selectively filter identified identification minutiae located in low-quality regions identified on the map.
- FIG. 28 provides an example low quality map 460 based at least in part on the low depth map 420 of FIG. 26 and the low flow map 440 of FIG. 27 , where the shaded regions are low quality regions 462 from which identification minutiae should be discarded and the regions shown in white are high quality regions from which identified identification minutiae may be extracted.
- the computer may generate the quality map based on the depth and flow characteristics of the three dimensional image, and the computer may utilize the quality map to selectively filter identified identification minutiae, such that only identification minutiae from high quality regions (i.e., reliable identification minutiae) may be utilized in confirming and/or determining an identity.
- FIG. 29 illustrates an example application of the quality map 460 on the identified identification minutiae shown in the example three dimensional image of FIG. 21 .
- the reliable identification minutiae 482 are indicated in red
- the identification minutiae corresponding to low quality regions 484 are illustrated in black.
- a computer may analyze a three dimensional image of an identification feature, such as a fingerprint, to identify identification minutiae on the three dimensional image.
- the computer may selectively filter the identified identification minutiae based on proximity to corresponding identification minutiae and/or based on a quality associated with a region of the three dimensional image.
- the computer may extract the filtered identification minutiae to determine and/or confirm an identity associated with the identification feature of the three dimensional image.
- the computer may directly analyze the three dimensional image to identify identification minutiae on the three dimensional image, and in other embodiments the computer may unfold the three dimensional image onto a two dimensional image such that other methods may be utilized to analyze and identify the identification minutiae.
- Program code typically comprises one or more instructions that are resident at various times in various memory and storage devices in a computer, and that, when read and executed by one or more processors in a computer, cause that computer to perform the steps necessary to execute steps or elements embodying the various aspects of the invention.
- computer readable media include but are not limited to tangible, recordable type media such as volatile and non-volatile memory devices, floppy and other removable disks, hard disk drives, magnetic tape, optical disks (e.g., CD-ROMs, DVDs, etc.) among others.
S = min_{a,b,c} Σ_{i=1}^{N²} w_i · (Z_i − (a·x_i + b·y_i + c))² (1)
where w_i is the weight of the ith point and N² corresponds to the number of points in the N×N window. The weight of each point reflects its influence on the plane fitting: points closer to the center of the window have a higher weight than farther points. A Gaussian function is used to calculate the weights:
w_i = e^(−d_i²) (2)
where d_i is the Euclidean distance between the ith point inside the window and the window's center point. Equation (1) can be minimized by setting the partial derivatives of the function Σ_{i=1}^{N²} w_i·(Z_i − (a·x_i + b·y_i + c))² to zero. A linear system of equations may be obtained by taking partial derivatives with respect to the unknown coefficients a, b, and c. As such, the coefficients may be obtained from the formula:
C = (Xᵀ W X)⁻¹ Xᵀ W Z (3)
where C contains the coefficients; X contains the x and y coordinates of the three dimensional points; Z contains the z coordinates of the points, and W contains the weights.
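Equation (3) can be sketched as follows, accumulating the normal equations and solving them directly rather than forming the explicit inverse (the sample points and unit weights are illustrative):

```python
def fit_plane(points, weights):
    """Weighted least-squares plane fit: returns (a, b, c) minimizing
    sum_i w_i * (z_i - (a*x_i + b*y_i + c))^2.

    points: [(x, y, z)]; weights: [w]."""
    # accumulate A = X^T W X (3x3) and r = X^T W Z, each row of X = (x, y, 1)
    A = [[0.0] * 3 for _ in range(3)]
    r = [0.0] * 3
    for (x, y, z), w in zip(points, weights):
        row = (x, y, 1.0)
        for i in range(3):
            r[i] += w * row[i] * z
            for j in range(3):
                A[i][j] += w * row[i] * row[j]
    # solve A c = r by Gaussian elimination with partial pivoting
    for col in range(3):
        p = max(range(col, 3), key=lambda i: abs(A[i][col]))
        A[col], A[p] = A[p], A[col]
        r[col], r[p] = r[p], r[col]
        for i in range(col + 1, 3):
            m = A[i][col] / A[col][col]
            for j in range(col, 3):
                A[i][j] -= m * A[col][j]
            r[i] -= m * r[col]
    c = [0.0] * 3
    for i in (2, 1, 0):
        c[i] = (r[i] - sum(A[i][j] * c[j] for j in range(i + 1, 3))) / A[i][i]
    return tuple(c)
```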
where E, F, and G are first fundamental coefficients. The second fundamental form (II) is defined as the inner product of dx and dN, where dN means the spatial rate of change of unit normal vector N to the surface:
L, M, and N are the second fundamental coefficients. The Gaussian and mean curvatures K and H may be defined as:
and the principal curvatures k1 and k2 may be determined by the computer using the following formulas:
k_1 = H + √(H² − K) (7)
k_2 = H − √(H² − K) (8)
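A sketch combining equations (7) and (8) with the classical definitions of K and H in terms of the fundamental coefficients (the K and H formulas below are the standard ones, supplied here because the corresponding equations are elided in this text):

```python
import math

def curvatures(E, F, G, L, M, N):
    """Gaussian curvature K, mean curvature H, and principal
    curvatures k1, k2 from the first (E, F, G) and second (L, M, N)
    fundamental coefficients."""
    denom = E * G - F * F
    K = (L * N - M * M) / denom                         # Gaussian curvature
    H = (E * N - 2.0 * F * M + G * L) / (2.0 * denom)   # mean curvature
    k1 = H + math.sqrt(H * H - K)                       # equation (7)
    k2 = H - math.sqrt(H * H - K)                       # equation (8)
    return K, H, k1, k2
```

For a unit sphere (E = G = 1, F = 0, L = N = 1, M = 0) this yields K = H = k1 = k2 = 1, as expected.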
Each point in the selected neighborhood of p should fit in the surface such that equation (9) may be rewritten as:
where b = (A B C D E F G)ᵀ and (x_i y_i z_i) are the coordinates of the points in the selected neighborhood of p. In addition, the normals determined for each vertex may be considered, such that the calculated normal at each vertex should equal the normal of the polynomial surface fitted to the vertex at the same point. If (a_i b_i c_i) indicates the measured normal vector at the point (x_i y_i z_i), the surface normal is given by:
The real normal vector may be considered
and should be equal to equation (11), such that:
which leads to the following equations:
A linear least-square method may be performed to solve the equations (10), (13), and (14) as system Ub=d.
Moreover, Xu(P), Xv(P), N(P) may be considered the local orthogonal coordinate system, and the coefficients of the first fundamental form may be computed as:
E=X u(P)·X u(P) (16)
F=X u(P)·X v(P) (17)
G=X v(P)·X v(P) (18)
The coefficients of the second fundamental form may be computed as:
e=N(P)·X uu(P) (19)
f=N(P)·X uv(P) (20)
g=N(P)·X vv(P) (21)
The Weingarten curvature matrix at the point P may therefore be computed as:
Furthermore, when xu and xv are orthogonal unit vectors, the matrix becomes the symmetric matrix:
Furthermore, assuming v is a unit vector in the tangent plane to S at P, then Kv=vTWv is the normal curvature of the surface at point P in the direction of v.
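Since W is symmetric in this coordinate system, its eigenvalues (the principal curvatures) have a closed form; a small illustrative helper, with e, f, and g the second fundamental coefficients of equations (19)-(21):

```python
import math

def principal_curvatures(e, f, g):
    """Eigenvalues of the symmetric 2x2 Weingarten matrix
    W = [[e, f], [f, g]] (the case where x_u and x_v are orthonormal,
    so E = G = 1 and F = 0): the maximum and minimum principal
    curvatures of the surface at the point."""
    mean = (e + g) / 2.0
    dev = math.sqrt(((e - g) / 2.0) ** 2 + f * f)
    return mean + dev, mean - dev
```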
Ridge: k max >|k min| (25)
Ravine: k min <−|k max| (26)
e max =∂k max /∂t max and e min =∂k min /∂t min (27)
Ridges: e max=0, ∂e max /∂t max<0, k max >|k min| (28)
Ravines: e min=0, ∂e min /∂t min>0, k min <−|k max| (29)
It should be noted that if the orientation of the surface is reversed, ridges may be described as ravines and vice-versa.
where t=(t1, t2)T is the principal direction corresponding to the principal curvature k. Further details are provided in Computational Differential Geometry Tools for Surface Interrogation, Fairing, and Design by Yoshizawa, PhD thesis, Saarland University, 2006, MPI Informatik.
k_max(v) > |k_min(v)| for v = v_1, v_2 (31)
e_max(v_1) · e_max(v_2) < 0 (32)
e_max(v_i) · [(v_{3−i} − v_i) · t_max(v_i)] > 0 with i = 1 or 2 (33)
Equation (32) corresponds to whether emax has a zero crossing on the edge e, and equation (33) corresponds to whether emax obtains a maximum on edge e. Based at least in part on these conditions, a linear interpolation may be performed by the computer to determine a zero-crossing of emax on the edge, where such zero-crossing would correspond to a ridge point.
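The linear interpolation step can be sketched as follows (an illustrative helper: given e_max with opposite signs at the edge endpoints, the zero crossing divides the edge at t = e_1 / (e_1 − e_2)):

```python
def zero_crossing(p1, p2, e1, e2):
    """p1, p2: endpoint coordinates of an edge; e1, e2: e_max values at
    those endpoints (assumed e1 * e2 < 0, i.e. a sign change on the
    edge). Returns the linearly interpolated ridge point."""
    t = e1 / (e1 - e2)
    return tuple(a + t * (b - a) for a, b in zip(p1, p2))
```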
If two ridge points are detected on two edges of a triangle of the mesh, the ridge points may be connected by a straight line. If three ridge points are detected on three edges of a triangle of the mesh, the ridge points may be connected by the centroid of the triangle. In this manner, ridge lines may be determined. Ravine points and ravine lines may be detected in the same manner.
ridge vertex: k max >T (35)
ravine vertex: k min <−T (36)
Claims (22)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/631,041 US8965069B2 (en) | 2011-09-30 | 2012-09-28 | Three dimensional minutiae extraction in three dimensional scans |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201161541595P | 2011-09-30 | 2011-09-30 | |
US13/631,041 US8965069B2 (en) | 2011-09-30 | 2012-09-28 | Three dimensional minutiae extraction in three dimensional scans |
Publications (2)
Publication Number | Publication Date |
---|---|
US20140093146A1 US20140093146A1 (en) | 2014-04-03 |
US8965069B2 true US8965069B2 (en) | 2015-02-24 |
US20080101664A1 (en) * | 2004-08-09 | 2008-05-01 | Asher Perez | Non-Contact Optical Means And Method For 3D Fingerprint Recognition |
US8520888B2 (en) * | 2007-04-26 | 2013-08-27 | Bell And Howell, Llc | Apparatus, method and programmable product for identification of a document with feature analysis |
US8600123B2 (en) * | 2010-09-24 | 2013-12-03 | General Electric Company | System and method for contactless multi-fingerprint collection |
Non-Patent Citations (1)
Title |
---|
Wang, Yongchang; Hao, Q.; Fatehpuria, A.; Lau, D. L.; Hassebrook, L. G. "Data Acquisition and Quality Analysis of 3-Dimensional Fingerprints". IEEE Conference on Biometrics, Identity and Security, Florida. (Ridge and valley signatures may also be obtained using a touchless three-dimensional ridge and valley scanner using a digital processing means.) * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160049001A1 (en) * | 2013-06-25 | 2016-02-18 | Google Inc. | Curvature-Driven Normal Interpolation for Shading Applications |
US9965893B2 (en) * | 2013-06-25 | 2018-05-08 | Google Llc. | Curvature-driven normal interpolation for shading applications |
US10585497B2 (en) * | 2014-08-20 | 2020-03-10 | MotinVirtual, Inc. | Wearable device |
CN109840458A (en) * | 2017-11-29 | 2019-06-04 | 杭州海康威视数字技术股份有限公司 | A kind of fingerprint identification method and fingerprint collecting equipment |
Also Published As
Publication number | Publication date |
---|---|
US20140093146A1 (en) | 2014-04-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8965069B2 (en) | Three dimensional minutiae extraction in three dimensional scans | |
Abate et al. | 2D and 3D face recognition: A survey | |
US11625954B2 (en) | Method and apparatus with liveness testing | |
Deschaud et al. | Point cloud non local denoising using local surface descriptor similarity | |
CN103971122B (en) | Three-dimensional face based on depth image describes method | |
JP5018029B2 (en) | Authentication system and authentication method | |
US8655084B2 (en) | Hand-based gender classification | |
Pokrass et al. | Partial shape matching without point-wise correspondence | |
Guo et al. | Multi-pose 3D face recognition based on 2D sparse representation | |
Tang et al. | 3D face recognition with asymptotic cones based principal curvatures | |
US10990796B2 (en) | Information processing apparatus, image processing method and recording medium on which image processing program is recorded | |
Nouri et al. | Multi-scale saliency of 3D colored meshes | |
Zhao et al. | Using region-based saliency for 3d interest points detection | |
Boukamcha et al. | 3D face landmark auto detection | |
US20180047206A1 (en) | Virtual mapping of fingerprints from 3d to 2d | |
Pal et al. | Face recognition using interpolated Bezier curve based representation | |
Fadaifard et al. | Multiscale 3D feature extraction and matching | |
Dihl et al. | A Content-aware Filtering for RGBD Faces. | |
Kulkarni et al. | ROI based Iris segmentation and block reduction based pixel match for improved biometric applications | |
Zhang et al. | Fingerprint orientation field interpolation based on the constrained delaunay triangulation | |
Naruniec | A survey on facial features detection | |
Ramadan et al. | 3D Face compression and recognition using spherical wavelet parametrization | |
Chong et al. | Range image derivatives for GRCM on 2.5 D face recognition | |
Biswas et al. | Extraction of regions of interest from face images using cellular analysis | |
Di Angelo et al. | Point clouds registration based on constant radius features for large and detailed cultural heritage objects |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: UNIVERSITY OF LOUISVILLE RESEARCH FOUNDATION, INC. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:INANC, TAMER;SHAFAEI, SARA;SIGNING DATES FROM 20121007 TO 20121010;REEL/FRAME:029111/0400 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551) Year of fee payment: 4 |
|
FEPP | Fee payment procedure |
Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
LAPS | Lapse for failure to pay maintenance fees |
Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
STCH | Information on status: patent discontinuation |
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
|
FP | Lapsed due to failure to pay maintenance fee |
Effective date: 20230224 |