WO2007033380A2 - Biometric sensing device and method - Google Patents
Biometric sensing device and method
- Publication number
- WO2007033380A2 (PCT/US2006/036217)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- hand
- predetermined
- vein pattern
- image
- support surface
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/12—Details of acquisition arrangements; Constructional details thereof
- G06V10/14—Optical characteristics of the device performing the acquisition or on the illumination arrangements
- G06V10/141—Control of illumination
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/14—Vascular patterns
Definitions
- the live hand should be in a repeatable position with respect to the digital camera inside the scanner housing, so that the vein pattern of the hand can be reliably scanned and compared to the image on the smart card (which would have been produced from the same hand).
- a person would extend either hand into the front opening 110 in the housing 104.
- the palm of the hand should rest on the substantially flat surface 140, the ring and middle fingers should be located between the posts 132, 134, the index and pinkie fingers should be located in recesses 128, 130, and the thumb of the hand will be disposed in a respective one of the recesses 124, 126, depending on whether the hand is a left or right hand.
- the hand will then be supported in a rounded orientation on the support surface 120, and should consistently be oriented in substantially the same position on the support surface, of course depending on whether the hand is a right or a left hand.
- the hand should be consistently in a predetermined position relative to the digital camera, the focus and size of a scanned image of the vein pattern of the hand should be consistent, and comparison of the image of the vein pattern of the backside of the palm with the image on the smart card should be reliable.
- a live hand is supported in a naturally rounded orientation on the support surface, but is not arched. Moreover, because the palm of the hand rests against the substantially flat surface portion 140 of the support surface, the backside of the palm can be oriented substantially parallel to the camera lens. This feature further facilitates obtaining a reliable and useful scanned image. In addition, by supporting the user's hand in a naturally rounded position, a user should feel comfortable in positioning the hand.
- the configuration of the support device 102 is designed to intuitively guide the placement of the user's hand. Feedback to the user while placing his hand can be visually displayed via the LCD.
- pressure sensors along the side and front walls 132a, b, 134a, b of the posts and additional pressure sensor(s) on the substantially flat portion 140 of the support device 102 can provide output that can be used to determine if a hand is properly positioned on the support device, and provide output to aid a person in properly positioning his/her hand, via the LCD.
- the digitized image on the "smart card” is obtained in the same manner as the comparison process - i.e. the user places his/her hand on a similar support device (or the same support device), and a reference image of the vein pattern of the person's hand is obtained via a digital camera.
- the image is digitized then written to the smart card's microchip.
- the digitized image on the "smart card" is then available when the person inserts the smart card in a sensing device, and has his/her hand scanned by the sensing system, and compared with the image on the smart card, to enable the sensing system either to confirm or reject the identity of the person.
- the support device 102 is preferably fixed to the bottom of the housing 104, by means of screws or other connectors that can extend through openings 150 in the support device 102 (see Figure 14) and openings 152 in the housing 104 (see Figure 7).
- the support device may be connected to the housing by other connecting means (e.g. adhesive).
- the housing 104 can be secured to a wall or other support by means of screws or other connectors that extend through openings 160 in the housing 104 (see e.g. Figures 1 and 5).
- the housing can be secured to a wall or other support by other connecting means.
- when the housing 104 is secured to a wall, the back side 108 of the housing would be against (adjacent to) the wall, and the front side 106 would be spaced from the wall, with the front opening 110 positioned to allow a person's hand to be conveniently inserted through the front opening and onto the support device 102.
- the operating environment of the sensing device of the present invention may be an environment that is exposed to ambient light.
- the present invention is additionally designed to obtain an evenly illuminated image of the vein pattern on the back of a human hand, in an environment that is exposed to ambient light.
- the technique for obtaining an evenly illuminated image of a vein pattern, in an environment exposed to ambient light, according to the principles of the present invention, is as follows:
- the system used to obtain an evenly illuminated object image consists of the high speed digital camera 160, a camera lens, lights, and the computer 170.
- the computer requires a program and specifically an algorithm to calculate the amount of light required for even illumination.
- the preferred embodiment of the foregoing illumination concept can be appreciated from Fig 20.
- the camera 160 is attached to the circuit board 114 that is planar and parallel to the object being imaged.
- the lights 162 are arranged in columns to either side of the camera 160.
- the computer 170 directs the camera 160 to snap a picture.
- the picture consists of a two dimensional array of pixels that when viewed as a whole, constitute an image.
- the computer then dissects the image according to the number of columns of lights on the camera board. For example, if there are 4 columns of illumination lights, each column consisting of 4 lights, there are a total of 16 lights. Each column of lights can be controlled independently by the computer. Thus, the computer separates, or dissects, the image into four equal sections with each section falling under one column of lights.
- the computer uses an algorithm to determine the average illumination of each section.
- the average illumination is determined by creating a Greylevel Histogram of each section.
- a Greylevel Histogram of an image gives the greylevel distribution of the pixels within the image.
- the histogram of an image is defined as a set of M numbers (the number of possible grey levels) defining the percentage of an image at a particular grey level value.
- the histogram of an image is defined as h(i) = n_i / n_t, for i = 0, 1, ..., M - 1, where n_i is the number of pixels within the image at the i-th grey level value and n_t is the total number of pixels in the image. Note that the image does not need to be a grey scale (256 possible intensity levels) image, but for this discussion, it is assumed to be.
- the computer 170 will direct the camera 160 to snap another image. That image is also histogrammed and the lighting adjusted, with this sequence repeated until an acceptable level of illumination uniformity is obtained, or it is determined that there is no possible way, given the constraints of the ambient light conditions and the system components, that an acceptable image can be obtained. This is a failure condition and the system will abort the imaging session. (A C sketch of this feedback loop is given at the end of this section.)
- FIGS 21-25 illustrate the manner in which the principles of the present invention can be implemented in an environment that is at least partially exposed to ambient light.
- a housing 104 can be configured in the form of a hood like structure, that is designed so that when a hand is placed under the hood, some ambient light is shielded from the hand, but the environment under the hood is such that the presence and location of the hand is detected, and scanning of a hand takes place in the presence of at least some ambient light.
- the components in the hood 104 are the same as those described above in connection with Figures 1-14 (e.g. see paragraph 0056 for a description of components that would be contained in the hood 104 of Figure 21).
- the hood 104 carries a slot for a smart card with an image of a person's hand that is used as a reference with which a live image of a person's hand can be compared, in the same manner as described above.
- the components under the hood, e.g. the scanning digital camera 160, circuit board 114, power supply, and computer 170 (Figure 22), are generally similar to the components described above.
- the components include a laser diode 180 adjacent the camera 160.
- the camera 160 has a lens and a light filter, and the laser diode has a diffuser, as described further below.
- the digital camera detector is powered by the power supply, and communicates with the computer via a parallel or serial bus.
- the digital camera 160 can be monochrome or color, but the preferred embodiment is monochrome.
- the laser diode 180 is also powered by the power supply and its light output can be varied from full output to off by the computer 170.
- the computer may or may not be powered by the same power supply but it does communicate with both the laser diode and the camera sensor.
- the computer runs a program which will be described later.
- the digital camera 160 obtains its images thru a lens. The lens is selected by criteria which will be described later.
- the laser diode 180 emits its light thru a diffuser lens which will diffuse the output light in a single dot (see dot 200 in Figures 24, 25), an X by Y square grid pattern (202 in Figure 25), or both, depending on the intensity of laser light programmed for output by the computer.
- the digital camera 160 is mounted such that it "sees” its image normal to its mounted surface.
- the laser diode 180 is mounted adjacent to the camera and also projects its light normally to the mount surface. The laser projects its light so that the camera images the light grid in the center of its field of vision at the camera lens optimum focal distance.
- the camera lens is covered with a filter that blocks ambient light at wavelengths shorter than the laser diode's output wavelength.
- use of a laser diode with an emission spectrum of 660-665 nm indicates use of a camera lens filter that blocks light at wavelengths below 660 nm. The reason is that the camera 160 need only see the laser grid wavelength, as well as the near-infrared light above it that is used when illuminating the hand for vein imaging.
- the laser diode 180 emits a cone of light, the diameter of which varies with distance: the farther from the laser, the greater the diameter.
- when the grid is projected, the cone is instead an ever expanding grid pattern; i.e. the farther from the laser, the longer each side of the grid square. Therefore, if the grid side length corresponding to a given distance is known, then the distance from the camera can be calculated.
- the first step is to be certain an interesting object is in the camera's Field Of View (FOV).
- the laser's output power is varied by the computer. At low power, the laser's output is seen as an intense dot 200 (Figure 24). As output power is increased, the grid 202 becomes visible and the dot 200 becomes a large, intense blur in the geographical center of the grid (see Figure 25). Since the imaging process can take place where ambient light containing near and true infrared light is present, the device must try to ensure that its artificial indicator, the single intense laser dot 200, is present in the FOV. Therefore, following the initial stimulus to begin the scanning process, the computer commands the laser 180 to emit the single dot 200, a finite circular spot of light, instead of a grid 202 (Figure 25). Then the computer commands the camera 160 to capture an image.
- the dot 200 will be of the highest luminance in the image.
- the computer therefore scans the image pixels for the dot's expected luminance.
- the dot will be of known size in pixel dimensions at the optimum camera lens' focal distance. Even when luminances of the dot's expected value appear, unless the shape of the pixel pattern is closely similar to the dot's template, the computer will not assume it has found the dot.
- the sequence laser dot, camera image, computer scan image for dot, continues for a specified period of time. After that timeout, the sequence is terminated until another stimulus is given.
- the laser 180 is then commanded to raise its light output power, effectively turning on the grid (shown in Figure 25). Again the camera 160 obtains an image. This time the computer 170 scans the image for the grid pattern 202. Before the computer scans the image, the image is thresholded such that the grid pattern appears as black lines on a white background. The grid's outer side lengths are determined in a pixel count, and compared to the expected length preset in the computer's program.
- the computer assumes the object (e.g. the back of the person's hand) is within range and begins the hand scan process. That hand scan process utilizes the illumination techniques described above for illuminating the back of the hand and capturing an image of the hand (at least partially in ambient light), and the additional techniques described above for comparing the scanned image of the vein pattern with the reference vein pattern obtained from the smart card.
- the optimum focus distance of the camera is fixed within a given range, and by adjusting the laser diode's lens to match that focal length, the diffuser's grid pattern is in focus at the camera lens' optimum focal length. Therefore, as illustrated in Figure 25, if a camera image is obtained when an object is placed at the optimum focal length from the camera, and the laser grid 202 is also visible on the object, the grid side length distances can be obtained in image pixels. Hence, if the given camera lens' optimum focal length is of distance x to x', then an object is in focus to that lens when it is within the distance of x to x'. The grid side length at x is then recorded as xGL, and the side length at x' is stored as xGL'. (A C sketch of this range test is given at the end of this section.)
- applicant's method provides for scanning the vein pattern of a human hand at least partially in the presence of ambient light.
- the back of a human hand and a scanning camera are moved relative to each other, in an environment that is at least partially exposed to ambient light, until the hand is detected (e.g. by the digital camera sensor described above); when the hand is detected, the scanning camera is operated in that environment to (i) illuminate the back of the hand so that an image of the vein pattern of the human hand can be captured by the camera, and (ii) extract information from the imaged vein pattern that can be compared with information extracted from a reference vein pattern of a human hand to determine if the imaged vein pattern of the human hand matches the reference vein pattern.
- the hood 104 provides a shield against some ambient light from an undesirable direction and otherwise allows the scanning to take place at least partially in the presence of ambient light.
- the shape and size of the hood 104 can be varied, in accordance with the amount of light shielding that is needed or desired.
- a sensing device and method of the present invention also has the following additional characteristics:
- the sensing device is designed to communicate with existing security systems through the internal Wiegand 26 bit port.
- the sensing device can replace legacy prox card security installations by simply replacing the existing prox reader (any reader - Mifare, HID, etc. - that supports the Wiegand 26-bit protocol).
- the sensing device uses IR LEDs, and is preferably powered by an external 12V, 500mA electrical supply.
- the device interfaces to a security system bus via a Wiegand 2-wire port.
- the backlit liquid crystal display (LCD) on the top of the device enables a user to view instructions on how to proceed.
- the current version of the sensing device incorporates an HID iCLASS 13.56MHz smart card reader.
- the rigor of the comparison algorithm described above in connection with Figures 15-19 is adjustable by the installer using a 4 position DIP switch accessible from the back of the device. Settings range from LOW - NORMAL - ENHANCED - HIGH. The user is informed of the comparison result with a literal indication on the LCD screen, and by a blinking colored light on the front of the unit. If a successful match is obtained, the device behaves the same as any standard 26 bit prox card reader and emits a Wiegand stream to the system bus. It can also be configured as a standalone device and incorporates a 50 mA, 12 VDC open-collector input for dedicated door lock control.
- the sensing device is designed to be weatherproof, in that the external switches, displays and surfaces of the unit are not affected by water or sun.
- the sensing device mounts to a vertical surface using two %" fasteners.
- Power and communication wiring enter the back of the unit through a slot and attach to the circuit board with a screw type jumper block.
- An "operating environment” is an environment in which some operation is performed relative to the hand or relative to an object.
- the preferred operating environment is in a scanning system in which the back of the hand is scanned and used in a biometric identification system.
- the principles of the present invention should be applicable in such an environment.
- a (surgical) operating environment where comfortable positioning of a human hand, in a desirable position for an operating surgeon, may be more important than repetitive positioning of a hand in a biometric scanning system
- the principles of the present invention would provide a support member with a three dimensional rounded support surface configured to receive and support a human hand, with the hand in a rounded position and the back side of the palm in a predetermined position in the operating environment.
- the posts and/or recesses of the support surface, along with the substantially flat surface, would enable a human hand that is supported on the support surface to be comfortably positioned in a predetermined position in the operating environment.
- the principles of the present invention may also be applied to an operating environment in which it is desirable to detect an object.
- the principles of the invention can be used to project over a predetermined field a laser beam in a first known pattern that is different from a pattern that would naturally occur on the object, seeking to detect a reflection of the first known pattern within the field to determine if such an object is located within the field, and, if such an object is determined to exist within the field, projecting within the field a laser beam of a second known pattern with a known size, and seeking to detect a reflection of the second known pattern to determine the location of the object within the field.
- the techniques described above for imaging the back of a hand at least partially in the presence of ambient light can also be used to image the object.
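Two brief C sketches follow, tied to the techniques described in the bullets above. The first illustrates the even-illumination feedback loop: the image is split into one vertical section per column of infrared LEDs, each section's mean grey level is taken from its grey-level histogram, and the corresponding LED column is nudged up or down until the sections are acceptably uniform or the attempt limit is reached. The hardware calls set_led_column_power() and capture_image() are placeholders for whatever interfaces the actual design exposes, and the image size, target level and tolerance are illustrative assumptions, not values taken from the patent.

```c
#include <stdint.h>

#define IMG_H     200
#define IMG_W     300
#define N_COLUMNS 4          /* one image section per column of LEDs */

/* Placeholders for the real hardware interfaces (assumed, not from the patent). */
extern void set_led_column_power(int column, int power);
extern int  capture_image(uint8_t img[IMG_H][IMG_W]);

/* Mean grey level of one vertical section, derived from its grey-level histogram. */
static double section_mean(const uint8_t img[IMG_H][IMG_W], int section)
{
    unsigned long hist[256] = { 0 };
    int c0 = section * (IMG_W / N_COLUMNS);
    int c1 = c0 + (IMG_W / N_COLUMNS);

    for (int r = 0; r < IMG_H; r++)
        for (int c = c0; c < c1; c++)
            hist[img[r][c]]++;

    double total = (double)IMG_H * (IMG_W / N_COLUMNS);
    double mean = 0.0;
    for (int g = 0; g < 256; g++)
        mean += g * (hist[g] / total);
    return mean;
}

/* Capture, measure each section, adjust its LED column toward the target,
   and repeat until uniform enough or the attempt limit is hit (failure). */
static int balance_illumination(uint8_t img[IMG_H][IMG_W], double target,
                                double tol, int max_tries, int power[N_COLUMNS])
{
    for (int t = 0; t < max_tries; t++) {
        if (!capture_image(img))
            return 0;
        int ok = 1;
        for (int s = 0; s < N_COLUMNS; s++) {
            double m = section_mean(img, s);
            if (m < target - tol)      { power[s]++; ok = 0; }
            else if (m > target + tol) { power[s]--; ok = 0; }
            set_led_column_power(s, power[s]);
        }
        if (ok)
            return 1;    /* acceptably uniform illumination */
    }
    return 0;            /* failure condition: abort the imaging session */
}
```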
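The second sketch is the range test built on the laser grid: the grid's outer side length is measured in pixels in a thresholded image and accepted when it lies between the side lengths xGL and xGL' recorded at the two ends of the camera lens' focal range. The measurement here is deliberately naive (it scans a single row near the image centre for the leftmost and rightmost black grid pixels); a real implementation would locate the grid more robustly.

```c
#include <stdint.h>

#define IMG_H 200
#define IMG_W 300

/* Crude outer side length of the thresholded grid (grid lines are black, 0):
   span from the leftmost to the rightmost black pixel on the centre row. */
static int grid_side_length_px(const uint8_t img[IMG_H][IMG_W])
{
    int row = IMG_H / 2, first = -1, last = -1;
    for (int c = 0; c < IMG_W; c++)
        if (img[row][c] == 0) {
            if (first < 0)
                first = c;
            last = c;
        }
    return (first < 0) ? 0 : (last - first + 1);
}

/* The object is taken to be within the lens's focal range when the measured
   grid side length lies between xGL (at distance x) and xGL' (at x'). */
static int object_in_range(const uint8_t img[IMG_H][IMG_W], int xgl, int xgl_prime)
{
    int len = grid_side_length_px(img);
    int lo  = (xgl < xgl_prime) ? xgl : xgl_prime;
    int hi  = (xgl < xgl_prime) ? xgl_prime : xgl;
    return len >= lo && len <= hi;
}
```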
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
- Image Input (AREA)
- Collating Specific Patterns (AREA)
Abstract
A new and useful apparatus and technique are provided for supporting a human hand in an operating environment, particularly a biometric identification system, in which imaging of the back of a person's hand, particularly the back of the person's palm, is used as the biometric identifier. Imaging the vein pattern on the back of a human hand in an operating environment that can be at least partially in the presence of ambient light, enables a biometric feature of the person's hand (e.g. the vein pattern of the person's hand) to be compared to a previously recorded digitized image of the person's hand, in order to identify the person. The present invention provides structure and techniques for imaging a human hand, with the back side of the palm facing a predetermined portion of the operating environment (e.g. the imaging device of the scanning system), and at least partially in the presence of ambient light.
Description
Title: Biometric Sensing Device And Method
Inventor: Robert K. Pira
Related Applications/Claims of Priority
This application is a continuation-in-part of, and claims priority from, application serial number 11/041,677, filed January 24, 2005 and entitled "Structure for Supporting a Human Hand in an Operating Environment", which application is incorporated by reference herein. This application is also related to and claims priority from provisional application serial number 60/717,975, filed Sept 16, 2005, and from provisional application serial number 60/815,729, filed June 22, 2006. Both provisional application serial numbers 60/717,975 and 60/815,729 are incorporated herein by reference.
Background
[0001] The present invention relates to a biometric scanning system for identifying a vein pattern on a human hand.
[0002] Biometric identification is based on a paradigm in which a feature of a person is scanned, recorded and used as an identifier for the person. For example, biometric identification has been based on external characteristics such as facial recognition, voice recognition, fingerprint recognition, retinal scanning, etc.
[0003] The present invention is based on the further recognition by the applicant that biometric identification of a human can also be based on vein pattern recognition, where the vein pattern in a person's hand, particularly the back side of the person's palm, can be used as the identifier.
Summary Of The Present Invention
[0004] The present invention provides a new and useful apparatus and technique for a biometric identification system in which imaging of a person's hand, particularly the vein pattern from the back of the person's hand, is used as the biometric identifier. Thus, a live image of the vein pattern of the back of a person's hand can be compared
to a previously recorded digitized image of the vein pattern of the back of the person's hand, in order to identify the person.
[0005] In one of its embodiments, the present invention provides a support structure that preferably positions a person's hand in a predetermined relation to the scanning system and a technique for scanning the vein pattern on a person's hand and using that scan to identify the person.
[0006] In addition, the present invention also provides a new and useful technique for comparing the scanned vein pattern, with a reference vein pattern, to determine whether the scanned vein pattern matches the reference vein pattern.
[0007] Still further, the present invention is designed to scan the vein pattern of a person's hand, in an environment that may be at least partially exposed to ambient light.
[0008] Moreover, in other embodiments, the present invention provides a scanning device and method that can be used to scan the vein pattern on the back of a person's hand, in an environment that is at least partially exposed to ambient light, and also in a manner that does not require a specific support device for the person's hand. The device and method are designed to detect the presence and location of a person's hand within the field of view of a scanning camera, and then using the scanning camera to capture an image of the vein pattern on the back of the person's hand, in an environment that is at least partially exposed to ambient light.
[0009] Thus, a biometric scanning system according to the present invention provides new and useful apparatus and methods for identifying a human vein pattern, by imaging the back of the human hand, and using the imaged data to identify the human associated with that hand.
[0010] Further features of the present invention will be apparent from the following detailed description and the accompanying drawings and Exhibits.
Brief Description of The Drawings and Exhibits
[0011] Figure 1 is a schematic three dimensional illustration of one embodiment of a biometric sensing system with a support device according to the principles of the present invention;
[0012] Figure 2 is a schematic front view of the sensing system of Figure 1;
[0013] Figure 3 is a schematic rear view of the sensing system of Figure 1;
[0014] Figure 4 is a schematic right side view of the sensing system of Figure 1, with portions broken away;
[0015] Figure 5 is a schematic left side view of the sensing system of Figure 1;
[0016] Figure 6 is a schematic top view of the sensing system of Figure 1;
[0017] Figure 7 is a schematic bottom view of the sensing system of Figure 1 ;
[0018] Figure 8 is a schematic three dimensional illustration of a support device according to the principles of the present invention;
[0019] Figure 9 is a schematic front view of the support device of Figure 8;
[0020] Figure 10 is a schematic rear view of the support device of Figure 8;
[0021] Figure 11 is a schematic left side view of the support device of Figure 8;
[0022] Figure 12 is a schematic right side view of the support device of Figure 8;
[0023] Figure 13 is a schematic top view of the support device of Figure 8; and
[0024] Figure 14 is a schematic bottom view of the support device of Figure 8;
[0025] Figure 15 is an illustration of a raw live image that is captured by a camera, in a system and method according to the principles of the present invention;
[0026] Figure 16 is a schematic illustration of a thresholded and thinned live image, as provided in a system and method according to the principles of the present invention;
[0027] Figure 17 is a schematic illustration of an expanded recorded image of a vein pattern, that can be used in a system and method according to the principles of the present invention;
[0028] Figure 18 is a schematic illustration of the manner in which a live image can be compared with a recorded image, according to the principles of the present invention;
[0029] Figure 19 is a flow chart, illustrating the steps in comparing a live image with a recorded image, according to the principles of the present invention;
[0030] Figure 20 is a schematic illustration of the manner in which the vein pattern on the back of a person's hand can be illuminated in the presence of ambient light, and an image captured by a camera, according to the principles of the present invention;
[0031] Figure 21 is a schematic illustration of a modified hood that can be used in sensing the presence and location of a person's hand, and capturing an image of the back of the person's hand, at least partially in the presence of ambient light, according to the principles of the present invention;
[0032] Figure 22 schematically illustrates, in top and side view, certain components associated with the hood of Figure 21; and
[0033] Figures 23-25 schematically illustrate how the components of Figures 21 and 22 are used to sense the presence and location of a person's hand, within the field of view of a camera, in a device and method according to the present invention.
[0034] Exhibits A-B are drawings of the sensing system of Figures 1-7;
[0035] Exhibits C-I are drawings of the support device of Figures 8-14;
[0036] Exhibit J is a description of pseudocode that is used in the comparison technique of Figures 15-19.
Detailed Description
[0037] As described above, the present invention provides a biometric scanning system for supporting a human hand, scanning the vein pattern of the human hand, and then comparing the vein pattern to a stored vein pattern, for use in identifying the human associated with the vein pattern.
Initial Description of Device of Figures 1-14
[0038] Figures 1-7 schematically illustrate a biometric hand scanning system 100, with a hand support 102 according to one embodiment of the present invention. The hand scanning system 100 includes a housing 104 with a front side 106 and a rear side 108 (Figure 4). The housing 104 is configured to be supported on a wall with the rear side 108 against the wall, and the front side 106 away from the wall. The front side 106 includes a front opening 110 (Figures 1-4) through which a human hand is extended, so that the hand can be supported on the hand support 102. When a hand is supported on the hand support, the backside of the palm of the hand is facing the scanning component of the scanning system. When the backside of the palm of the hand is scanned, e.g. through use of a digital camera inside the scanner housing 104, an image of the vein pattern on the backside of the palm of the hand is obtained. The scanned digitized image is compared to a digitized image of a vein pattern obtained from data on a "smart card" the user presents to the scanner. A smart card is industry terminology describing a credit card with a microchip and antenna laminated inside. The microchip is capable of storing and emitting information through the antenna when it encounters a magnetic field. The smart card is an example of a data storage device. The scanner then programmatically compares the images and outputs the result as a match or not a match.
Vein Pattern Capture and Comparison Technique (Figures 15-19)
[0039] The manner in which the scanner captures and compares the image and outputs the result is shown and described in Figures 15 through 25.
[0040] Specifically a digital image is captured of the subcutaneous vein pattern on the back of a human hand (see Figure 15) and that image is parsed algorithmically to determine its similarity with a recorded image of the alleged same hand.
[0041 ] The system of capture (which is described herein for illustrative purposes only) includes a digital CMOS monochrome camera 160 (Figures 20, 22, 23, e.g. OmniVision Technologies OV7141), infrared LED illumination devices 162 (Figure 20, e.g. Osram Opto Semiconductors SFH415), a digital signal processor, e.g. the computer shown schematically at 170 in Figure 22, (e.g. Texas Instruments TMS320C6713), grey level histogram, thresholding and thinning algorithms (such as those found in "The Pocket Handbook of Image Processing Algorithms in C", by Myler and Weeks, Prentice Hall, 1993), 2 MB static RAM memory, a smart card reader (e.g. HID Corporation OEM300), a power supply plus the hand support described in the detailed description to which this exhibit is appended. The pseudocode representation of the type of thresholding and thinning algorithms referenced above are shown in Exhibit J hereto.
[0042] The first step is to obtain the recorded image. The user presents his smart card containing his recorded vein image to the smart card reader and the recorded data therein is transferred to local memory by the processor. Then the user places his hand on the hand support 102 (or within the hood 104 shown in Figure 21) so the digital camera 160 can obtain a picture of the user's hand. Finally, the live digital image is moved into memory by the processor 170.
[0043] The live digital image (see Figure 15) is grey-scale, that is, each pixel is assigned a value from 0 to 255, with 255 being absolute white, and 0 being black. The image must be transformed to emphasize the veins so they appear darker than the surrounding background. First, the image is algorithmically histogrammed in order to obtain the distribution of grey levels in the image. Then by using the histogram information, an intelligent decision can be made as to where the darker regions (the
veins) appear as distinct from the background. That breakpoint is usually an inflection point in the histogram distribution. Using that inflection point, the image is now thresholded. Thus, any pixel having a value ABOVE the breakpoint number is assumed background, and as such is reassigned the number 255 (pure white), and any pixel at or below that number is assigned the number 0 (absolute black). The image is now a thresholded, binarized version of the original image, i.e., every pixel in the original image is now either 0 or 255.
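The patent does not fix a particular thresholding routine (Exhibit J gives only pseudocode, and the cited handbook supplies the algorithms); the following C sketch simply illustrates the two operations just described, assuming an 8-bit greyscale image held in a flat buffer. How the breakpoint is chosen from the histogram's inflection point is left to the caller.

```c
#include <stddef.h>
#include <stdint.h>

#define GREY_LEVELS 256

/* Build the grey-level histogram of an 8-bit image. */
static void grey_histogram(const uint8_t *img, size_t n_pixels,
                           unsigned long hist[GREY_LEVELS])
{
    for (int g = 0; g < GREY_LEVELS; g++)
        hist[g] = 0;
    for (size_t p = 0; p < n_pixels; p++)
        hist[img[p]]++;
}

/* Threshold in place: pixels above the breakpoint become 255 (background),
   pixels at or below it become 0 (vein). */
static void threshold(uint8_t *img, size_t n_pixels, uint8_t breakpoint)
{
    for (size_t p = 0; p < n_pixels; p++)
        img[p] = (img[p] > breakpoint) ? 255 : 0;
}
```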
[0044] The thresholded image, now being binarized, consists of dark and white areas, dark indicating vein patterns and white indicating background. The vein areas are usually many pixels wide, looking like the stripes on a zebra. However, those stripes are wide, and should be thinned down to 1 pixel wide lines in order to process them quickly and store them efficiently. Therefore, the image is now morphologically thinned so that the thresholded vein pattern consists simply of one-pixel wide lines, the lines being the vein pattern (see Figure 16). The image is now ready for comparison to the recorded image that was retrieved from the user's smart card, because the image on the smart card was created in the same way and thus represents a thresholded, thinned binary image.
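The specific thinning algorithm is left to the cited handbook and the Exhibit J pseudocode; purely as an illustration, a classical two-subiteration (Zhang-Suen style) thinning could be used. The sketch below assumes a 200 x 300 image (the size used in the Step 2 example below) that has been remapped so that vein pixels are 1 and background pixels are 0; the 0/255 convention used elsewhere would be converted before and after.

```c
#include <stdint.h>
#include <string.h>

#define IMG_H 200
#define IMG_W 300

/* One Zhang-Suen subiteration over a binary image (1 = vein, 0 = background).
   Returns nonzero if any pixel was removed. */
static int thin_pass(uint8_t img[IMG_H][IMG_W], int second_pass)
{
    static uint8_t mark[IMG_H][IMG_W];
    int changed = 0;
    memset(mark, 0, sizeof mark);

    for (int r = 1; r < IMG_H - 1; r++) {
        for (int c = 1; c < IMG_W - 1; c++) {
            if (!img[r][c])
                continue;
            /* neighbours P2..P9, clockwise starting due north */
            int p[8] = { img[r-1][c], img[r-1][c+1], img[r][c+1], img[r+1][c+1],
                         img[r+1][c], img[r+1][c-1], img[r][c-1], img[r-1][c-1] };
            int b = 0, a = 0;
            for (int k = 0; k < 8; k++) {
                b += p[k];
                if (!p[k] && p[(k + 1) % 8])
                    a++;                       /* count 0 -> 1 transitions */
            }
            int cond = second_pass
                ? (p[0] * p[2] * p[6] == 0 && p[0] * p[4] * p[6] == 0)
                : (p[0] * p[2] * p[4] == 0 && p[2] * p[4] * p[6] == 0);
            if (b >= 2 && b <= 6 && a == 1 && cond) {
                mark[r][c] = 1;
                changed = 1;
            }
        }
    }
    for (int r = 0; r < IMG_H; r++)
        for (int c = 0; c < IMG_W; c++)
            if (mark[r][c])
                img[r][c] = 0;
    return changed;
}

/* Repeat the two subiterations until no pixel is removed. */
static void thin(uint8_t img[IMG_H][IMG_W])
{
    int changed;
    do {
        changed  = thin_pass(img, 0);
        changed |= thin_pass(img, 1);
    } while (changed);
}
```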
[0045] The method of determination (also shown schematically in Figure 19) is as follows
[0046] Step 1
[0047] When using the embodiment of Figures 1-14 (with hand positioning device
102) the imaging area on the hand is assumed to be consistent from capture to capture since the hand support as described herein ensures the hand is in a consistent position with respect to the camera. Therefore, no image alignment is necessary.
[0048] However, there may be small variances in hand placement (<1/4") because the fleshy part of the palm allows the hand to move slightly on the hand support. So the Recorded image's vein pattern is expanded to a predetermined width, i.e., it is 'reverse thinned', that is, the vein image lines are padded with additional 0 (black)
pixels to either side. The resulting image will appear as the original pattern but with wider lines (see Figure 17).
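A sketch of this 'reverse thinning' step in C, under the same 200 x 300 and 0/255 conventions; the pad width of 3 pixels per side is an illustrative assumption, since the text only says the lines are padded to a predetermined width.

```c
#include <stdint.h>
#include <string.h>

#define IMG_H 200
#define IMG_W 300
#define PAD   3    /* assumed pad width per side, not specified by the patent */

/* Widen each black (0) vein pixel of the thinned Recorded image by PAD
   black pixels to either side, producing the wider lines of Figure 17. */
static void reverse_thin(const uint8_t in[IMG_H][IMG_W], uint8_t out[IMG_H][IMG_W])
{
    memcpy(out, in, (size_t)IMG_H * IMG_W);
    for (int r = 0; r < IMG_H; r++)
        for (int c = 0; c < IMG_W; c++)
            if (in[r][c] == 0)
                for (int d = -PAD; d <= PAD; d++) {
                    int cc = c + d;
                    if (cc >= 0 && cc < IMG_W)
                        out[r][cc] = 0;
                }
}
```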
[0049] Step 2
[0050] The Recorded image (Figure 17) is compared pixel by pixel with the thinned
Live image of Figure 16. For example, each image may be of the size 200 pixels high by 300 pixels wide. The first pixel is designated 0,0 (row, column) and is in the upper left corner.
[0051] First, the pixels valued at 0 (black) in the unpadded (original, non-reverse-thinned) Recorded image are counted. This is called the ORIGINAL COUNT. Then the pixels in both images that correspond to each other are compared. Thus, the pixel value in the Recorded image at 0,0 is compared to the Live image's pixel 0,0. If both pixels are 0, a "HIT" is tallied. Next, the pixel to the right of pixel 0,0 is checked. If one or both are 255 (white, or background), we don't record a "HIT." All pixels, row by row, are thus checked. Now the ratio of HITS to ORIGINAL COUNT (HITS/ORIGINAL COUNT) is calculated. If it is above a predetermined percentage, the images are assumed to be similar enough for the user to be the same person who stored the image on the smart card.
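The Step 2 bookkeeping might look like the following C sketch; the acceptance percentage is left to the caller because the patent treats it as a predetermined, adjustable value.

```c
#include <stdint.h>

#define IMG_H 200
#define IMG_W 300

/* ORIGINAL COUNT: number of black (0) pixels in the unpadded Recorded image. */
static unsigned long original_count(const uint8_t recorded_thin[IMG_H][IMG_W])
{
    unsigned long n = 0;
    for (int r = 0; r < IMG_H; r++)
        for (int c = 0; c < IMG_W; c++)
            if (recorded_thin[r][c] == 0)
                n++;
    return n;
}

/* HITS / ORIGINAL COUNT: compare the padded Recorded image with the thinned
   Live image pixel by pixel, tallying a HIT wherever both are black. */
static double hit_ratio(const uint8_t recorded_padded[IMG_H][IMG_W],
                        const uint8_t live[IMG_H][IMG_W],
                        unsigned long orig_count)
{
    unsigned long hits = 0;
    for (int r = 0; r < IMG_H; r++)
        for (int c = 0; c < IMG_W; c++)
            if (recorded_padded[r][c] == 0 && live[r][c] == 0)
                hits++;
    return orig_count ? (double)hits / (double)orig_count : 0.0;
}
```

A match would then be declared when hit_ratio(...) meets or exceeds the predetermined percentage.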
[0052] Step 3
[0053] If the resulting number of HITS counted in Step 2 is not enough to reach the predetermined percentage, another comparison is made of the two images. This time, the Live image's pixels are first shifted to the LEFT by a predetermined number of pixels so the entire Live image appears to be shifted left. Thus, when the Recorded image's pixel at 1,8 is checked, the Live image's pixel 1,9 is examined. The comparison as described in Step 2 is now performed, and if the result is still not good enough to pass, the Live image is shifted further LEFT. Once the Live image is shifted to the left a predetermined number of times, and no match has been confirmed, the Live image is shifted to the RIGHT a number of times from the original position (see Figure 18) and the comparison performed after each shift.
Again, if the HIT count still does not reach the acceptable percentage, further shifting is executed. If still no success, the Live image is shifted UP then DOWN a number of times while performing the comparison.
[0054] Finally, if the percentage is still not reached, the Live image is ANGLED to the left several times, varying from a negative predetermined degree angle to a positive predetermined degree angle, with the pivot point being the geographic CENTER of the image because the line on which the user's wrist is positioned is located there, and the wrist can move to the left or right slightly while the fingers are held fixed by the hand support's vertical ribs. Then the Live image is ANGLED to the right several times. Each time the image is shifted, the comparison described in Step 2 is performed.
[0055] If no comparison gives a percentage HIT value higher than the predetermined level, the comparison operation is considered a failure. Note that the Recorded image is never shifted or angled since it is assumed when its image was obtained, the user was in his most comfortable position - one that would be natural for him to recreate every time he places his hand on a hand support.
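The Step 3 translation search can be sketched in C as below; MAX_SHIFT and REQUIRED_FRACTION are assumed tuning values (the patent only calls them predetermined), and the angled comparisons would be added in the same style by rotating the Live image's coordinates about the image centre before sampling.

```c
#include <stdint.h>

#define IMG_H 200
#define IMG_W 300
#define MAX_SHIFT         8      /* assumed maximum shift in pixels */
#define REQUIRED_FRACTION 0.70   /* assumed acceptance threshold */

/* Sample the Live image; anything shifted in from outside is background. */
static uint8_t live_at(const uint8_t live[IMG_H][IMG_W], int r, int c)
{
    if (r < 0 || r >= IMG_H || c < 0 || c >= IMG_W)
        return 255;
    return live[r][c];
}

/* Step 2 comparison with the Live image offset by (dr, dc); for a LEFT shift
   of the Live image, Recorded pixel (r, c) is compared with Live pixel
   (r, c + 1), i.e. dc = +1. */
static double shifted_hit_ratio(const uint8_t rec[IMG_H][IMG_W],
                                const uint8_t live[IMG_H][IMG_W],
                                unsigned long orig_count, int dr, int dc)
{
    unsigned long hits = 0;
    for (int r = 0; r < IMG_H; r++)
        for (int c = 0; c < IMG_W; c++)
            if (rec[r][c] == 0 && live_at(live, r + dr, c + dc) == 0)
                hits++;
    return orig_count ? (double)hits / (double)orig_count : 0.0;
}

/* Try the unshifted position, then LEFT, RIGHT, UP and DOWN shifts of
   increasing size; returns 1 as soon as the required fraction is reached. */
static int search_match(const uint8_t rec[IMG_H][IMG_W],
                        const uint8_t live[IMG_H][IMG_W],
                        unsigned long orig_count)
{
    if (shifted_hit_ratio(rec, live, orig_count, 0, 0) >= REQUIRED_FRACTION)
        return 1;
    for (int s = 1; s <= MAX_SHIFT; s++) {
        int offs[4][2] = { {0, s}, {0, -s}, {s, 0}, {-s, 0} };
        for (int k = 0; k < 4; k++)
            if (shifted_hit_ratio(rec, live, orig_count,
                                  offs[k][0], offs[k][1]) >= REQUIRED_FRACTION)
                return 1;
    }
    return 0;   /* the angled comparisons would be tried here before failing */
}
```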
Further Features of Device of Figures 1-14
[0056] Additional components of the scanning system 100 include a card reader
112, a processor board 114 that includes the camera, a protective cover lens 116 associated with the camera, and a liquid crystal display (LCD) 118 on the top of the housing. The card reader 112, processor board 114, and protective cover lens 116 carry out the overall scanning functions of the scanning system, as described above. The LCD 118 provides a visual aid in assisting a user in properly inserting the user's hand into the housing. Such features are also not part of the present invention, and are described to further illustrate the environment in which the present invention is used, to facilitate an understanding of the principles of the present invention.
[0057] Figures 8-14 illustrate the configuration of a hand support 102 that is configured according to the principles of the present invention. The hand support 102
has a front side 102a and a rear side 102b (Figures 11, 12, 13). The front side 102a is oriented closest to the front side 106 of the housing. The rear side 102b is oriented toward the rear side 108 of the housing.
[0058] The hand support 102 has a three dimensional rounded support surface 120, which is configured to receive and support a human hand, with the hand in a rounded position and the back side of the palm facing a predetermined portion of the scanning system 100. In this application, "rounded" support surface means a surface configured such that a cross section taken through the surface will have a circular, curved or complex curved configuration (meaning a plurality of curves that may or may not have the same curvature, and which may also include one or more straight lines). Moreover, a "rounded position" for a hand means a relatively natural rounded or curved position of the palm of a hand when the hand is resting on a ball or some other rounded object.
[0059] The support surface 120 includes at least one positioning device that is configured to enable a human hand to be positioned in a substantially similar position in relation to the scanning system each time the hand is supported on the support surface. In the illustrated preferred embodiment the positioning device comprises recesses 124, 126, 128, 130 in the support surface, and a pair of spaced apart posts 132, 134 extending outward from the support surface. The recesses and posts are preferably formed in one piece with the support surface (e.g. the hand support 102, with the recesses and posts, is preferably formed as a single molded article from a synthetic resin material and it is contemplated that the hand support can also be formed from other materials).
[0060] The recesses 124, 126, 128, 130 are each configured to receive a predetermined finger of a human hand when the hand is supported on the support surface. Thus, the recesses 124, 126 are configured to receive the thumb of a human hand (the recess 124 is oriented to receive the thumb of a right hand and the recess 126 is oriented to receive the thumb of a left hand). The recesses 128, 130 are located adjacent the outsides of the posts 132, 134, and are configured to receive the index
and pinkie fingers of a hand. The posts 132, 134 are spaced apart a distance such that the ring and middle fingers of a human hand can be placed between the posts, with the sides of those fingers against the inner walls 132a, 134a of the posts, and the webs between the index and middle finger and between the ring finger and pinkie finger against the front walls 132b, 134b of the posts. Figure 4 and Exhibit G schematically illustrate the direction D from which a hand is inserted into the sensing system and onto the support device 102, and Exhibit G further illustrates (in dashed lines) the position of the hand when it is resting on the support device 102. The support surface 120 also has a relatively flat surface portion 140 against which the palm of a human hand would normally rest when the hand is supported on the support surface, irrespective of whether the hand is a right or a left hand.
[0061] The height L of the support surface (see Figure 9) is configured to enable the back of the palm of a hand resting on the support surface to be within the range of focus of an imaging device (e.g. the camera) forming part of the scanning system. Thus, when a hand is resting on the support surface, the back of the palm of the hand will naturally face the camera of the scanning system, so that the camera can scan the vein pattern on the backside of the person's palm.
[0062] In the use of a scanning system with a hand support device according to the present invention, the live hand should be in a repeatable position with respect to the digital camera inside the scanner housing, so that the vein pattern of the hand can be reliably scanned and compared to the image on the smart card (which would have been produced from the same hand). As should be clear from the foregoing description, with the device of Figures 1-14 a person would extend either hand into the front opening 110 in the housing 104. Irrespective of which hand is inserted the palm of the hand should rest on the substantially flat surface 140, the ring and middle fingers should be located between the posts 132, 134, the index and pinkie fingers should be located in recesses 128, 130, and the thumb of the hand will be disposed in a respective one of the recesses 124, 126, depending on whether the hand is a left or right hand. The hand will then be supported in a rounded orientation on the support surface 120, and should consistently be oriented in substantially the same position on
the support surface, of course depending on whether the hand is a right or a left hand. Thus, the hand should be consistently in a predetermined position relative to the digital camera, the focus and size of a scanned image of the vein pattern of the hand should be consistent, and comparison of the image of the vein pattern of the backside of the palm with the image on the smart card should be reliable.
[0063] In the embodiment of Figures 1-14, a live hand is supported in a naturally rounded orientation on the support surface, but is not arched. Moreover, because the palm of the hand rests against the substantially flat surface portion 140 of the support surface, the backside of the palm can be oriented substantially parallel to the camera lens. This feature further facilitates obtaining a reliable and useful scanned image. In addition, by supporting the user's hand in a naturally rounded position, a user should feel comfortable in positioning the hand. The configuration of the support device 102 is designed to intuitively guide the placement of the user's hand. Feedback to the user while placing his hand can be visually displayed via the LCD. For example, pressure sensors along the side and front walls 132a, b, 134a, b of the posts and additional pressure sensor(s) on the substantially flat portion 140 of the support device 102, can provide output that can be used to determine if a hand is properly positioned on the support device, and provide output to aid a person in properly positioning his/her hand, via the LCD.
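As a hypothetical illustration of the pressure-sensor feedback just described, the sketch below assumes one sensor per post wall and one under the flat palm rest; the sensor names, the rule that all must register, and the LCD messages are assumptions, not details taken from the patent.

```python
def hand_positioned(sensor_states):
    """True when every assumed placement sensor registers pressure.
    sensor_states maps a sensor name to True when that sensor is pressed."""
    required = ("post_132a", "post_132b", "post_134a", "post_134b", "palm_140")
    return all(sensor_states.get(name, False) for name in required)

def lcd_prompt(sensor_states):
    """Return a short instruction for the LCD 118 based on the sensors."""
    if hand_positioned(sensor_states):
        return "Hold still - scanning"
    if not sensor_states.get("palm_140", False):
        return "Rest your palm on the flat surface"
    return "Slide your fingers forward between the posts"
```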
[0064] The digitized image on the "smart card" is obtained in the same manner as the comparison process - i.e. the user places his/her hand on a similar support device (or the same support device), and a reference image of the vein pattern of the person's hand is obtained via a digital camera. The image is digitized and then written to the smart card's microchip. The digitized image on the "smart card" is then available when the person inserts the smart card in a sensing device, and has his/her hand scanned by the sensing system, and compared with the image on the smart card, to enable the sensing system either to confirm or reject the identity of the person.
[0065] It is also believed useful to note that the support device 102 is preferably fixed to the bottom of the housing 104, by means of screws or other connectors that
can extend through openings 150 in the support device 102 (see Figure 14) and openings 152 in the housing 104 (see Figure 7). However, the support device may be connected to the housing by other connecting means (e.g. adhesive). Also, the housing 104 can be secured to a wall or other support by means of screws or other connectors that extend through openings 160 in the housing 104 (see e.g. Figures 1 and 5). Moreover, the housing can be secured to a wall or other support by other connecting means. When the housing 104 is secured to a wall, the back side 108 of the housing would be against (adjacent to) the wall, and the front side 106 would be spaced from the wall, with the front opening 110 positioned to allow a person's hand to be conveniently inserted through the front opening and onto the support device 102.
[0066] As will be appreciated by those in the art, the operating environment of the sensing device of the present invention may be an environment that is exposed to ambient light. The present invention is additionally designed to obtain an evenly illuminated image of the vein pattern on the back of a human hand, in an environment that is exposed to ambient light. The technique for obtaining an evenly illuminated image of a vein pattern, in an environment exposed to ambient light, according to the principles of the present invention, is as follows:
Illumination in ambient light to obtain a Flat Fielded image (Figure 20)
[0067] In order to obtain a flat fielded image, a two step process is used. The first is to illuminate the object such that it appears as evenly illuminated as possible. Second, an image processing technique known as "Flat Fielding" or "Brightness Correction" is implemented. Flat Fielding is best done on an image that is as evenly illuminated as possible, thus the first step is performed. Flat Fielding is not claimed as an invention.
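Flat Fielding itself is a standard brightness-correction step that the text above references without detailing. A minimal sketch of the conventional form is shown below; the use of a reference flat image and the mean-rescaling are standard practice, not particulars of this patent.

```python
import numpy as np

def flat_field_correct(raw, flat):
    """Conventional flat-field (brightness) correction: divide the raw
    image by a reference image of a uniformly lit blank target and
    rescale by that reference's mean so overall brightness is kept.
    raw and flat are 2-D arrays of the same shape."""
    flat = flat.astype(float)
    flat = np.where(flat == 0, 1.0, flat)   # guard against division by zero
    corrected = raw.astype(float) * flat.mean() / flat
    return np.clip(corrected, 0, 255).astype(np.uint8)
```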
[0068] The system used to obtain an evenly illuminated object image consists of the high speed digital camera 160, a camera lens, lights, and the computer 170. The computer requires a program and specifically an algorithm to calculate the amount of light required for even illumination.
[0069] The preferred embodiment of the foregoing illumination concept can be appreciated from Fig 20. The camera 160 is attached to the circuit board 114 that is planar and parallel to the object being imaged. The lights 162 are arranged in columns to either side of the camera 160.
[0070] When the object to be imaged is determined to be in position (either by a sensor connected with the hand support, in the embodiment of Figures 1-14, or by the device and method shown and described below in connection with Figures 21-25), the computer 170 directs the camera 160 to snap a picture. The picture consists of a two dimensional array of pixels that when viewed as a whole, constitute an image. The computer then dissects the image according to the number of columns of lights on the camera board. For example, if there are 4 columns of illumination lights, each column consisting of 4 lights, there are a total of 16 lights. Each column of lights can be controlled independently by the computer. Thus, the computer separates, or dissects, the image into four equal sections with each section falling under one column of lights.
[0071] The computer uses an algorithm to determine the average illumination of each section. The average illumination is determined by creating a Greylevel Histogram of each section. A Greylevel Histogram of an image gives the greylevel distribution of the pixels within the image. The histogram of an image is defined as a set of M numbers (the number of possible grey levels) defining the percentage of an image at a particular grey level value. The histogram of an image is defined as:
hi = ni/nt for i = 0 to (M-1),
[0072] where ni is the number of pixels within the image at the ith grey level value and nt is the total number of pixels in the image. Note that the image does not need to be a grey scale (256 possible intensity levels) image, but for this discussion, it is assumed to be.
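A minimal sketch of the greylevel histogram and its average, assuming each dissected section is an 8-bit NumPy array; the function names are illustrative.

```python
import numpy as np

def greylevel_histogram(section, levels=256):
    """hi = ni/nt for i = 0 to (M-1): the fraction of the section's
    pixels at each of the M possible grey levels."""
    counts = np.bincount(section.ravel(), minlength=levels).astype(float)
    return counts / section.size

def average_grey(section):
    """Histogram-weighted average grey level, used to judge whether a
    section is dark (near 0) or light (near 255)."""
    h = greylevel_histogram(section)
    return float(np.dot(np.arange(h.size), h))
```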
[0073] Once the histogram for each section is calculated, it is obvious which sections are darker or lighter than others. For example, if there are 256 grey levels, 0
being black and 255 being white, then a histogram with an average closer to 0 than 255 indicates a dark image. Thus, the computer 170 will increase the lighting over that section to create a lighter image. Conversely, if a section's histogram average is closer to 255, the lights over that section will be dimmed.
[0074] When all the sections have been histogrammed and the lighting manipulated, the computer 170 will direct the camera 160 to snap another image. That image is also histogrammed and the lighting adjusted, with this sequence repeated until an acceptable level of illumination uniformity is obtained, or it is determined that there is no possible way, given the constraints of the ambient light conditions and the system components, that an acceptable image can be obtained. This is a failure condition and the system will abort the imaging session.
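The capture/dissect/histogram/adjust loop of paragraphs [0070]-[0074] can be sketched as follows. The snap() and set_column_level() callables, the target level, tolerance, and round limit are all stand-ins for the camera interface, the light control, and the patent's predetermined values; the section mean is used here as the histogram average.

```python
import numpy as np

def balance_illumination(snap, set_column_level, columns=4,
                         target=128, tolerance=12, max_rounds=8):
    """Capture an image, split it into one vertical section per column of
    lights, and brighten or dim each column until every section's
    average grey level is acceptably uniform.  Returns True on success,
    False if uniformity cannot be reached (the abort condition)."""
    for _ in range(max_rounds):
        image = snap()                                    # snap a picture
        sections = np.array_split(image, columns, axis=1) # one section per light column
        averages = [float(s.mean()) for s in sections]    # average illumination per section
        if all(abs(avg - target) <= tolerance for avg in averages):
            return True                                   # acceptable uniformity reached
        for col, avg in enumerate(averages):
            if avg < target - tolerance:
                set_column_level(col, +1)                 # dark section: raise that column
            elif avg > target + tolerance:
                set_column_level(col, -1)                 # light section: dim that column
    return False                                          # failure: abort the imaging session
```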
[0075] Under certain conditions, such as when ambient light is adequate to illuminate the subject hand, only some or in other cases none of the artificial light sources present on the board are needed to additionally illuminate the hand. In such cases, all or some of the lights may be turned off. If the ambient light conditions under-illuminate portions of the hand while the rest of the hand is adequately illuminated with ambient light, only those lights needed to illuminate those parts of the hand that are under-illuminated by ambient light are turned on.
Device and Method For Detecting Hand presence and location, and for imaging back of hand at least partially in the presence of Ambient Light (Figures 21-25)
[0076] Figures 21-25 illustrate the manner in which the principles of the present invention can be implemented in an environment that is at least partially exposed to ambient light.
[0077] As shown in Figure 21, instead of the housing including a hand supporting device (as shown and described in connection with the embodiment of Figures 1-14), a housing 104 can be configured in the form of a hood-like structure that is designed so that when a hand is placed under the hood, some ambient light is shielded from the hand, but the environment under the hood is such that the presence and location of the
hand is detected, and scanning of a hand takes place in the presence of at least some ambient light. Other than the hand support 102 at the bottom of the housing, and the additional components described below, the components in the hood 104 are the same as those described above in connection with Figures 1-14 (e.g. see paragraph 0056 for a description of components that would be contained in the hood 104 of Figure 21).
[0078] The components shown in Figures 22-24 form a digital camera and detector
(which may also be referred to as a detector/camera, camera/detector or a camera/sensor). They are supported by the hood 104 and operate when a hand is placed under the hood. In the illustrated embodiment, the components are fixed to the hood 104, and a hand placed under the hood can be moved relative to the components, so that the hand and the components can be moved relative to each other. The hood 104 carries a slot for a smart card with an image of a person's hand that is used as a reference with which a live image of a person's hand can be compared, in the same manner as described above.
[0079] The components under the hood, e.g. scanning digital camera 160, circuit board 114, power supply, computer 170 (figure 22), are generally similar to the components described above. In addition, the components include a laser diode 180 adjacent the camera 160. Moreover, the camera 160 has a lens and a light filter, and the laser diode has a diffuser, as described further below.
[0080] Setup:
[0081] The digital camera detector is powered by the power supply, and communicates with the computer via a parallel or serial bus. The digital camera 160 can be monochrome or color, but the preferred embodiment is monochrome. The laser diode 180 is also powered by the power supply and its light output can be varied from full output to off by the computer 170. The computer may or may not be powered by the same power supply but it does communicate with both the laser diode and the camera sensor. The computer runs a program which will be described later.
[0082] The digital camera 160 obtains its images through a lens. The lens is selected by criteria which will be described later. The laser diode 180 emits its light through a diffuser lens which will diffuse the output light in a single dot (see dot 200 in Figures 24, 25), an X by Y square grid pattern (202 in Figure 25), or both, depending on the intensity of laser light programmed for output by the computer.
[0083] The digital camera 160 is mounted such that it "sees" its image normal to its mounting surface. The laser diode 180 is mounted adjacent to the camera and also projects its light normal to the mounting surface. The laser projects its light so that the camera images the light grid in the center of its field of vision at the camera lens' optimum focal distance.
[0084] The camera lens is covered with a filter that blocks ambient light below the laser diode's output frequency. For purposes of illustration, use of a laser diode with an emission spectrum of 660-665 nm calls for a camera lens filter that blocks light frequencies below 660 nm. The reason is that the camera 160 need only see the laser grid frequency, as well as light in the near infrared frequencies above it, when illuminating the hand for vein imaging.
[0085] Procedure:
[0086] The laser diode 180 emits a cone of light (shown schematically in Figure
23), the diameter of which varies with distance: the farther from the laser, the greater the diameter. With the square grid diffuser placed in front of the laser, the cone is instead an ever expanding grid pattern; i.e. the farther from the laser, the longer each side of the grid square. Therefore, if the grid side length at a given distance is known, the distance from the camera can be calculated from a measured side length.
[0087] The first step is to be certain an interesting object is in the camera's Field Of
Vision (FOV). For this, the laser's output power is varied by the computer. At low power, the laser's output is seen as an intense dot 200 (Figure 24). As output power is increased, the grid 202 becomes visible and the dot 200 becomes a large, intense blur in the geographical center of the grid (see Figure 25).
[0088] Since the imaging process can take place where ambient light containing near and true infrared light is present, the device must try to ensure that its artificial indicator, the single intense laser dot 200, is present in the FOV. Therefore, following the initial stimulus to begin the scanning process, the computer commands the laser 180 to emit the single dot 200 (a finite circular spot of light) instead of the grid 202 (Figure 25). Then the computer commands the camera 160 to capture an image. If present in the image, the dot 200 will be of the highest luminance in the image. The computer therefore scans the image pixels for the dot's expected luminance. The dot will be of a known size in pixel dimensions at the camera lens' optimum focal distance. Even when luminances of the dot's expected value appear, unless the shape of the pixel pattern is closely similar to the dot's template, the computer will not assume it has found the dot.
[0089] The sequence: laser dot, camera image, computer scan image for dot, continues for a specified period of time. After that timeout, the sequence is terminated until another stimulus is given.
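The dot-seeking sequence of paragraphs [0088]-[0089] can be sketched as below. snap() stands in for the camera capture, and the dot size, luminance threshold, timeout, and the crude bounding-box compactness test are all assumptions standing in for the dot's expected luminance, its known pixel dimensions, and the template check.

```python
import time
import numpy as np

def wait_for_dot(snap, dot_area_px=40, min_luminance=250, timeout_s=10.0):
    """Repeat laser dot -> camera image -> scan image for the dot, until a
    dot-like blob is found or the timeout expires.  Returns the dot's
    (row, col) centre, or None on timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        image = snap()
        bright = image >= min_luminance                 # candidate high-luminance pixels
        count = int(bright.sum())
        if count == 0 or abs(count - dot_area_px) > dot_area_px // 2:
            continue                                    # nothing dot-sized in this frame
        ys, xs = np.nonzero(bright)
        height = int(ys.max() - ys.min()) + 1
        width = int(xs.max() - xs.min()) + 1
        # crude stand-in for the template check: the bright pixels should
        # mostly fill a small, roughly square bounding box, as a dot would
        if max(height, width) <= 2 * min(height, width) and count >= 0.5 * height * width:
            return int(ys.mean()), int(xs.mean())
    return None                                         # timeout: wait for another stimulus
```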
[0090] Should the computer find the dot pattern in an image, e.g. when the back of a person's hand is in the FOV, the laser 180 is then commanded to raise its light output power, effectively turning on the grid (shown in Figure 25). Again the camera 160 obtains an image. This time the computer 170 scans the image for the grid pattern 202. Before the computer scans the image, the image is thresholded such that the grid pattern appears as black lines on a white background. The grid's outer side lengths are determined in a pixel count, and compared to the expected length preset in the computer's program.
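A sketch of the grid measurement just described follows. The thresholding maps the bright laser lines to black on a white background, as in the text above; the bounding-box measurement of the outer side lengths is an assumption about how the pixel count is taken, since the patent does not spell out the measurement.

```python
import numpy as np

def grid_outer_side_lengths(image, threshold=200):
    """Threshold so the (bright) grid lines become black (0) on a white
    (255) background, then measure the grid's outer side lengths in
    pixels.  Returns (width_px, height_px), or None if no grid is seen."""
    binary = np.where(image >= threshold, 0, 255)   # bright grid lines -> black
    ys, xs = np.nonzero(binary == 0)
    if ys.size == 0:
        return None
    width = int(xs.max() - xs.min()) + 1            # outer horizontal side length
    height = int(ys.max() - ys.min()) + 1           # outer vertical side length
    return width, height
```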
[0091] If the grid side lengths are within a predetermined range of the expected lengths, the computer assumes the object (e.g. the back of the person's hand) is within range and begins the hand scan process. That hand scan process utilizes the illumination techniques described above for illuminating the back of the hand and capturing an image of the hand (at least partially in ambient light), and the additional
techniques described above for comparing the scanned image of the vein pattern with the reference vein pattern obtained from the smart card.
[0092] Predetermining the Distances:
[0093] By selecting a camera lens of a given focal length, the optimum focus distance of the camera is fixed within a given range. And by adjusting the laser diode's lens to match that focal length, the diffuser's grid pattern is in focus at the camera lens' optimum focal length. Therefore, as illustrated in Figure 25, if a camera image is obtained when an object is placed at the optimum focal length from the camera, and the laser grid 202 is also visible on the object, the grid side length distances can be obtained in image pixels. Hence, if the given camera lens' optimum focal length is of distance x to x', then an object is in focus to that lens when it is within the distance of x to x'. The grid side length at x is then recorded as xGL, and the side length at x' is stored as xGL'.
[0094] It follows that if an object's grid length is known at x and at x', it can be assumed that if an object's grid length is found to be within the range xGL to xGL', then that object is within the focal range of x to x', and therefore within the camera lens' focal range. It can be assumed that an image thus obtained is that of an interesting object within focal range.
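The resulting in-range test reduces to a simple interval check; a minimal sketch, in which the calibration values and the example numbers are purely illustrative:

```python
def within_focal_range(measured_side_px, xGL, xGL_prime):
    """Paragraphs [0093]-[0094]: the object is taken to be within the
    lens' focal range x..x' when the measured grid side length lies
    between the side lengths recorded at the two calibration distances."""
    low, high = sorted((xGL, xGL_prime))
    return low <= measured_side_px <= high

# Illustrative only: if calibration stored xGL = 180 px and xGL' = 150 px,
# a grid measuring 165 px on the back of a hand is within focal range.
assert within_focal_range(165, 180, 150)
```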
[0095] Thus, as seen from the foregoing discussion, applicant's method provides for scanning the vein pattern of a human hand at least partially in the presence of ambient light. The back of a human hand and a scanning camera are moved relative to each other in an environment that is at least partially exposed to ambient light until the hand is detected (e.g. by the digital camera sensor described above), and when the hand is detected the scanning camera is operated in the environment that is at least partially exposed to ambient light to (i) illuminate the back of the hand to enable an image of the vein pattern of the human hand to be captured by the camera, and (ii) extract information from the imaged vein pattern that can be compared with information extracted from a reference vein pattern of a human hand to determine if the imaged vein pattern of the human hand matches the reference vein pattern.
[0096] In the example of Figures 21-25, the hood 104 provides a shield against some ambient light from an undesirable direction and otherwise allows the scanning to take place at least partially in the presence of ambient light. The shape and size of the hood 104 can be varied, in accordance with the amount of light shielding that is needed or desired.
Additional Comments
[0097] A sensing device and method of the present invention also has the following additional characteristics:
a. The sensing device is designed to communicate with existing security systems through the internal Wiegand 26 bit port. Thus, the sensing device can replace legacy prox card security installations by simply removing the existing prox reader (any reader - Mifare, HID, etc. - that supports the Wiegand 26-bit protocol). A minimal sketch of a 26-bit Wiegand frame follows this list.
b. The sensing device uses IR LEDs, and is preferably powered by an external 12V, 500mA electrical supply. The device interfaces to a security system bus via a Wiegand 2-wire port. The backlit liquid crystal display (LCD) on the top of the device enables a user to view instructions on how to proceed.
c. The current version of the sensing device incorporates an HID iCLASS 13.56MHz smart card reader.
d. When a user wishes access to an area guarded by the sensing device, he/she presents his/her card to the unit and the data therein is read from the card. The sensing device then asks the user to place their hand on the scanning platform 102 (Figures 1-14), or under the hood
104 (Figure 21), depending on which version of the device is being used. With the device of Figures 1-14, switches on the platform indicate when the hand is in position on the platform and the image collection process begins. With the device of Figures 21-25, the hand position sensing technique described above determines when the hand is in position for the image collection process to begin.
e. The rigor of the comparison algorithm described above in connection with Figures 15-19 is adjustable by the installer using a 4-position DIP switch accessible from the back of the device. Settings range from LOW - NORMAL - ENHANCED - HIGH. The user is informed of the comparison result with a literal indication on the LCD screen, and by a blinking colored light on the front of the unit. If a successful match is obtained, the device behaves the same as any standard 26 bit prox card reader and emits a Wiegand stream to the system bus. It can also be configured as a standalone device and incorporates a 50mA, 12VDC open-collector input for dedicated door lock control.
f. The sensing device is designed to be weatherproof, in that the external switches, displays and surfaces of the unit are not affected by water or sun. The sensing device mounts to a vertical surface using two %" fasteners. Power and communication wiring enter the back of the unit through a slot and attach to the circuit board with a screw type jumper block.
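Item a above mentions the Wiegand 26-bit port. As a minimal sketch of the kind of stream such a reader emits on a successful match: the standard 26-bit frame carries a leading even parity bit, an 8-bit facility code, a 16-bit card number, and a trailing odd parity bit. How this device maps a verified user to facility and card values is not specified in the patent, so the example values below are hypothetical.

```python
def wiegand26_frame(facility_code, card_number):
    """Build the 26 bits of a standard Wiegand 26-bit frame: even parity
    over the first 12 data bits, 8-bit facility code, 16-bit card number,
    odd parity over the last 12 data bits."""
    if not (0 <= facility_code < 256 and 0 <= card_number < 65536):
        raise ValueError("facility code is 8 bits, card number is 16 bits")
    data = [(facility_code >> i) & 1 for i in range(7, -1, -1)]
    data += [(card_number >> i) & 1 for i in range(15, -1, -1)]
    even = sum(data[:12]) % 2            # leading even parity bit
    odd = 1 - (sum(data[12:]) % 2)       # trailing odd parity bit
    return [even] + data + [odd]

# Hypothetical example: facility 42, card 10501 -> 26 bits ready to clock
# out on the DATA0/DATA1 lines.
assert len(wiegand26_frame(42, 10501)) == 26
```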
Thus, the principles of the present invention are preferably used in connection with a hand support for a biometric identification system, but it is also contemplated that the principles of the present invention can be employed in other types of operating environments. An "operating environment" is an environment in which some operation is performed relative to the hand or relative to an object. The preferred operating environment is a scanning system in which the back of the hand is scanned and used in a biometric identification system. However, if there is a need in other operating environments, e.g. a surgical environment where comfortable and repeatable support and positioning of a patient's hand in a desirable position for an operating surgeon is important, the principles of the present invention should be applicable in such an environment. In such a (surgical) operating environment, where comfortable positioning of a human hand in a desirable position for an operating surgeon may be more important than repeatable positioning of a hand in a biometric scanning system, the principles of the present invention would provide a support member with a three dimensional rounded support surface configured to receive and support a human hand, with the hand in a rounded position and the back side of the palm in a predetermined position in the operating environment. The posts and/or recesses of the support surface, along with the substantially flat surface, would enable a human hand that is supported on the support surface to be comfortably positioned in a predetermined position in the operating environment.
In addition, the principles of the present invention may also be applied to an operating environment in which it is desirable to detect an object. Specifically, the principles of the invention can be used to project over a predetermined field a laser beam in a first known pattern that is different from a pattern that would naturally occur on the object, seeking to detect a reflection of the first known pattern within the field to determine if such an object is located within the field, and, if such an object is determined to exist within the field, projecting within the field a laser beam of a second known pattern with a known size, and seeking to detect a reflection of the second known pattern to determine the location of the object within the field. In addition, the techniques described above for imaging the back of a hand at least partially in the presence of ambient light can also be used to image the object.
[0098] With the foregoing disclosure in mind, there will be other modifications and developments that will be apparent to those in the art.
Claims
1. An article for use in supporting a human hand in a scanning system, comprising
a. a support member with a three dimensional rounded support surface,
b. the support surface configured to receive and support a human hand, with the hand in a rounded position and the back side of the palm facing a predetermined portion of the scanning system, and
c. the support surface including at least one positioning component that is configured to enable a human hand to be positioned in a substantially similar position in relation to the scanning system each time the hand is supported on the support surface.
2. An article as defined in claim 1, wherein the positioning component comprises at least one recess in the support surface, the recess configured to receive a finger of a human hand when the hand is supported on the support surface.
3. An article as defined in claim 2, wherein the positioning component further comprises a plurality of recesses in the support surface, each recess configured to receive and support a predetermined finger of a human hand when the hand is supported on the support surface.
4. An article as defined in claim 3, wherein the plurality of recesses further includes recesses configured to receive predetermined fingers of a human hand, irrespective of whether the hand is a left or right hand that is supported on the support surface.
5. An article as defined in claim 4, wherein the positioning component comprises a substantially flat portion configured to provide a rest for the palm of a human hand, irrespective of whether the hand is a left or right hand that is supported on the support surface.
6. An article as defined in claim 5, wherein the positioning component further includes at least one post projecting outward from the support surface and located to be positioned between and adjacent a pair of fingers of a human hand when the hand is supported on the support surface.
7. An article as defined in claim 6, wherein the positioning component includes a plurality of posts projecting outward from the support surface, the plurality of posts located to be positioned between and adjacent selected fingers of a human hand, irrespective of whether the human hand is a right or a left hand.
8. An article as defined in claim 7, wherein the plurality of posts include a pair of spaced apart posts, and wherein the plurality of recesses are located outside the spaced apart posts and comprise recesses that are configured to receive the thumb and another finger of a human hand, irrespective of whether the human hand is a right or a left hand.
9. An article as defined in claim 8, wherein the height of the support surface is configured to enable the back of the palm of a hand resting on the support surface to be within the range of focus of an imaging device forming part of the scanning system.
10. An article as defined in claim 9, wherein the support surface is formed in one piece with the posts and the recesses.
11. An article as defined in claim 1, wherein the positioning component comprises a substantially flat portion configured to provide a rest for the palm of a human hand, irrespective of whether the hand is a left or right hand that is supported on the support surface.
12. An article as defined in claim 1, wherein the positioning component further includes at least one post projecting outward from the support surface and located to be positioned between and adjacent a pair of fingers of a human hand when the hand is supported on the support surface.
13. An article as defined in claim 12, wherein the positioning component includes a plurality of posts projecting outward from the support surface, the plurality of posts located to be positioned between and adjacent selected fingers of a human hand, irrespective of whether the human hand is a right or a left hand.
14. An article as defined in claim 13, wherein the plurality of posts include a pair of spaced apart posts, and wherein the plurality of recesses are located outside the spaced apart posts and comprise recesses that are configured to receive the thumb and another finger of a human hand, irrespective of whether the human hand is a right or a left hand.
15. An article as defined in claim 1, wherein the height of the support surface is configured to enable the back of the palm of a hand resting on the support surface to be within the range of focus of an imaging device forming part of the scanning system.
16. An article for use in supporting a human hand in an operating environment, comprising
a. a support member with a three dimensional rounded support surface,
b. the support surface configured to receive and support a human hand, with the hand in a rounded position and the back side of the palm in a predetermined position in the operating environment,
c. at least one post projecting outward from the support surface and located to be positioned between and adjacent a pair of fingers when the hand is supported on the support surface,
d. the support surface including a plurality of recesses, each configured to receive a finger of a human hand, and
e. the locations of the post and the recesses configured to enable a human hand that is supported on the support surface to be consistently positioned in a predetermined position in the operating environment.
17. A method for identifying a person by means of the subcutaneous vein pattern on the back of the person's hand, comprising
A. imaging the subcutaneous vein pattern on the back of a person's hand in an environment that is at least partially exposed to ambient light, and
B. comparing the imaged vein pattern with a recorded image of the vein pattern of the alleged same person's hand;
C. wherein the step of imaging the vein pattern on the back of the person's hand comprises
(i) positioning the back of the person's hand relative to a camera/detector that has an array of illumination sources in predetermined locations,
(ii) illuminating the back of the person's hand with ambient light and/or certain of the array of illumination sources, and capturing an image of the back of the person's hand by such illumination,
(iii) processing the image captured by such illumination in a manner that
(a) separates the captured image into predetermined sections that are related to respective predetermined sections of the sources of such illumination,
(b) determines the average illumination of each of the predetermined sections of the sources of illumination, based on predetermined illumination criteria for each predetermined section, and
(c) repeats steps (ii) and (iii) until a predetermined level of illumination uniformity is determined for each of the predetermined sections;
18. The method as set forth in claim 17 for identifying a person by means of the subcutaneous vein pattern on the back of the person's hand, wherein the step of comparing the captured image with a recorded image of the alleged same person's hand, comprises (i) thinning the captured image to produce a captured vein pattern with lines of a predetermined vein width,
(ii) providing a recorded image with a reference vein pattern of the person's hand, comprising reference vein lines of a predetermined reference width, wherein one of the predetermined vein width and the predetermined reference width is greater than the other width,
(iii) comparing the entire captured vein pattern with a portion of the reference vein pattern, to determine if the captured vein pattern has a predetermined threshold correlation to the portion of the reference vein pattern, and if so producing an output indicating a threshold correlation,
(iv) if the predetermined threshold correlation is not met, shifting the captured vein image in relation to the recorded vein image, and repeating the comparison of (iii), to determine if the predetermined threshold correlation is met, and if not continuing to shift and compare the captured vein pattern with another portion of the reference vein pattern until the comparison has been made with the entire width of the larger of the predetermined reference width and the predetermined vein width, and if the threshold level is met producing a correlation output signal, and if the correlation is not met producing a non correlation output signal.
19. A method for identifying a person by means of the subcutaneous vein pattern on the back of the person's hand, comprising
A. imaging the subcutaneous vein pattern on the back of a person's hand in an environment that is at least partially exposed to ambient light, and
B. comparing the imaged vein pattern with a recorded image of the vein pattern of the alleged same person's hand;
C. wherein the step of comparing the captured image with a recorded image of the alleged same person's hand, comprises
(i) thinning the captured image to produce a captured vein pattern with lines of a predetermined vein width,
(ii) providing a recorded image with a reference vein pattern of the person's hand, comprising reference vein lines of a predetermined reference width, wherein one of the predetermined vein width and the predetermined reference width is greater than the other width,
(iii) comparing the entire captured vein pattern with a portion of the reference vein pattern, to determine if the captured vein pattern has a predetermined threshold correlation to the portion of the reference vein pattern, and if so producing an output indicating a threshold correlation,
(iv) if the predetermined threshold correlation is not met, shifting the captured vein image in relation to the recorded vein image, and repeating the comparison of (iii), to determine if the predetermined threshold correlation is met, and if not continuing to shift and compare the captured vein pattern with another portion of the reference vein pattern until the comparison has been made with the entire width of the larger of the predetermined reference width and the predetermined vein width, and if the threshold level is met producing a correlation output signal, and if the correlation is not met producing a non correlation output signal.
20. A method of scanning the vein pattern of a human hand at least partially in the presence of ambient light, comprising the steps of moving the back of a human hand and a detector/camera relative to each other in an environment that is at least partially exposed to ambient light until the presence and the location of the hand relative to the detector/camera is detected, and when the presence and location of the hand is detected operating the detector/camera in the environment that is at least partially exposed to ambient light to (i) illuminate the back of the hand to enable an image of the vein pattern of the human hand to be imaged by the camera, and (ii) extracting information from the imaged vein pattern that can be compared with information extracted from a reference vein pattern of a human hand to determine if the imaged vein pattern of the human hand matches the reference vein pattern.
21. A method as set forth in claim 20, wherein the step of moving the back of a human hand and a detector/camera relative to each other in an atmosphere at least partially exposed to ambient light comprises providing a hood, supporting the detector/camera in a predetermined relation to the hood such that when a human hand is placed at least partially under the hood and moved relative to the detector/camera, the hood provides a shield against some ambient light from an undesirable direction and otherwise allows the scanning to take place at least partially in the presence of ambient light.
22. A method of detecting an object, comprising the steps of projecting over a predetermined field a laser beam in a first known pattern that is different from a pattern that would naturally occur on the object, and seeking to detect a reflection of the first known pattern within the field to determine if such an object is located within the field, and if such an object is determined to exist within the field projecting within the field a laser beam of a second known pattern with a known size, and seeking to detect a reflection of the second known pattern to determine the location of the object within the field.
23. A method as set forth in claim 22, including the further method of imaging a feature of the object within the field, at least partially in the presence of ambient light, comprising the steps of
(i) providing a camera that has an array of illumination sources in predetermined locations, and configured to project illumination at an object located in the field;
(ii) illuminating the object with ambient light and/or certain of the array of illumination sources, and capturing an image of the object by such illumination, (iii) processing the image captured by such illumination in a manner that
(a) separates the captured image into predetermined sections that are related to respective predetermined sections of the sources of such illumination,
(b) determines the average illumination of each of the predetermined sections of the sources of illumination, based on predetermined illumination criteria for each predetermined section, and
(c) repeats steps (ii) and (iii) until a predetermined level of illumination uniformity is determined for each of the predetermined sections.
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US71797505P | 2005-09-16 | 2005-09-16 | |
US60/717,975 | 2005-09-16 | ||
US81572906P | 2006-06-22 | 2006-06-22 | |
US60/815,729 | 2006-06-22 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2007033380A2 true WO2007033380A2 (en) | 2007-03-22 |
WO2007033380A3 WO2007033380A3 (en) | 2007-11-22 |
Family
ID=37865613
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2006/036217 WO2007033380A2 (en) | 2005-09-16 | 2006-09-18 | Biometric sensing device and method |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2007033380A2 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2364645A1 (en) * | 2010-03-12 | 2011-09-14 | Hitachi Ltd. | Finger vein authentication unit |
US8803963B2 (en) | 2008-09-22 | 2014-08-12 | Kranthi Kiran Pulluru | Vein pattern recognition based biometric system and methods thereof |
CN111063047A (en) * | 2019-05-08 | 2020-04-24 | 天津科技大学 | Attendance system based on finger vein discernment |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6813010B2 (en) * | 2000-09-20 | 2004-11-02 | Hitachi, Ltd | Personal identification system |
US20050105078A1 (en) * | 2003-10-09 | 2005-05-19 | Carver John F. | Palm print scanner and methods |
US20050180620A1 (en) * | 2002-05-09 | 2005-08-18 | Kiyoaki Takiguchi | Method of detecting biological pattern, biological pattern detector, method of biological certificate and biological certificate apparatus |
Also Published As
Publication number | Publication date |
---|---|
WO2007033380A3 (en) | 2007-11-22 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 06803750; Country of ref document: EP; Kind code of ref document: A2 |