US20150235087A1 - Image Processing Method and Apparatus - Google Patents
- Publication number
- US20150235087A1 (application U.S. Ser. No. 14/703,719)
- Authority
- US
- United States
- Prior art keywords
- image
- images
- face
- facial
- eye
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B19/00—Cameras
- G03B19/02—Still-picture cameras
-
- G06K9/00624—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/28—Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
-
- G06K9/00268—
-
- G06K9/52—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/772—Determining representative reference patterns, e.g. averaging or distorting patterns; Generating dictionaries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
- G06V40/176—Dynamic expression
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/667—Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
-
- H04N5/23245—
Definitions
- the present invention relates to an image processing method and apparatus.
- One of the most common reasons for an acquired digital photograph to be discarded or spoiled is that one or more of the facial regions in the photograph suffer from photographic defects other than red-eye defects (even red-eye defects can be common in cameras not operating with the advantages of the techniques described, e.g., in U.S. Pat. No. 6,407,777 and in US published application nos. 2005/0140801, 2005/0041121, 2006/0093212, and 2006/0204054, which are assigned to the same assignee and hereby incorporated by reference). Common examples occur when people move or shake their heads, when someone closes their eyes or blinks, or when someone yawns.
- U.S. Pat. No. 6,301,440, which is incorporated by reference, discloses an image acquisition device wherein the instant of exposure is controlled by image content.
- when a trigger is activated, the image composed by the user is analysed and imaging parameters are altered to obtain optimum image quality before the device proceeds to take the image. For example, the device could postpone acquisition of the image until every person in the image is smiling.
- An image processing method including acquiring a main image of a scene.
- One or more facial regions are determined in the main image.
- the one or more main image facial regions are analyzed for defects and one or more are determined to be defective.
- a sequence of relatively low resolution images nominally of the scene are acquired.
- One or more sets of low resolution facial regions in the sequence are analyzed to determine one or more that correspond to a defective main image facial region. At least a portion of the defective main image facial region is corrected with image information from one or more corresponding low resolution facial regions not including a same defect as said portion of said defective main image facial region.
- the sequence of low resolution images may be specifically acquired for a time period not including a time for acquiring the main image.
- the method may also include combining defect-free low resolution facial regions into a combined image, and correcting at least the portion of the defective main image facial region with image information from the combined image.
- Another image processing method includes acquiring a main image of a scene.
- One or more facial regions in the main image are determined, and analyzed to determine if any are defective.
- a sequence of relatively low resolution images is acquired nominally of the scene for a time period not including a time for acquiring the main image.
- One or more sets of low resolution facial regions are determined in the sequence of low resolution images.
- the sets of facial regions are analyzed to determine if any facial regions of a set corresponding to a defective facial region of the main image include a defect.
- Defect free facial regions of the corresponding set are combined to provide a high quality defect free facial region. At least a portion of any defective facial regions of said main image are corrected with image information from a corresponding high quality defect free facial region.
- the time period may include one or more of a time period preceding or a time period following the time for acquiring the main image.
- the correcting may include applying a model including multiple vertices defining a periphery of a facial region to each high quality defect-free facial region and a corresponding defective facial region. Pixels may be mapped of the high quality defect-free facial region to the defective facial region according to the correspondence of vertices for the respective regions.
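The vertex-driven pixel mapping can be sketched as follows, assuming a piecewise-affine interpretation of the model mesh (the vertex coordinates below are hypothetical illustration data, not values from the patent):

```python
import numpy as np

def triangle_affine(src_tri, dst_tri):
    """Solve the 2x3 affine transform mapping one mesh triangle onto another.

    src_tri, dst_tri: 3x2 arrays of (x, y) vertex coordinates. The model
    supplies matching vertices for the defect-free and defective regions;
    each pair of corresponding triangles yields its own local transform.
    """
    src = np.hstack([src_tri, np.ones((3, 1))])        # 3x3: [x, y, 1]
    # Solve src @ X = dst for X (3x2); M = X.T is the 2x3 affine matrix.
    X, *_ = np.linalg.lstsq(src, dst_tri, rcond=None)
    return X.T

def map_points(M, pts):
    """Apply a 2x3 affine transform to an Nx2 array of points."""
    pts = np.hstack([pts, np.ones((len(pts), 1))])
    return pts @ M.T

# Hypothetical vertex data: one triangle from each mesh (pure translation).
src_tri = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
dst_tri = np.array([[2.0, 1.0], [12.0, 1.0], [2.0, 11.0]])

M = triangle_affine(src_tri, dst_tri)
mapped = map_points(M, src_tri)
```

In practice one such transform would be solved per triangle of the mesh, and every pixel of the defect-free region warped by the transform of the triangle containing it.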
- the model may include an Active Appearance Model (AAM).
- the main image may be acquired at an exposure level different to the exposure level of the low resolution images.
- the correcting may include mapping luminance levels of the high quality defect free facial region to luminance levels of the defective facial region.
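One plausible realisation of this luminance mapping, assuming a simple mean/standard-deviation transfer (the patent does not mandate a particular mapping), is:

```python
import numpy as np

def match_luminance(src, ref):
    """Shift/scale the luminance of `src` so its statistics match `ref`.

    `src` is the high quality defect-free patch, `ref` the defective
    region of the main image; both are float grayscale arrays. This is
    one plausible mapping, not the one specified by the patent.
    """
    s_mean, s_std = src.mean(), src.std()
    r_mean, r_std = ref.mean(), ref.std()
    out = (src - s_mean) * (r_std / (s_std + 1e-8)) + r_mean
    return np.clip(out, 0.0, 255.0)

# Synthetic example: a bright defect-free patch mapped onto the darker
# exposure statistics of the defective main-image region.
rng = np.random.default_rng(0)
patch = rng.uniform(100, 180, size=(32, 32))
target = rng.uniform(20, 60, size=(32, 32))
corrected = match_luminance(patch, target)
```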
- Sets of low resolution facial regions from the sequence of low resolution images may be stored in an image header file of the main image.
- the method may include displaying the main image and/or corrected image, and selected actions may be user-initiated.
- the analyzing of the sets may include, prior to the combining in the second method, removing facial regions including faces exceeding an average size of faces in a set of facial regions by a threshold amount from said set of facial regions, and/or removing facial regions including faces with an orientation outside an average orientation of faces in a set of facial regions by a threshold amount from said set of facial regions.
- the analyzing of sets may include the following: applying an Active Appearance Model (AAM) to each face of a set of facial regions; analyzing AAM parameters for each face of the set of facial regions to provide an indication of facial expression; and prior to the combining in the second method, removing faces having a defective expression from the set of facial regions.
- the analyzing of sets may include the following: applying an Active Appearance Model (AAM) to each face of a set of facial regions; analysing AAM parameters for each face of the set of facial regions to provide an indication of facial orientation; and prior to said combining in the second method, removing faces having an undesirable orientation from said set of facial regions.
- the analyzing of facial regions may include applying an Active Appearance Model (AAM) to each facial region, and analyzing AAM parameters for each facial region to provide an indication of facial expression, and/or analyzing each facial region for contrast, sharpness, texture, luminance levels or skin color or combinations thereof, and/or analyzing each facial region to determine if an eye of the facial region is closed, if a mouth of the facial region is open and/or if a mouth of the facial region is smiling.
- the method may be such that the correcting, and the combining in the second method, only occur when the set of facial regions exceeds a given number.
- the method may also include resizing and aligning faces of the set of facial regions, and the aligning may be performed according to cardinal points of faces of the set of facial regions.
- the correcting may include blending and/or infilling a corrected region of the main image with the remainder of the main image.
- FIG. 1 is a block diagram of an image processing apparatus operating in accordance with an embodiment of the present invention
- FIG. 2 is a flow diagram of an image processing method according to a preferred embodiment of the present invention.
- FIGS. 3 and 4 show exemplary sets of images to which an active appearance model has been applied.
- Certain embodiments can be implemented with a digital camera which incorporates (i) a face tracker operative on a preview image stream; (ii) a super-resolution processing module configured to create a higher resolution image from a composite of several low-resolution images; and (iii) a facial region quality analysis module for determining the quality of facial regions.
- super-resolution is applied to preview facial regions extracted during face tracking.
- the embodiments enable the correction of errors or flaws in the facial regions of an acquired image within a digital camera using preview image data and employing super-resolution techniques.
- FIG. 1 is a block diagram of an image acquisition device 20 , which in the present embodiment is a portable digital camera, operating in accordance with certain embodiments. It will be appreciated that many of the processes implemented in the digital camera are implemented in or controlled by software operating on a microprocessor, central processing unit, controller, digital signal processor and/or an application specific integrated circuit, collectively depicted as processor 120 . All user interface and control of peripheral components such as buttons and display is controlled by a microcontroller 122 .
- the processor 120 in response to a user input at 122 , such as half pressing a shutter button (pre-capture mode 32 ), initiates and controls the digital photographic process.
- Ambient light exposure is determined using a light sensor 40 in order to automatically determine if a flash is to be used.
- the distance to the subject is determined using a focusing mechanism 50 which also focuses the image on an image capture device 60 . If a flash is to be used, processor 120 causes a flash device 70 to generate a photographic flash in substantial coincidence with the recording of the image by the image capture device 60 upon full depression of the shutter button.
- the image capture device 60 digitally records the image in colour.
- the image capture device is known to those familiar with the art and may include a CCD (charge-coupled device) or CMOS sensor to facilitate digital recording.
- the flash may be selectively generated either in response to the light sensor 40 or a manual input 72 from the user of the camera.
- the high resolution image recorded by image capture device 60 is stored in an image store 80 which may comprise computer memory such as a dynamic random access memory or a non-volatile memory.
- the camera is equipped with a display 100 , such as an LCD, both for displaying preview images and displaying a user interface for camera control software.
- the display 100 can assist the user in composing the image, as well as being used to determine focus and exposure.
- Temporary storage 82 is used to store one or a plurality of the stream of preview images and can be part of the image store 80 or a separate component.
- the preview image is usually generated by the image capture device 60 .
- preview images usually have a lower pixel resolution than the main image taken when the shutter button is fully depressed, and are generated by sub-sampling a raw captured image using software 124 which can be part of the general processor 120 or dedicated hardware or combination thereof.
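The sub-sampling performed by software 124 can be sketched as block-average decimation; the factor below is illustrative, as a real camera would pick it to suit the display and the face tracker's requested resolution:

```python
import numpy as np

def subsample(raw, factor=4):
    """Block-average decimation of a raw capture to preview resolution.

    One plausible form of the software sub-sampling described; `factor`
    is an illustrative choice.
    """
    h = raw.shape[0] // factor * factor
    w = raw.shape[1] // factor * factor
    # Group pixels into factor x factor blocks and average each block.
    blocks = raw[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

raw = np.arange(64, dtype=float).reshape(8, 8)   # toy "raw capture"
preview = subsample(raw, factor=4)
```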
- a face detection and tracking module 130 such as described in U.S. application Ser. No. 11/464,083, filed Aug. 11, 2006, which is hereby incorporated by reference, is operably connected to the sub-sampler 124 to control the sub-sampled resolution of the preview images in accordance with the requirements of the face detection and tracking module.
- Preview images stored in temporary storage 82 are available to the module 130 which records the locations of faces tracked and detected in the preview image stream.
- the module 130 is operably connected to the display 100 so that boundaries of detected and tracked face regions can be superimposed on the display around the faces during preview.
- the face tracking module 130 is arranged to extract and store tracked facial regions at relatively low resolution in a memory buffer such as memory 82 and possibly for storage as meta-data in an acquired image header stored in memory 80 . Where multiple face regions are tracked, a buffer is established for each tracked face region. These buffers are of finite size (10-20 extracted face regions in a preferred embodiment) and generally operate on a first-in-first-out (FIFO) basis.
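The per-face FIFO buffering can be sketched as follows (buffer size and face identifiers are illustrative choices):

```python
from collections import deque

class FaceRegionBuffers:
    """Per-face FIFO buffers of extracted preview face regions.

    Mirrors the scheme described above: one finite buffer per tracked
    face, with the oldest extracted regions discarded first.
    """
    def __init__(self, maxlen=20):
        self.maxlen = maxlen
        self.buffers = {}

    def push(self, face_id, region):
        # deque(maxlen=...) gives first-in-first-out eviction for free.
        self.buffers.setdefault(
            face_id, deque(maxlen=self.maxlen)).append(region)

    def regions(self, face_id):
        return list(self.buffers.get(face_id, ()))

bufs = FaceRegionBuffers(maxlen=3)
for frame in range(5):                 # 5 preview frames, one tracked face
    bufs.push(face_id=0, region=f"crop-{frame}")
```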
- the device 20 further comprises an image correction module 90 .
- where the module 90 is arranged for off-line correction of acquired images in an external processing device 10 , such as a desktop computer, a colour printer or a photo kiosk, face regions detected and/or tracked in preview images are preferably stored as meta-data within the image header.
- where the module 90 is implemented within the camera 20 , it can have direct access to the buffer 82 where preview images and/or face region information is stored.
- the module 90 receives the captured high resolution digital image from the store 80 and analyzes it to detect defects. The analysis is performed as described in the embodiments to follow. If defects are found, the module can modify the image to remove the defect.
- the modified image may be either displayed on image display 100 , saved on a persistent storage 112 which can be internal or a removable storage such as CF card, SD card or the like, or downloaded to another device via image output means 110 which can be tethered or wireless.
- the module 90 can be brought into operation either automatically each time an image is captured, or upon user demand via input 30 . Although illustrated as a separate item, where the module 90 is part of the camera, it may be implemented by suitable software on the processor 120 .
- the main components of the image correction module include a quality module 140 which is arranged to analyse face regions from either low or high resolution images to determine if these include face defects.
- a super-resolution module 160 is arranged to combine multiple low-resolution face regions of the same subject generally with the same pose and a desirable facial expression to provide a high quality face region for use in the correction process.
- an active appearance model (AAM) module 150 produces AAM parameters for face regions again from either low or high resolution images.
- AAM modules are well known and a suitable module for the present embodiment is disclosed in “Fast and Reliable Active Appearance Model Search for 3-D Face Tracking”, F Dornaika and J Ahlberg, IEEE Transactions on Systems, Man, and Cybernetics-Part B: Cybernetics, Vol. 34, No. 4, pg. 1838-1853, August 2004, although other models based on the original paper by T F Cootes et al “Active Appearance Models” Proc. European Conf. Computer Vision, 1998, pp 484-498 could also be employed.
- the AAM module 150 can preferably cooperate with the quality module 140 to provide pose and/or expression indicators to allow for selection of images in the analysis and optionally in the correction process described below. Also, the AAM module 150 can preferably cooperate with the super-resolution module 160 to provide pose indicators to allow for selection of images in the correction process, again described in more detail below.
- when a main image is acquired, step 230 , the location and size of any detected/tracked face region(s) in the main acquired image (high resolution) will be known by the module 90 from the module 130 . Face detection can either be applied directly on the acquired image and/or information for face regions previously detected and/or tracked in the preview stream can be used for face detection in the main image (indicated by the dashed line extending from step 220 ).
- the facial region quality analysis module 140 extracts and analyzes face regions tracked/detected at step 240 in the main image to determine the quality of the acquired face regions.
- the module 140 can apply a preliminary analysis to measure the overall contrast, sharpness and/or texture of detected face region(s). This can indicate if the entire face region was blurred due to motion of the subject at the instant of acquisition. If a facial region is not sufficiently well defined, then it is marked as a blur defect. Additionally or alternatively, another stage of analysis can focus on the eye region of the face(s) to determine if one or both eyes were fully or partially closed at the instant of acquisition, and the face region is categorized accordingly. As mentioned previously, if AAM analysis is performed on the image, then the AAM parameters can be used to indicate whether a subject's eyes are open or not. It should be noted that in the above analyses, the module 90 detects blink or blur due to localized movement of the subject as opposed to global image blur.
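The preliminary blur test can be sketched with a variance-of-Laplacian sharpness measure, a common proxy for the contrast/sharpness analysis described; the threshold below is an illustrative value, not one from the patent:

```python
import numpy as np

def sharpness_score(gray):
    """Variance of a discrete Laplacian as a crude sharpness measure."""
    lap = (-4 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return lap.var()

def is_blur_defect(gray, threshold=5.0):
    """Mark a face region as a blur defect when its sharpness is low."""
    return sharpness_score(gray) < threshold

# A checkerboard (maximally sharp) vs. a constant patch (fully blurred).
sharp = np.indices((16, 16)).sum(axis=0) % 2 * 255.0
flat = np.full((16, 16), 128.0)
```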
- Another or alternative stage of analysis focuses on the mouth region and determines if the mouth is opened in a yawn or indeed not smiling; again the face region is categorized accordingly.
- if AAM analysis is performed on the image, then the AAM parameters can be used to indicate the state of a subject's mouth.
- exemplary tests might include luminance levels, skin colour and texture histograms, and abrupt facial expressions (smiling, frowning) which may cause significant variations in facial features (mouth shape, furrows in the brow).
- Specialized tests can be implemented as additional or alternative image analysis filters, for example, a Hough transform filter could be used to detect parallel lines in a face region above the eyes indicating a “furrowed brow”.
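Restricting a Hough accumulator to near-horizontal orientations reduces it to per-row voting on horizontal-edge pixels, giving a minimal sketch of such a "furrowed brow" filter (all thresholds and the synthetic patch are illustrative):

```python
import numpy as np

def horizontal_line_rows(gray, frac=0.6):
    """Minimal Hough-style vote for near-horizontal lines in a brow patch.

    With theta fixed near zero, each image row is one accumulator bin;
    rows collecting edge votes from a large fraction of columns are
    reported as lines. `frac` is an illustrative threshold.
    """
    gy = np.abs(np.diff(gray, axis=0))   # vertical gradient = horizontal edges
    edges = gy > 50
    votes = edges.sum(axis=1)            # one accumulator bin per row
    return np.flatnonzero(votes >= frac * gray.shape[1])

def has_furrowed_brow(gray, min_lines=2):
    """Flag a brow region containing several parallel horizontal lines."""
    return len(horizontal_line_rows(gray)) >= min_lines

# Synthetic brow patch: two dark horizontal furrows on a bright background.
patch = np.full((20, 30), 200.0)
patch[6, :] = 40.0
patch[12, :] = 40.0
```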
- Other image analysis techniques such as those known in the art and as disclosed in U.S. Pat. No. 6,301,440 can also be employed to categorise the face region(s) of the main image.
- it is decided (for each face region) if any of these defects occurred, step 260 , and the camera or external processing device user can be offered the option of repairing the defect based on the buffered (low resolution) face region data, step 265 .
- each of the low-resolution face regions is first analyzed by the face region quality analyzer, step 270 .
- the analysis may vary from the analysis of face regions in the main acquired image at step 250 . Nevertheless, the analysis steps are similar in that each low-resolution face region is analyzed to determine if it suffers from image defects, in which case it should not be selected at step 280 to reconstruct the defective face region(s) in the main image.
- where there are insufficient defect-free face regions, an indication is passed to the user that image repair is not viable. Where there are enough “good” face regions, these are passed on for resizing and alignment, step 285 .
- This step re-sizes each face region and performs some local alignment of cardinal face points to correct for variations in pose and to ensure that each of the low-resolution face regions overlap one another as uniformly as is practical for later processing.
- image alignment can be achieved using cardinal face points, in particular those relating to the eyes, mouth, and lower face (chin region) which is normally delineated by a distinct boundary edge, and the upper face which is normally delineated by a distinctive hairline boundary.
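One way to realise alignment by cardinal face points is a similarity transform (scale, rotation, shift) sending the detected eye centres onto canonical positions; the canonical coordinates below are arbitrary illustrative choices:

```python
import numpy as np

def eye_similarity_transform(left_eye, right_eye,
                             canon_left=(10.0, 15.0),
                             canon_right=(30.0, 15.0)):
    """Similarity transform mapping detected eye centres to canonical ones.

    A 2-D similarity is represented as one complex multiplication: the
    ratio of the canonical eye vector to the detected eye vector encodes
    both scale and rotation.
    """
    src = np.array(right_eye) - np.array(left_eye)
    dst = np.array(canon_right) - np.array(canon_left)
    s = complex(*dst) / complex(*src)
    a, b = s.real, s.imag
    R = np.array([[a, -b], [b, a]])          # scaled rotation matrix
    t = np.array(canon_left) - R @ np.array(left_eye)
    return R, t

def apply(R, t, p):
    return R @ np.array(p) + t

R, t = eye_similarity_transform(left_eye=(40.0, 60.0), right_eye=(60.0, 60.0))
```

Warping each low-resolution face region with its own transform brings all eye pairs into register, so the regions overlap as uniformly as practical for the later combination step.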
- the low-resolution images captured and stored at steps 200 / 210 can be captured either from a time period before capturing the main image or from a period following capture of the main image (indicated by the dashed line extending from step 230 ). For example, it may be possible to capture suitable defect free low resolution images in a period immediately after a subject has stopped moving/blinking etc. following capture of the main image.
- This set of selected defect free face regions is next passed to a super-resolution module 160 which combines them using known super-resolution methods to yield a high resolution face region which is compatible with a corresponding region of the main acquired image.
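The fusion step can be sketched naively as averaging the aligned, upsampled regions; real super-resolution methods add sub-pixel registration and deconvolution, but plain averaging already illustrates how multiple low-resolution frames yield a single lower-noise, higher-resolution region:

```python
import numpy as np

def combine_aligned_regions(regions, scale=2):
    """Naive multi-frame combination of pre-aligned low-res face regions.

    Each region is upsampled by pixel replication and the stack is
    averaged. This is only the fusion step of super-resolution, shown
    under the assumption that alignment has already been done.
    """
    ups = [np.kron(r, np.ones((scale, scale))) for r in regions]
    return np.mean(ups, axis=0)

rng = np.random.default_rng(1)
truth = rng.uniform(0, 255, size=(8, 8))
# Ten noisy aligned captures of the same face region.
frames = [truth + rng.normal(0, 10, size=truth.shape) for _ in range(10)]
fused = combine_aligned_regions(frames)
```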
- the system has available to it, a high quality defect-free combination face region and a high resolution main image with a generally corresponding defective face region.
- the defective face region(s) as well as the corresponding high quality defect-free face region are subjected to AAM analysis, step 300 .
- reference is made to FIGS. 3( a ) to ( d ), which illustrate some images including face regions which have been processed by the AAM module 150 .
- the model represented by the wire frame superimposed on the face is tuned for a generally forward facing and generally upright face, although separate models can be deployed for use with inclined faces or faces in profile.
- the model returns a set of coordinates for the vertices of the wire frame; as well as texture parameters for each of the triangular elements defined by adjacent vertices.
- the relative coordinates of the vertices as well as the texture parameters can in turn provide indicators linked to the expression and inclination of the face which can be used in quality analysis as mentioned above.
- the AAM module 150 can also be used in the facial region analysis steps 250 / 270 to provide an indicator of whether a mouth or eyes are open, i.e. smiling and not blinking; and also to help determine in steps 285 / 290 implemented by the super-resolution module 160 whether facial regions are similarly aligned or inclined for selection before super-resolution.
- taking FIG. 3( a ) as an example of a facial region produced by super-resolution of low resolution images, it is observed that the set of vertices comprising the periphery of the AAM model defines a region which can be mapped onto a corresponding set of peripheral vertices of FIG. 3( b ) to FIG. 3( d ), where these images have been classified and confirmed by the user as defective facial regions and candidates for correction.
- the model parameters for FIG. 4( a ) or 4 ( b ), which might represent super-resolved defect free face regions, could indicate that the left-right orientation of these face regions would not make them suitable candidates for correcting the face region of FIG. 4( c ).
- the face region of FIG. 4( f ) could be a more suitable candidate than the face region of FIG. 4( e ) for correcting the face region of FIG. 4( d ).
- once the super-resolved face region is deemed to be compatible with the defective face region, information from the super-resolved face region can be pasted onto the main image by any suitable technique to correct the face region of the main image, step 320 .
- the corrected image can be viewed and depending on the nature of the mapping, it can be adjusted by the user, before being finally accepted or rejected, step 330 . So for example, where dithering around the periphery of the corrected face region is used as part of the correction process, step 320 , the degree of dithering can be adjusted.
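The adjustable blending around the periphery of the corrected region can be sketched as a feathered paste with a linear alpha ramp, where the ramp width plays the role of the user-adjustable degree of dithering (all sizes and values below are illustrative):

```python
import numpy as np

def feathered_paste(base, patch, top, left, feather=4):
    """Paste `patch` into `base` with a linear alpha ramp at the border.

    `feather` controls the ramp width: larger values give a softer
    transition between the corrected face region and the main image.
    """
    h, w = patch.shape
    # Distance (1-indexed) of each pixel from the nearest patch edge.
    ramp_y = np.minimum(np.arange(h)[:, None] + 1,
                        np.arange(h)[::-1, None] + 1)
    ramp_x = np.minimum(np.arange(w)[None, :] + 1,
                        np.arange(w)[::-1][None, :] + 1)
    alpha = np.clip(np.minimum(ramp_y, ramp_x) / feather, 0.0, 1.0)
    out = base.copy()
    region = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = alpha * patch + (1 - alpha) * region
    return out

base = np.zeros((20, 20))
patch = np.full((10, 10), 100.0)
blended = feathered_paste(base, patch, top=5, left=5, feather=4)
```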
- luminance levels or texture parameters in the corrected regions can be manually adjusted by the user, or indeed any parameter of the corrected region and the mapping process can be manually adjusted prior to final approval or rejection by the user.
- AAM provides one approach to determine the outside boundary of a facial region
- other well-known image processing techniques such as edge detection, region growing and skin color analysis may be used in addition or as alternatives to AAM.
- Other techniques which can prove useful include applying foreground/background separation to either the low-resolution images or the main image prior to running face detection to reduce overall processing time by only analysing foreground regions and particularly foreground skin segments. Local colour segmentation applied across the boundary of a foreground/background contour can assist in further refining the boundary of a facial region.
- user-selected options are typically chosen through buttons on the camera user interface where the correction module is implemented on the acquisition device 20 .
- consider the example of a defect where one eye is shut in the main image frame due to the subject “blinking” during the acquisition.
- the user is prompted to determine if they wish to correct this defect. If they confirm this, then the camera begins by analyzing a set of face regions stored from preview images acquired immediately prior to the main image acquisition. It is assumed that a set of, say, 20 images was saved from the one second period immediately prior to image acquisition. As the defect was a blinking eye, the initial testing determines that the last, say, 10 of these preview images are not useful. However, the previous 10 images are determined to be suitable.
- Additional testing of these images might include the determination of facial pose, eliminating images where the facial pose varies more than 5% from the averaged pose across all previews; a determination of the size of the facial region, eliminating images where the averaged size varies more than 25% from the averaged size across all images.
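The pose and size tests with the 5% and 25% thresholds quoted above can be sketched as follows, using scalar pose and size summaries per preview region (the sample values are illustrative):

```python
import numpy as np

def filter_candidates(poses, sizes, pose_tol=0.05, size_tol=0.25):
    """Drop candidates deviating from the averages beyond the thresholds.

    `poses` and `sizes` are scalar summaries per region (e.g. a yaw
    angle and a region area); regions whose pose deviates more than 5%
    or whose size deviates more than 25% from the respective averages
    are eliminated. Returns the indices of surviving regions.
    """
    poses = np.asarray(poses, float)
    sizes = np.asarray(sizes, float)
    pose_ok = np.abs(poses - poses.mean()) <= pose_tol * np.abs(poses.mean())
    size_ok = np.abs(sizes - sizes.mean()) <= size_tol * sizes.mean()
    return np.flatnonzero(pose_ok & size_ok)

poses = [10.0] * 9 + [12.0]   # last preview frame: pose outlier
sizes = [100.0] * 10          # all sizes within tolerance
keep = filter_candidates(poses, sizes)
```

The looser size threshold reflects the point made below: rescaling a face region is easier than correcting for pose variation.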
- the reason the threshold is higher for the latter test is that it is easier to rescale face regions than to correct for pose variations.
- the regions that are combined may include portions of the background region surrounding the main face region. This is particularly important where the defect to be corrected in the main acquired image is due to face motion during image exposure. Such motion leads to a face region with a poorly defined outer boundary in the main image, and the super-resolution image which is superimposed upon it typically incorporates portions of the background to properly correct this face motion defect.
- a determination of whether to include background regions for face reconstruction can be made by the user, or may be determined automatically after a defect analysis is performed on the main acquired image. In the latter case, where the defect comprises blurring due to face motion, then background regions will normally be included in the super-resolution reconstruction process.
- a reconstructed background can be created using either (i) region infilling techniques for a background region of relatively homogeneous colour and texture characteristics, or (ii) directly from the preview image stream using image alignment and super-resolution techniques.
- the reconstructed background is merged into a gap in the main image background created by the separation of foreground from background; the reconstructed face region is next merged into the separated foreground region, specifically into the facial region of the foreground and finally the foreground is re-integrated with the enhanced background region.
- some additional scaling and alignment operations are normally involved.
- some blending, infilling and morphological operations may be used in order to ensure a smooth transition between the newly constructed super-resolution face region and the background of the main acquired image. This is particularly the case where the defect to be corrected is motion of the face during image exposure. In the case of motion defects it may also be desirable to reconstruct portions of the image background prior to integration of the reconstructed face region into the main image.
- Preview images are acquired under fixed camera settings and can be over/under exposed. This may not be fully compensated for during the super-resolution process and may involve additional image processing operations.
- the patches to be used for super-resolution reconstruction may be sub-regions within a face region.
- the patches to be used for super-resolution reconstruction may be sub-regions within a face region.
- a determination of the precise boundary of the sub-region is of less importance as the sub-region will be merged into a surrounding region of substantially similar colour and texture (i.e. skin colour and texture).
- separate face regions may be individually tracked (see also U.S. application Ser. No. 11/1,464,083, which is hereby incorporated by reference). Regions may be tracked from frame-to-frame. Preview or post-view face regions can be extracted, analyzed and aligned with each other and with the face region in the main or final acquired image.
- faces may be tracked between frames in order to find and associate smaller details between previews or post-views on the face. For example, a left eye from Joe's face in preview N may be associated with a left eye from Joe's face in preview N+1. These may be used together to form one or more enhanced quality images of Joe's eye.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Software Systems (AREA)
- Medical Informatics (AREA)
- Databases & Information Systems (AREA)
- Data Mining & Analysis (AREA)
- Computing Systems (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Signal Processing (AREA)
- Image Processing (AREA)
- Studio Devices (AREA)
- Image Analysis (AREA)
Abstract
An image processing technique includes acquiring a main image of a scene and determining one or more facial regions in the main image. The facial regions are analysed to determine if any of the facial regions includes a defect. A sequence of relatively low resolution images nominally of the same scene is also acquired. One or more sets of low resolution facial regions in the sequence of low resolution images are determined and analysed for defects. Defect free facial regions of a set are combined to provide a high quality defect free facial region. At least a portion of any defective facial regions of the main image are corrected with image information from a corresponding high quality defect free facial region.
Description
- This application is a Continuation of U.S. patent application Ser. No. 13/947,095 filed on Jul. 21, 2013, now U.S. Pat. No. 9,025,837; which is a Continuation of U.S. patent application Ser. No. 13/103,077 filed on May 8, 2011, now U.S. Pat. No. 8,515,138; which is a Continuation of U.S. patent application Ser. No. 13/034,707 filed on Feb. 25, 2011, now U.S. Pat. No. 8,494,232; which is a Continuation of U.S. patent application Ser. No. 11/752,925 filed on May 24, 2007, now U.S. Pat. No. 7,916,971; all of which are hereby incorporated by reference in their entireties.
- The present invention relates to an image processing method and apparatus. One of the most common reasons for an acquired digital photograph to be discarded or spoiled is that one or more of the facial regions in the photograph suffer from photographic defects other than red-eye defects (red-eye defects themselves can be common in cameras not operating with the advantages of the techniques described, e.g., at U.S. Pat. No. 6,407,777, and at US published applications nos. 2005/0140801, 2005/0041121, 2006/0093212, and 2006/0204054, which are assigned to the same assignee and hereby incorporated by reference). Common examples occur when people move or shake their heads, close their eyes or blink, or yawn. Where there are several faces in a photograph, one "defective" face is enough to spoil the whole shot. Although digital cameras allow users to quickly shoot several pictures of the same scene, such cameras typically neither warn of facial errors nor provide a way to correct them without repeating the composition stages of taking the photograph (i.e. getting everyone together again in a group) and re-shooting the scene. This problem is particularly acute with children, who are often photographed in unusual spontaneous poses which cannot be duplicated. When such a shot is spoiled because the child moved their head at the moment of acquisition, it is very disappointing for the photographer.
- U.S. Pat. No. 6,301,440, which is incorporated by reference, discloses an image acquisition device wherein the instant of exposure is controlled by image content. When a trigger is activated, the image proposed by the user is analysed and imaging parameters are altered to obtain optimum image quality before the device proceeds to take the image. For example, the device could postpone acquisition of the image until every person in the image is smiling.
- An image processing method is provided including acquiring a main image of a scene. One or more facial regions are determined in the main image. The one or more main image facial regions are analyzed for defects and one or more are determined to be defective. A sequence of relatively low resolution images nominally of the scene is acquired. One or more sets of low resolution facial regions in the sequence are analyzed to determine one or more that correspond to a defective main image facial region. At least a portion of the defective main image facial region is corrected with image information from one or more corresponding low resolution facial regions not including a same defect as said portion of said defective main image facial region.
- The sequence of low resolution images may be specifically acquired for a time period not including a time for acquiring the main image. The method may also include combining defect-free low resolution facial regions into a combined image, and correcting at least the portion of the defective main image facial region with image information from the combined image.
- Another image processing method is provided that includes acquiring a main image of a scene. One or more facial regions in the main image are determined, and analyzed to determine if any are defective. A sequence of relatively low resolution images is acquired nominally of the scene for a time period not including a time for acquiring the main image. One or more sets of low resolution facial regions are determined in the sequence of low resolution images. The sets of facial regions are analyzed to determine if any facial regions of a set corresponding to a defective facial region of the main image include a defect. Defect free facial regions of the corresponding set are combined to provide a high quality defect free facial region. At least a portion of any defective facial regions of said main image are corrected with image information from a corresponding high quality defect free facial region.
- The time period may include one or more of a time period preceding or a time period following the time for acquiring the main image. The correcting may include applying a model including multiple vertices defining a periphery of a facial region to each high quality defect-free facial region and a corresponding defective facial region. Pixels may be mapped of the high quality defect-free facial region to the defective facial region according to the correspondence of vertices for the respective regions. The model may include an Active Appearance Model (AAM).
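- The vertex-based pixel mapping described above can be sketched as follows: fit a transform between the corresponding peripheral model vertices of the two regions and use it to carry pixel coordinates across. A least-squares affine fit is one simple realisation; the function names and the affine choice are illustrative assumptions, since the text leaves the exact mapping open.

```python
import numpy as np

def affine_from_vertices(src_pts, dst_pts):
    """Least-squares affine transform mapping src vertices onto dst vertices.

    src_pts, dst_pts: (N, 2) arrays of corresponding model vertices (N >= 3).
    Returns a 2x3 matrix A such that dst ~= A @ [x, y, 1].
    """
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    ones = np.ones((len(src), 1))
    X = np.hstack([src, ones])                    # (N, 3)
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)   # solves X @ A = dst, A is (3, 2)
    return A.T                                    # (2, 3)

def map_points(A, pts):
    """Apply the 2x3 affine transform to an (N, 2) array of points."""
    pts = np.asarray(pts, dtype=float)
    ones = np.ones((len(pts), 1))
    return np.hstack([pts, ones]) @ A.T

# Pure-translation example: all vertices shifted by (5, -2).
src = [(0, 0), (10, 0), (0, 10), (10, 10)]
dst = [(5, -2), (15, -2), (5, 8), (15, 8)]
A = affine_from_vertices(src, dst)
mapped = map_points(A, [(2.0, 3.0)])
print(round(float(mapped[0, 0]), 6), round(float(mapped[0, 1]), 6))  # → 7.0 1.0
```

With more vertices from an AAM periphery, the same least-squares fit absorbs small pose differences; a piecewise warp per model triangle would be the finer-grained alternative.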
- The main image may be acquired at an exposure level different to the exposure level of the low resolution images. The correcting may include mapping luminance levels of the high quality defect free facial region to luminance levels of the defective facial region.
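- In its simplest form, the luminance-level mapping above can be a gain/offset match of the two regions' luminance statistics. This sketch assumes scalar luminance arrays in [0, 255] and is offered as one plausible realisation, not the specific method of the text (which might equally use full histogram matching).

```python
import numpy as np

def match_luminance(src_lum, ref_lum):
    """Shift/scale the luminance of `src_lum` so its mean and spread match `ref_lum`.

    A simple gain/offset mapping of a (possibly under/over-exposed) region
    onto the exposure of a reference region.
    """
    src = np.asarray(src_lum, dtype=float)
    ref = np.asarray(ref_lum, dtype=float)
    s_mu, s_sd = src.mean(), src.std()
    r_mu, r_sd = ref.mean(), ref.std()
    gain = r_sd / s_sd if s_sd > 0 else 1.0
    out = (src - s_mu) * gain + r_mu
    return np.clip(out, 0, 255)

# Under-exposed patch (mean ~60) mapped toward a reference patch (mean ~120).
rng = np.random.default_rng(0)
src = rng.normal(60, 10, (32, 32))
ref = rng.normal(120, 20, (32, 32))
out = match_luminance(src, ref)
print(round(float(out.mean()), 1), round(float(out.std()), 1))
```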
- Sets of low resolution facial regions from the sequence of low resolution images may be stored in an image header file of the main image.
- The method may include displaying the main image and/or corrected image, and selected actions may be user-initiated.
- The analyzing of the sets may include, prior to the combining in the second method, removing facial regions including faces exceeding an average size of faces in a set of facial regions by a threshold amount from said set of facial regions, and/or removing facial regions including faces with an orientation outside an average orientation of faces in a set of facial regions by a threshold amount from said set of facial regions.
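- The size- and orientation-based removal described above might be sketched as follows. The dictionary keys and the concrete 25%/5-degree thresholds are illustrative assumptions; the text states the thresholds only abstractly here.

```python
import numpy as np

def filter_face_set(regions, size_tol=0.25, angle_tol_deg=5.0):
    """Drop face regions whose size or in-plane orientation deviates too far
    from the set average, before combining the remainder.

    `regions` is a list of dicts with hypothetical keys 'size' (face width in
    pixels) and 'angle' (in-plane rotation, degrees).
    """
    sizes = np.array([r["size"] for r in regions], dtype=float)
    angles = np.array([r["angle"] for r in regions], dtype=float)
    mean_size, mean_angle = sizes.mean(), angles.mean()
    keep = []
    for r, s, a in zip(regions, sizes, angles):
        if abs(s - mean_size) > size_tol * mean_size:
            continue  # face too large/small relative to the set average
        if abs(a - mean_angle) > angle_tol_deg:
            continue  # orientation too far from the average
        keep.append(r)
    return keep

faces = [
    {"size": 100, "angle": 0.0},
    {"size": 104, "angle": 1.5},
    {"size": 98,  "angle": -2.0},
    {"size": 160, "angle": 0.5},   # outlier: well over 25% larger than average
]
print(len(filter_face_set(faces)))  # → 3
```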
- The analyzing of sets may include the following: applying an Active Appearance Model (AAM) to each face of a set of facial regions; analyzing AAM parameters for each face of the set of facial regions to provide an indication of facial expression; and prior to the combining in the second method, removing faces having a defective expression from the set of facial regions.
- The analyzing of sets may include the following: applying an Active Appearance Model (AAM) to each face of a set of facial regions; analysing AAM parameters for each face of the set of facial regions to provide an indication of facial orientation; and prior to said combining in the second method, removing faces having an undesirable orientation from said set of facial regions.
- The analyzing of facial regions may include applying an Active Appearance Model (AAM) to each facial region, and analyzing AAM parameters for each facial region to provide an indication of facial expression, and/or analyzing each facial region for contrast, sharpness, texture, luminance levels or skin color or combinations thereof, and/or analyzing each facial region to determine if an eye of the facial region is closed, if a mouth of the facial region is open and/or if a mouth of the facial region is smiling.
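- One common way to score the sharpness mentioned above is the variance of a Laplacian response over the region: low variance suggests the region is blurred. This is a standard metric offered as a plausible sketch, not the prescribed test of the text, and the threshold value is illustrative.

```python
import numpy as np

def laplacian_variance(gray):
    """Sharpness score for a face region: variance of a 3x3 Laplacian response."""
    g = np.asarray(gray, dtype=float)
    lap = (-4 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return lap.var()

def is_blurred(gray, threshold=50.0):
    """Flag a region as a blur defect when its sharpness score is low."""
    return laplacian_variance(gray) < threshold

# A sharp checkerboard vs. a flat (featureless) patch.
yy, xx = np.mgrid[0:32, 0:32]
sharp = ((xx // 4 + yy // 4) % 2) * 255.0
flat = np.full((32, 32), 128.0)
print(is_blurred(sharp), is_blurred(flat))  # → False True
```

Note this scores the face region in isolation, consistent with detecting localized subject motion rather than global image blur.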
- The method may be such that the correcting, and the combining in the second method, only occur when the set of facial regions exceeds a given number. The method may also include resizing and aligning faces of the set of facial regions, and the aligning may be performed according to cardinal points of faces of the set of facial regions.
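- Alignment according to cardinal points can be sketched with a similarity transform (scale, rotation, translation) computed from two such points, e.g. the eye centres; this assumes eye coordinates are available from the tracker and is only one way to realise the alignment.

```python
import numpy as np

def similarity_from_eyes(eyes_src, eyes_dst):
    """Similarity transform mapping the eye centres of one face region onto
    another's, for rough alignment of faces in a set.

    eyes_src/eyes_dst: ((xL, yL), (xR, yR)) eye-centre coordinates.
    Returns (scale, angle_radians, translation) with dst = scale * R(angle) @ src + t.
    """
    sL, sR = np.asarray(eyes_src, dtype=float)
    dL, dR = np.asarray(eyes_dst, dtype=float)
    v_src, v_dst = sR - sL, dR - dL
    scale = np.linalg.norm(v_dst) / np.linalg.norm(v_src)
    angle = np.arctan2(v_dst[1], v_dst[0]) - np.arctan2(v_src[1], v_src[0])
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    t = dL - scale * R @ sL
    return scale, angle, t

# A face with 40 px between the eyes, aligned to one with 50 px.
scale, angle, t = similarity_from_eyes(((10, 20), (50, 20)), ((15, 25), (65, 25)))
print(round(float(scale), 3), round(float(np.degrees(angle)), 3))  # → 1.25 0.0
```

Only a rough fit is wanted here: as the text notes later, exact per-pixel alignment would undermine the super-resolution step.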
- The correcting may include blending and/or infilling a corrected region of the main image with the remainder of the main image.
- Embodiments will now be described, by way of example, with reference to the accompanying drawings, in which:
- FIG. 1 is a block diagram of an image processing apparatus operating in accordance with an embodiment of the present invention;
- FIG. 2 is a flow diagram of an image processing method according to a preferred embodiment of the present invention; and
- FIGS. 3 and 4 show exemplary sets of images to which an active appearance model has been applied.
- Certain embodiments can be implemented with a digital camera which incorporates (i) a face tracker operative on a preview image stream; (ii) a super-resolution processing module configured to create a higher resolution image from a composite of several low-resolution images; and (iii) a facial region quality analysis module for determining the quality of facial regions.
- Preferably, super-resolution is applied to preview facial regions extracted during face tracking.
- The embodiments enable the correction of errors or flaws in the facial regions of an acquired image within a digital camera using preview image data and employing super-resolution techniques.
- FIG. 1 is a block diagram of an image acquisition device 20, which in the present embodiment is a portable digital camera, operating in accordance with certain embodiments. It will be appreciated that many of the processes implemented in the digital camera are implemented in or controlled by software operating on a microprocessor, central processing unit, controller, digital signal processor and/or an application specific integrated circuit, collectively depicted as processor 120. All user interface and control of peripheral components such as buttons and display is controlled by a microcontroller 122. - In operation, the
processor 120, in response to a user input at 122, such as half pressing a shutter button (pre-capture mode 32), initiates and controls the digital photographic process. Ambient light exposure is determined using a light sensor 40 in order to automatically determine if a flash is to be used. The distance to the subject is determined using a focusing mechanism 50 which also focuses the image on an image capture device 60. If a flash is to be used, processor 120 causes a flash device 70 to generate a photographic flash in substantial coincidence with the recording of the image by the image capture device 60 upon full depression of the shutter button. - The
image capture device 60 digitally records the image in colour. The image capture device is known to those familiar with the art and may include a CCD (charge coupled device) or CMOS to facilitate digital recording. The flash may be selectively generated either in response to the light sensor 40 or a manual input 72 from the user of the camera. The high resolution image recorded by image capture device 60 is stored in an image store 80 which may comprise computer memory such as a dynamic random access memory or a non-volatile memory. The camera is equipped with a display 100, such as an LCD, both for displaying preview images and displaying a user interface for camera control software. - In the case of preview images which are generated in the
pre-capture mode 32 with the shutter button half-pressed, the display 100 can assist the user in composing the image, as well as being used to determine focus and exposure. Temporary storage 82 is used to store one or a plurality of the stream of preview images and can be part of the image store 80 or a separate component. The preview image is usually generated by the image capture device 60. For speed and memory efficiency reasons, preview images usually have a lower pixel resolution than the main image taken when the shutter button is fully depressed, and are generated by sub-sampling a raw captured image using software 124 which can be part of the general processor 120 or dedicated hardware or a combination thereof. - In the present embodiment, a face detection and
tracking module 130 such as described in U.S. application Ser. No. 11/1,464,083, filed Aug. 11, 2006, which is hereby incorporated by reference, is operably connected to the sub-sampler 124 to control the sub-sampled resolution of the preview images in accordance with the requirements of the face detection and tracking module. Preview images stored in temporary storage 82 are available to the module 130 which records the locations of faces tracked and detected in the preview image stream. In one embodiment, the module 130 is operably connected to the display 100 so that boundaries of detected and tracked face regions can be superimposed on the display around the faces during preview. - In the embodiment of
FIG. 1, the face tracking module 130 is arranged to extract and store tracked facial regions at relatively low resolution in a memory buffer such as memory 82 and possibly for storage as meta-data in an acquired image header stored in memory 80. Where multiple face regions are tracked, a buffer is established for each tracked face region. These buffers are of finite size (10-20 extracted face regions in a preferred embodiment) and generally operate on a first-in-first-out (FIFO) basis. - According to the preferred embodiment, the
device 20 further comprises an image correction module 90. Where the module 90 is arranged for off-line correction of acquired images in an external processing device 10, such as a desktop computer, a colour printer or a photo kiosk, face regions detected and/or tracked in preview images are preferably stored as meta-data within the image header. However, where the module 90 is implemented within the camera 20, it can have direct access to the buffer 82 where preview images and/or face region information is stored. - In this embodiment, the
module 90 receives the captured high resolution digital image from the store 80 and analyzes it to detect defects. The analysis is performed as described in the embodiments to follow. If defects are found, the module can modify the image to remove the defect. The modified image may be either displayed on image display 100, saved on a persistent storage 112 which can be internal or a removable storage such as a CF card, SD card or the like, or downloaded to another device via image output means 110 which can be tethered or wireless. The module 90 can be brought into operation either automatically each time an image is captured, or upon user demand via input 30. Although illustrated as a separate item, where the module 90 is part of the camera, it may be implemented by suitable software on the processor 120. - The main components of the image correction module include a
quality module 140 which is arranged to analyse face regions from either low or high resolution images to determine if these include face defects. A super-resolution module 160 is arranged to combine multiple low-resolution face regions of the same subject, generally with the same pose and a desirable facial expression, to provide a high quality face region for use in the correction process. In the present embodiment, an active appearance model (AAM) module 150 produces AAM parameters for face regions, again from either low or high resolution images. - AAM modules are well known and a suitable module for the present embodiment is disclosed in "Fast and Reliable Active Appearance Model Search for 3-D Face Tracking", F. Dornaika and J. Ahlberg, IEEE Transactions on Systems, Man, and Cybernetics-Part B: Cybernetics, Vol. 34, No. 4, pp. 1838-1853, August 2004, although other models based on the original paper by T. F. Cootes et al., "Active Appearance Models", Proc. European Conf. Computer Vision, 1998, pp. 484-498, could also be employed.
- The AAM module 150 can preferably cooperate with the quality module 140 to provide pose and/or expression indicators to allow for selection of images in the analysis and optionally in the correction process described below. Also, the AAM module 150 can preferably cooperate with the super-resolution module 160 to provide pose indicators to allow for selection of images in the correction process, again described in more detail below. - Referring now to
FIG. 2, which illustrates an exemplary processing flow for certain embodiments, when a main image is acquired, step 230, the location and size of any detected/tracked face region(s) in the main acquired image (high resolution) will be known by the module 90 from the module 130. Face detection can either be applied directly on the acquired image and/or information for face regions previously detected and/or tracked in the preview stream can be used for face detection in the main image (indicated by the dashed line extending from step 220). At step 250, the facial region quality analysis module 140 extracts and analyzes face regions tracked/detected at step 240 in the main image to determine the quality of the acquired face regions. For example, the module 140 can apply a preliminary analysis to measure the overall contrast, sharpness and/or texture of detected face region(s). This can indicate if the entire face region was blurred due to motion of the subject at the instant of acquisition. If a facial region is not sufficiently well defined then it is marked as a blur defect. Additionally or alternatively, another stage of analysis can focus on the eye region of the face(s) to determine if one or both eyes were fully or partially closed at the instant of acquisition, and the face region is categorized accordingly. As mentioned previously, if AAM analysis is performed on the image, then the AAM parameters can be used to indicate whether a subject's eyes are open or not. It should be noted that in the above analyses, the module 90 detects blink or blur due to localized movement of the subject as opposed to global image blur. - Another or alternative stage of analysis focuses on the mouth region and determines if the mouth is opened in a yawn or indeed not smiling; again the face region is categorized accordingly. As mentioned previously, if AAM analysis is performed on the image, then the AAM parameters can be used to indicate the state of a subject's mouth.
- Other exemplary tests might include luminance levels, skin colour and texture histograms, abrupt facial expressions (smiling, frowning) which may cause significant variations in facial features (mouth shape, furrows in brow). Specialized tests can be implemented as additional or alternative image analysis filters, for example, a Hough transform filter could be used to detect parallel lines in a face region above the eyes indicating a “furrowed brow”. Other image analysis techniques such as those known in the art and as disclosed in U.S. Pat. No. 6,301,440 can also be employed to categorise the face region(s) of the main image.
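- The "furrowed brow" test above can be approximated without a full Hough transform by looking for rows in the region above the eyes where a strong horizontal edge spans most of the width; this is a deliberately simplified stand-in (a real detector would use a proper Hough transform to tolerate slightly slanted furrows), and the thresholds are illustrative.

```python
import numpy as np

def horizontal_line_rows(brow_region, grad_thresh=40.0, min_frac=0.6):
    """Rows of the brow region where a strong horizontal intensity edge spans
    at least `min_frac` of the width. Two or more nearby groups of such rows
    suggest parallel furrow lines.
    """
    g = np.asarray(brow_region, dtype=float)
    vgrad = np.abs(np.diff(g, axis=0))   # vertical intensity change per pixel
    strong = vgrad > grad_thresh
    frac = strong.mean(axis=1)           # fraction of strong pixels in each row
    return np.flatnonzero(frac >= min_frac)

# Synthetic brow patch: two dark horizontal furrows on a brighter background.
patch = np.full((12, 40), 200.0)
patch[4, :] = 80.0
patch[8, :] = 80.0
rows = horizontal_line_rows(patch)
print(rows, len(rows) >= 2)  # → [3 4 7 8] True
```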
- After this analysis, it is decided (for each face region) if any of these defects occurred,
step 260, and the camera or external processing device user can be offered the option of repairing the defect based on the buffered (low resolution) face region data, step 265. - When the repair option is actuated by the user, each of the low-resolution face regions is first analyzed by the face region quality analyzer,
step 270. As this analysis is operative on lower resolution images acquired and stored at steps 200/210, the analysis may vary from the analysis of face regions in the main acquired image at step 250. Nevertheless the analysis steps are similar in that each low-resolution face region is analyzed to determine if it suffers from image defects, in which case it should not be selected at step 280 to reconstruct the defective face region(s) in the main image. After this analysis and selection, if there are not enough "good" face regions corresponding to a defective face region available from the stream of low-resolution images, an indication is passed to the user that image repair is not viable. Where there are enough "good" face regions, these are passed on for resizing and alignment, step 285. - This step re-sizes each face region and performs some local alignment of cardinal face points to correct for variations in pose and to ensure that each of the low-resolution face regions overlap one another as uniformly as is practical for later processing.
- It should also be noted that as these image regions were captured in sequence and over a relatively short duration, it is expected that they are of approximately the same size and orientation. Thus, image alignment can be achieved using cardinal face points, in particular those relating to the eyes, mouth, and lower face (chin region) which is normally delineated by a distinct boundary edge, and the upper face which is normally delineated by a distinctive hairline boundary. Some slight scaling and morphing of extracted face regions may be used to achieve reasonable alignment, however a very precise alignment of these images is not desirable as it would undermine the super-resolution techniques which enable a higher resolution image to be determined from several low-resolution images.
- It should be noted that the low-resolution images captured and stored at steps 200/210 can be captured either from a time period before capturing the main image or from a period following capture of the main image (indicated by the dashed line extending from step 230). For example, it may be possible to capture suitable defect free low resolution images in a period immediately after a subject has stopped moving/blinking etc. following capture of the main image.
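- Keeping the preview frames nearest the shutter available, as described above, amounts to a bounded FIFO that keeps dropping its oldest entry as new previews (or post-views) arrive. A minimal sketch, with the buffer length and frame structure as illustrative assumptions:

```python
from collections import deque

class PreviewRing:
    """Bounded FIFO of low-resolution preview frames, so that the N frames
    immediately before (and, if capture continues, after) the main image
    acquisition remain available for reconstruction.
    """
    def __init__(self, maxlen=20):
        self._frames = deque(maxlen=maxlen)

    def on_preview(self, frame):
        self._frames.append(frame)       # oldest frame is dropped when full

    def snapshot(self):
        return list(self._frames)        # previews nearest the shutter last

ring = PreviewRing(maxlen=20)
for i in range(50):                      # e.g. a few seconds of preview frames
    ring.on_preview({"index": i})
frames = ring.snapshot()
print(len(frames), frames[0]["index"], frames[-1]["index"])  # → 20 30 49
```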
- This set of selected defect free face regions is next passed to a
super-resolution module 160 which combines them using known super-resolution methods to yield a high resolution face region which is compatible with a corresponding region of the main acquired image. - Now the system has available to it, a high quality defect-free combination face region and a high resolution main image with a generally corresponding defective face region.
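- The combination step can be illustrated with a very simplified "shift-and-add" scheme: each registered low-resolution frame is placed onto a finer grid at its known sub-pixel offset and the contributions are averaged. Real super-resolution adds deconvolution and regularisation; this sketch covers only the accumulation step, and the offsets are assumed known from the alignment stage.

```python
import numpy as np

def shift_and_add(lowres_frames, offsets, factor=2):
    """Accumulate registered low-res frames onto a grid `factor` times finer.

    lowres_frames: list of (H, W) arrays; offsets: per-frame (dy, dx) shifts
    in low-res pixels. Pixels never hit by any frame are left at zero.
    """
    H, W = lowres_frames[0].shape
    acc = np.zeros((H * factor, W * factor))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(lowres_frames, offsets):
        ys = np.round((np.arange(H) + dy) * factor).astype(int) % (H * factor)
        xs = np.round((np.arange(W) + dx) * factor).astype(int) % (W * factor)
        acc[np.ix_(ys, xs)] += frame
        cnt[np.ix_(ys, xs)] += 1
    cnt[cnt == 0] = 1
    return acc / cnt

# Four half-pixel-shifted copies of the same 4x4 ramp fill an 8x8 grid.
base = np.arange(16, dtype=float).reshape(4, 4)
frames = [base] * 4
offs = [(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)]
hires = shift_and_add(frames, offs, factor=2)
print(hires.shape)  # → (8, 8)
```

The half-pixel offsets here are what make the result genuinely higher resolution, which is also why the earlier alignment step deliberately avoids exact per-pixel registration.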
- If this has not already been performed for quality analysis, the defective face region(s) as well as the corresponding high quality defect-free face region are subjected to AAM analysis,
step 300. Referring now toFIG. 3( a) to (d), which illustrates some images including face regions which have been processed by theAAM module 150. In this case, the model represented by the wire frame superimposed on the face is tuned for a generally forward facing and generally upright face, although separate models can be deployed for use with inclined faces or faces in profile. Once the model has been applied, it returns a set of coordinates for the vertices of the wire frame; as well as texture parameters for each of the triangular elements defined by adjacent vertices. The relative coordinates of the vertices as well as the texture parameters can in turn provide indicators linked to the expression and inclination of the face which can be used in quality analysis as mentioned above. - It will therefore be seen that the
AAM module 150 can also be used in the facial region analysis steps 250/270 to provide an indicator of whether a mouth or eyes are open, i.e. smiling and not blinking; and also to help determine, in steps 285/290 implemented by the super-resolution module 160, whether facial regions are similarly aligned or inclined for selection before super-resolution. - So, using
FIG. 3(a) as an example of a facial region produced by super-resolution of low resolution images, it is observed that the set of vertices comprising the periphery of the AAM model define a region which can be mapped onto a corresponding set of peripheral vertices of FIG. 3(b) to FIG. 3(d), where these images have been classified and confirmed by the user as defective facial regions and candidates for correction. - In relation to
FIG. 4, the model parameters for FIG. 4(a) or 4(b), which might represent super-resolved defect free face regions, could indicate that the left-right orientation of these face regions would not make them suitable candidates for correcting the face region of FIG. 4(c). Similarly, the face region of FIG. 4(f) could be a more suitable candidate than the face region of FIG. 4(e) for correcting the face region of FIG. 4(d). - In any case, if the super-resolved face region is deemed to be compatible with the defective face region, information from the super-resolved face region can be pasted onto the main image by any suitable technique to correct the face region of the main image,
step 320. The corrected image can be viewed and, depending on the nature of the mapping, adjusted by the user, before being finally accepted or rejected, step 330. So, for example, where dithering around the periphery of the corrected face region is used as part of the correction process, step 320, the degree of dithering can be adjusted. Similarly, luminance levels or texture parameters in the corrected regions can be manually adjusted by the user; indeed any parameter of the corrected region and the mapping process can be manually adjusted prior to final approval or rejection by the user. - While AAM provides one approach to determine the outside boundary of a facial region, other well-known image processing techniques such as edge detection, region growing and skin color analysis may be used in addition or as alternatives to AAM. However, these may not have the advantage of also being useful in analysing a face region for defects and/or for pose information. Other techniques which can prove useful include applying foreground/background separation to either the low-resolution images or the main image prior to running face detection to reduce overall processing time by only analysing foreground regions and particularly foreground skin segments. Local colour segmentation applied across the boundary of a foreground/background contour can assist in further refining the boundary of a facial region.
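- The smooth transition at the periphery of the pasted region can be sketched with a feathered (linearly ramped) alpha blend; the text also mentions dithering, infilling and morphological operations, so this is only one plausible ingredient, with the ramp width as an illustrative parameter.

```python
import numpy as np

def feather_blend(main, patch, top, left, feather=4):
    """Paste a corrected face patch into the main image with a feathered
    border so the seam between patch and background is not visible.

    `main` and `patch` are 2-D float luminance arrays; `top`/`left` position
    the patch; `feather` is the ramp width in pixels.
    """
    ph, pw = patch.shape
    ramp = lambda n: np.minimum(np.minimum(np.arange(n), np.arange(n)[::-1]) + 1,
                                feather) / feather
    alpha = np.outer(ramp(ph), ramp(pw))   # 1 in the centre, ramping down at edges
    out = main.copy()
    roi = out[top:top + ph, left:left + pw]
    out[top:top + ph, left:left + pw] = alpha * patch + (1 - alpha) * roi
    return out

main = np.zeros((20, 20))
patch = np.full((8, 8), 100.0)
out = feather_blend(main, patch, 6, 6, feather=4)
print(out[9, 9], out[6, 6])  # centre fully replaced; corner only partially
```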
- Once the user is satisfied with the placement of the reconstructed face region they may choose to merge it with the main image; alternatively, if they are not happy they can cancel the reconstruction process. These actions are typically selected through buttons on the camera user interface where the correction module is implemented on the
acquisition device 20. - As a practical example, consider the system being used to correct an eye defect: one eye is shut in the main image frame because the subject "blinked" during the acquisition. Immediately after the main image acquisition the user is prompted to determine if they wish to correct this defect. If they confirm this, then the camera begins by analyzing a set of face regions stored from preview images acquired immediately prior to the main image acquisition. It is assumed that a set of, say, 20 images was saved from the one second period immediately prior to image acquisition. As the defect was a blinking eye, the initial testing determines that the last, say, 10 of these preview images are not useful. However, the previous 10 images are determined to be suitable. Additional testing of these images might include determining facial pose, eliminating images where the pose varies more than 5% from the averaged pose across all previews; and determining the size of the facial region, eliminating images where the size varies more than 25% from the averaged size across all images. The reason the threshold is higher for the latter test is that it is easier to rescale face regions than to correct for pose variations.
- In variations of the above described embodiment, the regions that are combined may include portions of the background region surrounding the main face region. This is particularly important where the defect to be corrected in the main acquired image is due to face motion during image exposure. This will lead to a face region with a poorly defined outer boundary in the main image and the super-resolution image which is superimposed upon it typically incorporates portions of the background for properly correcting this face motion defect. A determination of whether to include background regions for face reconstruction can be made by the user, or may be determined automatically after a defect analysis is performed on the main acquired image. In the latter case, where the defect comprises blurring due to face motion, then background regions will normally be included in the super-resolution reconstruction process. In an alternative embodiment, a reconstructed background can be created using either (i) region infilling techniques for a background region of relatively homogeneous colour and texture characteristics, or (ii) directly from the preview image stream using image alignment and super-resolution techniques. In the latter case the reconstructed background is merged into a gap in the main image background created by the separation of foreground from background; the reconstructed face region is next merged into the separated foreground region, specifically into the facial region of the foreground and finally the foreground is re-integrated with the enhanced background region.
- After applying super-resolution methods to create a higher resolution face region from multiple low-resolution preview images, some additional scaling and alignment operations are normally involved. Furthermore, some blending, infilling and morphological operations may be used in order to ensure a smooth transition between the newly constructed super-resolution face region and the background of the main acquired image. This is particularly the case where the defect to be corrected is motion of the face during image exposure. In the case of motion defects it may also be desirable to reconstruct portions of the image background prior to integration of the reconstructed face region into the main image.
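A minimal sketch of the blending step, assuming float images in [0, 1] and using repeated 3x3 box averaging of the face mask as a simple stand-in for the blending and morphological operations mentioned above:

```python
import numpy as np

def feather_mask(mask, iterations=3):
    """Soften a binary mask with repeated 3x3 box averaging so the
    super-resolved face region fades smoothly into the main image."""
    m = mask.astype(float)
    h, w = m.shape
    for _ in range(iterations):
        p = np.pad(m, 1, mode="edge")
        # Average each pixel with its 8 neighbours.
        m = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    return m

def blend(main_img, face_img, mask, iterations=3):
    """Alpha-blend the reconstructed face region into the main image."""
    a = feather_mask(mask, iterations)[..., None]  # HxWx1 alpha map
    return a * face_img + (1.0 - a) * main_img
```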
- It is also desirable to match the overall luminance level of the new face region with that of the old face region, and this is best achieved through a matching of the skin colour between the old region and the newly constructed one. Because preview images are acquired under fixed camera settings, they can be over- or under-exposed; this may not be fully compensated for during the super-resolution process and may require additional image processing operations.
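The luminance-matching step can be sketched as a single global gain computed over skin pixels. The function and mask names are illustrative, and a real implementation would work in a proper luminance space rather than a per-pixel channel mean:

```python
import numpy as np

def match_luminance(new_face, old_face, new_skin_mask, old_skin_mask):
    """Apply a global gain so the mean skin luminance of the reconstructed
    face matches that of the region it replaces.

    Images are HxWx3 floats in [0, 1]; masks are HxW booleans selecting
    skin pixels (assumed provided by an earlier skin-detection stage).
    """
    mean_new = new_face.mean(axis=-1)[new_skin_mask].mean()
    mean_old = old_face.mean(axis=-1)[old_skin_mask].mean()
    gain = mean_old / max(mean_new, 1e-6)  # guard against divide-by-zero
    return np.clip(new_face * gain, 0.0, 1.0)
```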
- While the above described embodiments have been directed to replacing face regions within an image, it will be seen that AAM can be used to model any type of feature of an image. So in certain embodiments, the patches to be used for super-resolution reconstruction may be sub-regions within a face region. For example, it may be desired to reconstruct only a segment of the face regions, such as an eye or mouth region, rather than the entire face region. In such cases, a determination of the precise boundary of the sub-region is of less importance as the sub-region will be merged into a surrounding region of substantially similar colour and texture (i.e. skin colour and texture). Thus, it is sufficient to center the eye regions to be combined or to align the corners of the mouth regions and to rely on blending the surrounding skin coloured areas into the main image.
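Centering the eye regions before combining them can be sketched as below, for grayscale patches; the detected eye-centre coordinates are assumed to come from an earlier detection stage, and averaging stands in for the full super-resolution combination:

```python
import numpy as np

def center_patches(patches, centers, out_size=32):
    """Crop each patch so the detected eye centre lands at the middle of a
    fixed-size window, then average the aligned crops; the surrounding
    skin-coloured area is later blended into the main image."""
    half = out_size // 2
    aligned = []
    for img, (cy, cx) in zip(patches, centers):
        p = np.pad(img, half, mode="edge")  # guard against edge overruns
        # In padded coordinates the centre sits at (cy + half, cx + half),
        # so a crop starting at (cy, cx) places it mid-window.
        aligned.append(p[cy:cy + out_size, cx:cx + out_size])
    return np.mean(aligned, axis=0)
```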
- In one or more of the above embodiments, separate face regions may be individually tracked (see also U.S. application Ser. No. 11/464,083, which is hereby incorporated by reference). Regions may be tracked from frame to frame. Preview or post-view face regions can be extracted, analyzed and aligned with each other and with the face region in the main or final acquired image. In addition, in techniques according to certain embodiments, faces may be tracked between frames in order to find and associate smaller details of the face between previews or post-views. For example, a left eye from Joe's face in preview N may be associated with a left eye from Joe's face in preview N+1. These may be used together to form one or more enhanced quality images of Joe's eye. This is advantageous because small features (an eye, a mouth, a nose, an eye component such as an eye lid or eye brow, a pupil or iris, an ear, a chin, a beard, a mustache, a forehead, a hairstyle, etc.) are not as easily traceable between frames as larger features, and their absolute or relative positional shifts between frames tend to be more substantial relative to their size.
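Associating a small feature such as Joe's left eye between consecutive previews can be sketched as a greedy nearest-neighbour match with a distance gate. This is purely illustrative; in the embodiments above the association happens within an already-tracked face region, which keeps the candidate shifts small:

```python
def associate_features(feats_a, feats_b, max_dist=20.0):
    """Greedily pair feature positions between two preview frames.

    feats_a, feats_b are lists of (x, y) detections (e.g. left-eye
    centres); max_dist caps the allowed shift, since small features
    can move substantially relative to their own size.
    Returns a list of (index_in_a, index_in_b) pairs.
    """
    pairs, used = [], set()
    for i, (xa, ya) in enumerate(feats_a):
        best, best_d = None, max_dist
        for j, (xb, yb) in enumerate(feats_b):
            if j in used:
                continue  # each detection pairs at most once
            d = ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
            if d <= best_d:
                best, best_d = j, d
        if best is not None:
            pairs.append((i, best))
            used.add(best)
    return pairs
```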
- The present invention is not limited to the embodiments described above herein, which may be amended or modified without departing from the scope of the present invention as set forth in the appended claims, and structural and functional equivalents thereof.
- In methods that may be performed according to preferred embodiments herein and that may have been described above and/or claimed below, the operations have been described in selected typographical sequences. However, the sequences have been selected and so ordered for typographical convenience and are not intended to imply any particular order for performing the operations.
- In addition, all references cited above herein, in addition to the background and summary of the invention sections themselves, are hereby incorporated by reference into the detailed description of the preferred embodiments as disclosing alternative embodiments and components. The following are also incorporated by reference for this purpose: U.S. patent applications Nos. 60/829,127, 60/804,546, 60/821,165 Ser. Nos. 11/1,554,539, 11/464,083, 11/027,001, 10/842,244, 11/024,046, 11/233,513, 11/460,218, 11/573,713, 11/319,766, 11/464,083, 10/744,020 and 11/460,218, and U.S. published application no. 2006/0285754.
Claims (25)
1. A computerized method comprising:
receiving a plurality of images of approximately a same scene;
detecting at least a first object and a second object within a first image from among the plurality of images;
detecting at least a third object and a fourth object within a second image from among the plurality of images;
tracking the first object in the first image as corresponding to the third object in the second image;
tracking the second object in the first image as corresponding to the fourth object in the second image;
wherein each of the first object, the second object, the third object, and the fourth object is distinguishable from each other.
2. The method of claim 1, further comprising:
detecting at least a fifth object within a third image from among the plurality of images based on the tracking of the first object and the tracking of the second object;
identifying the fifth object as corresponding to either the first object and the third object or the second object and the fourth object.
3. The method of claim 2, further comprising:
when the fifth object corresponds to the first object and the third object, correcting the third image by replacing at least the fifth object within the third image with the first object and the third object;
when the fifth object corresponds to the second object and the fourth object, correcting the third image by replacing at least the fifth object within the third image with the second object and the fourth object.
4. The method of claim 2, wherein the first image, the second image, or the third image has at least one of a different focus setting, exposure level, or resolution from each other.
5. The method of claim 1, further comprising:
extracting the first object and the second object from the first image;
extracting the third object and the fourth object from the second image;
storing the first object and the third object in a first data store;
storing the second object and the fourth object in a second data store.
6. The method of claim 1, wherein the tracking of the first object comprises identifying a first location of the first object within the first image and a third location of the third object within the second image.
7. The method of claim 1, wherein the first object comprises a face, a portion of a face, a facial feature, an eye, a mouth, a nose, a part of an eye, an eye lid, an eye brow, a pupil, an iris, an ear, a chin, a beard, a mustache, a forehead, or a hairstyle.
8. An apparatus comprising:
a memory including a plurality of images of approximately a same scene;
a processor in communication with the memory, the processor configured to:
detect at least a first object and a second object within a first image from among the plurality of images;
detect at least a third object and a fourth object within a second image from among the plurality of images;
track the first object in the first image as corresponding to the third object in the second image;
track the second object in the first image as corresponding to the fourth object in the second image;
wherein each of the first object, the second object, the third object, and the fourth object is distinguishable from each other.
9. The apparatus of claim 8, wherein the processor is further configured to:
detect at least a fifth object within a third image from among the plurality of images based on the tracking of the first object and the tracking of the second object;
identify the fifth object as corresponding to either the first object and the third object or the second object and the fourth object.
10. The apparatus of claim 9, wherein the processor is further configured to:
when the fifth object corresponds to the first object and the third object, correct the third image by replacing at least the fifth object within the third image with the first object and the third object;
when the fifth object corresponds to the second object and the fourth object, correct the third image by replacing at least the fifth object within the third image with the second object and the fourth object.
11. The apparatus of claim 9, wherein the first image, the second image, or the third image has at least one of a different focus setting, exposure level, or resolution from each other.
12. The apparatus of claim 8, wherein the processor is further configured to:
extract the first object and the second object from the first image;
extract the third object and the fourth object from the second image;
store the first object and the third object in a first data store included in the memory;
store the second object and the fourth object in a second data store included in the memory.
13. The apparatus of claim 8, wherein the processor tracks the first object by identifying a first location of the first object within the first image and a third location of the third object within the second image.
14. The apparatus of claim 8, wherein the first object comprises a face, a portion of a face, a facial feature, an eye, a mouth, a nose, a part of an eye, an eye lid, an eye brow, a pupil, an iris, an ear, a chin, a beard, a mustache, a forehead, or a hairstyle.
15. The apparatus of claim 8, further comprising a lens and an image sensor to acquire the plurality of images.
16. A computerized method comprising:
receiving a plurality of images of approximately a same scene;
detecting at least a first object within a first image from among the plurality of images;
tracking the first object within the first image to be a second object within a second image from among the plurality of images;
tracking the second object within the second image to be a third object within a third image from among the plurality of images.
17. The method of claim 16, wherein the tracking of the second object comprises detecting the third object within the third image based on the first object or the second object.
18. The method of claim 16, further comprising replacing the third object within the third image with the first object and the second object.
19. The method of claim 16, wherein the tracking of the first object comprises identifying a first location of the first object within the first image and a second location of the second object within the second image.
20. The method of claim 16, wherein the first image, the second image, or the third image has at least one of a different focus setting, exposure level, or resolution from each other.
21. The method of claim 16, wherein the first object, the second object, and the third object comprise a same object.
22. An apparatus comprising:
a memory including a plurality of images of approximately a same scene;
a processor in communication with the memory, the processor configured to:
detect at least a first object within a first image from among the plurality of images;
track the first object within the first image to be a second object within a second image from among the plurality of images;
track the second object within the second image to be a third object within a third image from among the plurality of images.
23. The apparatus of claim 22, wherein the processor tracks the second object to detect the third object within the third image based on the first object or the second object.
24. The apparatus of claim 22, wherein the processor replaces the third object within the third image with the first object and the second object.
25. The apparatus of claim 22, wherein the third object comprises a face, a portion of a face, a facial feature, an eye, a mouth, a nose, a part of an eye, an eye lid, an eye brow, a pupil, an iris, an ear, a chin, a beard, a mustache, a forehead, or a hairstyle.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/703,719 US20150235087A1 (en) | 2007-05-24 | 2015-05-04 | Image Processing Method and Apparatus |
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/752,925 US7916971B2 (en) | 2007-05-24 | 2007-05-24 | Image processing method and apparatus |
US13/034,707 US8494232B2 (en) | 2007-05-24 | 2011-02-25 | Image processing method and apparatus |
US13/103,077 US8515138B2 (en) | 2007-05-24 | 2011-05-08 | Image processing method and apparatus |
US13/947,095 US9025837B2 (en) | 2007-05-24 | 2013-07-21 | Image processing method and apparatus |
US14/703,719 US20150235087A1 (en) | 2007-05-24 | 2015-05-04 | Image Processing Method and Apparatus |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/947,095 Continuation US9025837B2 (en) | 2007-05-24 | 2013-07-21 | Image processing method and apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
US20150235087A1 true US20150235087A1 (en) | 2015-08-20 |
Family
ID=39743089
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/752,925 Active 2030-01-26 US7916971B2 (en) | 2007-05-24 | 2007-05-24 | Image processing method and apparatus |
US13/034,707 Expired - Fee Related US8494232B2 (en) | 2007-05-24 | 2011-02-25 | Image processing method and apparatus |
US13/103,077 Active US8515138B2 (en) | 2007-05-24 | 2011-05-08 | Image processing method and apparatus |
US13/947,095 Active US9025837B2 (en) | 2007-05-24 | 2013-07-21 | Image processing method and apparatus |
US14/703,719 Abandoned US20150235087A1 (en) | 2007-05-24 | 2015-05-04 | Image Processing Method and Apparatus |
Family Applications Before (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/752,925 Active 2030-01-26 US7916971B2 (en) | 2007-05-24 | 2007-05-24 | Image processing method and apparatus |
US13/034,707 Expired - Fee Related US8494232B2 (en) | 2007-05-24 | 2011-02-25 | Image processing method and apparatus |
US13/103,077 Active US8515138B2 (en) | 2007-05-24 | 2011-05-08 | Image processing method and apparatus |
US13/947,095 Active US9025837B2 (en) | 2007-05-24 | 2013-07-21 | Image processing method and apparatus |
Country Status (2)
Country | Link |
---|---|
US (5) | US7916971B2 (en) |
IE (1) | IES20070518A2 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140376785A1 (en) * | 2013-06-20 | 2014-12-25 | Elwha Llc | Systems and methods for enhancement of facial expressions |
US20170034453A1 (en) * | 2015-07-31 | 2017-02-02 | Sony Corporation | Automated embedding and blending head images |
US20170221186A1 (en) * | 2016-01-30 | 2017-08-03 | Samsung Electronics Co., Ltd. | Device for and method of enhancing quality of an image |
US20180183998A1 (en) * | 2016-12-22 | 2018-06-28 | Qualcomm Incorporated | Power reduction and performance improvement through selective sensor image downscaling |
CN111709878A (en) * | 2020-06-17 | 2020-09-25 | 北京百度网讯科技有限公司 | Face super-resolution implementation method and device, electronic equipment and storage medium |
CN111932594A (en) * | 2020-09-18 | 2020-11-13 | 西安拙河安见信息科技有限公司 | Billion pixel video alignment method and device based on optical flow and medium |
Families Citing this family (98)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7574016B2 (en) | 2003-06-26 | 2009-08-11 | Fotonation Vision Limited | Digital image processing using face detection information |
US8698924B2 (en) | 2007-03-05 | 2014-04-15 | DigitalOptics Corporation Europe Limited | Tone mapping for low-light video frame enhancement |
US7636486B2 (en) | 2004-11-10 | 2009-12-22 | Fotonation Ireland Ltd. | Method of determining PSF using multiple instances of a nominally similar scene |
US7269292B2 (en) | 2003-06-26 | 2007-09-11 | Fotonation Vision Limited | Digital image adjustable compression and resolution using face detection information |
US8989516B2 (en) | 2007-09-18 | 2015-03-24 | Fotonation Limited | Image processing method and apparatus |
WO2009089847A1 (en) * | 2008-01-18 | 2009-07-23 | Fotonation Vision Limited | Image processing method and apparatus |
US7565030B2 (en) | 2003-06-26 | 2009-07-21 | Fotonation Vision Limited | Detecting orientation of digital images using face detection information |
US7685341B2 (en) * | 2005-05-06 | 2010-03-23 | Fotonation Vision Limited | Remote control apparatus for consumer electronic appliances |
US9692964B2 (en) | 2003-06-26 | 2017-06-27 | Fotonation Limited | Modification of post-viewing parameters for digital images using image region or feature information |
US7639889B2 (en) | 2004-11-10 | 2009-12-29 | Fotonation Ireland Ltd. | Method of notifying users regarding motion artifacts based on image analysis |
US8264576B2 (en) | 2007-03-05 | 2012-09-11 | DigitalOptics Corporation Europe Limited | RGBW sensor array |
US8948468B2 (en) | 2003-06-26 | 2015-02-03 | Fotonation Limited | Modification of viewing parameters for digital images using face detection information |
US9160897B2 (en) | 2007-06-14 | 2015-10-13 | Fotonation Limited | Fast motion estimation method |
US8180173B2 (en) * | 2007-09-21 | 2012-05-15 | DigitalOptics Corporation Europe Limited | Flash artifact eye defect correction in blurred images using anisotropic blurring |
US8199222B2 (en) | 2007-03-05 | 2012-06-12 | DigitalOptics Corporation Europe Limited | Low-light video frame enhancement |
US7606417B2 (en) | 2004-08-16 | 2009-10-20 | Fotonation Vision Limited | Foreground/background segmentation in digital images with differential exposure calculations |
US8417055B2 (en) | 2007-03-05 | 2013-04-09 | DigitalOptics Corporation Europe Limited | Image processing method and apparatus |
US7792970B2 (en) | 2005-06-17 | 2010-09-07 | Fotonation Vision Limited | Method for establishing a paired connection between media devices |
US7639888B2 (en) | 2004-11-10 | 2009-12-29 | Fotonation Ireland Ltd. | Method and apparatus for initiating subsequent exposures based on determination of motion blurring artifacts |
US7715597B2 (en) | 2004-12-29 | 2010-05-11 | Fotonation Ireland Limited | Method and component for image recognition |
US8503800B2 (en) | 2007-03-05 | 2013-08-06 | DigitalOptics Corporation Europe Limited | Illumination detection using classifier chains |
US8995715B2 (en) | 2010-10-26 | 2015-03-31 | Fotonation Limited | Face or other object detection including template matching |
US7694048B2 (en) * | 2005-05-06 | 2010-04-06 | Fotonation Vision Limited | Remote control apparatus for printer appliances |
IES20070229A2 (en) | 2006-06-05 | 2007-10-03 | Fotonation Vision Ltd | Image acquisition method and apparatus |
ATE497218T1 (en) | 2006-06-12 | 2011-02-15 | Tessera Tech Ireland Ltd | ADVANCES IN EXPANSING AAM TECHNIQUES FROM GRAYSCALE TO COLOR IMAGES |
US20100306318A1 (en) * | 2006-09-28 | 2010-12-02 | Sfgt Inc. | Apparatuses, methods, and systems for a graphical code-serving interface |
EP2254063A3 (en) * | 2006-09-28 | 2011-04-27 | SFGT Inc. | Apparatuses, methods, and systems for code triggered information querying and serving |
KR101089393B1 (en) * | 2006-12-22 | 2011-12-07 | 노키아 코포레이션 | Removal of artifacts in flash images |
US8055067B2 (en) | 2007-01-18 | 2011-11-08 | DigitalOptics Corporation Europe Limited | Color segmentation |
US7773118B2 (en) | 2007-03-25 | 2010-08-10 | Fotonation Vision Limited | Handheld article with movement discrimination |
US7916971B2 (en) | 2007-05-24 | 2011-03-29 | Tessera Technologies Ireland Limited | Image processing method and apparatus |
EP2153378A1 (en) * | 2007-06-01 | 2010-02-17 | National ICT Australia Limited | Face recognition |
JP5097480B2 (en) * | 2007-08-29 | 2012-12-12 | 株式会社トプコン | Image measuring device |
ES2661736T3 (en) * | 2007-10-03 | 2018-04-03 | Kabushiki Kaisha Toshiba | Visual exam device and visual exam method |
JP5166409B2 (en) * | 2007-11-29 | 2013-03-21 | 株式会社東芝 | Video processing method and video processing apparatus |
WO2009094661A1 (en) * | 2008-01-24 | 2009-07-30 | The Trustees Of Columbia University In The City Of New York | Methods, systems, and media for swapping faces in images |
US8750578B2 (en) | 2008-01-29 | 2014-06-10 | DigitalOptics Corporation Europe Limited | Detecting facial expressions in digital images |
US8126221B2 (en) * | 2008-02-14 | 2012-02-28 | Ecole Polytechnique Federale De Lausanne (Epfl) | Interactive device and method for transmitting commands from a user |
US8055101B2 (en) * | 2008-04-29 | 2011-11-08 | Adobe Systems Incorporated | Subpixel registration |
WO2010012448A2 (en) | 2008-07-30 | 2010-02-04 | Fotonation Ireland Limited | Automatic face and skin beautification using face detection |
US8130278B2 (en) * | 2008-08-01 | 2012-03-06 | Omnivision Technologies, Inc. | Method for forming an improved image using images with different resolutions |
US9030486B2 (en) | 2008-08-22 | 2015-05-12 | University Of Virginia Patent Foundation | System and method for low bandwidth image transmission |
DE102009019186B4 (en) * | 2009-04-28 | 2013-10-17 | Emin Luis Aksoy | Device for detecting a maximum resolution of the details of a digital image |
US8594439B2 (en) * | 2009-05-28 | 2013-11-26 | Hewlett-Packard Development Company, L.P. | Image processing |
US20120177288A1 (en) * | 2009-08-04 | 2012-07-12 | Vesalis | Image-processing method for correcting a target image with respect to a reference image, and corresponding image-processing device |
US8379917B2 (en) | 2009-10-02 | 2013-02-19 | DigitalOptics Corporation Europe Limited | Face recognition performance using additional image features |
WO2011069698A1 (en) | 2009-12-11 | 2011-06-16 | Tessera Technologies Ireland Limited | Panorama imaging |
US20110141229A1 (en) * | 2009-12-11 | 2011-06-16 | Fotonation Ireland Limited | Panorama imaging using super-resolution |
US20110141225A1 (en) * | 2009-12-11 | 2011-06-16 | Fotonation Ireland Limited | Panorama Imaging Based on Low-Res Images |
US8294748B2 (en) * | 2009-12-11 | 2012-10-23 | DigitalOptics Corporation Europe Limited | Panorama imaging using a blending map |
US20110141224A1 (en) * | 2009-12-11 | 2011-06-16 | Fotonation Ireland Limited | Panorama Imaging Using Lo-Res Images |
US20110141226A1 (en) * | 2009-12-11 | 2011-06-16 | Fotonation Ireland Limited | Panorama imaging based on a lo-res map |
US10080006B2 (en) | 2009-12-11 | 2018-09-18 | Fotonation Limited | Stereoscopic (3D) panorama creation on handheld device |
US8692867B2 (en) | 2010-03-05 | 2014-04-08 | DigitalOptics Corporation Europe Limited | Object detection and rendering for wide field of view (WFOV) image acquisition systems |
US20110235856A1 (en) * | 2010-03-24 | 2011-09-29 | Naushirwan Patuck | Method and system for composing an image based on multiple captured images |
US8355039B2 (en) | 2010-07-06 | 2013-01-15 | DigitalOptics Corporation Europe Limited | Scene background blurring including range measurement |
US9053681B2 (en) | 2010-07-07 | 2015-06-09 | Fotonation Limited | Real-time video frame pre-processing hardware |
US8970770B2 (en) | 2010-09-28 | 2015-03-03 | Fotonation Limited | Continuous autofocus based on face detection and tracking |
US8648959B2 (en) | 2010-11-11 | 2014-02-11 | DigitalOptics Corporation Europe Limited | Rapid auto-focus using classifier chains, MEMS and/or multiple object focusing |
US8659697B2 (en) | 2010-11-11 | 2014-02-25 | DigitalOptics Corporation Europe Limited | Rapid auto-focus using classifier chains, MEMS and/or multiple object focusing |
US8308379B2 (en) | 2010-12-01 | 2012-11-13 | Digitaloptics Corporation | Three-pole tilt control system for camera module |
US8836777B2 (en) | 2011-02-25 | 2014-09-16 | DigitalOptics Corporation Europe Limited | Automatic detection of vertical gaze using an embedded imaging device |
JP2012205285A (en) * | 2011-03-28 | 2012-10-22 | Sony Corp | Video signal processing apparatus and video signal processing method |
US8971588B2 (en) * | 2011-03-30 | 2015-03-03 | General Electric Company | Apparatus and method for contactless high resolution handprint capture |
US8723959B2 (en) | 2011-03-31 | 2014-05-13 | DigitalOptics Corporation Europe Limited | Face and other object tracking in off-center peripheral regions for nonlinear lens geometries |
US8982180B2 (en) | 2011-03-31 | 2015-03-17 | Fotonation Limited | Face and other object detection and tracking in off-center peripheral regions for nonlinear lens geometries |
US8860816B2 (en) | 2011-03-31 | 2014-10-14 | Fotonation Limited | Scene enhancements in off-center peripheral regions for nonlinear lens geometries |
US8896703B2 (en) | 2011-03-31 | 2014-11-25 | Fotonation Limited | Superresolution enhancement of peripheral regions in nonlinear lens geometries |
KR101836432B1 (en) * | 2011-12-16 | 2018-03-12 | 삼성전자주식회사 | Image pickup apparatus, method for image compensation and computer-readable recording medium |
WO2013136053A1 (en) | 2012-03-10 | 2013-09-19 | Digitaloptics Corporation | Miniature camera module with mems-actuated autofocus |
US9294667B2 (en) | 2012-03-10 | 2016-03-22 | Digitaloptics Corporation | MEMS auto focus miniature camera module with fixed and movable lens groups |
US8929596B2 (en) * | 2012-06-04 | 2015-01-06 | International Business Machines Corporation | Surveillance including a modified video data stream |
WO2014072837A2 (en) | 2012-06-07 | 2014-05-15 | DigitalOptics Corporation Europe Limited | Mems fast focus camera module |
US9007520B2 (en) | 2012-08-10 | 2015-04-14 | Nanchang O-Film Optoelectronics Technology Ltd | Camera module with EMI shield |
US9001268B2 (en) | 2012-08-10 | 2015-04-07 | Nan Chang O-Film Optoelectronics Technology Ltd | Auto-focus camera module with flexible printed circuit extension |
US9242602B2 (en) | 2012-08-27 | 2016-01-26 | Fotonation Limited | Rearview imaging systems for vehicle |
TWI474201B (en) * | 2012-10-17 | 2015-02-21 | Inst Information Industry | Construction system scene fragment, method and recording medium |
US9055207B2 (en) | 2012-12-31 | 2015-06-09 | Digitaloptics Corporation | Auto-focus camera module with MEMS distance measurement |
US10402846B2 (en) | 2013-05-21 | 2019-09-03 | Fotonation Limited | Anonymizing facial expression data with a smart-cam |
US10091419B2 (en) * | 2013-06-14 | 2018-10-02 | Qualcomm Incorporated | Computer vision application processing |
EP2816564B1 (en) | 2013-06-21 | 2020-07-22 | Nokia Technologies Oy | Method and apparatus for smart video rendering |
US9286706B1 (en) | 2013-12-06 | 2016-03-15 | Google Inc. | Editing image regions based on previous user edits |
EP3198557A4 (en) * | 2014-09-26 | 2017-12-13 | Samsung Electronics Co., Ltd. | Image processing apparatus and image processing method |
US10229478B2 (en) | 2014-09-26 | 2019-03-12 | Samsung Electronics Co., Ltd. | Image processing apparatus and image processing method |
EP3275122A4 (en) * | 2015-03-27 | 2018-11-21 | Intel Corporation | Avatar facial expression and/or speech driven animations |
EP3316952A4 (en) * | 2015-06-30 | 2019-03-13 | ResMed Limited | Mask sizing tool using a mobile application |
US10616502B2 (en) * | 2015-09-21 | 2020-04-07 | Qualcomm Incorporated | Camera preview |
US10732809B2 (en) * | 2015-12-30 | 2020-08-04 | Google Llc | Systems and methods for selective retention and editing of images captured by mobile image capture device |
JP2017208616A (en) * | 2016-05-16 | 2017-11-24 | キヤノン株式会社 | Image processing apparatus, image processing method, and program |
WO2018048838A1 (en) * | 2016-09-06 | 2018-03-15 | Apple Inc. | Still image stabilization/optical image stabilization synchronization in multi-camera image capture |
US10489887B2 (en) * | 2017-04-10 | 2019-11-26 | Samsung Electronics Co., Ltd. | System and method for deep learning image super resolution |
RU2667790C1 (en) | 2017-09-01 | 2018-09-24 | Самсунг Электроникс Ко., Лтд. | Method of automatic adjustment of exposition for infrared camera and user computer device using this method |
GB2570447A (en) * | 2018-01-23 | 2019-07-31 | Canon Kk | Method and system for improving construction of regions of interest |
JP7073785B2 (en) * | 2018-03-05 | 2022-05-24 | オムロン株式会社 | Image inspection equipment, image inspection method and image inspection program |
CN109919876B (en) * | 2019-03-11 | 2020-09-01 | 四川川大智胜软件股份有限公司 | Three-dimensional real face modeling method and three-dimensional real face photographing system |
US10924629B1 (en) | 2019-12-12 | 2021-02-16 | Amazon Technologies, Inc. | Techniques for validating digital media content |
US10904476B1 (en) * | 2019-12-12 | 2021-01-26 | Amazon Technologies, Inc. | Techniques for up-sampling digital media content |
US11222402B2 (en) * | 2020-04-24 | 2022-01-11 | Aupera Technologies, Inc. | Adaptive image enhancement |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070237421A1 (en) * | 2006-03-29 | 2007-10-11 | Eastman Kodak Company | Recomposing photographs from multiple frames |
Family Cites Families (326)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4047187A (en) | 1974-04-01 | 1977-09-06 | Canon Kabushiki Kaisha | System for exposure measurement and/or focus detection by means of image senser |
US4456354A (en) | 1980-01-11 | 1984-06-26 | Olympus Optical Co., Ltd. | Exposure controller for a camera |
US4367027A (en) * | 1980-03-12 | 1983-01-04 | Honeywell Inc. | Active auto focus system improvement |
US4317991A (en) * | 1980-03-12 | 1982-03-02 | Honeywell Inc. | Digital auto focus system utilizing a photodetector array |
DE3132860A1 (en) * | 1981-08-20 | 1983-03-03 | Lukas-Erzett Vereinigte Schleif- und Fräswerkzeugfabriken, 5250 Engelskirchen | "METHOD FOR PRODUCING END MILLING AND MILLING MANUFACTURED" |
JPS5870217A (en) | 1981-10-23 | 1983-04-26 | Fuji Photo Film Co Ltd | Camera-shake detecting device |
JPS61105978A (en) | 1984-10-30 | 1986-05-24 | Sanyo Electric Co Ltd | Automatic focusing circuit |
US4690536A (en) | 1985-09-09 | 1987-09-01 | Minolta Camera Kabushiki Kaisha | Exposure control device for a camera in flash photography |
US4745427A (en) * | 1985-09-13 | 1988-05-17 | Minolta Camera Kabushiki Kaisha | Multi-point photometric apparatus |
DE3778234D1 (en) | 1986-01-20 | 1992-05-21 | Scanera S C | IMAGE PROCESSING DEVICE FOR CONTROLLING THE TRANSFER FUNCTION OF AN OPTICAL SYSTEM. |
US4970683A (en) | 1986-08-26 | 1990-11-13 | Heads Up Technologies, Inc. | Computerized checklist with predetermined sequences of sublists which automatically returns to skipped checklists |
US5291234A (en) * | 1987-02-04 | 1994-03-01 | Asahi Kogaku Kogyo Kabushiki Kaisha | Auto optical focus detecting device and eye direction detecting optical system |
JPH01158579A (en) * | 1987-09-09 | 1989-06-21 | Aisin Seiki Co Ltd | Image recognizing device |
US4975969A (en) | 1987-10-22 | 1990-12-04 | Peter Tal | Method and apparatus for uniquely identifying individuals by particular physical characteristics and security system utilizing the same |
US5384912A (en) * | 1987-10-30 | 1995-01-24 | New Microtime Inc. | Real time video image processing system |
US5018017A (en) | 1987-12-25 | 1991-05-21 | Kabushiki Kaisha Toshiba | Electronic still camera and image recording method thereof |
US4970663A (en) | 1989-04-28 | 1990-11-13 | Avid Technology, Inc. | Method and apparatus for manipulating digital video data |
US5227837A (en) | 1989-05-12 | 1993-07-13 | Fuji Photo Film Co., Ltd. | Photograph printing method |
US5111231A (en) | 1989-07-27 | 1992-05-05 | Canon Kabushiki Kaisha | Camera system |
US5063603A (en) | 1989-11-06 | 1991-11-05 | David Sarnoff Research Center, Inc. | Dynamic method for recognizing objects and image processing system therefor |
US5164831A (en) | 1990-03-15 | 1992-11-17 | Eastman Kodak Company | Electronic still camera providing multi-format storage of full and reduced resolution images |
US5150432A (en) | 1990-03-26 | 1992-09-22 | Kabushiki Kaisha Toshiba | Apparatus for encoding/decoding video signals to improve quality of a specific region |
US5274714A (en) | 1990-06-04 | 1993-12-28 | Neuristics, Inc. | Method and apparatus for determining and organizing feature vectors for neural network recognition |
US5161204A (en) | 1990-06-04 | 1992-11-03 | Neuristics, Inc. | Apparatus for generating a feature matrix based on normalized out-class and in-class variation matrices |
GB9019538D0 (en) * | 1990-09-07 | 1990-10-24 | Philips Electronic Associated | Tracking a moving object |
JP2748678B2 (en) | 1990-10-09 | 1998-05-13 | 松下電器産業株式会社 | Gradation correction method and gradation correction device |
JP2766067B2 (en) | 1990-10-31 | 1998-06-18 | キヤノン株式会社 | Imaging device |
US5164992A (en) * | 1990-11-01 | 1992-11-17 | Massachusetts Institute Of Technology | Face recognition system |
US5493409A (en) * | 1990-11-29 | 1996-02-20 | Minolta Camera Kabushiki Kaisha | Still video camera having a printer capable of printing a photographed image in a plurality of printing modes |
JPH04257830A (en) * | 1991-02-12 | 1992-09-14 | Nikon Corp | Flash light dimming controller for camera |
JP2790562B2 (en) | 1992-01-06 | 1998-08-27 | 富士写真フイルム株式会社 | Image processing method |
US5638136A (en) | 1992-01-13 | 1997-06-10 | Mitsubishi Denki Kabushiki Kaisha | Method and apparatus for detecting flesh tones in an image |
US5488429A (en) * | 1992-01-13 | 1996-01-30 | Mitsubishi Denki Kabushiki Kaisha | Video signal processor for detecting flesh tones in an image |
JP2973676B2 (en) | 1992-01-23 | 1999-11-08 | 松下電器産業株式会社 | Face image feature point extraction device |
US5331544A (en) | 1992-04-23 | 1994-07-19 | A. C. Nielsen Company | Market research method and system for collecting retail store and shopper market research data |
US5450504A (en) | 1992-05-19 | 1995-09-12 | Calia; James | Method for finding a most likely matching of a target facial image in a data base of facial images |
US5680481A (en) | 1992-05-26 | 1997-10-21 | Ricoh Corporation | Facial feature extraction method and apparatus for a neural network acoustic and visual speech recognition system |
JP3298072B2 (en) | 1992-07-10 | 2002-07-02 | ソニー株式会社 | Video camera system |
US5311240A (en) | 1992-11-03 | 1994-05-10 | Eastman Kodak Company | Technique suited for use in multi-zone autofocusing cameras for improving image quality for non-standard display sizes and/or different focal length photographing modes |
KR100276681B1 (en) | 1992-11-07 | 2001-01-15 | 이데이 노부유끼 | Video camera system |
JPH06178261A (en) | 1992-12-07 | 1994-06-24 | Nikon Corp | Digital still camera |
US5550928A (en) | 1992-12-15 | 1996-08-27 | A.C. Nielsen Company | Audience measurement system and method |
JP2983407B2 (en) * | 1993-03-31 | 1999-11-29 | 三菱電機株式会社 | Image tracking device |
US5384615A (en) * | 1993-06-08 | 1995-01-24 | Industrial Technology Research Institute | Ambient depth-of-field simulation exposuring method |
US6369148B2 (en) * | 1993-07-16 | 2002-04-09 | Ciba Specialty Chemicals Corporation | Oxygen-scavenging compositions and articles |
US5432863A (en) | 1993-07-19 | 1995-07-11 | Eastman Kodak Company | Automated detection and correction of eye color defects due to flash illumination |
WO1995006297A1 (en) * | 1993-08-27 | 1995-03-02 | Massachusetts Institute Of Technology | Example-based image analysis and synthesis using pixelwise correspondence |
US5835616A (en) | 1994-02-18 | 1998-11-10 | University Of Central Florida | Face detection using templates |
US5781650A (en) | 1994-02-18 | 1998-07-14 | University Of Central Florida | Automatic feature detection and age classification of human faces in digital images |
US5852669A (en) | 1994-04-06 | 1998-12-22 | Lucent Technologies Inc. | Automatic face and facial feature location detection for low bit rate model-assisted H.261 compatible coding of video |
US5519451A (en) | 1994-04-14 | 1996-05-21 | Texas Instruments Incorporated | Motion adaptive scan-rate conversion using directional edge interpolation |
US5774754A (en) | 1994-04-26 | 1998-06-30 | Minolta Co., Ltd. | Camera capable of previewing a photographed image |
US5678098A (en) | 1994-06-09 | 1997-10-14 | Fuji Photo Film Co., Ltd. | Method and apparatus for controlling exposure of camera |
EP0723721A1 (en) | 1994-08-12 | 1996-07-31 | Koninklijke Philips Electronics N.V. | Optical synchronisation arrangement and transmission system |
US5496106A (en) * | 1994-12-13 | 1996-03-05 | Apple Computer, Inc. | System and method for generating a contrast overlay as a focus assist for an imaging device |
US6426779B1 (en) | 1995-01-04 | 2002-07-30 | Sony Electronics, Inc. | Method and apparatus for providing favorite station and programming information in a multiple station broadcast system |
US6128398A (en) | 1995-01-31 | 2000-10-03 | Miros Inc. | System, method and application for the recognition, verification and similarity ranking of facial or other object patterns |
US5724456A (en) * | 1995-03-31 | 1998-03-03 | Polaroid Corporation | Brightness adjustment of images using digital scene analysis |
US5870138A (en) * | 1995-03-31 | 1999-02-09 | Hitachi, Ltd. | Facial image processing |
US5710833A (en) * | 1995-04-20 | 1998-01-20 | Massachusetts Institute Of Technology | Detection, recognition and coding of complex objects using probabilistic eigenspace analysis |
US5774129A (en) | 1995-06-07 | 1998-06-30 | Massachusetts Institute Of Technology | Image analysis and synthesis networks using shape and texture information |
US5844573A (en) | 1995-06-07 | 1998-12-01 | Massachusetts Institute Of Technology | Image compression by pointwise prototype correspondence using shape and texture information |
US5842194A (en) | 1995-07-28 | 1998-11-24 | Mitsubishi Denki Kabushiki Kaisha | Method of recognizing images of faces or general images using fuzzy combination of multiple resolutions |
US5715325A (en) * | 1995-08-30 | 1998-02-03 | Siemens Corporate Research, Inc. | Apparatus and method for detecting a face in a video image |
US5850470A (en) | 1995-08-30 | 1998-12-15 | Siemens Corporate Research, Inc. | Neural network for locating and recognizing a deformable object |
US5774591A (en) | 1995-12-15 | 1998-06-30 | Xerox Corporation | Apparatus and method for recognizing facial expressions and facial gestures in a sequence of images |
US5802220A (en) | 1995-12-15 | 1998-09-01 | Xerox Corporation | Apparatus and method for tracking facial motion through a sequence of images |
US5633678A (en) | 1995-12-20 | 1997-05-27 | Eastman Kodak Company | Electronic still camera for capturing and categorizing images |
JPH09212620A (en) | 1996-01-31 | 1997-08-15 | Nissha Printing Co Ltd | Manufacture of face image |
US6151073A (en) | 1996-03-28 | 2000-11-21 | Fotonation, Inc. | Intelligent camera flash system |
US5911139A (en) | 1996-03-29 | 1999-06-08 | Virage, Inc. | Visual image database search engine which allows for different schema |
US5764803A (en) | 1996-04-03 | 1998-06-09 | Lucent Technologies Inc. | Motion-adaptive modelling of scene content for very low bit rate model-assisted coding of video sequences |
US5802208A (en) | 1996-05-06 | 1998-09-01 | Lucent Technologies Inc. | Face recognition using DCT-based feature vectors |
US6188776B1 (en) | 1996-05-21 | 2001-02-13 | Interval Research Corporation | Principle component analysis of images for the automatic location of control points |
US5991456A (en) | 1996-05-29 | 1999-11-23 | Science And Technology Corporation | Method of improving a digital image |
US6173068B1 (en) * | 1996-07-29 | 2001-01-09 | Mikos, Ltd. | Method and apparatus for recognizing and classifying individuals based on minutiae |
US5978519A (en) | 1996-08-06 | 1999-11-02 | Xerox Corporation | Automatic image cropping |
US20030118216A1 (en) | 1996-09-04 | 2003-06-26 | Goldberg David A. | Obtaining person-specific images in a public venue |
AU4206897A (en) | 1996-09-05 | 1998-03-26 | Telecom Ptt | Quantum cryptography device and method |
US6028960A (en) * | 1996-09-20 | 2000-02-22 | Lucent Technologies Inc. | Face feature analysis for automatic lipreading and character animation |
US5852823A (en) | 1996-10-16 | 1998-12-22 | Microsoft | Image classification and retrieval system using a query-by-example paradigm |
US5818975A (en) | 1996-10-28 | 1998-10-06 | Eastman Kodak Company | Method and apparatus for area selective exposure adjustment |
US6765612B1 (en) | 1996-12-09 | 2004-07-20 | Flashpoint Technology, Inc. | Method and system for naming images captured by a digital camera |
JPH10208047A (en) * | 1997-01-23 | 1998-08-07 | Nissan Motor Co Ltd | On-vehicle traveling environment recognizing device |
US6061055A (en) | 1997-03-21 | 2000-05-09 | Autodesk, Inc. | Method of tracking objects with an imaging device |
US6249315B1 (en) | 1997-03-24 | 2001-06-19 | Jack M. Holm | Strategy for pictorial digital image processing |
JP3222091B2 (en) * | 1997-05-27 | 2001-10-22 | シャープ株式会社 | Image processing apparatus and medium storing image processing apparatus control program |
US7057653B1 (en) | 1997-06-19 | 2006-06-06 | Minolta Co., Ltd. | Apparatus capable of image capturing |
AUPO798697A0 (en) * | 1997-07-15 | 1997-08-07 | Silverbrook Research Pty Ltd | Data processing method and apparatus (ART51) |
US6360021B1 (en) * | 1998-07-30 | 2002-03-19 | The Regents Of The University Of California | Apparatus and methods of image and signal processing |
US6188777B1 (en) * | 1997-08-01 | 2001-02-13 | Interval Research Corporation | Method and apparatus for personnel detection and tracking |
US6072094A (en) | 1997-08-06 | 2000-06-06 | Merck & Co., Inc. | Efficient synthesis of cyclopropylacetylene |
US6252976B1 (en) | 1997-08-29 | 2001-06-26 | Eastman Kodak Company | Computer program product for redeye detection |
JP3661367B2 (en) | 1997-09-09 | 2005-06-15 | コニカミノルタフォトイメージング株式会社 | Camera with shake correction function |
KR19990030882A (en) | 1997-10-07 | 1999-05-06 | 이해규 | Digital still camera with adjustable focus position and its control method |
US7738015B2 (en) * | 1997-10-09 | 2010-06-15 | Fotonation Vision Limited | Red-eye filter method and apparatus |
US6407777B1 (en) | 1997-10-09 | 2002-06-18 | Deluca Michael Joseph | Red-eye filter method and apparatus |
JP3724157B2 (en) * | 1997-10-30 | 2005-12-07 | コニカミノルタホールディングス株式会社 | Video observation device |
US6108437A (en) | 1997-11-14 | 2000-08-22 | Seiko Epson Corporation | Face recognition apparatus, method, system and computer readable medium thereof |
US6128397A (en) | 1997-11-21 | 2000-10-03 | Justsystem Pittsburgh Research Center | Method for finding all frontal faces in arbitrarily complex visual scenes |
JP3361980B2 (en) | 1997-12-12 | 2003-01-07 | 株式会社東芝 | Eye gaze detecting apparatus and method |
AU2207599A (en) | 1997-12-29 | 1999-07-19 | Cornell Research Foundation Inc. | Image subregion querying using color correlograms |
US6268939B1 (en) | 1998-01-08 | 2001-07-31 | Xerox Corporation | Method and apparatus for correcting luminance and chrominance data in digital color images |
US6148092A (en) | 1998-01-08 | 2000-11-14 | Sharp Laboratories Of America, Inc | System for detecting skin-tone regions within an image |
GB2333590A (en) * | 1998-01-23 | 1999-07-28 | Sharp Kk | Detecting a face-like region |
US6278491B1 (en) | 1998-01-29 | 2001-08-21 | Hewlett-Packard Company | Apparatus and a method for automatically detecting and reducing red-eye in a digital image |
US6400830B1 (en) | 1998-02-06 | 2002-06-04 | Compaq Computer Corporation | Technique for tracking objects through a series of images |
US6556708B1 (en) * | 1998-02-06 | 2003-04-29 | Compaq Computer Corporation | Technique for classifying objects within an image |
US6349373B2 (en) * | 1998-02-20 | 2002-02-19 | Eastman Kodak Company | Digital image management system having method for managing images according to image groups |
US6529630B1 (en) * | 1998-03-02 | 2003-03-04 | Fuji Photo Film Co., Ltd. | Method and device for extracting principal image subjects |
JP3657769B2 (en) | 1998-03-19 | 2005-06-08 | 富士写真フイルム株式会社 | Image processing method and image processing apparatus |
US6192149B1 (en) * | 1998-04-08 | 2001-02-20 | Xerox Corporation | Method and apparatus for automatic detection of image target gamma |
EP0949805A3 (en) | 1998-04-10 | 2001-01-10 | Fuji Photo Film Co., Ltd. | Electronic album producing and viewing system and method |
US6301370B1 (en) | 1998-04-13 | 2001-10-09 | Eyematic Interfaces, Inc. | Face recognition from video images |
US6097470A (en) | 1998-05-28 | 2000-08-01 | Eastman Kodak Company | Digital photofinishing system including scene balance, contrast normalization, and image sharpening digital image processing |
JP2000048184A (en) | 1998-05-29 | 2000-02-18 | Canon Inc | Method for processing image, and method for extracting facial area and device therefor |
AUPP400998A0 (en) | 1998-06-10 | 1998-07-02 | Canon Kabushiki Kaisha | Face detection in digital images |
US6404900B1 (en) | 1998-06-22 | 2002-06-11 | Sharp Laboratories Of America, Inc. | Method for robust human face tracking in presence of multiple persons |
US6496607B1 (en) | 1998-06-26 | 2002-12-17 | Sarnoff Corporation | Method and apparatus for region-based allocation of processing resources and control of input image formation |
US6362850B1 (en) | 1998-08-04 | 2002-03-26 | Flashpoint Technology, Inc. | Interactive movie creation from one or more still images in a digital imaging device |
DE19837004C1 (en) | 1998-08-14 | 2000-03-09 | Christian Eckes | Process for recognizing objects in digitized images |
GB2341231A (en) | 1998-09-05 | 2000-03-08 | Sharp Kk | Face detection in an image |
US6456732B1 (en) | 1998-09-11 | 2002-09-24 | Hewlett-Packard Company | Automatic rotation, cropping and scaling of images for printing |
US6134339A (en) | 1998-09-17 | 2000-10-17 | Eastman Kodak Company | Method and apparatus for determining the position of eyes and for correcting eye-defects in a captured frame |
US6606398B2 (en) | 1998-09-30 | 2003-08-12 | Intel Corporation | Automatic cataloging of people in digital photographs |
JP3291259B2 (en) | 1998-11-11 | 2002-06-10 | キヤノン株式会社 | Image processing method and recording medium |
US6351556B1 (en) * | 1998-11-20 | 2002-02-26 | Eastman Kodak Company | Method for automatically comparing content of images for classification into events |
WO2000033240A1 (en) * | 1998-12-02 | 2000-06-08 | The Victoria University Of Manchester | Face sub-space determination |
US6263113B1 (en) | 1998-12-11 | 2001-07-17 | Philips Electronics North America Corp. | Method for detecting a face in a digital image |
US6473199B1 (en) | 1998-12-18 | 2002-10-29 | Eastman Kodak Company | Correcting exposure and tone scale of digital images captured by an image capture device |
US6396599B1 (en) | 1998-12-21 | 2002-05-28 | Eastman Kodak Company | Method and apparatus for modifying a portion of an image in accordance with colorimetric parameters |
JP2000197050A (en) | 1998-12-25 | 2000-07-14 | Canon Inc | Image processing unit and its method |
US6438264B1 (en) | 1998-12-31 | 2002-08-20 | Eastman Kodak Company | Method for compensating image color when adjusting the contrast of a digital color image |
US6282317B1 (en) | 1998-12-31 | 2001-08-28 | Eastman Kodak Company | Method for automatic determination of main subjects in photographic images |
US6421468B1 (en) | 1999-01-06 | 2002-07-16 | Seiko Epson Corporation | Method and apparatus for sharpening an image by scaling elements of a frequency-domain representation |
US6463163B1 (en) | 1999-01-11 | 2002-10-08 | Hewlett-Packard Company | System and method for face detection using candidate image region selection |
US7038715B1 (en) | 1999-01-19 | 2006-05-02 | Texas Instruments Incorporated | Digital still camera with high-quality portrait mode |
AUPP839199A0 (en) | 1999-02-01 | 1999-02-25 | Traffic Pro Pty Ltd | Object recognition & tracking system |
US6778216B1 (en) | 1999-03-25 | 2004-08-17 | Texas Instruments Incorporated | Method and apparatus for digital camera real-time image correction in preview mode |
US7106374B1 (en) | 1999-04-05 | 2006-09-12 | Amherst Systems, Inc. | Dynamically reconfigurable vision system |
US6393148B1 (en) | 1999-05-13 | 2002-05-21 | Hewlett-Packard Company | Contrast enhancement of an image using luminance and RGB statistical metrics |
JP2000324437A (en) | 1999-05-13 | 2000-11-24 | Fuurie Kk | Video database system |
WO2000070558A1 (en) * | 1999-05-18 | 2000-11-23 | Sanyo Electric Co., Ltd. | Dynamic image processing method and device and medium |
US6760485B1 (en) | 1999-05-20 | 2004-07-06 | Eastman Kodak Company | Nonlinearly modifying a rendered digital image |
US7248300B1 (en) | 1999-06-03 | 2007-07-24 | Fujifilm Corporation | Camera and method of photographing good image |
US6879705B1 (en) * | 1999-07-14 | 2005-04-12 | Sarnoff Corporation | Method and apparatus for tracking multiple objects in a video sequence |
US6501857B1 (en) | 1999-07-20 | 2002-12-31 | Craig Gotsman | Method and system for detecting and classifying objects in an image |
US6545706B1 (en) | 1999-07-30 | 2003-04-08 | Electric Planet, Inc. | System, method and article of manufacture for tracking a head of a camera-generated image of a person |
US6526161B1 (en) * | 1999-08-30 | 2003-02-25 | Koninklijke Philips Electronics N.V. | System and method for biometrics-based facial feature extraction |
JP4378804B2 (en) | 1999-09-10 | 2009-12-09 | ソニー株式会社 | Imaging device |
WO2001028238A2 (en) * | 1999-10-08 | 2001-04-19 | Sarnoff Corporation | Method and apparatus for enhancing and indexing video and audio signals |
US6937773B1 (en) | 1999-10-20 | 2005-08-30 | Canon Kabushiki Kaisha | Image encoding method and apparatus |
US6792135B1 (en) | 1999-10-29 | 2004-09-14 | Microsoft Corporation | System and method for face detection through geometric distribution of a non-intensity image property |
US6504951B1 (en) * | 1999-11-29 | 2003-01-07 | Eastman Kodak Company | Method for detecting sky in images |
EP1107166A3 (en) * | 1999-12-01 | 2008-08-06 | Matsushita Electric Industrial Co., Ltd. | Device and method for face image extraction, and recording medium having recorded program for the method |
US6754389B1 (en) | 1999-12-01 | 2004-06-22 | Koninklijke Philips Electronics N.V. | Program classification using object tracking |
US6516147B2 (en) | 1999-12-20 | 2003-02-04 | Polaroid Corporation | Scene recognition method and system using brightness and ranging mapping |
US20030035573A1 (en) * | 1999-12-22 | 2003-02-20 | Nicolae Duta | Method for learning-based object detection in cardiac magnetic resonance images |
JP2001186323A (en) | 1999-12-24 | 2001-07-06 | Fuji Photo Film Co Ltd | Identification photograph system and image processing method |
JP2001216515A (en) * | 2000-02-01 | 2001-08-10 | Matsushita Electric Ind Co Ltd | Method and device for detecting face of person |
US7043465B2 (en) | 2000-02-24 | 2006-05-09 | Holding B.E.V.S.A. | Method and device for perception of an object by its shape, its size and/or its orientation |
US6940545B1 (en) | 2000-02-28 | 2005-09-06 | Eastman Kodak Company | Face detecting camera and method |
US6807290B2 (en) | 2000-03-09 | 2004-10-19 | Microsoft Corporation | Rapid computer modeling of faces for animation |
US6301440B1 (en) | 2000-04-13 | 2001-10-09 | International Business Machines Corp. | System and method for automatically setting image acquisition controls |
US7106887B2 (en) | 2000-04-13 | 2006-09-12 | Fuji Photo Film Co., Ltd. | Image processing method using conditions corresponding to an identified person |
US20020150662A1 (en) | 2000-04-19 | 2002-10-17 | Dewis Mark Lawrence | Ethyl 3-mercaptobutyrate as a flavoring or fragrance agent and methods for preparing and using same |
JP4443722B2 (en) | 2000-04-25 | 2010-03-31 | 富士通株式会社 | Image recognition apparatus and method |
US6944341B2 (en) | 2000-05-01 | 2005-09-13 | Xerox Corporation | Loose gray-scale template matching for image processing of anti-aliased lines |
US20020015662A1 (en) * | 2000-06-15 | 2002-02-07 | Hlavinka Dennis J. | Inactivation of contaminants using photosensitizers and pulsed light |
US6700999B1 (en) * | 2000-06-30 | 2004-03-02 | Intel Corporation | System, method, and apparatus for multiple face tracking |
US6747690B2 (en) | 2000-07-11 | 2004-06-08 | Phase One A/S | Digital camera with integrated accelerometers |
US6564225B1 (en) | 2000-07-14 | 2003-05-13 | Time Warner Entertainment Company, L.P. | Method and apparatus for archiving in and retrieving images from a digital image library |
AUPQ896000A0 (en) | 2000-07-24 | 2000-08-17 | Seeing Machines Pty Ltd | Facial image processing system |
JP4140181B2 (en) | 2000-09-08 | 2008-08-27 | 富士フイルム株式会社 | Electronic camera |
US6900840B1 (en) | 2000-09-14 | 2005-05-31 | Hewlett-Packard Development Company, L.P. | Digital camera and method of using same to view image in live view mode |
EP1211640A3 (en) * | 2000-09-15 | 2003-10-15 | Canon Kabushiki Kaisha | Image processing methods and apparatus for detecting human eyes, human face and other objects in an image |
JP4374759B2 (en) * | 2000-10-13 | 2009-12-02 | オムロン株式会社 | Image comparison system and image comparison apparatus |
US7038709B1 (en) | 2000-11-01 | 2006-05-02 | Gilbert Verghese | System and method for tracking a subject |
JP4590717B2 (en) | 2000-11-17 | 2010-12-01 | ソニー株式会社 | Face identification device and face identification method |
US7099510B2 (en) | 2000-11-29 | 2006-08-29 | Hewlett-Packard Development Company, L.P. | Method and system for object detection in digital images |
US6975750B2 (en) | 2000-12-01 | 2005-12-13 | Microsoft Corp. | System and method for face recognition using synthesized training images |
US6654507B2 (en) | 2000-12-14 | 2003-11-25 | Eastman Kodak Company | Automatically producing an image of a portion of a photographic image |
US6697504B2 (en) * | 2000-12-15 | 2004-02-24 | Institute For Information Industry | Method of multi-level facial image recognition and system using the same |
GB2370438A (en) | 2000-12-22 | 2002-06-26 | Hewlett Packard Co | Automated image cropping using selected compositional rules. |
US7034848B2 (en) * | 2001-01-05 | 2006-04-25 | Hewlett-Packard Development Company, L.P. | System and method for automatically cropping graphical images |
GB2372658A (en) | 2001-02-23 | 2002-08-28 | Hewlett Packard Co | A method of creating moving video data from a static image |
US7027621B1 (en) * | 2001-03-15 | 2006-04-11 | Mikos, Ltd. | Method and apparatus for operator condition monitoring and assessment |
US20020136433A1 (en) | 2001-03-26 | 2002-09-26 | Koninklijke Philips Electronics N.V. | Adaptive facial recognition system and method |
US6915011B2 (en) | 2001-03-28 | 2005-07-05 | Eastman Kodak Company | Event clustering of images using foreground/background segmentation |
US6760465B2 (en) | 2001-03-30 | 2004-07-06 | Intel Corporation | Mechanism for tracking colored objects in a video sequence |
JP2002305713A (en) * | 2001-04-03 | 2002-10-18 | Canon Inc | Image processing unit and its method, and storage medium |
JP2002334338A (en) | 2001-05-09 | 2002-11-22 | National Institute Of Advanced Industrial & Technology | Object tracking apparatus, object tracking method, and recording medium |
US20020172419A1 (en) | 2001-05-15 | 2002-11-21 | Qian Lin | Image enhancement using face detection |
US6847733B2 (en) * | 2001-05-23 | 2005-01-25 | Eastman Kodak Company | Retrieval and browsing of database images based on image emphasis and appeal |
TW505892B (en) * | 2001-05-25 | 2002-10-11 | Ind Tech Res Inst | System and method for promptly tracking multiple faces |
US20020181801A1 (en) | 2001-06-01 | 2002-12-05 | Needham Bradford H. | Feature-based image correction |
AUPR541801A0 (en) * | 2001-06-01 | 2001-06-28 | Canon Kabushiki Kaisha | Face detection in colour images with complex background |
US7068841B2 (en) * | 2001-06-29 | 2006-06-27 | Hewlett-Packard Development Company, L.P. | Automatic digital image enhancement |
GB0116877D0 (en) * | 2001-07-10 | 2001-09-05 | Hewlett Packard Co | Intelligent feature selection and pan zoom control |
US6516154B1 (en) * | 2001-07-17 | 2003-02-04 | Eastman Kodak Company | Image revising camera and method |
US6832006B2 (en) * | 2001-07-23 | 2004-12-14 | Eastman Kodak Company | System and method for controlling image compression based on image emphasis |
US20030023974A1 (en) * | 2001-07-25 | 2003-01-30 | Koninklijke Philips Electronics N.V. | Method and apparatus to track objects in sports programs and select an appropriate camera view |
US6993180B2 (en) * | 2001-09-04 | 2006-01-31 | Eastman Kodak Company | Method and system for automated grouping of images |
US7027619B2 (en) * | 2001-09-13 | 2006-04-11 | Honeywell International Inc. | Near-infrared method and system for use in face detection |
US7262798B2 (en) * | 2001-09-17 | 2007-08-28 | Hewlett-Packard Development Company, L.P. | System and method for simulating fill flash in photography |
US7298412B2 (en) * | 2001-09-18 | 2007-11-20 | Ricoh Company, Limited | Image pickup device, automatic focusing method, automatic exposure method, electronic flash control method and computer program |
US7110569B2 (en) | 2001-09-27 | 2006-09-19 | Koninklijke Philips Electronics N.V. | Video based detection of fall-down and other events |
US7130864B2 (en) | 2001-10-31 | 2006-10-31 | Hewlett-Packard Development Company, L.P. | Method and system for accessing a collection of images in a database |
KR100421221B1 (en) * | 2001-11-05 | 2004-03-02 | 삼성전자주식회사 | Illumination invariant object tracking method and image editing system adopting the method |
US7162101B2 (en) * | 2001-11-15 | 2007-01-09 | Canon Kabushiki Kaisha | Image processing apparatus and method |
US7130446B2 (en) | 2001-12-03 | 2006-10-31 | Microsoft Corporation | Automatic detection and tracking of multiple individuals using multiple cues |
US7688349B2 (en) | 2001-12-07 | 2010-03-30 | International Business Machines Corporation | Method of detecting and tracking groups of people |
US7050607B2 (en) | 2001-12-08 | 2006-05-23 | Microsoft Corp. | System and method for multi-view face detection |
TW535413B (en) | 2001-12-13 | 2003-06-01 | Mediatek Inc | Device and method for processing digital video data |
US7221809B2 (en) | 2001-12-17 | 2007-05-22 | Genex Technologies, Inc. | Face recognition system and method |
US7035467B2 (en) | 2002-01-09 | 2006-04-25 | Eastman Kodak Company | Method and system for processing images for themed imaging services |
JP2003219225A (en) | 2002-01-25 | 2003-07-31 | Nippon Micro Systems Kk | Device for monitoring moving object image |
US7362354B2 (en) | 2002-02-12 | 2008-04-22 | Hewlett-Packard Development Company, L.P. | Method and system for assessing the photo quality of a captured image in a digital still camera |
EP1343107A3 (en) | 2002-03-04 | 2005-03-23 | Samsung Electronics Co., Ltd. | Method and apparatus for recognising faces using principal component analysis and second order independent component analysis on parts of the image faces |
US6959109B2 (en) | 2002-06-20 | 2005-10-25 | Identix Incorporated | System and method for pose-angle estimation |
US7227976B1 (en) | 2002-07-08 | 2007-06-05 | Videomining Corporation | Method and system for real-time facial image enhancement |
US7020337B2 (en) * | 2002-07-22 | 2006-03-28 | Mitsubishi Electric Research Laboratories, Inc. | System and method for detecting objects in images |
JP2004062565A (en) * | 2002-07-30 | 2004-02-26 | Canon Inc | Image processor and image processing method, and program storage medium |
US7110575B2 (en) | 2002-08-02 | 2006-09-19 | Eastman Kodak Company | Method for locating faces in digital color images |
US7035462B2 (en) * | 2002-08-29 | 2006-04-25 | Eastman Kodak Company | Apparatus and method for processing digital images having eye color defects |
US7194114B2 (en) * | 2002-10-07 | 2007-03-20 | Carnegie Mellon University | Object finder for two-dimensional images, and system for determining a set of sub-classifiers composing an object finder |
US7154510B2 (en) | 2002-11-14 | 2006-12-26 | Eastman Kodak Company | System and method for modifying a portrait image in response to a stimulus |
GB2395264A (en) | 2002-11-29 | 2004-05-19 | Sony Uk Ltd | Face detection in images |
US7082157B2 (en) | 2002-12-24 | 2006-07-25 | Realtek Semiconductor Corp. | Residual echo reduction for a full duplex transceiver |
CN100465985C (en) | 2002-12-31 | 2009-03-04 | 佳能株式会社 | Human eye detecting method, apparatus, system and storage medium |
US7120279B2 (en) | 2003-01-30 | 2006-10-10 | Eastman Kodak Company | Method for face orientation determination in digital color images |
US7162076B2 (en) * | 2003-02-11 | 2007-01-09 | New Jersey Institute Of Technology | Face detection method and apparatus |
US7039222B2 (en) | 2003-02-28 | 2006-05-02 | Eastman Kodak Company | Method and system for enhancing portrait images that are processed in a batch mode |
US7508961B2 (en) | 2003-03-12 | 2009-03-24 | Eastman Kodak Company | Method and system for face detection in digital images |
US20040228505A1 (en) | 2003-04-14 | 2004-11-18 | Fuji Photo Film Co., Ltd. | Image characteristic portion extraction method, computer readable medium, and data collection and processing device |
US7609908B2 (en) | 2003-04-30 | 2009-10-27 | Eastman Kodak Company | Method for adjusting the brightness of a digital image utilizing belief values |
US20040223649A1 (en) | 2003-05-07 | 2004-11-11 | Eastman Kodak Company | Composite imaging method and system |
WO2005113099A2 (en) * | 2003-05-30 | 2005-12-01 | America Online, Inc. | Personalizing content |
WO2007142621A1 (en) | 2006-06-02 | 2007-12-13 | Fotonation Vision Limited | Modification of post-viewing parameters for digital images using image region or feature information |
US7606417B2 (en) * | 2004-08-16 | 2009-10-20 | Fotonation Vision Limited | Foreground/background segmentation in digital images with differential exposure calculations |
US7565030B2 (en) | 2003-06-26 | 2009-07-21 | Fotonation Vision Limited | Detecting orientation of digital images using face detection information |
US7920723B2 (en) | 2005-11-18 | 2011-04-05 | Tessera Technologies Ireland Limited | Two stage detection for photographic eye artifacts |
US7362368B2 (en) | 2003-06-26 | 2008-04-22 | Fotonation Vision Limited | Perfecting the optics within a digital image acquisition device using face detection |
US7336821B2 (en) * | 2006-02-14 | 2008-02-26 | Fotonation Vision Limited | Automatic detection and correction of non-red eye flash defects |
US7680342B2 (en) | 2004-08-16 | 2010-03-16 | Fotonation Vision Limited | Indoor/outdoor classification in digital images |
US7440593B1 (en) | 2003-06-26 | 2008-10-21 | Fotonation Vision Limited | Method of improving orientation and color balance of digital images using face detection information |
US7317815B2 (en) * | 2003-06-26 | 2008-01-08 | Fotonation Vision Limited | Digital image processing composition using face detection information |
US7536036B2 (en) | 2004-10-28 | 2009-05-19 | Fotonation Vision Limited | Method and apparatus for red-eye detection in an acquired digital image |
US7792335B2 (en) | 2006-02-24 | 2010-09-07 | Fotonation Vision Limited | Method and apparatus for selective disqualification of digital images |
US7574016B2 (en) | 2003-06-26 | 2009-08-11 | Fotonation Vision Limited | Digital image processing using face detection information |
US7689009B2 (en) | 2005-11-18 | 2010-03-30 | Fotonation Vision Ltd. | Two stage detection for photographic eye artifacts |
US7616233B2 (en) | 2003-06-26 | 2009-11-10 | Fotonation Vision Limited | Perfecting of digital image capture parameters within acquisition devices using face detection |
US7587085B2 (en) | 2004-10-28 | 2009-09-08 | Fotonation Vision Limited | Method and apparatus for red-eye detection in an acquired digital image |
US7636486B2 (en) | 2004-11-10 | 2009-12-22 | Fotonation Ireland Ltd. | Method of determining PSF using multiple instances of a nominally similar scene |
US7269292B2 (en) | 2003-06-26 | 2007-09-11 | Fotonation Vision Limited | Digital image adjustable compression and resolution using face detection information |
US7471846B2 (en) * | 2003-06-26 | 2008-12-30 | Fotonation Vision Limited | Perfecting the effect of flash within an image acquisition devices using face detection |
US8948468B2 (en) | 2003-06-26 | 2015-02-03 | Fotonation Limited | Modification of viewing parameters for digital images using face detection information |
US7844076B2 (en) | 2003-06-26 | 2010-11-30 | Fotonation Vision Limited | Digital image processing using face detection and skin tone information |
US7315630B2 (en) * | 2003-06-26 | 2008-01-01 | Fotonation Vision Limited | Perfecting of digital image rendering parameters within rendering devices using face detection |
US7190829B2 (en) * | 2003-06-30 | 2007-03-13 | Microsoft Corporation | Speedup of face detection in digital images |
US7274822B2 (en) | 2003-06-30 | 2007-09-25 | Microsoft Corporation | Face annotation for photo management |
US7689033B2 (en) * | 2003-07-16 | 2010-03-30 | Microsoft Corporation | Robust multi-view face detection methods and apparatuses |
US20050140801A1 (en) | 2003-08-05 | 2005-06-30 | Yury Prilutsky | Optimized performance and performance for red-eye filter method and apparatus |
JP2005078376A (en) * | 2003-08-29 | 2005-03-24 | Sony Corp | Object detection device, object detection method, and robot device |
JP2005100084A (en) | 2003-09-25 | 2005-04-14 | Toshiba Corp | Image processor and method |
US7590305B2 (en) * | 2003-09-30 | 2009-09-15 | Fotonation Vision Limited | Digital camera with built-in lens calibration table |
US7295233B2 (en) * | 2003-09-30 | 2007-11-13 | Fotonation Vision Limited | Detection and removal of blemishes in digital images utilizing original images of defocused scenes |
US7424170B2 (en) | 2003-09-30 | 2008-09-09 | Fotonation Vision Limited | Automated statistical self-calibrating detection and removal of blemishes in digital images based on determining probabilities based on image analysis of single images |
US7369712B2 (en) * | 2003-09-30 | 2008-05-06 | Fotonation Vision Limited | Automated statistical self-calibrating detection and removal of blemishes in digital images based on multiple occurrences of dust in images |
JP2005128956A (en) * | 2003-10-27 | 2005-05-19 | Pentax Corp | Subject determination program and digital camera |
US7738731B2 (en) | 2003-11-11 | 2010-06-15 | Seiko Epson Corporation | Image processing device, image processing method, program thereof, and recording medium |
US7274832B2 (en) | 2003-11-13 | 2007-09-25 | Eastman Kodak Company | In-plane rotation invariant object detection in digitized images |
US7596247B2 (en) | 2003-11-14 | 2009-09-29 | Fujifilm Corporation | Method and apparatus for object recognition using probability models |
JP2005164475A (en) | 2003-12-04 | 2005-06-23 | Mitsutoyo Corp | Measuring apparatus for perpendicularity |
US20050169536A1 (en) * | 2004-01-30 | 2005-08-04 | Vittorio Accomazzi | System and method for applying active appearance models to image analysis |
JP2006033793A (en) | 2004-06-14 | 2006-02-02 | Victor Co Of Japan Ltd | Tracking video reproducing apparatus |
JP4442330B2 (en) | 2004-06-17 | 2010-03-31 | Nikon Corporation | Electronic camera and electronic camera system |
CA2571643C (en) * | 2004-06-21 | 2011-02-01 | Nevengineering, Inc. | Single image based multi-biometric system and method |
JP4574249B2 (en) * | 2004-06-29 | 2010-11-04 | Canon Inc. | Image processing apparatus and method, program, and imaging apparatus |
CA2575211C (en) * | 2004-07-30 | 2012-12-11 | Euclid Discoveries, Llc | Apparatus and method for processing video data |
KR100668303B1 (en) * | 2004-08-04 | 2007-01-12 | Samsung Electronics Co., Ltd. | Face detection method using skin color and pattern matching |
JP4757559B2 (en) | 2004-08-11 | 2011-08-24 | Fujifilm Corporation | Apparatus and method for detecting components of a subject |
US7119838B2 (en) | 2004-08-19 | 2006-10-10 | Blue Marlin Llc | Method and imager for detecting the location of objects |
JP4383399B2 (en) | 2004-11-05 | 2009-12-16 | Fujifilm Corporation | Detection target image search apparatus and control method thereof |
US7734067B2 (en) | 2004-12-07 | 2010-06-08 | Electronics And Telecommunications Research Institute | User recognition system and method thereof |
US20060006077A1 (en) * | 2004-12-24 | 2006-01-12 | Erie County Plastics Corporation | Dispensing closure with integral piercing unit |
US7315631B1 (en) * | 2006-08-11 | 2008-01-01 | Fotonation Vision Limited | Real-time face tracking in a digital image acquisition device |
US7715597B2 (en) | 2004-12-29 | 2010-05-11 | Fotonation Ireland Limited | Method and component for image recognition |
CN100358340C (en) | 2005-01-05 | 2007-12-26 | 张健 | Digital-camera capable of selecting optimum taking opportune moment |
US7454058B2 (en) | 2005-02-07 | 2008-11-18 | Mitsubishi Electric Research Lab, Inc. | Method of extracting and searching integral histograms of data samples |
US7620208B2 (en) | 2005-02-09 | 2009-11-17 | Siemens Corporate Research, Inc. | System and method for detecting features from images of vehicles |
US20060203106A1 (en) | 2005-03-14 | 2006-09-14 | Lawrence Joseph P | Methods and apparatus for retrieving data captured by a media device |
JP4639869B2 (en) | 2005-03-14 | 2011-02-23 | Omron Corporation | Imaging apparatus and timer photographing method |
JP4324170B2 (en) | 2005-03-17 | 2009-09-02 | Canon Inc. | Imaging apparatus and display control method |
JP2006318103A (en) | 2005-05-11 | 2006-11-24 | Fuji Photo Film Co Ltd | Image processor, image processing method, and program |
JP4519708B2 (en) | 2005-05-11 | 2010-08-04 | Fujifilm Corporation | Imaging apparatus and method, and program |
JP4906034B2 (en) | 2005-05-16 | 2012-03-28 | Fujifilm Corporation | Imaging apparatus, method, and program |
US7612794B2 (en) | 2005-05-25 | 2009-11-03 | Microsoft Corp. | System and method for applying digital make-up in video conferencing |
EP1887511A1 (en) | 2005-06-03 | 2008-02-13 | NEC Corporation | Image processing system, 3-dimensional shape estimation system, object position posture estimation system, and image generation system |
JP2006350498A (en) | 2005-06-14 | 2006-12-28 | Fujifilm Holdings Corp | Image processor and image processing method and program |
JP2007006182A (en) | 2005-06-24 | 2007-01-11 | Fujifilm Holdings Corp | Image processing apparatus and method therefor, and program |
US20070018966A1 (en) * | 2005-07-25 | 2007-01-25 | Blythe Michael M | Predicted object location |
JP4799101B2 (en) * | 2005-09-26 | 2011-10-26 | Fujifilm Corporation | Image processing method, apparatus, and program |
JP2007094549A (en) * | 2005-09-27 | 2007-04-12 | Fujifilm Corp | Image processing method, device and program |
US7555149B2 (en) | 2005-10-25 | 2009-06-30 | Mitsubishi Electric Research Laboratories, Inc. | Method and system for segmenting videos using face detection |
US20070098303A1 (en) | 2005-10-31 | 2007-05-03 | Eastman Kodak Company | Determining a particular person from a collection |
US7599577B2 (en) | 2005-11-18 | 2009-10-06 | Fotonation Vision Limited | Method and apparatus of correcting hybrid flash artifacts in digital images |
US7692696B2 (en) | 2005-12-27 | 2010-04-06 | Fotonation Vision Limited | Digital image acquisition system with portrait mode |
US7643659B2 (en) | 2005-12-31 | 2010-01-05 | Arcsoft, Inc. | Facial feature detection on mobile devices |
US7953253B2 (en) | 2005-12-31 | 2011-05-31 | Arcsoft, Inc. | Face detection on mobile devices |
EP1987436B1 (en) | 2006-02-14 | 2015-12-09 | FotoNation Limited | Image blurring |
IES20060559A2 (en) | 2006-02-14 | 2006-11-01 | Fotonation Vision Ltd | Automatic detection and correction of non-red flash eye defects |
WO2007095556A2 (en) | 2006-02-14 | 2007-08-23 | Fotonation Vision Limited | Digital image acquisition device with built in dust and sensor mapping capability |
US7804983B2 (en) * | 2006-02-24 | 2010-09-28 | Fotonation Vision Limited | Digital image acquisition control and correction method and apparatus |
JP4767718B2 (en) * | 2006-02-24 | 2011-09-07 | Fujifilm Corporation | Image processing method, apparatus, and program |
IES20060564A2 (en) | 2006-05-03 | 2006-11-01 | Fotonation Vision Ltd | Improved foreground / background separation |
IES20070229A2 (en) | 2006-06-05 | 2007-10-03 | Fotonation Vision Ltd | Image acquisition method and apparatus |
ATE497218T1 (en) * | 2006-06-12 | 2011-02-15 | Tessera Tech Ireland Ltd | ADVANCES IN EXPANSING AAM TECHNIQUES FROM GRAYSCALE TO COLOR IMAGES |
EP2050043A2 (en) | 2006-08-02 | 2009-04-22 | Fotonation Vision Limited | Face recognition with combined pca-based datasets |
WO2008022005A2 (en) * | 2006-08-09 | 2008-02-21 | Fotonation Vision Limited | Detection and correction of flash artifacts from airborne particulates |
US7403643B2 (en) * | 2006-08-11 | 2008-07-22 | Fotonation Vision Limited | Real-time face tracking in a digital image acquisition device |
US8055067B2 (en) | 2007-01-18 | 2011-11-08 | DigitalOptics Corporation Europe Limited | Color segmentation |
JP5049356B2 (en) | 2007-02-28 | 2012-10-17 | DigitalOptics Corporation Europe Limited | Separation of directional lighting variability in statistical face modeling based on texture space decomposition |
KR101380731B1 (en) | 2007-05-24 | 2014-04-02 | DigitalOptics Corporation Europe Limited | Image processing method and apparatus |
US7916971B2 (en) | 2007-05-24 | 2011-03-29 | Tessera Technologies Ireland Limited | Image processing method and apparatus |
JP5260360B2 (en) | 2009-03-06 | 2013-08-14 | U-Shin Ltd. | Electric steering lock device |
- 2007
  - 2007-05-24 US US11/752,925 patent/US7916971B2/en active Active
  - 2007-07-16 IE IE20070518A patent/IES20070518A2/en not_active IP Right Cessation
- 2011
  - 2011-02-25 US US13/034,707 patent/US8494232B2/en not_active Expired - Fee Related
  - 2011-05-08 US US13/103,077 patent/US8515138B2/en active Active
- 2013
  - 2013-07-21 US US13/947,095 patent/US9025837B2/en active Active
- 2015
  - 2015-05-04 US US14/703,719 patent/US20150235087A1/en not_active Abandoned
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070237421A1 (en) * | 2006-03-29 | 2007-10-11 | Eastman Kodak Company | Recomposing photographs from multiple frames |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9792490B2 (en) * | 2013-06-20 | 2017-10-17 | Elwha Llc | Systems and methods for enhancement of facial expressions |
US9251405B2 (en) * | 2013-06-20 | 2016-02-02 | Elwha Llc | Systems and methods for enhancement of facial expressions |
US20160148043A1 (en) * | 2013-06-20 | 2016-05-26 | Elwha Llc | Systems and methods for enhancement of facial expressions |
US20140376785A1 (en) * | 2013-06-20 | 2014-12-25 | Elwha Llc | Systems and methods for enhancement of facial expressions |
US9916497B2 (en) * | 2015-07-31 | 2018-03-13 | Sony Corporation | Automated embedding and blending head images |
US20170034453A1 (en) * | 2015-07-31 | 2017-02-02 | Sony Corporation | Automated embedding and blending head images |
US20170221186A1 (en) * | 2016-01-30 | 2017-08-03 | Samsung Electronics Co., Ltd. | Device for and method of enhancing quality of an image |
US10055821B2 (en) * | 2016-01-30 | 2018-08-21 | John W. Glotzbach | Device for and method of enhancing quality of an image |
CN108604293A (en) * | 2016-01-30 | 2018-09-28 | 三星电子株式会社 | The device and method for improving picture quality |
US10783617B2 (en) | 2016-01-30 | 2020-09-22 | Samsung Electronics Co., Ltd. | Device for and method of enhancing quality of an image |
US20180183998A1 (en) * | 2016-12-22 | 2018-06-28 | Qualcomm Incorporated | Power reduction and performance improvement through selective sensor image downscaling |
CN111709878A (en) * | 2020-06-17 | 2020-09-25 | 北京百度网讯科技有限公司 | Face super-resolution implementation method and device, electronic equipment and storage medium |
US11710215B2 (en) | 2020-06-17 | 2023-07-25 | Beijing Baidu Netcom Science And Technology Co., Ltd. | Face super-resolution realization method and apparatus, electronic device and storage medium |
CN111932594A (en) * | 2020-09-18 | 2020-11-13 | 西安拙河安见信息科技有限公司 | Billion pixel video alignment method and device based on optical flow and medium |
Also Published As
Publication number | Publication date |
---|---|
IES20070518A2 (en) | 2008-09-03 |
US9025837B2 (en) | 2015-05-05 |
US7916971B2 (en) | 2011-03-29 |
US8515138B2 (en) | 2013-08-20 |
US20080292193A1 (en) | 2008-11-27 |
US8494232B2 (en) | 2013-07-23 |
US20110235912A1 (en) | 2011-09-29 |
US20130301917A1 (en) | 2013-11-14 |
US20110234847A1 (en) | 2011-09-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9025837B2 (en) | Image processing method and apparatus | |
EP2153374B1 (en) | Image processing method and apparatus | |
US10733472B2 (en) | Image capture device with contemporaneous image correction mechanism | |
US8682097B2 (en) | Digital image enhancement with reference images | |
US8593542B2 (en) | Foreground/background separation using reference images | |
JP4970469B2 (en) | Method and apparatus for selectively disqualifying digital images | |
US8330831B2 (en) | Method of gathering visual meta data using a reference image | |
US8494286B2 (en) | Face detection in mid-shot digital images | |
US8320641B2 (en) | Method and apparatus for red-eye detection using preview or other reference images | |
US20050200722A1 (en) | Image capturing apparatus, image capturing method, and machine readable medium storing thereon image capturing program | |
JP2008092299A (en) | Electronic camera | |
US7903164B2 (en) | Image capturing apparatus, an image capturing method and a machine readable medium storing thereon a computer program for capturing an image of a range wider than an image capture designation range | |
M Corcoran et al. | Advances in the detection & repair of flash-eye defects in digital images-a review of recent patents | |
IE20070518U1 (en) | Image processing method and apparatus | |
IES84961Y1 (en) | Image processing method and apparatus | |
IES84977Y1 (en) | Face detection in mid-shot digital images | |
IE20080161U1 (en) | Face detection in mid-shot digital images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |