US20090278857A1 - Method of forming an image based on a plurality of image frames, image processing system and digital camera - Google Patents
- Publication number
- US20090278857A1 (application US 12/089,997)
- Authority
- US
- United States
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/40—Analysis of texture
Definitions
- the invention relates to a method of forming a combined image based on a plurality of image frames.
- the invention also relates to a system for processing arrays of intensity values, each array being suitable for representing an image frame at a resolution corresponding to the number of intensity values in the array.
- the invention also relates to an imaging apparatus, e.g. a digital camera.
- the invention also relates to a computer program.
- a set of derived arrays of intensity values is generated, each derived array being based on a respective one of the obtained arrays of intensity values and encoding light intensity levels at each of a common number of pixel positions in at least a region of overlap of the respective image frames.
- An array of combined intensity values is generated. Each element in that array is based on a sum of intensity values represented by a corresponding element in each of the respective derived arrays of intensity values.
- An array of intensity values encoding the combined final image is provided, the array being based on the array of combined intensity values.
- a first array of intensity values encoding at least the region of overlap at a higher resolution than the further arrays of intensity values is obtained.
- An array of intensity values encoding at least the region of overlap in the combined final image at a higher spatial resolution than the further arrays of intensity values is provided.
- the array of intensity values encoding the combined final image is based on a sufficient number of intensity values in the first array of intensity values to encode the region of overlap at a higher resolution than the further arrays of intensity values.
- Forming a combined image by adding a plurality of image frames at least partially depicting the same region has the effect that the region of overlap has a higher Signal-to-Noise Ratio (SNR) in the combined image than in the individual image frames.
- intensity values assume one of a range of discrete values, the number of which is determined by the number of bits by which the values are represented. This in turn is determined by the dynamic range allowed by the format in which the combined image is displayed, e.g. the JPEG standard or the resolution of a computer display. If the sum of the intensity values corresponding to a pixel in the respective image frames exceeds the maximum allowed by the range of discrete values, the sum value is clipped to stay within the range. If this happens for many intensity values in the array of intensity values representing the combined image, the combined image appears over-exposed.
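The clipping effect described above can be illustrated with a short sketch (illustrative Python, not part of the patent; `clip8` is a hypothetical helper name):

```python
# Sketch: summing four 8-bit frame values without rescaling pushes the
# sum past 255, so naive clipping discards information and the region
# appears over-exposed.

def clip8(value):
    """Clip a summed intensity to the 8-bit range [0, 255]."""
    return max(0, min(255, value))

# Hypothetical intensity values of one pixel in four underexposed frames.
frame_values = [90, 100, 95, 110]

raw_sum = sum(frame_values)   # 395: exceeds the 8-bit maximum
clipped = clip8(raw_sum)      # 255: everything above 255 is lost

print(raw_sum, clipped)       # → 395 255
```

If many pixels in the combined array clip in this way, the combined image is uniformly over-exposed, which is exactly the situation the scaling step of the invention is meant to prevent.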
- This object is achieved according to the invention by providing a method of forming a combined image based on a plurality of image frames, including:
- the intensity values in the final array are each obtained by executing a step of summing an intensity value from each of at least one array of intensity values based on at least one array of intensity values in only the first set and an intensity value from each of at least one array of intensity values based on at least one array of intensity values in only the second set, wherein, prior to executing the summing step, only the intensity values of the arrays in the first set are mapped from a scale within a first range to a scale within a second range.
- the SNR is improved. Because the intensity values of the arrays in the first set are mapped from a scale within a first range to a scale within a second range prior to executing the summing step, it is possible to use the full dynamic range allowed by the representation of the intensity values without going beyond the end of the scale on which they are represented. For this purpose, the second range is different from the first range. Because only the intensity values of the arrays in the first set are mapped, the method is relatively efficient.
- the at least one array of intensity values based on at least one array of intensity values in only the first set contains coefficients in the spatial frequency domain
- the at least one array of intensity values based on at least one array of intensity values in only the second set contains coefficients in the spatial frequency domain and the intensity values in the final array are formed by coefficients in the spatial frequency domain
- At least one lower-order coefficient in the final array is obtained by summing an intensity value from each of the at least one array of intensity values based on at least one array of intensity values in only the first set and an intensity value from each of at least one array of intensity values based on at least one array of intensity values in only the second set, wherein at least one higher-order coefficient in the final array is obtained on the basis of only arrays of intensity values based on the second set.
- At least some of the arrays of intensity values in the first and second set are obtained by reading out measurement values from an image-capturing device comprising an array of light-sensitive cells, wherein each intensity value in the final array is based on at least one intensity value in an array comprised in the second set.
- because the arrays in the first set represent image frames at a lower resolution, they contain fewer intensity values. Thus, the time to read out the measurement values is reduced. This allows the image frames represented by the first and second sets of arrays to be read out in quick succession, decreasing the effect of camera shake or movement in the scene that is captured. Because each intensity value in the final array is based on at least one intensity value in an array comprised in the second set, the effect of decreased blur due to movement is not obtained at the expense of the resolution of the combined image.
- An embodiment includes determining an upper limit of the second range at least partly in dependence on the number of arrays of intensity values in the second set.
- At least one of the arrays of intensity values in the first set is obtained by obtaining a plurality of arrays of intensity values for representing an image frame at a resolution corresponding to the number of intensity values in the array, and by summing an intensity value from each of the plurality of arrays to obtain a corresponding intensity value in the at least one array in the first set.
- an array representing an image that is the sum of a plurality of image frames is scaled. This has the effect of decreasing the amount of scaling that has to be done, making the method more efficient.
- random noise over the plurality of arrays that are summed to form an array in the first set is filtered out by means of the addition.
- At least one of the arrays of intensity values in the first set is obtained by obtaining a plurality of arrays of intensity values for representing an image frame at a resolution corresponding to the number of intensity values in the array, wherein the method further includes
- the appropriate extent of the second range can be determined relatively accurately, since it is based on an array of intensity values that is quite representative of the final array.
- This embodiment is also relatively efficient, since it does not require an analysis of each of a plurality of arrays in the first set.
- At least the arrays of intensity values in the first set are obtained by obtaining a plurality of arrays of intensity values for representing colour image frames in a first colour space, and applying a transformation to a plurality of arrays of values in a second colour space, wherein, in the first colour space, an image frame is represented by parameter value combinations, each parameter indicating the intensity of one of a plurality of colour components, whereas, in the second colour space, an image frame is represented by parameter value combinations, one parameter of the combination indicating a hue and at least one of the other parameters being indicative of light intensity.
- This embodiment has the advantage that the mapping from the first scale to the second scale need be carried out on fewer arrays of intensity values. Instead of separate arrays of intensity values for each colour component, or arrays of intensity value combinations, only the array or arrays of parameter values indicative of light intensity in the second colour space, or arrays derived based thereon, need be processed.
- the colour information is contained in an array of parameter values indicating hues, which need not be scaled to prevent saturation of the combined image.
- a system for processing arrays of intensity values each array being suitable for representing an image frame at a resolution corresponding to the number of intensity values in the array
- system is configured to retrieve a first set of at least one array of intensity values and a second set of at least one array of intensity values, the arrays in the first set and arrays in the second set representing respective image frames, and to form a final array of intensity values representing a combined image
- system is configured to obtain each of at least some of the intensity values in the final array by executing a step of summing an intensity value from each of at least one array of intensity values based on at least one array of intensity values in only the first set and an intensity value from each of at least one array of intensity values based on at least one array of intensity values in only the second set, and
- system is configured to map, prior to executing the summing step, only the intensity values of the arrays in the first set from a scale within a first range to a scale within a second range.
- an imaging apparatus e.g. a digital camera, comprising a processor and at least one storage device for storing a plurality of arrays of intensity values, wherein the imaging apparatus is configured to execute a method according to the invention.
- the imaging apparatus makes relatively efficient use of digital signal processing capacity.
- the amount of values to be retrieved from a look-up table implementing the mapping function is relatively low.
- a computer program including a set of instructions capable, when incorporated in a machine-readable medium, of causing a system having information processing capabilities to perform a method according the invention.
- the computer program can be run on a general-purpose computer for post-processing of captured images, or it can be provided in the form of firmware for an image-capturing device such as a digital camera.
- FIG. 1 illustrates schematically a digital camera equipped to implement a method of forming a combined image
- FIG. 2 illustrates schematically a first embodiment of a method of forming a combined image
- FIG. 3 illustrates schematically a second embodiment of a method of forming a combined image
- FIG. 4 illustrates schematically a third embodiment of a method of forming a combined image
- FIG. 5 illustrates schematically a fourth embodiment of a method of forming a combined image.
- FIG. 1 illustrates some components of a digital camera 1 as an example of an imaging apparatus adapted for implementing the methods described below.
- Other examples of suitable imaging apparatus include scanners and photocopying apparatus. Because the methods of forming a combined image require relatively little processing capacity, it is advantageous to apply them in the digital camera 1 .
- the digital camera 1 includes a lens system 2 for focussing on one or more objects in a scene that is to be represented by a combined image.
- a shutter 3 When a shutter 3 is opened, the scene is projected through an aperture in a diaphragm 4 onto a photosensitive area of an image-capturing device 5 .
- an electronic shutter implemented by suitable control of the image-capturing device 5 could be used.
- the shutter time is controllable, as is the diameter of the aperture.
- the image-capturing device 5 can be a device implemented in Complementary Metal-Oxide Semiconductor (CMOS) technology, or a Charge-Coupled Device (CCD) sensor, for example.
- Each pixel cell includes a device for generating a signal indicative of the intensity of light to which the area that the pixel cell occupies is exposed.
- An integral of the signal generated by a device is formed during exposure, for example by accumulation of photocurrent in a capacitor. Subsequent to exposure for the duration of an exposure time interval, the values of the integrals of the generated signals are read out row by row.
- the (analogue) values that are read out are provided to an Analogue-to-Digital (A/D-)converter 6 .
- the A/D converter samples and quantises the signals received from the image-capturing device 5 . This involves recording the intensity values on a scale with discrete levels, the number of which is determined by the number of bits of resolution of the digital words provided as output by the A/D converter 6 .
- the A/D-converter 6 provides as output an array of intensity values recorded on a scale occupying a first range.
- Each intensity value is associated with a particular pixel position in an image frame, corresponding to a photosensitive cell or a plurality of adjacent photosensitive cells.
- the values read out from the image-capturing device 5 are preferably obtained by “binning” the values corresponding to a plurality of adjacent photosensitive cells. The areas to which the “binned” values correspond may overlap.
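The "binning" of adjacent photosensitive cells can be sketched as follows (an illustrative non-overlapping 2×2 variant in Python; the patent also allows the binned areas to overlap, and `bin2x2` is a hypothetical name):

```python
def bin2x2(frame):
    """Sum each non-overlapping 2x2 block of a 2-D list of intensity
    values, producing a quarter-size array (one possible 'binning')."""
    rows, cols = len(frame), len(frame[0])
    return [[frame[r][c] + frame[r][c + 1]
             + frame[r + 1][c] + frame[r + 1][c + 1]
             for c in range(0, cols, 2)]
            for r in range(0, rows, 2)]

frame = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]
print(bin2x2(frame))  # → [[14, 22], [46, 54]]
```

Each output value aggregates the charge of four cells, which is one way to obtain the low-resolution arrays of the first set with a shorter read-out time.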
- each exposure of the image-capturing device 5 thus results in an array of intensity values representing an image frame.
- the intensity values of one or more arrays may be mapped to a different scale occupying a second range by a Digital Signal Processor (DSP) 7 .
- DSP 7 is also suitable for performing such operations as interpolation between pixel values and optionally compression of the image. It may also carry out a transformation of the intensity values to the spatial frequency domain, such as a Discrete Cosine Transform (DCT).
- Arrays of intensity values are stored in a storage device 8 .
- the storage device can be any usual type of storage device, e.g. built-in flash memory, replaceable flash memory modules, an optical disk drive or a magnetic disk drive.
- Capturing and processing of images is carried out under control of a microprocessor 9 , which issues commands over a bus 10 .
- the microprocessor 9 is assisted by a co-processor 11 in the illustrated embodiment.
- the co-processor 11 is preferably a digital signal processor for performing image compression, for example in accordance with the JPEG standard.
- the microprocessor 9 comprises a volatile memory and has access to instructions stored in Read-Only Memory (ROM) module 12 .
- the instructions provide the digital camera 1 with the capability to perform a method of forming a combined image by adding a plurality of captured image frames, which method is carried out under the control of the microprocessor 9 .
- a motion sensor 15 is present for sensing and measuring movement of the digital camera 1 .
- a series of image frames captured in rapid succession is analysed to determine the amount and/or direction of movement of the digital camera 1 .
- the digital camera 1 comprises an exposure metering device 16 and a flash driver 17 for directing the operation of a flash (not shown).
- a user issues a command to form a single image of a scene, which is passed on to the microprocessor 9 through the input interface module 13 and the bus 10 .
- the microprocessor 9 controls the digital camera 1 such that a plurality of underexposed image frames or image frames with a high ISO setting are captured.
- a high ISO setting means that the sensitivity of the image-capturing device 5 , calibrated along the linear film speed scale according to international standard ISO 5800:1987, is set to a high level.
- the captured images represent respective scenes that overlap at least partially.
- Each image frame, specifically each colour component of an image frame, is represented by an array of pixel values. Each pixel value corresponds to the light intensity of the associated colour component over an area associated with a pixel.
- the number of intensity values contained in an array corresponds to the spatial resolution of the image frame. This is also the case where the intensity values are coefficients in the spatial frequency domain, since the inclusion of more values in an array corresponds to the presence of coefficients of a higher order.
- the microprocessor 9 determines a desired exposure for a final image to be formed on the basis of the image frames. This exposure is divided over the image frames. The desired exposure can be determined from user input or automatically on the basis of one or more values obtained from the exposure metering device 16 . Exposure levels for each of the image frames result in settings of the diaphragm 4 , shutter speed and flash intensity. In addition, the microprocessor 9 determines amplification levels for the signals read out from the image-capturing device. These determine the range of values within which the intensity values in the arrays representing the image frames lie. The number of bits with which the intensity values are represented determines the dynamic range of the intensity values.
- the intensity values are represented in eight bits, so that there are 255 possible non-zero values.
- the linear-scale ISO setting is also known as the ASA number.
- the sensitivity of the image-capturing device 5 can be increased by the same factor as the underexposure factor. This results in increased noise levels in the individual frames, which are reduced through the combination processes presented below.
- a first set 18 of arrays of intensity values represents image frames at a relatively low spatial resolution
- a second set 19 of arrays of intensity values represents image frames at a relatively high resolution. Since the spatial resolution is proportional to the number of intensity values in the arrays, it follows that the arrays in the first set contain fewer values than those in the second set 19 . This reduces the processing requirements, which is advantageous, as will become clear.
- the amount of processing is already reduced merely by the division of a sequence of arrays into the first set 18 and second set 19 , so that the fact that the first set represents image frames at a lower resolution than the second set is an advantageous, but optional feature.
- the arrays that share a set all have the same number of elements, i.e. that the image frames they represent each have the same resolution.
- a final array 20 of intensity values representing a combined image is formed on the basis of the arrays in the first and second set 18 , 19 only.
- An object of the method illustrated in FIG. 2 is to scale the intensity values in the arrays of the first set 18 such that the final array 20 contains intensity values that occupy the full dynamic range.
- the method serves to prevent a situation in which all the intensity values in the final array are clipped at the highest of the 255 values afforded by an eight-bit representation.
- a first step 21 one or more arrays of intensity values in the first set 18 of arrays are at least partially analysed.
- the analysis comprises the forming of a histogram of some or all of the intensity values. To reduce the processing effort required to generate a histogram, only one value in every block of sixty-four values could be used.
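The sparse histogram analysis can be sketched as follows (illustrative Python; the one-in-sixty-four subsampling follows the example above, and `sparse_histogram` is a hypothetical name):

```python
def sparse_histogram(values, stride=64, bins=256):
    """Histogram over every stride-th intensity value only, reducing
    the processing effort to roughly 1/stride of a full histogram."""
    hist = [0] * bins
    for v in values[::stride]:
        hist[v] += 1
    return hist

# Only four of 256 consecutive intensity values are counted.
print(sum(sparse_histogram(list(range(256)))))  # → 4
```

The resulting histogram approximates the intensity distribution of the analysed arrays well enough to choose a mapping function, at a fraction of the cost of a full histogram.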
- mapping function is required, which mapping function is determined in a second step 22 .
- the second step 22 is followed by a step 23 in which a look-up table 24 is generated on the basis of the mapping function. For each of 255 intensity values, a scaled value is entered into the look-up table 24 .
- a look-up table allows the mapping to be carried out by the DSP 7 , which is relatively efficient.
- the use of a look-up table makes the methods presented herein quite suitable for implementation in an imaging apparatus, such as the digital camera 1 .
- each intensity value is used as an index into the look-up table 24 to determine its scaled value. It will be appreciated that, by scaling only the intensity values in the arrays forming the first set 18 , a smaller look-up table is required. Moreover, the number of look-up operations is much reduced. As will be seen, the final array 20 can still represent a combined image at a higher resolution, because each intensity value in the final array is based on at least one intensity value in an array comprised in the second set 19 . It is noted that the mapping function is applied directly to the arrays of intensity values in the first set 18 in other embodiments, so that the look-up table 24 is dispensed with.
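The table-driven mapping can be sketched as follows (illustrative; a simple linear mapping is assumed here for concreteness, whereas in the patent the mapping function is determined from the analysis of step 21, and both function names are hypothetical):

```python
def build_lut(scale_factor, bins=256):
    """Precompute the scaled value for every possible 8-bit intensity,
    so that per-pixel mapping becomes a single table look-up."""
    return [min(255, round(v * scale_factor)) for v in range(bins)]

def apply_lut(array, lut):
    """Map a 2-D array of 8-bit intensities through the look-up table."""
    return [[lut[v] for v in row] for row in array]

lut = build_lut(0.25)
print(apply_lut([[0, 128, 200]], lut))  # → [[0, 32, 50]]
```

Because only the (smaller, fewer) arrays of the first set are passed through `apply_lut`, the number of look-up operations stays low, which is the efficiency argument made above.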
- the mapping function used to populate the look-up table 24 maps the intensity values from a first scale within a first range to a second scale occupying a second, smaller range.
- the upper limit of the second scale is determined on the basis of at least two factors.
- a first factor is the extent to which the intensity values of the arrays analysed in the first step 21 exceed a certain threshold value.
- the second factor is based on the number of arrays of intensity values in the second set 19 . More specifically, the threshold value is the maximum value of the dynamic range for encoding the values in the final array 20 , divided by the number of arrays in the first and second sets 18 , 19 .
- the mapping function is chosen to ensure that a substantial proportion of the intensity values in each of the arrays of the set 26 of arrays of scaled intensity values remain below the threshold.
- the second factor in this example is based on the ratio of the number of arrays in the second set 19 to the number of arrays in the first set 18 .
- the upper value of the second scale is obtained by reducing the threshold by an amount corresponding to this ratio.
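One possible reading of how the two factors combine is sketched below (hypothetical: the patent does not give a closed formula, so `second_range_upper` is only one interpretation consistent with the description):

```python
def second_range_upper(max_value, n_first, n_second):
    """Hypothetical reading of the two factors: start from the clipping
    threshold (maximum of the dynamic range divided by the total number
    of arrays) and reduce it in dependence on the ratio of second-set
    to first-set arrays. The patent leaves the exact reduction open."""
    threshold = max_value / (n_first + n_second)
    ratio = n_second / n_first
    return threshold / (1 + ratio)

# E.g. three low-resolution and one high-resolution array on an
# 8-bit scale:
print(second_range_upper(255, 3, 1))  # → 47.8125
```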
- a fixed curve or look-up table is used to determine the scaling in dependence only on the number of arrays of intensity values in the first and second sets 18 , 19 .
- a mapping function could be selected in dependence on the degree of overexposure or, equivalently, the factor by which the sensitivity of the image-capturing device 5 used to capture the arrays of intensity values on which the arrays in the first and second sets 18 , 19 are respectively based has been increased.
- the first step 21 is preceded by a step (not shown), in which the first and second sets 18 , 19 of arrays of intensity values are obtained by obtaining a plurality of arrays of intensity values for representing colour image frames in a first colour space, and applying a transformation to a plurality of arrays of values in a second colour space, wherein, in the first colour space, an image frame is represented by parameter value combinations, each parameter indicating the intensity of one of a plurality of colour components, whereas, in the second colour space, an image frame is represented by parameter value combinations, one parameter of the combination indicating a hue and at least one of the other parameters being indicative of light intensity.
- arrays of intensity values representing image frames in the RGB (Red Green Blue) colour space are transformed to respective arrays of parameter values representing image frames in the HLS (Hue, Lightness, Saturation) colour space.
- the RGB colour space is an additive colour space, wherein the intensity of each of the three colour components is encoded separately. If the entire method depicted in FIG. 2 is carried out in the RGB colour space, then the method would in essence have to be carried out in triplicate.
- the first and second steps 21 , 22 would involve analysis of the three arrays that belong together in the sense that they represent a colour component of the same image frame.
- At least the scaling step 25 involves scaling three arrays of intensity values per image frame.
- in the HLS colour space, an image is represented by the parameter combination of Hue, indicating the relative strengths of the three colour components, Saturation, providing a scale from a grey level to a full colour, and Lightness (also called Luminance), corresponding substantially to the average intensity of the colour components. Only the arrays of Lightness values in the first set 18 are scaled.
- CMYK and YUV colour spaces are alternatives to the RGB colour space.
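The colour-space step can be sketched with the standard-library `colorsys` module (illustrative; pixel values are normalised to the 0-1 range that `colorsys` expects, and only the resulting Lightness plane would subsequently be scaled, while Hue and Saturation pass through unchanged):

```python
import colorsys

def rgb_frame_to_lightness(frame):
    """Extract the Lightness plane of a frame given as a 2-D array of
    (R, G, B) triples on a 0-255 scale, using the RGB-to-HLS
    transformation."""
    return [[colorsys.rgb_to_hls(r / 255, g / 255, b / 255)[1]
             for (r, g, b) in row]
            for row in frame]

# Pure red has Lightness 0.5; black has Lightness 0.0.
print(rgb_frame_to_lightness([[(255, 0, 0), (0, 0, 0)]]))
```

Scaling one Lightness array per frame instead of three colour-component arrays is what makes this embodiment roughly three times cheaper, as argued above.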
- each intensity value in the final array 20 is based on at least one intensity value in an array comprised in the second set 19 of arrays of intensity values. In the embodiment illustrated in FIG. 2 , this is assured by summing corresponding pixel values of each of the arrays in the set 26 of arrays of scaled intensity values
- a set 27 of resolution-adjusted arrays is generated (step 28 ).
- the spatial resolution of the arrays in the set 26 of arrays of scaled intensity values is increased by a multiplication factor.
- An alternative would be to decrease the resolution of the image frames represented by the arrays in the second set 19 .
- One way of increasing the spatial resolution of the image frames represented by the arrays in the set 26 of arrays of scaled intensity values is to interpolate between the intensity values in the arrays of scaled intensity values.
- the final array 20 is obtained by summing (step 29 ) an intensity value from each of the arrays in the set 27 of resolution-adjusted arrays and a value from each of the arrays in the second set 19 . Intensity values corresponding to the same pixel in the scene represented by the image frames are added.
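Steps 28 and 29 can be sketched as follows (illustrative nearest-neighbour upsampling; the patent mentions interpolation as one way of adjusting the resolution, and the function names are hypothetical):

```python
def upsample2x(low):
    """Double a 2-D array in both dimensions by nearest-neighbour
    repetition (one simple way to resolution-adjust the scaled set)."""
    out = []
    for row in low:
        wide = [v for v in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

def sum_arrays(a, b):
    """Pixel-wise sum of two equally sized arrays (the summing step)."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

low = [[10, 20]]                      # scaled, low-resolution array
high = [[1, 2, 3, 4], [5, 6, 7, 8]]  # high-resolution array
print(sum_arrays(upsample2x(low), high))
```

Each value of the final array thus depends on at least one value from the high-resolution set, so the combined image keeps the higher resolution.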
- an additional step is carried out to correct the image frames.
- the correction may be carried out prior to the first step 21 shown in FIG. 2 , so that the arrays of the first and second set 18 , 19 are the result of the correction operation.
- each array in the first and second sets 18 , 19 is based on an array of intensity values obtained by the image-capturing device 5 and corrected in accordance with a motion vector.
- the motion vector describes the motion of the camera 1 between the points in time at which the arrays of intensity values were obtained by the image-capturing device.
- a method that includes calculating a motion vector representing at least a component indicative of relative movement of at least a part of successive image frames in a sequence of image frames, wherein the step of calculating the motion vector includes a step of determining at least a first term in a series expansion representing at least one element of the motion vector, which step includes an estimation process wherein at least the part in each of a plurality of the image frames is repositioned in accordance with the calculated motion vector.
- the estimation process includes calculation of a measure of energy contained in an upper range of the spatial frequency spectrum of the combined image and the step of determining at least the first term includes at least one further iteration of the estimation process to maximise the energy.
- the image frames are aligned using a method known per se by the name of Random Sample Consensus (RANSAC). This method is suitable where there is sufficient light to capture image frames.
- FIG. 3 illustrates a variant of the method shown in FIG. 2 .
- This embodiment is also based on a first set 30 of arrays of intensity values and a second set 31 of arrays of intensity values.
- Each intensity value is a pixel value, corresponding to the light intensity of an associated colour component over an area associated with a pixel.
- the description of the first and second sets 18 , 19 shown in FIG. 2 applies equally to the first and second sets 30 , 31 shown in FIG. 3 .
- this description will assume that the arrays of intensity values in the first set 30 of arrays represent image frames at a lower resolution than the arrays in the second set 31 .
- a first step 32 in the method of FIG. 3 corresponds to the first step 21 of the method shown in FIG. 2 .
- a mapping function is again determined in order to map the intensity values of the arrays in the first set 30 from a scale occupying a first range to a second scale occupying a second range.
- the mapping function is determined on the basis of at least parts of some or all of the arrays in the first set 30 . It is determined in substantially the same way as in the embodiment of FIG. 2 .
- a look-up table 34 is created in a step 35 following the step 33 of determining the mapping function.
- the look-up table 34 is used (step 36 ) to generate a set 37 of arrays of scaled intensity values, in which each array is based on a corresponding array in the first set 30 of arrays of intensity values.
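The look-up operation of step 36 amounts to one table index per intensity value. A minimal numpy sketch, assuming 8-bit values and a hypothetical linear mapping into a smaller range:

```python
import numpy as np

# hypothetical mapping: compress the full 8-bit scale [0, 255] into
# [0, 127], leaving headroom so a later summation cannot clip
lut = np.round(np.arange(256) * 127 / 255).astype(np.uint8)

frame = np.array([[0, 64], [128, 255]], dtype=np.uint8)
scaled = lut[frame]        # one look-up per pixel: [[0, 32], [64, 127]]
```

Because the table has only 256 entries, the mapping function is evaluated once per possible value rather than once per pixel.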
- the variant of FIG. 3 differs from the one shown in FIG. 2 , in that a transformation to the spatial frequency domain is carried out in another step 38 subsequent to the scaling step 36 .
- This transformation step 38 is implemented using a Discrete Cosine Transform (DCT) in the illustrated example.
- the set 37 of arrays of scaled intensity values is the basis for a first set 39 of arrays of DCT coefficients.
- the second set 31 of arrays of intensity values is the basis for a second set 40 of arrays of DCT coefficients.
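The DCT of step 38 is the separable type-II transform that JPEG also applies per block. As a reference (non-optimised) illustration, assuming an orthonormal normalisation:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] *= 1 / np.sqrt(2)
    return m * np.sqrt(2 / n)

def dct2(block):
    """Separable 2-D DCT-II of a square block."""
    m = dct_matrix(block.shape[0])
    return m @ block @ m.T

block = np.full((8, 8), 100.0)   # flat block: only the DC term survives
coeffs = dct2(block)
```

For a constant block, every coefficient except the zero-order (DC) one vanishes, which is why low-order coefficients carry the coarse image content referred to below.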
- the DCT transform is part of the JPEG (Joint Photographic Experts Group) compression algorithm, and it is advantageous to implement such an algorithm in a special-purpose processor, such as the DSP 7 or co-processor 11 .
- a transformation from the RGB colour space to the HLS colour space is also part of the JPEG algorithm, so that this feature is also applied to advantage in the embodiment illustrated in FIG. 3 .
- the transformation between colour spaces has been detailed above.
- a summation step 41 is carried out in the spatial frequency domain to obtain a final array 42 of DCT coefficients.
- the final array 42 forms an array of intensity values representing a combined image, since each coefficient is indicative of the intensity level of a spatial frequency component, and the set of spatial frequency components contains all the information necessary to render the combined image.
- the low-frequency coefficients of the final array 42 are obtained by summing the low-frequency coefficients of each array in the first set 39 of arrays of DCT coefficients and the low-frequency coefficients of each array in the second set 40 of arrays of DCT coefficients.
- the high-frequency coefficients are obtained by summing the high-frequency coefficients of each array in the second set 40 of arrays of DCT coefficients.
- the summation step 41 is preferably implemented so as to take account of the differing number of addends used to obtain each coefficient in the final array 42 .
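The merge of step 41 can be sketched as follows: low-frequency positions receive addends from both sets, high-frequency positions only from the second set, and each position is divided by its own addend count, as the text suggests. The array shapes and the square low-frequency region are assumptions for illustration.

```python
import numpy as np

def merge_dct(first_set, second_set, k):
    """Average DCT arrays: coefficients of order < k come from both
    sets, higher-order ones only from the (high-resolution) second set."""
    n = second_set[0].shape[0]
    total = np.sum(second_set, axis=0, dtype=float)
    counts = np.full((n, n), len(second_set), dtype=float)
    for arr in first_set:              # low-res arrays: k x k corner
        total[:k, :k] += arr
        counts[:k, :k] += 1
    return total / counts              # per-coefficient addend count

first = [np.ones((2, 2)), np.ones((2, 2))]    # 2 low-resolution arrays
second = [np.ones((4, 4)) * 3.0]              # 1 high-resolution array
final = merge_dct(first, second, k=2)
```

Without the per-coefficient division, the low-frequency region would be systematically brighter than the high-frequency region simply because more addends contribute to it.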
- An Inverse Discrete Cosine Transformation (IDCT) 43 results in an array 44 of intensity values in the spatial domain. Both the transformation step 38 and the IDCT 43 are advantageously carried out by the co-processor 11 in the digital camera 1 .
- FIG. 4 illustrates an embodiment for simplifying the determination of the mapping function from the first scale to the second scale, as well as simplifying the scaling step. It operates on the basis of a first set 45 of arrays of pixel values and a second set 46 of arrays of pixel values.
- a first sum array 47 is formed in a first step 48 .
- each intensity value in the first sum array is obtained by summing the corresponding intensity values from each of the arrays in the first set 45 . If the resolutions are not the same, interpolation may be carried out first, or the arrays representing higher-resolution image frames may be reduced to correspond to a common resolution.
- the first sum array 47 is also suitable for representing an image frame, albeit one based on a plurality of preceding image frames, and forms a set of arrays consisting of one member. In alternative embodiments, a plurality of sum arrays could be formed, each based on a subset of arrays in the first set 45 , with the plurality of sum arrays forming a first set in the terminology used herein.
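The point of the first sum array is that summing N noisy captures of the same scene averages out random noise by roughly a factor of sqrt(N). A small numpy demonstration with synthetic frames (signal level and noise figures are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
signal = np.full((64, 64), 50.0)

# eight noisy captures of the same scene
frames = [signal + rng.normal(0.0, 4.0, signal.shape) for _ in range(8)]

sum_array = np.sum(frames, axis=0)    # the "first sum array"

# noise std of the per-frame average drops by roughly sqrt(8)
noise_single = np.std(frames[0] - signal)
noise_summed = np.std(sum_array / 8 - signal)
```

Note that the raw sum reaches eight times the signal level, which is exactly why the subsequent scaling step is needed before the values re-enter a fixed dynamic range.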
- the first sum array 47 of intensity values is analysed (step 49 ) to determine a mapping function for mapping a first scale occupying a first range to a second scale occupying a second range.
- the analysis advantageously comprises the forming of a histogram of some or all of the intensity values. Again, this may be carried out using one value per block of intensity values within the first sum array.
- because only the first sum array 47 is analysed, the embodiment of FIG. 4 allows for a more involved analysis as compared to embodiments in which a number of arrays of intensity values have to be analysed.
- a mapping function is required.
- a look-up table 50 is generated (step 51 ) on the basis of the mapping function. For each of, for example, 255 intensity values, a scaled value is entered into the look-up table 50 .
- the arrays in the first set 45 of intensity values represent image frames at a lower resolution than the arrays in the second set 46 of arrays. Even if this is not the case, it is still feasible to generate a first sum array 47 representing a combined image frame at a lower resolution than that at which image frames are represented by the arrays in the second set 46 of arrays of intensity values. Thus, the number of look-up operations is kept relatively small.
- the mapping function used to populate the look-up table 50 maps the intensity values from a first scale within a first range to a second scale occupying a second, smaller range.
- the upper limit of the second scale is again determined on the basis of at least two factors.
- a first factor is the extent to which the intensity values of the first sum array 47 exceed a certain threshold value.
- the second factor is based on the number of arrays of intensity values in the second set 46 . More specifically, the threshold value is the maximum value of the dynamic range for encoding the values in the first sum array 47 .
- the mapping function is chosen to ensure that a substantial proportion of the intensity values in the scaled first sum array 53 remain below the threshold.
- the second factor in this example is based on the ratio of the number of arrays in the second set 46 to the number of arrays in the first set 45 .
- the upper value of the second scale is obtained by reducing the threshold by an amount corresponding to this ratio.
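The patent does not state an exact formula for this reduction, so the following is only one plausible reading: the upper limit of the second scale reserves a share of the dynamic range proportional to the number of second-set arrays still to be added. The function name and rule are hypothetical.

```python
# Hypothetical headroom rule: if n1 low-resolution frames went into the
# sum array and n2 high-resolution frames are still to be added, reserve
# a share of the dynamic range proportional to n2 / (n1 + n2).
def second_scale_upper(threshold, n1, n2):
    return threshold * n1 / (n1 + n2)

# 8-bit pipeline, 2 summed low-res frames, 2 high-res frames to add
upper = second_scale_upper(255, 2, 2)   # -> 127.5
```

Any rule of this shape satisfies the stated constraints: the more arrays remain in the second set, the more the threshold is reduced.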
- Scaling only the first sum array 47 reduces even further the number of look-up operations. Nevertheless, it would be possible to analyse the first sum array 47 to derive a mapping function for scaling the individual arrays in the first set 45 of arrays, which are then added after having been scaled. Alternatively, it would be possible to analyse the individual frames in the first set 45 of arrays of intensity values, in order to derive a mapping function for scaling the first sum array 47 .
- the effect of scaling the first sum array 47 is to reduce the amount of noise that propagates to a final array 54 of intensity values representing a combined image.
- the final array 54 of intensity values represents a combined image at a higher resolution than the scaled first sum array 53 . For this reason, the latter is processed (step 55 ) to obtain a resolution-adjusted scaled first sum array 56 .
- interpolation is a method by which the intensity values in the resolution-adjusted scaled first sum array 56 can be obtained.
- each intensity value in the final array 54 of intensity values is obtained by summing an intensity value from the resolution-adjusted scaled first sum array 56 and the corresponding respective intensity values from each of the arrays in the second set 46 of arrays of intensity values. It will be apparent that the final array 54 is thus formed of intensity values that are each based on at least one intensity value in an array in the second set 46 of arrays of intensity values, to achieve a high-resolution representation of the combined image.
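The resolution adjustment (step 55) and the final summation can be sketched together. Nearest-neighbour upsampling is used here as the simplest stand-in for the interpolation mentioned above; all values are arbitrary.

```python
import numpy as np

def upsample(arr, factor):
    """Nearest-neighbour resolution adjustment (simplest stand-in
    for the interpolation of step 55)."""
    return np.kron(arr, np.ones((factor, factor)))

scaled_sum = np.array([[10.0, 20.0],
                       [30.0, 40.0]])        # scaled first sum array (2x2)
high_res = [np.full((4, 4), 5.0),
            np.full((4, 4), 7.0)]            # second-set arrays (4x4)

adjusted = upsample(scaled_sum, 2)           # 4x4 version
final = adjusted + np.sum(high_res, axis=0)  # final array of intensity values
```

Every element of `final` draws on at least one second-set value, matching the requirement that the high-resolution detail is not sacrificed.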
- FIG. 5 shows a variant in which calculation is largely carried out in the spatial frequency domain, and which does not necessarily require interpolation or another process for increasing the resolution at which an image frame is represented.
- the variant illustrated in FIG. 5 commences with a DCT operation 58 .
- the DCT operation 58 is used to obtain a first set 59 of arrays of DCT coefficients for representing a set of corresponding image frames at a first resolution.
- This first set 59 is based on a set 60 of arrays of pixel values encoding the image frames in the spatial domain as opposed to the spatial frequency domain.
- a second set 61 of arrays of DCT coefficients is based on a second set 62 of arrays of pixel values encoding image frames in the spatial domain at a second resolution.
- the second resolution is higher than the first resolution.
- in a subsequent step 63 , the arrays in the first set 59 of arrays of DCT coefficients are processed to obtain a first sum array 64 .
- Each DCT coefficient in the first sum array 64 is obtained by summing the corresponding DCT coefficients in the respective arrays of the first set 59 .
- the first sum array 64 is analysed to determine (step 65 ) a mapping function mapping the DCT coefficients in the first sum array 64 from a first scale occupying a first range to a second scale occupying a second, preferably smaller, range.
- This step 65 is carried out using any of the methods outlined above with regard to the corresponding steps 22 , 33 , 49 in the methods of FIGS. 2-4 .
- a look-up table 67 is created on the basis of the mapping function.
- the mapping function is based at least partly on the number of arrays in the second set 61 of arrays of DCT coefficients. This is done because only the DCT coefficients in the first sum array 64 are mapped from the first scale to the second scale (step 68 ), whereas those in the arrays forming the second set 61 of arrays of DCT coefficients are not.
- the result of the scaling carried out in this step 68 is a scaled first sum array 69 .
- the scaled first sum array 69 and the arrays in the second set 61 of arrays of DCT coefficients are summed in a step 70 similar to the summation step 41 in the embodiment illustrated in FIG. 3 .
- a final array 71 of DCT coefficients is obtained.
- the lower-order DCT coefficients in the final array 71 of DCT coefficients are each obtained by summing the lower-order coefficients of the scaled first sum array 69 , which is based on the first sum array 64 , and the corresponding lower-order coefficients of the arrays of the second set 61 of arrays of DCT coefficients.
- the higher-order DCT coefficients in the final array 71 are obtained by summing the corresponding higher-order coefficients in the arrays comprised in the second set 61 of arrays of DCT coefficients only.
- the final array 71 of DCT coefficients is suitable for representing the combined image at a relatively high resolution, at least higher than that of the image frames represented by the first sum array 64 .
- An inverse DCT operation 72 transforms the final array 71 of DCT coefficients into a final array 73 of pixel values, each corresponding to a light intensity over an area occupied by a pixel in the combined image.
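With an orthonormal forward DCT, the inverse operation 72 is simply the transpose, so a forward/inverse round trip reproduces the pixel block up to floating-point error. A self-contained sketch:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] *= 1 / np.sqrt(2)
    return m * np.sqrt(2 / n)

def idct2(coeffs):
    """Inverse of the orthonormal 2-D DCT-II: m.T @ C @ m."""
    m = dct_matrix(coeffs.shape[0])
    return m.T @ coeffs @ m

# round trip: forward then inverse DCT reproduces the pixel block
block = np.arange(16, dtype=float).reshape(4, 4)
m = dct_matrix(4)
coeffs = m @ block @ m.T
restored = idct2(coeffs)
```

This invertibility is what allows the entire combination to be carried out in the frequency domain and only converted back to pixel values at the very end.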
- the invention is not limited to the described embodiments, which may be varied within the scope of the accompanying claims.
- the methods outlined herein are suitable for partial or complete execution by a type of image processing system other than the digital camera 1 .
- a general-purpose personal computer or work station may carry out the method on the basis of a first set of arrays of pixel values and a second set of arrays of pixel values in a sequence of arrays captured in rapid succession by the digital camera 1 and stored in the storage device 8 . Processing of the arrays for relative alignment of at least the region of overlap between the image frames represented by them is an advantageous feature of each embodiment.
Abstract
Image fusion based on a modified method of frame averaging for noise removal, by partly averaging over images having a smaller resolution than the desired resolution of the de-noised image. The set of images which are summed for averaging out noise consists of two subsets. The first set of images has a resolution (in terms of number of pixels) smaller than the resolution of the images in the second set. The resolution of the images in the second set is the resolution of the "high-definition" de-noised output image. The lower-resolution images are up-sampled by scaling their pixel numbers to that of the desired output image. The gradation of the first-set images is also adapted to avoid intensity saturation (flare) due to summation. Image fusing is also done in Fourier space, using the high-frequency components from the higher-resolution images and the lower ones from the lower-resolution images.
Description
- The invention relates to a method of forming a combined image based on a plurality of image frames.
- The invention also relates to a system for processing arrays of intensity values, each array being suitable for representing an image frame at a resolution corresponding to the number of intensity values in the array.
- The invention also relates to an imaging apparatus, e.g. a digital camera.
- The invention also relates to a computer program.
- International patent application PCT/EP2005/052121 was filed before and published under number WO____/______ after the date of filing of the present application, and is thus comprised in the state of the art according to Art. 54(3) EPC only. It describes a method of forming a combined final image from a plurality of image frames, including the steps of obtaining a first and at least one further array of pixel values, each array of intensity values encoding light intensity levels at each of a respective number of pixel positions in the respective image frame, the number determining the spatial resolution of the image frame concerned. A set of derived arrays of intensity values is generated, each derived array being based on a respective one of the obtained arrays of intensity values and encoding light intensity levels at each of a common number of pixel positions in at least a region of overlap of the respective image frames. An array of combined intensity values is generated. Each element in that array is based on a sum of intensity values represented by a corresponding element in each of the respective derived arrays of intensity values. An array of intensity values encoding the combined final image is provided, the array being based on the array of combined intensity values. A first array of intensity values encoding at least the region of overlap at a higher resolution than the further arrays of intensity values is obtained. An array of intensity values encoding at least the region of overlap in the combined final image at a higher spatial resolution than the further arrays of intensity values is provided. The array of intensity values encoding the combined final image is based on a sufficient number of intensity values in the first array of intensity values to encode the region of overlap at a higher resolution than the further arrays of intensity values.
- Forming a combined image by adding a plurality of image frames at least partially depicting the same region has the effect that the region of overlap has a higher Signal-to-Noise Ratio (SNR) in the combined image than in the individual image frames. However, in an image processing system, intensity values assume one of a range of discrete values, the number of which is determined by the number of bits by which the values are represented. This in turn is determined by the dynamic range allowed by the format in which the combined image is displayed, e.g. the JPEG standard or the resolution of a computer display. If the sum of the intensity values corresponding to a pixel in the respective image frames exceeds the maximum allowed by the range of discrete values, the sum value is clipped to stay within the range. If this happens for many intensity values in the array of intensity values representing the combined image, the combined image appears over-exposed.
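The over-exposure mechanism described above is easy to reproduce numerically: summing four 8-bit frames drives bright pixels past 255, and clipping flattens them to white. A small illustration with arbitrary values:

```python
import numpy as np

# four identical 8-bit frames of a small region (values are arbitrary)
frames = [np.array([[100, 200], [30, 90]], dtype=np.int64)] * 4

raw_sum = np.sum(frames, axis=0)      # [[400, 800], [120, 360]]
clipped = np.clip(raw_sum, 0, 255)    # three of four pixels saturate at 255
```

Only the darkest pixel survives unclipped; everything else is indistinguishably white, which is the over-exposed appearance the invention sets out to avoid.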
- It is an object of the invention to provide a method, system, imaging apparatus and computer program of the types indicated above, for providing in an efficient manner a combined image that has a relatively good SNR and little or no over-exposure.
- This object is achieved according to the invention by providing a method of forming a combined image based on a plurality of image frames, including:
- obtaining a first set of at least one array of intensity values for representing an image frame at a resolution corresponding to the number of intensity values in the array, and
- obtaining a second set of at least one array of intensity values for representing an image frame at a resolution corresponding to the number of intensity values in the array,
- wherein the combined image is represented by a final array of intensity values,
- wherein at least some of the intensity values in the final array are each obtained by executing a step of summing an intensity value from each of at least one array of intensity values based on at least one array of intensity values in only the first set and an intensity value from each of at least one array of intensity values based on at least one array of intensity values in only the second set, wherein, prior to executing the summing step, only the intensity values of the arrays in the first set are mapped from a scale within a first range to a scale within a second range.
- Because at least some of the intensity values in the final array are each obtained by executing a step of summing an intensity value from each of at least one array of intensity values based on at least two arrays of intensity values, the SNR is improved. Because the intensity values of the arrays in the first set are mapped from a scale within a first range to a scale within a second range prior to executing the summing step, it is possible to use the full dynamic range allowed by the representation of the intensity values without going beyond the end of the scale on which they are represented. For this purpose, the second range is different from the first range. Because only the intensity values of the arrays in the first set are mapped, the method is relatively efficient.
- An embodiment of the invention includes obtaining a first set and a second set arranged such that the image frames represented by the arrays in the first set are represented at lower resolutions than the image frames represented by the arrays in the second set.
- This has the effect of increased efficiency, as relatively few intensity values are mapped from the scale within the first range to the scale within the second range.
- In an embodiment, the at least one array of intensity values based on at least one array of intensity values in only the first set contains coefficients in the spatial frequency domain, the at least one array of intensity values based on at least one array of intensity values in only the second set contains coefficients in the spatial frequency domain and the intensity values in the final array are formed by coefficients in the spatial frequency domain,
- wherein at least one lower-order coefficient in the final array is obtained by summing an intensity value from each of the at least one array of intensity values based on at least one array of intensity values in only the first set and an intensity value from each of at least one array of intensity values based on at least one array of intensity values in only the second set,
- wherein at least one higher-order coefficient in the final array is obtained on the basis of only arrays of intensity values based on the second set.
- This is a relatively efficient way of obtaining a combined image represented at a relatively high resolution on the basis of a first set of arrays representing image frames at a lower resolution and a second set of arrays representing image frames at a higher resolution. Interpolation or similar techniques to increase the resolution of the image frames represented by the arrays of the first set are not required. Instead, the information in the higher-resolution image frames represented by the second set is used to generate a relatively high-resolution combined image, whereas summation of the lower-order coefficients serves to decrease the perceptible noise in the image.
- In an embodiment, at least some of the arrays of intensity values in the first and second set are obtained by reading out measurement values from an image-capturing device comprising an array of light-sensitive cells, wherein each intensity value in the final array is based on at least one intensity value in an array comprised in the second set.
- Because the arrays in the first set represent image frames at a lower resolution they contain fewer intensity values. Thus, the time to read out the measurement values is reduced. This allows the image frames represented by the first and second sets of arrays to be read out in quick succession, decreasing the effect of camera shake or movement in the scene that is captured. Because each intensity value in the final array is based on at least one intensity value in an array comprised in the second set, the effect of decreased blur due to movement is not obtained at the expense of the resolution of the combined image.
- An embodiment includes determining an upper limit of the second range at least partly in dependence on the number of arrays of intensity values in the second set.
- Thus, the risk of an over-exposed combined image is reduced.
- In an embodiment, at least one of the arrays of intensity values in the first set is obtained by obtaining a plurality of arrays of intensity values for representing an image frame at a resolution corresponding to the number of intensity values in the array, and by summing an intensity value from each of the plurality of arrays to obtain a corresponding intensity value in the at least one array in the first set.
- Thus, an array representing an image that is the sum of a plurality of image frames is scaled. This has the effect of decreasing the amount of scaling that has to be done, making the method more efficient. In addition, random noise over the plurality of arrays that are summed to form an array in the first set is filtered out by means of the addition.
- In an embodiment, at least one of the arrays of intensity values in the first set is obtained by obtaining a plurality of arrays of intensity values for representing an image frame at a resolution corresponding to the number of intensity values in the array, wherein the method further includes
- summing an intensity value from each of the obtained plurality of arrays to obtain a corresponding intensity value in an intermediate combined array, and
- determining an upper limit of the second range at least partly in dependence on at least one intensity value in the intermediate combined array.
- Thus, the appropriate extent of the second range can be determined relatively accurately, since it is based on an array of intensity values that is quite representative of the final array. This embodiment is also relatively efficient, since it does not require an analysis of each of a plurality of arrays in the first set.
- In an embodiment, at least the arrays of intensity values in the first set are obtained by obtaining a plurality of arrays of intensity values for representing colour image frames in a first colour space, and applying a transformation to a plurality of arrays of values in a second colour space, wherein, in the first colour space, an image frame is represented by parameter value combinations, each parameter indicating the intensity of one of a plurality of colour components, whereas, in the second colour space, an image frame is represented by parameter value combinations, one parameter of the combination indicating a hue and at least one of the other parameters being indicative of light intensity.
- This embodiment has the advantage that the mapping from the first scale to the second scale need be carried out on fewer arrays of intensity values. Instead of separate arrays of intensity values for each colour component, or arrays of intensity value combinations, only the array or arrays of parameter values indicative of light intensity in the second colour space, or arrays derived based thereon, need be processed. The colour information is contained in an array of parameter values indicating hues, which need not be scaled to prevent saturation of the combined image.
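Python's standard colorsys module implements the HLS model referred to here (component values in [0, 1]). Scaling only the lightness channel leaves the hue untouched, which is the saving this embodiment describes; for this example pixel (lightness below 0.5), halving the lightness exactly halves each RGB component.

```python
import colorsys

r, g, b = 0.8, 0.4, 0.2                  # one pixel, RGB in [0, 1]
h, l, s = colorsys.rgb_to_hls(r, g, b)   # hue, lightness, saturation

# scale only the intensity-like channel before summation
l_scaled = l * 0.5
r2, g2, b2 = colorsys.hls_to_rgb(h, l_scaled, s)
```

Only one channel per pixel passes through the mapping, instead of three, and the hue channel, which carries the colour information, needs no protection against saturation.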
- According to another aspect, there is provided in accordance with the invention a system for processing arrays of intensity values, each array being suitable for representing an image frame at a resolution corresponding to the number of intensity values in the array,
- wherein the system is configured to retrieve a first set of at least one array of intensity values and a second set of at least one array of intensity values, the arrays in the first set and arrays in the second set representing respective image frames, and to form a final array of intensity values representing a combined image,
- wherein the system is configured to obtain each of at least some of the intensity values in the final array by executing a step of summing an intensity value from each of at least one array of intensity values based on at least one array of intensity values in only the first set and an intensity value from each of at least one array of intensity values based on at least one array of intensity values in only the second set, and
- wherein the system is configured to map, prior to executing the summing step, only the intensity values of the arrays in the first set from a scale within a first range to a scale within a second range.
- According to another aspect, there is provided in accordance with the invention an imaging apparatus, e.g. a digital camera, comprising a processor and at least one storage device for storing a plurality of arrays of intensity values, wherein the imaging apparatus is configured to execute a method according to the invention.
- The imaging apparatus makes relatively efficient use of digital signal processing capacity. In particular, because not all arrays of pixel values are scaled, the amount of values to be retrieved from a look-up table implementing the mapping function is relatively low.
- According to another aspect of the invention, there is provided a computer program, including a set of instructions capable, when incorporated in a machine-readable medium, of causing a system having information processing capabilities to perform a method according to the invention.
- The computer program can be run on a general-purpose computer for post-processing of captured images, or it can be provided in the form of firmware for an image-capturing device such as a digital camera.
- The invention will be explained in further detail with reference to the accompanying drawings, in which
-
FIG. 1 illustrates schematically a digital camera equipped to implement a method of forming a combined image; -
FIG. 2 illustrates schematically a first embodiment of a method of forming a combined image; -
FIG. 3 illustrates schematically a second embodiment of a method of forming a combined image; -
FIG. 4 illustrates schematically a third embodiment of a method of forming a combined image; and -
FIG. 5 illustrates schematically a fourth embodiment of a method of forming a combined image. -
FIG. 1 illustrates some components of adigital camera 1 as an example of an imaging apparatus adapted for implementing the methods described below. Other examples of suitable imaging apparatus include scanners and photocopying apparatus. Because the methods of forming a combined image require relatively little processing capacity, it is advantageous to apply them in thedigital camera 1. - The
digital camera 1 includes alens system 2 for focussing on one or more objects in a scene that is to be represented by a combined image. When ashutter 3 is opened, the scene is projected through an aperture in adiaphragm 4 onto a photosensitive area of an image-capturingdevice 5. Instead of theshutter 3, an electronic shutter implemented by suitable control of the image-capturingdevice 5 could be used. The shutter time is controllable, as is the diameter of the aperture. The image-capturingdevice 5 can be a device implemented in Complementary Metal-Oxide Semiconductor (CMOS) technology, or a Charge-Coupled Device (CCD) sensor, for example. The photosensitive area of the image-capturingdevice 5 is divided into areas occupied by pixel cells. Each pixel cell includes a device for generating a signal indicative of the intensity of light to which the area that the pixel cell occupies is exposed. An integral of the signal generated by a device is formed during exposure, for example by accumulation of photocurrent in a capacitor. Subsequent to exposure for the duration of an exposure time interval, the values of the integrals of the generated signals are read out row by row. - The (analogue) values that are read out are provided to an Analogue-to-Digital (A/D-)
converter 6. The A/D converter samples and quantises the signals received from the image-capturingdevice 5. This involves recording the intensity values on a scale with discrete levels, the number of which is determined by the number of bits of resolution of the digital words provided as output by the A/D converter 6. Thus, the A/D-converter 6 provides as output an array of intensity values recorded on a scale occupying a first range. Each intensity value is associated with a particular pixel position in an image frame, corresponding to a photosensitive cell or a plurality of adjacent photosensitive cells. In the latter case, the values read out from the image-capturingdevice 5 are preferably obtained by “binning” the values corresponding to a plurality of adjacent photosensitive cells. The areas to which the “binned” values correspond may overlap. - Each exposure of the image-capturing
device 5 thus results in an array of intensity values representing an image frame. As will be explained in more detail below, the intensity values of one or more arrays may be mapped to a different scale occupying a second range by a Digital Signal Processor (DSP) 7. In certain embodiments, theDSP 7 is also suitable for performing such operations as interpolation between pixel values and optionally compression of the image. It may also carry out a transformation of the intensity values to the spatial frequency domain, such as a Direct Cosine Transform (DCT). - Arrays of intensity values are stored in a
storage device 8. The storage device can be any usual type of storage device, e.g. built-in flash memory, replaceable flash memory modules, an optical disk drive or a magnetic disk drive. - Capturing and processing of images is carried out under control of a
microprocessor 9, which issues commands over abus 10. Themicroprocessor 9 is assisted by a co-processor 11 in the illustrated embodiment. The co-processor 11 is preferably a digital signal processor for performing image compression, for example in accordance with the JPEG standard. Themicroprocessor 9 comprises a volatile memory and has access to instructions stored in Read-Only Memory (ROM)module 12. The instructions provide thedigital camera 1 with the capability to perform a method of forming a combined image by adding a plurality of captured image frames, which method is carried out under the control of themicroprocessor 9. - Other components connected to the
bus 10 include aninput interface module 13 for receiving user commands, and anoutput interface module 14 for returning status information. In the illustrated embodiment, amotion sensor 15 is present for sensing and measuring movement of thedigital camera 1. In other embodiments, a series of image frames captured in rapid succession is analysed to determine the amount and/or direction of movement of thedigital camera 1. In addition, thedigital camera 1 comprises anexposure metering device 16 and aflash driver 17 for directing the operation of a flash (not shown). - In use, a user issues a command to form a single image of a scene, which is passed on to the
microprocessor 9 through the input interface module 13 and the bus 10. In response, the microprocessor 9 controls the digital camera 1 such that a plurality of underexposed image frames, or image frames with a high ISO setting, are captured. A high ISO setting means that the sensitivity of the image-capturing device 5, calibrated along the linear film speed scale according to international standard ISO 5800:1987, is set to a high level. The captured images represent respective scenes that overlap at least partially. Each image frame, specifically each colour component of an image frame, is represented by an array of pixel values. Each pixel value corresponds to the light intensity of the associated colour component over an area associated with a pixel. Given that each area associated with a pixel corresponds to a part of the area of the image-capturing device 5, which is constant, the number of intensity values contained in an array corresponds to the spatial resolution of the image frame. This is also the case where the intensity values are coefficients in the spatial frequency domain, since the inclusion of more values in an array corresponds to the presence of coefficients of a higher order. - To obtain the sequence of individually underexposed image frames, the
microprocessor 9 determines a desired exposure for a final image to be formed on the basis of the image frames. This exposure is divided over the image frames. The desired exposure can be determined from user input or automatically on the basis of one or more values obtained from the exposure metering device 16. Exposure levels for each of the image frames result in settings of the diaphragm 4, shutter speed and flash intensity. In addition, the microprocessor 9 determines amplification levels for the signals read out from the image-capturing device. These determine the range of values within which the intensity values in the arrays representing the image frames lie. The number of bits with which the intensity values are represented determines the dynamic range of the intensity values. In the example, it will be assumed that the intensity values are represented in eight bits, so that there are 255 possible non-zero values. Instead of underexposing the image frames, the linear-scale ISO setting (also known as the ASA number) of the image-capturing device 5 can be increased by the same factor as the underexposure factor. This results in increased noise levels in the individual frames, which are reduced through the combination processes presented below. - In the embodiments described herein, a distinction is made between a first set of arrays of intensity values representing associated respective image frames and a second set of arrays of intensity values representing associated respective image frames. The distinction is made on the basis of how the arrays are processed subsequent to capturing of the image frames.
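The division of the desired exposure over the frames, and the alternative of raising the ISO setting by the underexposure factor, can be sketched as follows. This is a minimal pure-Python illustration; the helper names are hypothetical and not part of the described camera firmware:

```python
def split_exposure(total_exposure_s, n_frames):
    # Divide the desired total exposure time evenly over a number of
    # individually underexposed image frames.
    return total_exposure_s / n_frames

def equivalent_iso(base_iso, underexposure_factor):
    # Alternative described above: raise the linear-scale ISO (ASA)
    # setting by the same factor instead of underexposing each frame.
    return base_iso * underexposure_factor

per_frame = split_exposure(1 / 15, 8)  # 1/120 s for each of 8 frames
iso = equivalent_iso(100, 8)           # ISO 800 instead of ISO 100
```

Either route yields individually noisy or dark frames whose combination restores the intended exposure.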
- In a first embodiment, depicted in
FIG. 2, a first set 18 of arrays of intensity values represents image frames at a relatively low spatial resolution, whereas a second set 19 of arrays of intensity values represents image frames at a relatively high resolution. Since the spatial resolution is proportional to the number of intensity values in the arrays, it follows that the arrays in the first set 18 contain fewer values than those in the second set 19. This reduces the processing requirements, which is advantageous, as will become clear. - It is noted that the amount of processing is already reduced merely by the division of a sequence of arrays into the
first set 18 and second set 19, so that the fact that the first set represents image frames at a lower resolution than the second set is an advantageous, but optional, feature. Furthermore, it is not required, but efficient in terms of processing, that the arrays that share a set all have the same number of elements, i.e. that the image frames they represent each have the same resolution. In the illustrated embodiment, a final array 20 of intensity values representing a combined image is formed on the basis of the arrays in the first and second sets 18, 19. - An object of the method illustrated in
FIG. 2 is to scale the intensity values in the arrays of the first set 18 such that the final array 20 contains intensity values that occupy the full dynamic range. The method serves to prevent a situation in which all the intensity values in the final array are clipped at the highest of the 255 values afforded by an eight-bit representation. - In a
first step 21, one or more arrays of intensity values in the first set 18 of arrays are at least partially analysed. In one embodiment, the analysis comprises the forming of a histogram of some or all of the intensity values. To reduce the processing effort required to generate a histogram, only one value in every block of sixty-four values could be used. - If a significant number of intensity values lies above a threshold value, then a mapping function is required, which mapping function is determined in a
second step 22. The second step 22 is followed by a step 23 in which a look-up table 24 is generated on the basis of the mapping function. For each of the 255 possible non-zero intensity values, a scaled value is entered into the look-up table 24. Using a look-up table allows the mapping to be carried out by the DSP 7, which is relatively efficient. Thus, the use of a look-up table makes the methods presented herein quite suitable for implementation in an imaging apparatus, such as the digital camera 1. - Only the arrays of intensity values in the
first set 18 are mapped (step 25) to arrays of scaled intensity values in a set 26. Each intensity value is used as an index into the look-up table 24 to determine its scaled value. It will be appreciated that, by scaling only the intensity values in the arrays forming the first set 18, a smaller look-up table is required. Moreover, the number of look-up operations is much reduced. As will be seen, the final array 20 can still represent a combined image at a higher resolution, because each intensity value in the final array is based on at least one intensity value in an array comprised in the second set 19. It is noted that the mapping function is applied directly to the arrays of intensity values in the first set 18 in other embodiments, so that the look-up table 24 is dispensed with. - The mapping function used to populate the look-up table 24 maps the intensity values from a first scale within a first range to a second scale occupying a second, smaller range. In one embodiment, the upper limit of the second scale is determined on the basis of at least two factors. A first factor is the extent to which the intensity values of the arrays analysed in the
first step 21 exceed a certain threshold value. The second factor is based on the number of arrays of intensity values in the second set 19. More specifically, the threshold value is the maximum value of the dynamic range for encoding the values in the final array 20, divided by the number of arrays in the first and second sets 18, 19. The mapping function is chosen to ensure that a substantial proportion of the intensity values in the set 26 of arrays of scaled intensity values remain below the threshold. The second factor in this example is based on the ratio of the number of arrays in the second set 19 to the number of arrays in the first set 18. The upper value of the second scale is obtained by reducing the threshold by an amount corresponding to this ratio. Thus, the fact that only the arrays in the first set 18 of the first and second sets 18, 19 are scaled is taken into account. - In an embodiment that is more efficient in its implementation, a fixed curve or look-up table is used to determine the scaling in dependence only on the number of arrays of intensity values in the first and
second sets 18, 19, taking into account the characteristics of the image-capturing device 5 used to capture the arrays of intensity values on which the arrays in the first and second sets 18, 19 are based. - In an advantageous embodiment, the
first step 21 is preceded by a step (not shown), in which the first and second sets 18, 19 of arrays are converted from the RGB colour space to the HLS colour space. If the method of FIG. 2 is carried out in the RGB colour space, then the method would in essence have to be carried out in triplicate. The first and second steps 21, 22 would have to be carried out for each colour component, and the scaling step 25 involves scaling three arrays of intensity values per image frame. In the HLS colour space, an image is represented by the parameter combination of Hue, indicating the relative strengths of three colour components, Saturation, providing a scale from a grey level to a full colour, and Lightness (also called Luminance), corresponding substantially to the average intensity of the colour components. Only the arrays of Lightness values in the first set 18 are scaled. It is noted that the HSV (Hue, Saturation, Value) colour space is usable as an alternative to the HLS colour space, and that the CMYK and YUV colour spaces are alternatives to the RGB colour space. - As mentioned, each intensity value in the
final array 20 is based on at least one intensity value in an array comprised in the second set 19 of arrays of intensity values. In the embodiment illustrated in FIG. 2, this is assured by summing corresponding pixel values of each of the arrays in the set 26 of arrays of scaled intensity values and each of the arrays in the second set 19. - In order to obtain a high-resolution combined image, a
set 27 of resolution-adjusted arrays is generated (step 28). In this step 28, the spatial resolution of the arrays in the set 26 of arrays of scaled intensity values is increased by a multiplication factor. An alternative would be to decrease the resolution of the image frames represented by the arrays in the second set 19. One way of increasing the spatial resolution of the image frames represented by the arrays in the set 26 of arrays of scaled intensity values is to interpolate between the intensity values in the arrays of scaled intensity values. - The
final array 20 is obtained by summing (step 29) an intensity value from each of the arrays in the set 27 of resolution-adjusted arrays and an intensity value from each of the arrays in the second set 19. Intensity values corresponding to the same pixel in the scene represented by the image frames are added. - To take account of camera shake, an additional step (not shown) is carried out to correct the image frames. The correction may be carried out prior to the
first step 21 shown in FIG. 2, so that the arrays of the first and second sets 18, 19 represent mutually aligned image frames. In one embodiment, the arrays of intensity values in the first and second sets 18, 19 are obtained by the image-capturing device 5 and corrected in accordance with a motion vector. The motion vector describes the motion of the camera 1 between the points in time at which the arrays of intensity values were obtained by the image-capturing device. It is based on data obtained from the motion sensor 15 or on an analysis of the captured image frames using a method described more fully in international patent application PCT/EP04/051080, which is hereby incorporated by reference. In that application, a method is described that includes calculating a motion vector representing at least a component indicative of relative movement of at least a part of successive image frames in a sequence of image frames, wherein the step of calculating the motion vector includes a step of determining at least a first term in a series expansion representing at least one element of the motion vector, which step includes an estimation process wherein at least the part in each of a plurality of the image frames is repositioned in accordance with the calculated motion vector. The estimation process includes calculation of a measure of the energy contained in an upper range of the spatial frequency spectrum of the combined image, and the step of determining at least the first term includes at least one further iteration of the estimation process to maximise the energy. - In an alternative embodiment, the image frames are aligned using a method known per se by the name of Random Sample Consensus (RANSAC). This method is suitable where there is sufficient light to capture image frames.
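The processing chain of FIG. 2 just described — partial analysis, scaling via a look-up table, resolution adjustment and summation — can be sketched as follows. This is a simplified, one-dimensional pure-Python illustration under stated assumptions (rows of eight-bit values stand in for image frames, interpolation is reduced to 1-D, and the mapping function is supplied by the caller); the helper names are hypothetical:

```python
def sparse_histogram(values, block=64, bins=256):
    # Partial analysis (step 21): histogram using only one value per
    # block of sixty-four values.
    hist = [0] * bins
    for v in values[::block]:
        hist[v] += 1
    return hist

def build_lut(mapping, levels=256):
    # Steps 22-23: precompute the mapping for every possible value.
    return [mapping(v) for v in range(levels)]

def apply_lut(row, lut):
    # Step 25: each intensity value indexes the look-up table.
    return [lut[v] for v in row]

def upsample(row, factor=2):
    # Step 28, reduced to 1-D: linear interpolation between values.
    out = []
    for i in range(len(row) - 1):
        a, b = row[i], row[i + 1]
        for k in range(factor):
            out.append(a + (b - a) * k / factor)
    out.append(row[-1])
    return out

def combine(first_set, second_set, mapping):
    # Scale and upsample only the low-resolution first set, then sum
    # value-by-value with the high-resolution second set (step 29).
    lut = build_lut(mapping)
    adjusted = [upsample(apply_lut(row, lut)) for row in first_set]
    return [sum(row[i] for row in adjusted) +
            sum(row[i] for row in second_set)
            for i in range(len(second_set[0]))]
```

For example, `combine([[0, 100]], [[10, 10, 10]], lambda v: v // 2)` scales and upsamples the two-value frame to `[0, 25, 50]` before adding the three-value frame.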
-
FIG. 3 illustrates a variant of the method shown in FIG. 2. This embodiment is also based on a first set 30 of arrays of intensity values and a second set 31 of arrays of intensity values. Each intensity value is a pixel value, corresponding to the light intensity of an associated colour component over an area associated with a pixel. What has been stated above regarding the first and second sets 18, 19 of FIG. 2 applies equally to the first and second sets 30, 31 of FIG. 3. Again, this description will assume that the arrays of intensity values in the first set 30 of arrays represent image frames at a lower resolution than the arrays in the second set 31. - A
first step 32 in the method of FIG. 3 corresponds to the first step 21 shown in FIG. 2. In a subsequent step 33, a mapping function is again determined in order to map the intensity values of the arrays in the first set 30 from a scale occupying a first range to a second scale occupying a second range. The mapping function is determined on the basis of at least parts of some or all of the arrays in the first set 30. It is determined in substantially the same way as in the embodiment of FIG. 2. Similarly, a look-up table 34 is created in a step following the step 33 of determining the mapping function. The look-up table 34 is used (step 36) to generate a set 37 of arrays of scaled intensity values, in which each array is based on a corresponding array in the first set 30 of arrays of intensity values. - The variant of
FIG. 3 differs from the one shown in FIG. 2 in that a transformation to the spatial frequency domain is carried out in another step 38 subsequent to the scaling step 36. This transformation step 38 is implemented using a Discrete Cosine Transform (DCT) in the illustrated example. The set 37 of arrays of scaled intensity values is the basis for a first set 39 of arrays of DCT coefficients. The second set 31 of arrays of intensity values is the basis for a second set 40 of arrays of DCT coefficients. It is observed that the DCT is part of the JPEG (Joint Photographic Experts Group) compression algorithm, and that it is advantageous to implement such an algorithm in a special-purpose processor, such as the DSP 7 or co-processor 11. A transformation from the RGB colour space to the HLS colour space is also part of the JPEG algorithm, so that this feature is also applied to advantage in the embodiment illustrated in FIG. 3. The transformation between colour spaces has been detailed above. - A
summation step 41 is carried out in the spatial frequency domain to obtain a final array 42 of DCT coefficients. The final array 42 forms an array of intensity values representing a combined image, since each coefficient is indicative of the intensity level of a spatial frequency component, and the set of spatial frequency components contains all the information necessary to render the combined image. The low-frequency coefficients of the final array 42 are obtained by summing the low-frequency coefficients of each array in the first set 39 of arrays of DCT coefficients and the low-frequency coefficients of each array in the second set 40 of arrays of DCT coefficients. The high-frequency coefficients are obtained by summing the high-frequency coefficients of each array in the second set 40 of arrays of DCT coefficients. Since these higher-order coefficients are absent in the (smaller) arrays of the first set 39 of arrays of DCT coefficients, only some of the intensity values in the final array 42 of DCT coefficients are obtained on the basis of both the first and second sets 39, 40. The summation step 41 is preferably implemented so as to take account of the differing number of addends used to obtain each coefficient in the final array 42. - An Inverse Discrete Cosine Transformation (IDCT) 43 results in an
array 44 of intensity values in the spatial domain. Both the transformation step 38 and the IDCT 43 are advantageously carried out by the co-processor 11 in the digital camera 1. -
FIG. 4 illustrates an embodiment for simplifying the determination of the mapping function from the first scale to the second scale, as well as simplifying the scaling step. It operates on the basis of a first set 45 of arrays of pixel values and a second set 46 of arrays of pixel values. - A
first sum array 47 is formed in a first step 48. On the assumption that the arrays in the first set 45 represent respective image frames at the same resolution, each intensity value in the first sum array is obtained by summing the corresponding intensity values from each of the arrays in the first set 45. If the resolutions are not the same, interpolation may be carried out first, or the arrays representing higher-resolution image frames may be reduced to correspond to a common resolution. The first sum array 47 is also suitable for representing an image frame, albeit one based on a plurality of preceding image frames, and forms a set of arrays consisting of one member. In alternative embodiments, a plurality of sum arrays could be formed, each based on a subset of arrays in the first set 45, with the plurality of sum arrays forming a first set in the terminology used herein. - The
first sum array 47 of intensity values is analysed (step 49) to determine a mapping function for mapping a first scale occupying a first range to a second scale occupying a second range. As described before, the analysis advantageously comprises the forming of a histogram of some or all of the intensity values. Again, this may be carried out using one value per block of intensity values within the first sum array. However, because only the first sum array 47 is analysed, the embodiment of FIG. 4 allows for a more involved analysis as compared to embodiments in which a number of arrays of intensity values have to be analysed.
- Only the
first sum array 47 of intensity values is mapped (step 52) to a scaled first sum array 53. Preferably, the arrays in the first set 45 of intensity values represent image frames at a lower resolution than the arrays in the second set 46 of arrays. Even if this is not the case, it is still feasible to generate a first sum array 47 representing a combined image frame at a lower resolution than that at which image frames are represented by the arrays in the second set 46 of arrays of intensity values. Thus, the number of look-up operations is kept relatively small. - As before, the mapping function used to populate the look-up table 50 maps the intensity values from a first scale within a first range to a second scale occupying a second, smaller range. The upper limit of the second scale is again determined on the basis of at least two factors. A first factor is the extent to which the intensity values of the
first sum array 47 exceed a certain threshold value. The second factor is based on the number of arrays of intensity values in the second set 46. More specifically, the threshold value is the maximum value of the dynamic range for encoding the values in the first sum array 47. The mapping function is chosen to ensure that a substantial proportion of the intensity values in the scaled first sum array 53 remain below the threshold. The second factor in this example is based on the ratio of the number of arrays in the second set 46 to the number of arrays in the first set 45. The upper value of the second scale is obtained by reducing the threshold by an amount corresponding to this ratio. Thus, the fact that only the first sum array 47 is scaled, and not also the arrays in the second set 46 of arrays of intensity values, is taken into account. - Scaling only the
first sum array 47 reduces even further the number of look-up operations. Nevertheless, it would be possible to analyse the first sum array 47 to derive a mapping function for scaling the individual arrays in the first set 45 of arrays, which are then added after having been scaled. Alternatively, it would be possible to analyse the individual frames in the first set 45 of arrays of intensity values, in order to derive a mapping function for scaling the first sum array 47. The effect of scaling the first sum array 47 is to reduce the amount of noise that propagates to a final array 54 of intensity values representing a combined image. - The
final array 54 of intensity values represents a combined image at a higher resolution than the scaled first sum array 53. For this reason, the latter is processed (step 55) to obtain a resolution-adjusted scaled first sum array 56. Again, interpolation is a method by which the intensity values in the resolution-adjusted scaled first sum array 56 can be obtained. - The
final array 54 is obtained in a final step 57. In this step 57, each intensity value in the final array 54 of intensity values is obtained by summing an intensity value from the resolution-adjusted scaled first sum array 56 and the corresponding respective intensity values from each of the arrays in the second set 46 of arrays of intensity values. It will be apparent that the final array 54 is thus formed of intensity values that are each based on at least one intensity value in an array in the second set 46 of arrays of intensity values, to achieve a high-resolution representation of the combined image. -
FIG. 5 shows a variant in which calculation is largely carried out in the spatial frequency domain, and which does not necessarily require interpolation or another process for increasing the resolution at which an image frame is represented. The variant illustrated in FIG. 5 commences with a DCT operation 58. The DCT operation 58 is used to obtain a first set 59 of arrays of DCT coefficients for representing a set of corresponding image frames at a first resolution. This first set 59 is based on a set 60 of arrays of pixel values encoding the image frames in the spatial domain, as opposed to the spatial frequency domain. A second set 61 of arrays of DCT coefficients is based on a second set 62 of arrays of pixel values encoding image frames in the spatial domain at a second resolution. In this example, it will again be assumed that the second resolution is higher than the first resolution. - In a
subsequent step 63, the arrays in the first set 59 of arrays of DCT coefficients are processed to obtain a first sum array 64. Each DCT coefficient in the first sum array 64 is obtained by summing the corresponding DCT coefficients in the respective arrays of the first set 59. - The
first sum array 64 is analysed to determine (step 65) a mapping function mapping the DCT coefficients in the first sum array 64 from a first scale occupying a first range to a second scale occupying a second, preferably smaller, range. This step 65 is carried out using any of the methods outlined above with regard to the corresponding steps of the embodiments of FIGS. 2-4. Subsequently (step 66), a look-up table 67 is created on the basis of the mapping function. - The mapping function is based at least partly on the number of arrays in the
second set 61 of arrays of DCT coefficients. This is done because only the DCT coefficients in the first sum array 64 are mapped from the first scale to the second scale (step 68), whereas those in the arrays forming the second set 61 of arrays of DCT coefficients are not. The result of the scaling carried out in this step 68 is a scaled first sum array 69. - The scaled
first sum array 69 and the arrays in the second set 61 of arrays of DCT coefficients are summed in a step 70 similar to the summation step 41 in the embodiment illustrated in FIG. 3. A final array 71 of DCT coefficients is obtained. The lower-order DCT coefficients in the final array 71 of DCT coefficients are each obtained by summing the lower-order coefficients of the scaled first sum array 69, which is based on the first sum array 64, and the corresponding lower-order coefficients of the arrays of the second set 61 of arrays of DCT coefficients. The higher-order DCT coefficients in the final array 71 are obtained by summing the corresponding higher-order coefficients in the arrays comprised in the second set 61 of arrays of DCT coefficients only. Thus, the final array 71 of DCT coefficients is suitable for representing the combined image at a relatively high resolution, at least higher than that of the image frames represented by the first sum array 64. - An
inverse DCT operation 72 transforms the final array 71 of DCT coefficients into a final array 73 of pixel values, each corresponding to a light intensity over an area occupied by a pixel in the combined image. - The invention is not limited to the described embodiments, which may be varied within the scope of the accompanying claims. In particular, the methods outlined herein are suitable for partial or complete execution by a type of image processing system other than the
digital camera 1. For example, a general-purpose personal computer or workstation may carry out the method on the basis of a first set of arrays of pixel values and a second set of arrays of pixel values in a sequence of arrays captured in rapid succession by the digital camera 1 and stored in the storage device 8. Processing of the arrays for relative alignment of at least the region of overlap between the image frames represented by them is an advantageous feature of each embodiment.
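The frequency-domain summation used in the variants of FIG. 3 and FIG. 5 can be sketched as follows. Coefficient arrays are modelled as flat pure-Python lists ordered from low to high frequency; dividing by the addend count is one possible way of "taking account of the differing number of addends", and the helper name is hypothetical:

```python
def merge_dct(first_set, second_set):
    # Low-order coefficients exist in the (smaller) first-set arrays and
    # are summed over both sets; high-order coefficients are summed over
    # the second set only.  Dividing by the number of addends normalises
    # for the differing number of contributions per coefficient.
    n_low = len(first_set[0])    # first-set arrays hold fewer coefficients
    n_high = len(second_set[0])
    final = []
    for i in range(n_high):
        total = sum(arr[i] for arr in second_set)
        count = len(second_set)
        if i < n_low:            # coefficient also present in the first set
            total += sum(arr[i] for arr in first_set)
            count += len(first_set)
        final.append(total / count)
    return final
```

For instance, `merge_dct([[3, 3]], [[3, 3, 6], [3, 3, 6]])` averages the two low-order coefficients over all three arrays and the highest-order coefficient over the two second-set arrays only.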
Claims (12)
1. Method of forming a combined image based on a plurality of image frames, including:
obtaining a first set (18;30;47;64) of at least one array of intensity values for representing an image frame at a resolution corresponding to the number of intensity values in the array, and obtaining a second set (19;31;46;61) of at least one array of intensity values for representing an image frame at a resolution corresponding to the number of intensity values in the array,
wherein the combined image is represented by a final array (20;42;54;71) of intensity values,
wherein at least some of the intensity values in the final array (20;42;54;71) are each obtained by executing a step (29;41;57;70) of summing an intensity value from each of at least one array (27;39;56;69) of intensity values based on at least one array of intensity values in only the first set (18;30;47;64) and an intensity value from each of at least one array (19;40;46;61) of intensity values based on at least one array of intensity values in only the second set (19;31;46;61), wherein, prior to executing the summing step (29;41;57;70), only the intensity values of the arrays in the first set (18;30;47;64) are mapped from a scale within a first range to a scale within a second range.
2. Method according to claim 1 , including obtaining a first set (18;30;47;64) and a second set (19;31;46;61) arranged such that the image frames represented by the arrays in the first set (18;30;47;64) are represented at lower resolutions than the image frames represented by the arrays in the second set (19;31;46;61).
3. Method according to claim 2 , wherein the at least one array (39;69) of intensity values based on at least one array of intensity values in only the first set (30;64) contains coefficients in the spatial frequency domain, wherein the at least one array (40;61) of intensity values based on at least one array of intensity values in only the second set (31; 61) contains coefficients in the spatial frequency domain and wherein the intensity values in the final array (42;71) are formed by coefficients in the spatial frequency domain,
wherein at least one lower order coefficient in the final array (42;71) is obtained by summing an intensity value from each of the at least one array (39;69) of intensity values based on at least one array of intensity values in only the first set (30;64) and an intensity value from each of at least one array (40;61) of intensity values based on at least one array of intensity values in only the second set (31;61), wherein at least one higher order coefficient in the final array (42;71) is obtained on the basis of only arrays (40;61) of intensity values based on the second set (31;61).
4. Method according to claim 2 , wherein at least some of the arrays of intensity values in the first and second set are obtained by reading out measurement values from an image-capturing device comprising an array of light-sensitive cells, wherein each intensity value in the final array (20;42;54;71) is based on at least one intensity value in an array comprised in the second set.
5. Method according to claim 1 , including determining an upper limit of the second range at least partly in dependence on the number of arrays of intensity values in the second set (19;31;46;61).
6. Method according to claim 1 wherein at least one of the arrays of intensity values in the first set (47;64) is obtained by obtaining a plurality of arrays (45;60) of intensity values for representing an image frame at a resolution corresponding to the number of intensity values in the array, and by summing an intensity value from each of the plurality of arrays (45;60) to obtain a corresponding intensity value in the at least one array in the first set (47;64).
7. Method according to claim 1 , wherein at least one of the arrays of intensity values in the first set (18;30;47;64) is obtained by obtaining a plurality of arrays (45;60) of intensity values for representing an image frame at a resolution corresponding to the number of intensity values in the array, wherein the method further includes summing an intensity value from each of the obtained plurality of arrays (45;60) to obtain a corresponding intensity value in an intermediate combined array (47;64), and determining an upper limit of the second range at least partly in dependence on at least one intensity value in the intermediate combined array (47;64).
8. Method according to claim 1 , wherein at least the arrays of intensity values in the first set are obtained by obtaining a plurality of arrays of intensity values for representing colour image frames in a first colour space, and applying a transformation to a plurality of arrays of values in a second colour space, wherein, in the first colour space, an image frame is represented by parameter value combinations, each parameter indicating the intensity of one of a plurality of colour components, whereas, in the second colour space, an image frame is represented by parameter value combinations, one parameter of the combination indicating a hue and at least one of the other parameters being indicative of light intensity.
9. System for processing arrays of intensity values, each array being suitable for representing an image frame at a resolution corresponding to the number of intensity values in the array,
wherein the system is configured to retrieve a first set (18;30;47;64) of at least one array of intensity values and a second set (19;31;46;61) of at least one array of intensity values, the arrays in the first set (18;30;47;64) and arrays in the second set representing respective image frames, and to form a final array (20;42;54;71) of intensity values representing a combined image, wherein the system is configured to obtain each of at least some of the intensity values in the final array (20;42;54;71) by executing a step of summing an intensity value from each of at least one array (27;39;56;69) of intensity values based on at least one array of intensity values in only the first set (18;30;47;64) and an intensity value from each of at least one array of intensity values based on at least one array (19;40;46;61) of intensity values in only the second set (19;31;46;61), and wherein the system is configured to map, prior to executing the summing step, only the intensity values of the arrays in the first set (18;30;47;64) from a scale within a first range to a scale within a second range.
10. (canceled)
11. Imaging apparatus, e.g. a digital camera (1), comprising a processor (7,9,11) and at least one storage device (8) for storing a plurality of arrays of intensity values, wherein the imaging apparatus is configured to execute a method according to claim 1.
12. Computer program, including a set of instructions capable, when incorporated in a machine-readable medium, of causing a system (1) having information processing capabilities to perform a method according to claim 1.
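The combining step in claims 9–12 can be illustrated with a short sketch: only the first set's intensity values are mapped from a scale within a first range onto a scale within a second range, after which per-pixel values from both sets are summed into the final array. This is a minimal Python/NumPy sketch; the function name, the concrete ranges, and the linear mapping are illustrative assumptions, since the claims do not fix a specific mapping function.

```python
import numpy as np

def combine_frames(first_set, second_set,
                   first_range=(0, 255), second_range=(0, 1023)):
    """Form a final intensity array from two sets of image-frame arrays.

    Only arrays in the first set are remapped from a scale within
    ``first_range`` to a scale within ``second_range`` before the summing
    step, mirroring the asymmetric mapping described in the claims.
    A linear mapping is assumed here for illustration.
    """
    lo1, hi1 = first_range
    lo2, hi2 = second_range
    scale = (hi2 - lo2) / (hi1 - lo1)

    # Map only the first set's intensity values onto the second scale.
    mapped_first = [(a.astype(np.float64) - lo1) * scale + lo2
                    for a in first_set]

    # Summing step: accumulate one intensity value per pixel from arrays
    # based on the first set and from arrays based on the second set.
    final = np.zeros_like(mapped_first[0])
    for a in mapped_first:
        final += a
    for b in second_set:
        final += b.astype(np.float64)
    return final
```

For example, a saturated 8-bit frame in the first set (value 255) maps to 1023 on the wider scale before a second-set frame's values are added, so the combined image preserves headroom that a plain 8-bit sum would clip.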
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/547,184 US8624923B2 (en) | 2005-10-12 | 2012-07-12 | Method of forming an image based on a plurality of image frames, image processing system and digital camera |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/EP2005/055186 WO2007042074A1 (en) | 2005-10-12 | 2005-10-12 | Method of forming an image based on a plurality of image frames, image processing system and digital camera |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/547,184 Continuation US8624923B2 (en) | 2005-10-12 | 2012-07-12 | Method of forming an image based on a plurality of image frames, image processing system and digital camera |
Publications (1)
Publication Number | Publication Date |
---|---|
US20090278857A1 true US20090278857A1 (en) | 2009-11-12 |
Family
ID=36579082
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/089,997 Abandoned US20090278857A1 (en) | 2005-10-12 | 2005-10-12 | Method of forming an image based on a plurality of image frames, image processing system and digital camera |
US13/547,184 Expired - Fee Related US8624923B2 (en) | 2005-10-12 | 2012-07-12 | Method of forming an image based on a plurality of image frames, image processing system and digital camera |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/547,184 Expired - Fee Related US8624923B2 (en) | 2005-10-12 | 2012-07-12 | Method of forming an image based on a plurality of image frames, image processing system and digital camera |
Country Status (6)
Country | Link |
---|---|
US (2) | US20090278857A1 (en) |
EP (1) | EP1934939A1 (en) |
JP (1) | JP4651716B2 (en) |
KR (1) | KR101205842B1 (en) |
CN (1) | CN101305397B (en) |
WO (1) | WO2007042074A1 (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100103194A1 (en) * | 2008-10-27 | 2010-04-29 | Huawei Technologies Co., Ltd. | Method and system for fusing images |
US20120212458A1 (en) * | 2008-08-07 | 2012-08-23 | Rapt Ip Limited | Detecting Multitouch Events in an Optical Touch-Sensitive Device by Combining Beam Information |
CN103608839A (en) * | 2011-03-28 | 2014-02-26 | 皇家飞利浦有限公司 | Contrast-dependent resolution image |
US9053558B2 (en) | 2013-07-26 | 2015-06-09 | Rui Shen | Method and system for fusing multiple images |
US9092092B2 (en) | 2008-08-07 | 2015-07-28 | Rapt Ip Limited | Detecting multitouch events in an optical touch-sensitive device using touch event templates |
US20160335516A1 (en) * | 2014-05-27 | 2016-11-17 | Fuji Xerox Co., Ltd. | Image processing apparatus, and non-transitory computer readable medium for generating a feature-reflected image and for changing a degree of reflection of a feature in the feature-reflected image |
US9734427B2 (en) | 2014-11-17 | 2017-08-15 | Industrial Technology Research Institute | Surveillance systems and image processing methods thereof |
WO2018136276A1 (en) * | 2017-01-20 | 2018-07-26 | Rambus Inc. | Imaging systems and methods with periodic gratings with homologous pixels |
US11134180B2 (en) * | 2019-07-25 | 2021-09-28 | Shenzhen Skyworth-Rgb Electronic Co., Ltd. | Detection method for static image of a video and terminal, and computer-readable storage medium |
US20230230212A1 (en) * | 2022-01-14 | 2023-07-20 | Omnivision Technologies, Inc. | Image processing method and apparatus implementing the same |
Families Citing this family (56)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
ES2310136B1 (en) * | 2007-06-07 | 2009-11-05 | Consejo Superior De Investigaciones Cientificas | METHOD FOR AUTOMATIC IMPROVEMENT OF IMAGES AND SEQUENCES WITH SPACIALLY VARIANT DEGRADATION. |
JP2008306512A (en) * | 2007-06-08 | 2008-12-18 | Nec Corp | Information providing system |
US9998697B2 (en) | 2009-03-02 | 2018-06-12 | Flir Systems, Inc. | Systems and methods for monitoring vehicle occupants |
US9208542B2 (en) | 2009-03-02 | 2015-12-08 | Flir Systems, Inc. | Pixel-wise noise reduction in thermal images |
USD765081S1 (en) | 2012-05-25 | 2016-08-30 | Flir Systems, Inc. | Mobile communications device attachment with camera |
US10244190B2 (en) | 2009-03-02 | 2019-03-26 | Flir Systems, Inc. | Compact multi-spectrum imaging with fusion |
US9451183B2 (en) | 2009-03-02 | 2016-09-20 | Flir Systems, Inc. | Time spaced infrared image enhancement |
US9948872B2 (en) | 2009-03-02 | 2018-04-17 | Flir Systems, Inc. | Monitor and control systems and methods for occupant safety and energy efficiency of structures |
US9235876B2 (en) | 2009-03-02 | 2016-01-12 | Flir Systems, Inc. | Row and column noise reduction in thermal images |
US9473681B2 (en) | 2011-06-10 | 2016-10-18 | Flir Systems, Inc. | Infrared camera system housing with metalized surface |
US9843742B2 (en) | 2009-03-02 | 2017-12-12 | Flir Systems, Inc. | Thermal image frame capture using de-aligned sensor array |
US9517679B2 (en) | 2009-03-02 | 2016-12-13 | Flir Systems, Inc. | Systems and methods for monitoring vehicle occupants |
US10757308B2 (en) | 2009-03-02 | 2020-08-25 | Flir Systems, Inc. | Techniques for device attachment with dual band imaging sensor |
US9674458B2 (en) | 2009-06-03 | 2017-06-06 | Flir Systems, Inc. | Smart surveillance camera systems and methods |
US9986175B2 (en) | 2009-03-02 | 2018-05-29 | Flir Systems, Inc. | Device attachment with infrared imaging sensor |
US9635285B2 (en) | 2009-03-02 | 2017-04-25 | Flir Systems, Inc. | Infrared imaging enhancement with fusion |
US9756264B2 (en) | 2009-03-02 | 2017-09-05 | Flir Systems, Inc. | Anomalous pixel detection |
US8963949B2 (en) * | 2009-04-22 | 2015-02-24 | Qualcomm Incorporated | Image selection and combination method and device |
US9292909B2 (en) | 2009-06-03 | 2016-03-22 | Flir Systems, Inc. | Selective image correction for infrared imaging devices |
US9756262B2 (en) | 2009-06-03 | 2017-09-05 | Flir Systems, Inc. | Systems and methods for monitoring power systems |
US9843743B2 (en) | 2009-06-03 | 2017-12-12 | Flir Systems, Inc. | Infant monitoring systems and methods using thermal imaging |
US9716843B2 (en) | 2009-06-03 | 2017-07-25 | Flir Systems, Inc. | Measurement device for electrical installations and related methods |
US10091439B2 (en) | 2009-06-03 | 2018-10-02 | Flir Systems, Inc. | Imager with array of multiple infrared imaging modules |
US9819880B2 (en) | 2009-06-03 | 2017-11-14 | Flir Systems, Inc. | Systems and methods of suppressing sky regions in images |
FR2951311A1 (en) * | 2009-10-09 | 2011-04-15 | Trixell Sas | METHOD FOR CONTROLLING A PHOTOSENSITIVE DEVICE |
CN101799915B (en) * | 2010-02-26 | 2011-12-07 | 中北大学 | Bicolor medium wave infrared image fusion method |
US9848134B2 (en) | 2010-04-23 | 2017-12-19 | Flir Systems, Inc. | Infrared imager with integrated metal layers |
US9706138B2 (en) | 2010-04-23 | 2017-07-11 | Flir Systems, Inc. | Hybrid infrared sensor array having heterogeneous infrared sensors |
US9207708B2 (en) | 2010-04-23 | 2015-12-08 | Flir Systems, Inc. | Abnormal clock rate detection in imaging sensor arrays |
CN101976436B (en) * | 2010-10-14 | 2012-05-30 | 西北工业大学 | A pixel-level multi-focus image fusion method based on difference map correction |
US9356278B2 (en) | 2011-03-31 | 2016-05-31 | Nec Energy Devices, Ltd. | Battery pack |
US9509924B2 (en) | 2011-06-10 | 2016-11-29 | Flir Systems, Inc. | Wearable apparatus with integrated infrared imaging module |
US9706137B2 (en) | 2011-06-10 | 2017-07-11 | Flir Systems, Inc. | Electrical cabinet infrared monitor |
EP2719167B1 (en) | 2011-06-10 | 2018-08-08 | Flir Systems, Inc. | Low power and small form factor infrared imaging |
US9900526B2 (en) | 2011-06-10 | 2018-02-20 | Flir Systems, Inc. | Techniques to compensate for calibration drifts in infrared imaging devices |
US10841508B2 (en) | 2011-06-10 | 2020-11-17 | Flir Systems, Inc. | Electrical cabinet infrared monitor systems and methods |
US10079982B2 (en) | 2011-06-10 | 2018-09-18 | Flir Systems, Inc. | Determination of an absolute radiometric value using blocked infrared sensors |
US10389953B2 (en) | 2011-06-10 | 2019-08-20 | Flir Systems, Inc. | Infrared imaging device having a shutter |
US9961277B2 (en) | 2011-06-10 | 2018-05-01 | Flir Systems, Inc. | Infrared focal plane array heat spreaders |
US9143703B2 (en) | 2011-06-10 | 2015-09-22 | Flir Systems, Inc. | Infrared camera calibration techniques |
US9235023B2 (en) | 2011-06-10 | 2016-01-12 | Flir Systems, Inc. | Variable lens sleeve spacer |
CN103828343B (en) | 2011-06-10 | 2017-07-11 | 菲力尔系统公司 | Based on capable image procossing and flexible storage system |
US10169666B2 (en) | 2011-06-10 | 2019-01-01 | Flir Systems, Inc. | Image-assisted remote control vehicle systems and methods |
CN103875235B (en) | 2011-06-10 | 2018-10-12 | 菲力尔系统公司 | Nonuniformity Correction for infreared imaging device |
US9058653B1 (en) | 2011-06-10 | 2015-06-16 | Flir Systems, Inc. | Alignment of visible light sources based on thermal images |
US10051210B2 (en) | 2011-06-10 | 2018-08-14 | Flir Systems, Inc. | Infrared detector array with selectable pixel binning systems and methods |
US9811884B2 (en) | 2012-07-16 | 2017-11-07 | Flir Systems, Inc. | Methods and systems for suppressing atmospheric turbulence in images |
US9973692B2 (en) | 2013-10-03 | 2018-05-15 | Flir Systems, Inc. | Situational awareness by compressed display of panoramic views |
US11297264B2 (en) | 2014-01-05 | 2022-04-05 | Teledyne Flir, LLC | Device attachment with dual band imaging sensor
CN105844630B (en) * | 2016-03-21 | 2018-11-16 | 西安电子科技大学 | A kind of image super-resolution fusion denoising method of binocular vision |
KR102584187B1 (en) * | 2016-03-30 | 2023-10-05 | 삼성전자주식회사 | Electronic device and method for processing image |
CN109478315B (en) * | 2016-07-21 | 2023-08-01 | 前视红外系统股份公司 | Fusion image optimization system and method |
US10187584B2 (en) | 2016-12-20 | 2019-01-22 | Microsoft Technology Licensing, Llc | Dynamic range extension to produce high dynamic range images |
CN109671106B (en) * | 2017-10-13 | 2023-09-05 | 华为技术有限公司 | Image processing method, device and equipment |
CN111970451B (en) * | 2020-08-31 | 2022-01-07 | Oppo(重庆)智能科技有限公司 | Image processing method, image processing device and terminal equipment |
KR102707267B1 (en) * | 2023-04-18 | 2024-09-19 | 한국원자력연구원 | Image enhancement system, image enhancement method and underwater cutting method |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5345262A (en) * | 1992-07-31 | 1994-09-06 | Hughes-Jvc Technology Corporation | Automatic convergence system for color video projector |
US20080024683A1 (en) * | 2006-07-31 | 2008-01-31 | Niranjan Damera-Venkata | Overlapped multi-projector system with dithering |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5033103A (en) * | 1988-12-09 | 1991-07-16 | The United States Of America As Represented By The Secretary Of The Air Force | Model of the lateral inhibition, energy normalization, and noise suppression processes in the retina |
US6009206A (en) * | 1997-09-30 | 1999-12-28 | Intel Corporation | Companding algorithm to transform an image to a lower bit resolution |
US6683704B1 (en) * | 2000-05-12 | 2004-01-27 | Hewlett-Packard Development Company, L.P. | Apparatus for determining the best image from a dual resolution photo sensor |
JP2002165136A (en) * | 2000-11-29 | 2002-06-07 | Canon Inc | Imaging apparatus and imaging system |
JP4103129B2 (en) * | 2002-04-19 | 2008-06-18 | ソニー株式会社 | Solid-state imaging device and imaging method |
JP2004357202A (en) * | 2003-05-30 | 2004-12-16 | Canon Inc | Photographing apparatus |
JP2005136760A (en) * | 2003-10-31 | 2005-05-26 | Nikon Corp | Digital still camera |
JP4321287B2 (en) * | 2004-02-10 | 2009-08-26 | ソニー株式会社 | Imaging apparatus, imaging method, and program |
EP2273777A3 (en) * | 2005-05-10 | 2011-03-02 | Active Optics Pty. Ltd. | Method of controlling an image capturing system |
JP2009512038A (en) * | 2005-10-12 | 2009-03-19 | アクティブ オプティクス ピーティーワイ リミテッド | Method for generating a combined image based on multiple image frames |
2005
- 2005-10-12 US US12/089,997 patent/US20090278857A1/en not_active Abandoned
- 2005-10-12 JP JP2008534875A patent/JP4651716B2/en not_active Expired - Fee Related
- 2005-10-12 CN CN2005800520096A patent/CN101305397B/en not_active Expired - Fee Related
- 2005-10-12 EP EP05807952A patent/EP1934939A1/en not_active Withdrawn
- 2005-10-12 WO PCT/EP2005/055186 patent/WO2007042074A1/en active Application Filing
- 2005-10-12 KR KR1020087011112A patent/KR101205842B1/en active Active
2012
- 2012-07-12 US US13/547,184 patent/US8624923B2/en not_active Expired - Fee Related
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5345262A (en) * | 1992-07-31 | 1994-09-06 | Hughes-Jvc Technology Corporation | Automatic convergence system for color video projector |
US20080024683A1 (en) * | 2006-07-31 | 2008-01-31 | Niranjan Damera-Venkata | Overlapped multi-projector system with dithering |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10067609B2 (en) | 2008-08-07 | 2018-09-04 | Rapt Ip Limited | Detecting multitouch events in an optical touch-sensitive device using touch event templates |
US8531435B2 (en) * | 2008-08-07 | 2013-09-10 | Rapt Ip Limited | Detecting multitouch events in an optical touch-sensitive device by combining beam information |
US9552104B2 (en) | 2008-08-07 | 2017-01-24 | Rapt Ip Limited | Detecting multitouch events in an optical touch-sensitive device using touch event templates |
US9092092B2 (en) | 2008-08-07 | 2015-07-28 | Rapt Ip Limited | Detecting multitouch events in an optical touch-sensitive device using touch event templates |
US10795506B2 (en) * | 2008-08-07 | 2020-10-06 | Rapt Ip Limited | Detecting multitouch events in an optical touch- sensitive device using touch event templates |
US20120212458A1 (en) * | 2008-08-07 | 2012-08-23 | Rapt Ip Limited | Detecting Multitouch Events in an Optical Touch-Sensitive Device by Combining Beam Information |
US20190163325A1 (en) * | 2008-08-07 | 2019-05-30 | Rapt Ip Limited | Detecting multitouch events in an optical touch-sensitive device using touch event templates |
US20100103194A1 (en) * | 2008-10-27 | 2010-04-29 | Huawei Technologies Co., Ltd. | Method and system for fusing images |
US8896625B2 (en) * | 2008-10-27 | 2014-11-25 | Huawei Technologies Co., Ltd. | Method and system for fusing images |
CN103608839A (en) * | 2011-03-28 | 2014-02-26 | 皇家飞利浦有限公司 | Contrast-dependent resolution image |
US9053558B2 (en) | 2013-07-26 | 2015-06-09 | Rui Shen | Method and system for fusing multiple images |
US9805284B2 (en) * | 2014-05-27 | 2017-10-31 | Fuji Xerox Co., Ltd. | Image processing apparatus, and non-transitory computer readable medium for generating a feature-reflected image and for changing a degree of reflection of a feature in the feature-reflected image |
US20160335516A1 (en) * | 2014-05-27 | 2016-11-17 | Fuji Xerox Co., Ltd. | Image processing apparatus, and non-transitory computer readable medium for generating a feature-reflected image and for changing a degree of reflection of a feature in the feature-reflected image |
US9734427B2 (en) | 2014-11-17 | 2017-08-15 | Industrial Technology Research Institute | Surveillance systems and image processing methods thereof |
WO2018136276A1 (en) * | 2017-01-20 | 2018-07-26 | Rambus Inc. | Imaging systems and methods with periodic gratings with homologous pixels |
US11134180B2 (en) * | 2019-07-25 | 2021-09-28 | Shenzhen Skyworth-Rgb Electronic Co., Ltd. | Detection method for static image of a video and terminal, and computer-readable storage medium |
US20230230212A1 (en) * | 2022-01-14 | 2023-07-20 | Omnivision Technologies, Inc. | Image processing method and apparatus implementing the same |
US12254607B2 (en) * | 2022-01-14 | 2025-03-18 | Omnivision Technologies, Inc. | Image processing method and apparatus implementing the same |
Also Published As
Publication number | Publication date |
---|---|
US20120274814A1 (en) | 2012-11-01 |
CN101305397A (en) | 2008-11-12 |
US8624923B2 (en) | 2014-01-07 |
KR20080063829A (en) | 2008-07-07 |
KR101205842B1 (en) | 2012-11-28 |
EP1934939A1 (en) | 2008-06-25 |
CN101305397B (en) | 2012-09-19 |
JP4651716B2 (en) | 2011-03-16 |
JP2009512290A (en) | 2009-03-19 |
WO2007042074A1 (en) | 2007-04-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8624923B2 (en) | Method of forming an image based on a plurality of image frames, image processing system and digital camera | |
US7548689B2 (en) | Image processing method | |
US6825884B1 (en) | Imaging processing apparatus for generating a wide dynamic range image | |
US8155441B2 (en) | Image processing apparatus, image processing method, and program for color fringing estimation and compensation | |
US9094648B2 (en) | Tone mapping for low-light video frame enhancement | |
US8619156B2 (en) | Image capturing system and method of controlling the same utilizing exposure control that captures multiple images of different spatial resolutions | |
JP2004088149A (en) | Imaging system and image processing program | |
JPWO2006134923A1 (en) | Image processing apparatus, computer program product, and image processing method | |
US8150209B2 (en) | Method of forming a combined image based on a plurality of image frames | |
JP7463640B2 (en) | Method, apparatus and storage medium for spatially multiplexed exposure | |
US8155472B2 (en) | Image processing apparatus, camera, image processing program product and image processing method | |
JP2023106486A (en) | Imaging apparatus and a control method for the same, and program | |
JP4290965B2 (en) | How to improve the quality of digital images | |
US8463034B2 (en) | Image processing system and computer-readable recording medium for recording image processing program | |
JP2011100204A (en) | Image processor, image processing method, image processing program, imaging apparatus, and electronic device | |
JP6468791B2 (en) | Image processing apparatus, imaging apparatus, image processing system, image processing method, and image processing program | |
KR20100075730A (en) | A apparatus and a method for reducing noise |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: ACTIVE OPTICS PTY LIMITED, AUSTRALIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:WAJS, ANDREW AUGUSTINE;REEL/FRAME:021501/0610 Effective date: 20080821 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
AS | Assignment |
Owner name: SILVERCREST INVESTMENT HOLDINGS LIMITED, VIRGIN IS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ACTIVE OPTICS PTY LIMITED;REEL/FRAME:029875/0727 Effective date: 20120705 |