WO2003101119A1 - Image processing method, image processing program, and image processor - Google Patents
- Publication number: WO2003101119A1 (international application PCT/JP2003/006388)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords: color, image, color information, component, similarity
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/10—Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
- H04N25/11—Arrangement of colour filter arrays [CFA]; Filter mosaics
- H04N25/13—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
- H04N25/134—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N2209/00—Details of colour television systems
- H04N2209/04—Picture signal generators
- H04N2209/041—Picture signal generators using solid-state devices
- H04N2209/042—Picture signal generators using solid-state devices having a single pick-up sensor
- H04N2209/045—Picture signal generators using solid-state devices having a single pick-up sensor using mosaic colour filter
- H04N2209/046—Colour interpolation to calculate the missing colour values
Definitions
- This application claims priority from Japanese Patent Application No. 2002-150788 (filed May 24, 2002) and Japanese Patent Applications No. 2002-159228, No. 2002-159229, and No. 2002-159250 (each filed May 31, 2002).
- the present invention relates to an image processing method, an image processing program, and an image processing apparatus for processing image data obtained by a color filter having a delta arrangement.
- An electronic camera captures an image of a subject using an image sensor such as a CCD.
- As a color filter arrangement, a Bayer arrangement in which the three RGB (red, green, blue) color filters are arranged as shown in FIG. 24(a) is known.
- A delta arrangement, arranged as shown in FIG. 24(b), is also known.
- Honeycomb arrangements, arranged as shown in FIG. 24(c), are known as well.
- Various image processing methods have been proposed for image data obtained with the Bayer array, for example in U.S. Pat. No. 5,552,827, U.S. Pat. No. 5,629,734, and JP-A-2001-245314.
- The present invention provides an image processing method, an image processing program, and an image processing device that output high-definition square-grid-array image data based on image data obtained with a triangular-grid color filter arrangement such as a delta array.
- The present invention also provides an image processing method, an image processing program, and an image processing device for outputting image data that brings out the spatial color resolution inherent in a delta array or similar triangular-lattice color filter arrangement, based on image data obtained with such an arrangement.
- The present invention further provides an image processing method, an image processing program, and an image processing device for outputting image data obtained by performing high-resolution interpolation processing on image data from a triangular-lattice color filter arrangement such as a delta array, as well as image data finely converted to another color system.
- The present invention also provides an image processing method, an image processing program, and an image processing device for outputting, for example, high-definition image data of a different color system based on image data obtained with a triangular-lattice color filter arrangement such as a delta array.
- In a first image processing method, a first image is acquired that is represented by a color system composed of a plurality of color components and that includes a plurality of pixels, each having color information of at least one color component, arranged in a triangular grid.
- The color information of the first image is converted into color information at each pixel position of a second image in which a plurality of pixels are arranged in a square grid, and the second image is output using the converted color information.
- the new color information is preferably color information of a color component missing in each pixel of the first image among color components of a color system of the first image.
- the new color information is color information of a color system different from the color system of the first image.
- the one-dimensional displacement process is performed using a one-dimensional filter including positive and negative coefficient values.
- It is preferable that the one-dimensional displacement processing is performed line by line on every other line of the first image.
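The one-dimensional displacement described above can be sketched as a half-pixel resampling filter applied to every other row. The six-tap kernel below is an illustrative assumption (the patent's FIG. 7 defines its own coefficient values); like the filter the text describes, it contains both positive and negative coefficients.

```python
# Sketch of the one-dimensional displacement ("half-pixel shift") that moves
# every other row of a triangular-lattice image onto square-lattice positions.
# The kernel values are an assumption for illustration only.

KERNEL = (1, -5, 20, 20, -5, 1)  # divided by 32; interpolates at +0.5 pixel

def shift_half_pixel(row):
    """Resample one row at positions n + 0.5 using the six-tap kernel.

    Edges are handled by clamping tap indices to the row boundaries.
    """
    n = len(row)
    out = []
    for i in range(n):
        taps = [row[min(max(i + k, 0), n - 1)] for k in (-2, -1, 0, 1, 2, 3)]
        out.append(sum(c * t for c, t in zip(KERNEL, taps)) / 32.0)
    return out

def displace_odd_rows(image):
    """Apply the shift to every other line, leaving even rows untouched."""
    return [row if y % 2 == 0 else shift_half_pixel(row)
            for y, row in enumerate(image)]
```

On a linear ramp this kernel reproduces exact midpoint values, which is the property that lets the shifted rows line up with the unshifted ones on a square grid.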
- a similarity determination procedure for determining the strength of similarity in at least three directions is further provided, and in the color information generation procedure, new color information is generated according to the determined similarity strength.
- In the similarity determination procedure, it is preferable to calculate the similarity in at least three directions and to determine the strength of similarity in each direction based on the reciprocal of the similarity value.
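A minimal sketch of this reciprocal-based determination, assuming that the similarity value in each direction is a non-negative quantity (such as a sum of absolute differences) where smaller means more similar:

```python
def direction_weights(similarities, eps=1e-6):
    """Turn directional similarity values into interpolation weights.

    `similarities` maps a direction name to a non-negative value where
    smaller means *more* similar (e.g. a sum of absolute differences).
    Each direction's weight is proportional to the reciprocal of its
    similarity value, as the text describes; `eps` guards against
    division by zero on perfectly flat image regions.
    """
    inv = {d: 1.0 / (s + eps) for d, s in similarities.items()}
    total = sum(inv.values())
    return {d: v / total for d, v in inv.items()}
```

With the three directions of a triangular lattice (e.g. 0°, 60°, 120°), the normalized weights can then drive the direction-dependent generation of new color information.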
- It is preferable that the method includes a color difference generation procedure for generating color information of a color difference component at the pixel positions of the triangular grid.
- It is preferable that the method further includes a correction procedure for correcting the generated color information of the color difference component.
- the method further includes a correction procedure for correcting the generated color information of the luminance component.
- A second image processing method includes an image acquisition procedure for acquiring a first image that is represented by first to n-th color components (n ≥ 2) and in which a plurality of pixels, each having color information of one color component, are arranged in a triangular lattice.
- It also includes an interpolation procedure for interpolating color information of the first color component at pixels where the first color component is missing, using the color information of the acquired first image.
- In the interpolation procedure, average information of the first color component is obtained by a variable calculation, curvature information of at least one of the first to n-th color components is obtained by a fixed calculation, and interpolation is performed based on the average information and the curvature information.
- The second image processing method preferably further includes a similarity determination procedure for determining the strength of similarity in at least three directions, and the interpolation procedure preferably makes the calculation of the average information of the first color component variable based on the similarity determined in the similarity determination procedure. The curvature information is preferably obtained by a second-derivative calculation.
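The combination of variable average information and fixed curvature information can be illustrated with a one-dimensional sketch. The correction weight k and the neighbour layout are assumptions for illustration; the patent's FIG. 5 defines the coefficient values actually used for the curvature information.

```python
def interpolate_with_curvature(g_left, g_right, r_left, r_center, r_right, k=0.5):
    """Interpolate a missing G value at a pixel that carries only R.

    The estimate is the average of the two G neighbours (the "average
    information") plus a fraction of the curvature of the R component at
    the same place (the "curvature information", here minus the discrete
    second derivative).  The weight k = 0.5 is an assumption for
    illustration only.
    """
    average = (g_left + g_right) / 2.0
    curvature = 2.0 * r_center - r_left - r_right  # minus the 2nd derivative
    return average + k * curvature / 2.0
```

Where both components vary linearly the curvature term vanishes and the result reduces to the plain average, so the correction only acts where the co-located component bends.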
- A third image processing method acquires a first image that is represented by a plurality of color components and in which a plurality of pixels, each having color information of one color component, are arranged in a non-rectangular pattern.
- The third method includes a first direction group similarity calculation procedure for calculating a similarity for each direction of a first direction group, and a second direction group similarity calculation procedure for calculating a similarity for each direction of a second direction group composed of a plurality of directions that are each orthogonal to at least one direction of the first direction group and different from the first direction group.
- It further includes a similarity determination procedure that determines the strength of similarity among the first direction group by using the similarity of the first direction group and the similarity of the second direction group together.
- the method further includes a color information generating step of generating at least one new color information at a pixel position of the first image based on the determination result of the similarity determining step.
- It is preferable that the color information generation procedure generates color information of the second color component and/or the third color component at pixels having the first color component. It is also preferable that the color information generation procedure generates color information of a luminance component different from the color information of the first image, and color information of a color difference component different from the color information of the first image.
- It is preferable that the color information generation procedure generates color information of three kinds of color difference components: (1) a color difference component between the first and second color components, (2) a color difference component between the second and third color components, and (3) a color difference component between the third and first color components.
- When the first direction group consists of directions D1, D2, ..., DN and the second direction group consists of directions D1', D2', ..., DN' (where Di' is a direction orthogonal to Di, i = 1, 2, ..., N, and N ≥ 2), and the calculated similarities are CD1, CD2, ..., CDN and CD1', CD2', ..., CDN', it is preferable to determine the strength of similarity among the first direction group using a function based on the ratio (CD1'/CD1) : (CD2'/CD2) : ... : (CDN'/CDN).
- It is preferable that the pixels of the first image are arranged in a triangular lattice and that N = 3 in both the first and second direction group similarity calculation procedures.
- A fourth image processing method acquires a first image that is represented by a plurality of color components and in which a plurality of pixels, each having color information of one color component, are arranged in a non-rectangular pattern.
- The fourth method includes a first direction group similarity calculation procedure for calculating, for each direction of a first direction group, a similarity composed of color information at a first pixel interval, and a second direction group similarity calculation procedure for calculating, for each direction of a second direction group composed of a plurality of directions different from the first direction group, a similarity composed of color information at a second pixel interval.
- It further includes a similarity determination procedure that determines the strength of similarity among the first direction group by using the similarity of the first direction group and the similarity of the second direction group together.
- the method further includes a color information generating step of generating at least one new color information at a pixel position of the first image based on the determination result of the similarity determining step.
- It is preferable that the first direction group consists of directions in which color information of the same color component is arranged at the first pixel interval, and the second direction group consists of directions in which color information of the same color component is arranged at the second pixel interval.
- It is preferable that the first image is represented by first to third color components and that both the first and second direction group similarity calculation procedures calculate the similarity using at least two of the following similarity components: (1) a similarity component composed of color information of only the first color component, (2) one composed of color information of only the second color component, and (3) one composed of color information of only the third color component.
- the first pixel interval is preferably longer than the second pixel interval.
- It is preferable that the first pixel interval is about three pixels and the second pixel interval is about two pixels.
- It is preferable that both the first and second direction group similarity calculation procedures calculate the similarity by including not only the similarity calculated for the pixel to be processed but also the similarities calculated for the pixels surrounding it.
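The inclusion of surrounding similarities can be sketched as a weighted ("peripheral") addition over a per-pixel similarity map. The 4:1 center-to-neighbour weighting is an assumption; the patent's FIG. 4 shows the coefficients actually used for peripheral addition.

```python
def peripheral_similarity(sim_map, y, x, center_weight=4, neighbor_weight=1):
    """Combine the similarity at (y, x) with that of its neighbours.

    `sim_map` is a 2-D list of per-pixel directional similarity values.
    The center pixel is weighted more heavily than its four neighbours;
    the 4:1 weighting here is an assumption for illustration.  Pixels
    outside the map are simply skipped and the weights renormalized.
    """
    h, w = len(sim_map), len(sim_map[0])
    total = center_weight * sim_map[y][x]
    norm = center_weight
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ny, nx = y + dy, x + dx
        if 0 <= ny < h and 0 <= nx < w:
            total += neighbor_weight * sim_map[ny][nx]
            norm += neighbor_weight
    return total / norm
```

Averaging the similarity over a small neighbourhood in this way makes the subsequent direction decision less sensitive to noise at any single pixel.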
- In the first image, a plurality of pixels are preferably arranged in a triangular lattice.
- the first image is represented by first to third color components, and the first to third color components are preferably distributed at a uniform pixel density.
- A fifth image processing method acquires a first image that is represented by first to n-th color components (n ≥ 2) and in which a plurality of pixels, each having color information of one color component, are arranged in a triangular lattice.
- In an interpolation procedure, average information of the first color component is obtained using color information of a region that includes the second-nearest pixels, and interpolation is performed.
- It is preferable that the fifth image processing method further includes a similarity determination procedure for determining the strength of similarity in at least three directions, and that the interpolation procedure obtains the average information of the first color component in accordance with the similarity determined in the similarity determination procedure.
- A sixth image processing method acquires a first image that is represented by a plurality of color components and in which a plurality of pixels, each having color information of one color component, are arranged in a triangular lattice.
- In a color information generation procedure, for a pixel to be processed of the first image, color information is generated by weighted addition of color information in a region that includes the pixels whose color component differs from that of the pixel to be processed and which are second-nearest to it.
- It is preferable that the sixth image processing method further includes a similarity determination procedure for determining the strength of similarity in at least three directions, and that the color information generation procedure makes the coefficient values of the weighted addition variable according to the similarity strength determined in the similarity determination procedure. Further, when the first image is represented by first to third color components and a pixel having the first color component is the pixel to be processed, it is preferable that the color information generation procedure performs weighted addition of color information in a region including the pixel to be processed, the pixels whose second color component is second-nearest, and the pixels whose third color component is second-nearest.
- It is preferable to further include a correction procedure in which the color information of a color component different from the color information of the first image, generated in the color information generation procedure, is corrected by filter processing with predetermined fixed filter coefficients. In this case, the filter coefficients preferably include both positive and negative values.
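Such a fixed correction filter with positive and negative coefficients can be sketched in one dimension. The kernel values below are an assumption for illustration (the patent's FIGS. 11 to 16 give the low-pass and Laplacian coefficients actually used); the kernel sums to one, so flat regions pass through unchanged.

```python
CORRECTION_KERNEL = (-1, 6, -1)  # divided by 4; mixes positive and negative

def correct_plane(row):
    """Apply a fixed 1-D correction filter to one row of a color plane.

    The kernel mixes positive and negative coefficients and sums to one,
    so flat regions are preserved while high-frequency detail is
    emphasised.  The specific values are an assumption for illustration.
    Edges are handled by clamping tap indices to the row boundaries.
    """
    n = len(row)
    out = []
    for i in range(n):
        taps = [row[min(max(i + k, 0), n - 1)] for k in (-1, 0, 1)]
        out.append(sum(c * t for c, t in zip(CORRECTION_KERNEL, taps)) / 4.0)
    return out
```

In two dimensions the same idea becomes a Laplacian-style kernel; the sign mix is what restores detail that the preceding averaging steps attenuate.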
- A seventh image processing method acquires a first image that is represented by first to n-th color components (n ≥ 2) and in which a plurality of pixels, each having color information of one color component, are arranged in a triangular lattice.
- Color information of a color difference component is generated using color information of the pixels whose color component is second-nearest.
- It is preferable that the color difference generation procedure generates the color information of the color difference component based on (1) the color information of the first color component of the pixel to be processed and (2) average information of the color information of the second color component in a region including the pixels of the first image whose second color component is second-nearest to that pixel. It is further preferable that the color difference generation procedure generates the color information of the color difference component additionally based on curvature information of the second color component for the pixel to be processed.
- It is preferable to further include a similarity determination procedure for determining the strength of similarity in at least three directions, and that the color difference generation procedure generates the color information of the color difference component in accordance with the strength of the similarity.
- the second image is output to the same pixel position as the first image.
- An eighth image processing method acquires a first image that is represented by first to third color components and in which a plurality of pixels, each having color information of one color component, are uniformly distributed.
- Color information of a color component different from the color information of the first image is generated by weighted addition of the acquired color information of the first image with variable coefficient values of zero or more.
- It is preferable that the color information generation procedure always performs the weighted addition at a uniform ratio of the first, second, and third color components for all pixels of the first image.
- the image processing method further includes a similarity determination procedure for determining the strength of the similarity in a plurality of directions, and the color information generation procedure is based on the strength of the similarity determined in the similarity determination procedure. It is preferable to make the coefficient value of the weighted addition variable.
- In the eighth image processing method, a plurality of pixels are preferably arranged in a triangular lattice.
- It is preferable to further include a correction procedure in which the color information of a color component different from the color information of the first image, generated in the color information generation procedure, is corrected by filter processing with predetermined fixed filter coefficients. In this case, the filter coefficients preferably include both positive and negative values.
- A ninth image processing method of the present invention includes an image acquisition procedure for acquiring a first image composed of a plurality of pixels represented by three or more kinds of color components, each pixel having color information of one color component; a color information generation procedure for generating color information of a luminance component and color information of at least three kinds of color difference components using the acquired color information of the first image; and an output procedure for outputting a second image using the generated color information of the luminance component and the color difference components.
- the ninth image processing method further includes a conversion procedure of converting the color information of the luminance component and the color information of at least three types of color difference components into color information of three types of color components.
- the second image is output using the color information of the three types of color components converted by the conversion procedure.
- the color information of the luminance component and the color information of the color difference component generated in the color information generation procedure are color information of components different from the three or more types of color components of the first image.
- the first image is represented by first to third color components, a plurality of pixels are uniformly distributed, and the color information generation procedure is as follows: (1) The color component ratio of the first to third color components is Color information of a luminance component composed of 1: 1: 1, (2) color information of a color difference component between a first color component and a second color component, and (3) a second color component and a third color. It is preferable to generate color information of a color difference component between the components and (4) color information of a color difference component between the third color component and the first color component.
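This luminance/color-difference decomposition can be sketched per pixel. Because the three pairwise differences always sum to zero, the four planes over-determine RGB and the original components can be recovered exactly; the exact plane definitions below are an assumption consistent with the text (the patent's FIG. 17 names the planes Y, Cgb, Cbr, Crg).

```python
def to_y_and_differences(r, g, b):
    """Forward transform: luminance at a 1:1:1 component ratio plus the
    three pairwise colour-difference components listed in the text."""
    y = (r + g + b) / 3.0
    crg = r - g   # (2) difference between first and second components
    cgb = g - b   # (3) difference between second and third components
    cbr = b - r   # (4) difference between third and first components
    return y, crg, cgb, cbr

def to_rgb(y, crg, cgb, cbr):
    """Inverse transform.  Because crg + cgb + cbr == 0, the four planes
    over-determine RGB and each component is recovered exactly."""
    r = y + (crg - cbr) / 3.0
    g = y + (cgb - crg) / 3.0
    b = y + (cbr - cgb) / 3.0
    return r, g, b
```

The redundancy of the third difference plane is what allows each chrominance plane to be corrected independently before converting back, as the later figure description (FIG. 17) illustrates.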
- It is preferable that the ninth image processing method further includes a similarity determination procedure for determining the strength of similarity in a plurality of directions.
- It is preferable that the color information generation procedure generates the color information of the luminance component and of the at least three kinds of color difference components in accordance with the similarity determined in the similarity determination procedure. It is also preferable that the plurality of pixels of the first image are arranged in a triangular lattice. A tenth image processing method of the present invention includes an image acquisition procedure for acquiring a first image composed of a plurality of pixels represented by three or more kinds of color components, each pixel having color information of one color component.
- The tenth method further includes a color difference generation procedure for generating color information of at least three kinds of color difference components using the acquired color information of the first image, a correction procedure for performing correction processing on the generated color difference component color information, and an output procedure for outputting a second image using the corrected color difference component color information.
- the first image is represented by first to third color components
- It is preferable that the color difference generation procedure generates (1) color information of a color difference component between the first and second color components, (2) color information of a color difference component between the second and third color components, and (3) color information of a color difference component between the third and first color components.
- the first image is represented by first to third color components, and the color difference generation procedure uses the color information of the first image to generate color information of a luminance component different from the color information of the first image.
- the first to third color components are evenly distributed to a plurality of pixels, and the color difference generation procedure determines that the color component ratio of the first to third color components is 1 as a luminance component. It is preferable to generate color information of a luminance component composed of 1: 1.
- the second image is output at the same pixel position as the first image.
- a computer-readable computer program product has an image processing program for causing a computer to execute the procedure of the image processing method described in any of the above.
- This computer program product is preferably a recording medium on which an image processing program is recorded.
- FIG. 1 is a functional block diagram of the electronic camera according to the first embodiment.
- FIG. 2 is a flowchart showing an outline of image processing performed by the image processing unit in the first embodiment.
- FIG. 3 is a diagram showing a positional relationship between pixels obtained by an image sensor in a delta arrangement.
- FIG. 4 is a diagram showing coefficients used for peripheral addition.
- FIG. 5 is a diagram showing coefficient values used when obtaining the curvature information dR.
- FIG. 6 is a diagram showing an achromatic spatial frequency reproduction region in a delta arrangement.
- FIG. 7 is a diagram showing coefficient values used for the one-dimensional displacement processing.
- FIG. 8 is a diagram illustrating pixel positions used in the calculation according to the second embodiment.
- FIG. 9 is a diagram showing coefficient values used when obtaining the curvature information dR, dG, and dB.
- FIG. 10 is a flowchart showing an outline of image processing performed by the image processing unit in the third embodiment.
- FIG. 11 is a diagram illustrating coefficient values of a low-pass filter.
- FIG. 12 is a diagram showing coefficient values of another low-pass filter.
- FIG. 13 is a diagram illustrating coefficient values of another low-pass filter.
- FIG. 14 is a diagram showing Laplacian coefficient values.
- FIG. 15 is a diagram showing coefficient values of other Laplacians.
- FIG. 16 is a diagram showing coefficient values of other Laplacians.
- FIG. 17 is a diagram showing the concept of generating a luminance plane (Y) and three color difference planes (Cgb, Cbr, Crg) directly from the delta plane of the delta array and then converting them back to the original RGB color system.
- FIG. 18 is a flowchart showing an outline of the image processing performed by the image processing unit in the fourth embodiment.
- FIG. 19 is a diagram showing a state where the Crg and Cbr components are obtained at the R position and the Cgb component is obtained at the nearest neighbor pixel.
- FIG. 20 is a diagram illustrating a spatial frequency reproduction region of each of the RGB components of the delta array.
- FIG. 21 is a flowchart illustrating an outline of image processing performed by the image processing unit in the sixth embodiment.
- FIG. 22 is a diagram defining adjacent pixels.
- FIG. 23 is a diagram showing a state in which the image processing program is provided through a recording medium or a data signal.
- FIG. 24 is a diagram showing a Bayer array, a delta array, and a honeycomb array of RGB color filters.
- FIG. 25 is a diagram showing the concept of a process of interpolating image data obtained in a delta array on a triangular lattice and restoring the image data to a square lattice data.
- FIG. 26 is a diagram illustrating the azimuth relationship of the similarity in the fifth embodiment.
- FIG. 27 is a flowchart showing the image restoration processing and the gradation processing.

BEST MODE FOR CARRYING OUT THE INVENTION
- FIG. 1 is a functional block diagram of the electronic camera according to the first embodiment.
- The electronic camera 1 includes an A/D conversion unit 10, an image processing unit 11, a control unit 12, a memory 13, a compression/decompression unit 14, and a display image generation unit 15. It also includes a memory card interface unit 17 for interfacing with a memory card (card-type removable memory) 16, and an external interface unit 19 for interfacing with an external device such as a personal computer (PC) 18 via a predetermined cable or wireless transmission path. These blocks are interconnected via a bus 29.
- the image processing unit 11 is composed of, for example, a one-chip microprocessor dedicated to image processing.
- the electronic camera 1 further includes a photographing optical system 20, an image sensor 21, an analog signal processor 22, and a timing controller 23.
- An optical image of the subject obtained by the imaging optical system 20 is formed on the image sensor 21, and the output of the image sensor 21 is connected to the analog signal processor 22.
- the output of the analog signal processor 22 is connected to the A / D converter 10.
- the output of the control unit 12 is connected to the timing control unit 23.
- The output of the timing control unit 23 is connected to the image sensor 21, the analog signal processing unit 22, the A/D conversion unit 10, and the image processing unit 11.
- the image sensor 21 is composed of, for example, a CCD or the like.
- The electronic camera 1 further includes an operation unit 24, corresponding to a release button, a mode-switching selection button, and the like, and a monitor 25.
- The output of the operation unit 24 is connected to the control unit 12, and the output of the display image generation unit 15 is connected to the monitor 25.
- A monitor 26 and a printer 27 are connected to the PC 18, on which an application program recorded on the CD-ROM 28 has been installed in advance.
- In addition to a CPU, a memory, and a hard disk (not shown), the PC 18 has a memory card interface unit (not shown) for interfacing with the memory card 16 and an external interface unit (not shown) for interfacing with external devices such as the electronic camera 1.
- The control unit 12 controls, via the timing control unit 23, the timing of the image sensor 21, the analog signal processing unit 22, and the A/D conversion unit 10.
- the image sensor 21 generates an image signal corresponding to the optical image.
- The image signal is subjected to predetermined signal processing in the analog signal processing unit 22, digitized in the A/D conversion unit 10, and supplied as image data to the image processing unit 11.
- Since the R (red), G (green), and B (blue) color filters are arranged in a delta array (described later) on the image sensor 21, the image data supplied to the image processing unit 11 is represented by the RGB color system. Each pixel constituting the image data has color information of one of the RGB components.
- the image processing unit 11 performs image processing such as gradation conversion and contour emphasis on such image data in addition to performing image data conversion processing described later.
- the image data on which such image processing has been completed is subjected to predetermined compression processing by the compression / decompression section 14 as necessary, and is recorded on the memory card 16 via the memory card interface section 17.
- image data that has undergone image processing may be recorded on the memory card 16 without compression, or may be converted into the color system used by the monitor 26 and the printer 27 on the PC 18 side and supplied to the PC 18 via the external interface 19.
- the image data recorded on the memory card 16 is read out via the memory card interface unit 17, subjected to decompression processing by the compression / decompression unit 14, and displayed on the monitor 25 through the display image generation unit 15.
- the decompressed image data may, instead of being displayed on the monitor 25, be converted to the color system used by the monitor 26 and the printer 27 on the PC 18 side and supplied to the PC 18 via the external interface unit 19.
- FIG. 25 is a diagram showing the concept of these processes.
- the triangular lattice refers to an arrangement in which the pixels of the image sensor are shifted by 1/2 pixel for each row, so that connecting the centers of adjacent pixels forms triangles. The center point of a pixel may be called a grid point.
- the delta arrangement in Fig. 24 (b) is arranged in a triangular lattice. An image obtained with the arrangement shown in Fig. 24 (b) may be called an image in which the pixels are arranged in a triangular shape.
- the square lattice refers to an arrangement in which pixels of the image sensor are arranged without being shifted for each row. The arrangement forms a square when the centers of adjacent pixels are connected.
- the Bayer array in Fig. 24 (a) is arranged in a square lattice.
- An image obtained in the arrangement shown in Fig. 24 (a) may be called an image in which pixels are arranged in a rectangular (square) shape.
- FIG. 2 is a flowchart illustrating an outline of the image processing performed by the image processing unit 11.
- step S1 an image obtained by the image sensor 21 in the delta arrangement is input.
- step S2 the similarity is calculated.
- step S3 similarity is determined based on the similarity obtained in step S2.
- step S4 an interpolation value of a missing color component in each pixel is calculated based on the similarity determination result obtained in step S3.
- step S5 the obtained RGB color image is output.
- the RGB color image output in step S5 is image data obtained on a triangular lattice.
- step S6 one-dimensional displacement processing is performed on the image data obtained on the triangular lattice.
- one-dimensional displacement processing is performed on every other row of the image data, as described later.
- step S7 square lattice image data is output by combining the image data subjected to the one-dimensional displacement processing and the image data not subjected to the one-dimensional displacement processing.
- steps S1 to S5 are interpolation processing on the triangular lattice, and steps S6 and S7 are the conversion to a square lattice (square processing).
- FIG. 3 is a diagram showing a positional relationship between pixels obtained by the image sensor 21 in the delta arrangement.
- the pixels of the image pickup device 21 are arranged with a shift of 1/2 pixel for each row, and the color filters are arranged on each pixel at a ratio of RGB components of 1: 1: 1. That is, the colors are evenly distributed.
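As a toy illustration of this layout, the sketch below builds a delta-array color filter pattern in which every other row is offset by half a pixel and the R, G, and B filters occur in a 1:1:1 ratio; the exact phase of the color pattern is an assumption for illustration, not taken from the patent.

```python
def delta_cfa(rows, cols):
    """Toy delta-array colour filter layout.

    Each row is shifted by half a pixel and the R, G, B filters
    appear in a 1:1:1 ratio. The phase of the colour pattern is an
    assumption for illustration only.
    """
    colors = "RGB"
    grid = []
    for y in range(rows):
        offset = 0.5 if y % 2 else 0.0  # half-pixel shift on every other row
        grid.append([(x + offset, y, colors[(x + y) % 3])
                     for x in range(cols)])
    return grid
```

For a 3x3 patch this yields exactly three pixels of each color, reflecting the even distribution described above.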
- the color filter in the Bayer array has RGB arranged at a ratio of 1:2:1 (Fig. 24 (a)).
- a pixel having R component color information is called an R pixel
- a pixel having B component color information is called a B pixel
- a pixel having G component color information is called a G pixel.
- the image data obtained by the image sensor 21 has only one color component for each pixel.
- the interpolation process is a process for calculating color information of other color components missing in each pixel by calculation.
- the color information of the G and B components is interpolated at the R pixel position will be described.
- the pixel to be processed, which is an R pixel, is denoted Rctr (the pixel to be interpolated).
- each pixel position existing around the pixel Rctr is expressed using an angle.
- the B pixel existing in the 60-degree direction is expressed as B060
- the G pixel is expressed as G060.
- this angle is not exact but approximate.
- the direction connecting 0 ° -180 ° is 0 ° direction
- the direction connecting 120 ° -300 ° is 120 ° direction
- the direction connecting 240 ° -60 ° is 240 ° direction
- the direction connecting 30 ° -210 ° is called the 30-degree direction
- the direction connecting 150-330 degrees is called the 150-degree direction
- the direction connecting 270-90 degrees is called the 270-degree direction.
- C120 = | G120 - Rctr |
- C240 = | G240 - Rctr |
- the strength of the similarity in each direction is determined so as to change continuously at the reciprocal ratio of the similarity values, that is, at (1/C000) : (1/C120) : (1/C240). Specifically, the following weighting coefficients are calculated.
- the weighting coefficients w000, w120, and w240 are values according to the strength of similarity. For the Bayer arrangement, as shown in U.S. Pat. No. 5,552,827, U.S. Pat. No. 5,629,734, and JP-A-2001-245314, there are two types of methods: determining continuous weighting coefficients, and determining them discretely based on threshold value judgment.
- in the Bayer array, the nearest-neighbor G component exists densely in four directions with respect to the pixel to be interpolated, so either the continuous determination method or the discrete determination method can be used with almost no problem.
- in the delta array, however, the nearest G component exists in only three directions, 0, 120, and 240 degrees, so it is important to determine the direction continuously based on the weighting coefficients.
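A minimal sketch of the continuous determination, assuming the weights are simply the normalized reciprocals of the three similarity values (the small epsilon guarding against division by zero on flat areas is an addition, not part of the patent text):

```python
def continuous_weights(c000, c120, c240, eps=1e-6):
    """Direction weights at the reciprocal ratio (1/C000):(1/C120):(1/C240).

    Smaller similarity values mean stronger similarity, hence larger
    weights; eps is an assumed guard against division by zero.
    """
    recips = [1.0 / (c + eps) for c in (c000, c120, c240)]
    total = sum(recips)
    return tuple(r / total for r in recips)  # (w000, w120, w240)
```

The normalization makes the three weights sum to one, so the interpolated average stays in the range of the surrounding color information.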
- the nearest-neighbor G components are the pixels that have a G component and share a side with the pixel to be interpolated; in FIG. 3 they are G000, G120, and G240.
- the interpolation values of the G component and the B component are calculated using the above weighting coefficients.
- the interpolated value consists of two items, average information and curvature information.
- Gave = w000 * Gave000 + w120 * Gave120 + w240 * Gave240 ... (8)
- Gave000 = (2 * G000 + G180) / 3 ... (10)
- Gave120 = (2 * G120 + G300) / 3 ... (12)
- the first adjacent pixel is a pixel separated by about 1 pixel pitch
- the second adjacent pixel is a pixel separated by about 2 pixel pitches. For example, Rctr and G120 are separated by about 1 pixel pitch, while Rctr and G300 are separated by about 2 pixel pitches.
- FIG. 22 is a diagram that defines adjacent pixels. “center” is a pixel to be processed, “nearest” is the nearest or nearest neighbor or the first neighboring pixel, and “2nd” is the second neighboring pixel.
- a term corresponding to the curvature information dR is generally calculated in consideration of the same directionality as the average information.
- the directionality of the average information (0, 120, and 240 degrees) and the direction of the curvature information that can be extracted (30, 150, and 270 degrees) do not match.
- the curvature information in the 30-degree direction and the 150-degree direction are averaged to define the curvature information in the 0-degree direction, and the interpolation value may be calculated in consideration of the directionality of the curvature information as in the case of the Payer array.
- the average information considering the directionality is corrected using the curvature information having no directionality. As a result, it is possible to improve gradation clarity up to a high-frequency region in every direction.
- equation (16) for calculating the curvature information dR uses the coefficient values shown in the figure. It takes the difference between the interpolation target pixel Rctr and a peripheral pixel, the difference between the peripheral pixel on the opposite side and Rctr, and then the difference of those differences; it is therefore obtained by a second-derivative operation.
- the curvature information dR is information indicating the degree of change in the color information of a certain color component.
- in other words, it is information indicating the degree of curvature of the change; that is, an amount reflecting structural information on the concavity and convexity of the change in the color information of that color component.
- the curvature information dR of the R pixel position is obtained using the Rctr of the pixel to be interpolated and the color information of the surrounding R pixels.
- the G component color information is used for the G pixel position
- the B component color information is used for the B pixel position.
- Interpolation processing of the G and B components at the R pixel position was performed.
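Putting the average information of equations (8), (10), and (12) together with the curvature correction, a sketch of the G interpolation at an R pixel might look as follows. The Gave240 term and the plain six-neighbour second derivative used for dR are assumptions by analogy, since the actual coefficients come from equation (16) and its figure:

```python
def interpolate_g_at_r(w, g, rctr, r_neighbors):
    """g: G values keyed by angle (0, 60, ..., 300); w: (w000, w120, w240).

    Average information per equations (8), (10), (12); the 240-degree
    term and the curvature term dR (a plain second derivative over the
    surrounding R pixels) are assumed analogues.
    """
    gave000 = (2 * g[0]   + g[180]) / 3.0   # equation (10)
    gave120 = (2 * g[120] + g[300]) / 3.0   # equation (12)
    gave240 = (2 * g[240] + g[60])  / 3.0   # assumed analogue
    gave = w[0] * gave000 + w[1] * gave120 + w[2] * gave240
    # curvature information: centre minus mean of surrounding R pixels
    dr = rctr - sum(r_neighbors) / len(r_neighbors)
    return gave + dr
```

On a flat field the curvature term vanishes and the result reduces to the weighted average, as expected.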
- "interpolation of B and R components at the G pixel position" is obtained from "interpolation of G and B components at the R pixel position" by a cyclic replacement of symbols:
- R becomes G
- G becomes B
- B becomes R.
- similarly, "interpolation processing of R and G components at the B position" is a cyclic replacement of the symbols of "interpolation processing of B and R components at the G position", with G replaced by B, B by R, and R by G.
- the same process may be performed. That is, the same arithmetic routine (subroutine) can be used in each interpolation process.
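The cyclic symbol replacement means a single arithmetic routine can serve all three pixel classes, for example by permuting which plane plays the "own colour" role; `interp_two` below is a hypothetical subroutine standing in for the interpolation of the two missing components:

```python
# cyclic channel roles: at an R pixel the missing components are G and B,
# at a G pixel they are B and R, at a B pixel they are R and G
CYCLE = {"R": ("R", "G", "B"), "G": ("G", "B", "R"), "B": ("B", "R", "G")}

def interpolate_missing(pixel_color, planes, interp_two):
    """planes: dict of colour planes; interp_two: hypothetical routine
    that interpolates the two missing components, given the planes in
    (own colour, first missing, second missing) order."""
    own, m1, m2 = CYCLE[pixel_color]
    return interp_two(planes[own], planes[m1], planes[m2])
```

The permutation table is the whole "replacement of symbols": the shared subroutine never needs to know which physical color it is processing.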
- the image reconstructed as described above can bring out all the limit resolution performance of the delta array in the spatial direction.
- all hexagonal achromatic color reproduction regions of the delta array are resolved.
- a clear image can be obtained in the gradation direction. This is an extremely effective method especially for images with many achromatic parts.
- Japanese Patent Application Laid-Open No. 8-340455 discloses an example in which restored data is generated at pixel positions different from those of the triangular lattice, or on a virtual square lattice having twice the pixel density of the triangular lattice. It also shows an example in which half of the rows of the triangular lattice are restored at the same pixel positions and the other half at positions shifted by 1/2 pixel from the triangular lattice. However, this performs the interpolation directly at the square-grid positions, applying a different interpolation processing as the distance to the adjacent pixels of the triangular grid changes. Japanese Patent Application Laid-Open No. 2001-103295, on the other hand, forms a square lattice by two-dimensional cubic interpolation, and Japanese Patent Application Laid-Open No. 2000-194386 generates virtual double-density square lattice data.
- the interpolation data is restored with the triangular lattice at the same pixel position as the delta arrangement. This makes it possible to bring out the spatial frequency limit resolution performance of the delta array. If the color difference correction processing and the edge enhancement processing are also performed in the triangular arrangement, the effect works well isotropically. Therefore, once the image is restored on the triangular grid, an RGB value is generated for each pixel.
- the image data generated on the triangular lattice is then converted to a square lattice. It has been found experimentally that keeping as much of the original data as possible is important for maintaining the resolution performance of the triangular lattice. Therefore, a displacement process that shifts every other row by half a pixel is performed; the remaining rows are left untouched, so the Nyquist resolution of vertical lines in the triangular arrangement is maintained. Experiments have shown that if the displaced values are estimated by one-dimensional cubic interpolation within the row being processed, the vertical-line resolution of the triangular lattice can be maintained with little problem, although there is some influence from the Nyquist frequency of the square lattice.
- tmp_R [x, y] = ( -1 * R [x - (3/2) pixel, y] + 5 * R [x - (1/2) pixel, y] + 5 * R [x + (1/2) pixel, y] - 1 * R [x + (3/2) pixel, y] ) / 8 ... (19)
- FIG. 7 is a diagram showing coefficient values used in equation (19).
- the one-dimensional displacement processing by the cubic described above can be said to be a processing of applying a one-dimensional filter consisting of positive and negative coefficient values.
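A sketch of the one-dimensional displacement, assuming the symmetric cubic kernel (-1, 5, 5, -1)/8 suggested by equation (19); the clamping of edge samples is an assumed boundary policy the text does not specify:

```python
def shift_half_pixel(row):
    """Resample one row at positions shifted by half a pixel using the
    kernel (-1, 5, 5, -1)/8; edge samples are clamped (assumed policy)."""
    n = len(row)
    out = []
    for x in range(n):
        s = [row[min(max(x + d, 0), n - 1)] for d in (-1, 0, 1, 2)]
        out.append((-s[0] + 5 * s[1] + 5 * s[2] - s[3]) / 8.0)
    return out

def to_square_lattice(image):
    # displace every other row; leave the remaining rows untouched so the
    # vertical-line Nyquist resolution of the triangular lattice survives
    return [shift_half_pixel(row) if y % 2 else list(row)
            for y, row in enumerate(image)]
```

Because the kernel coefficients sum to 8, flat regions pass through unchanged, which is consistent with keeping as much of the original data as possible.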
- the image restoration method described above not only maintains the limit resolution of the triangular arrangement to the maximum, but the restoration processing on the triangular grid can also use the same processing routine for all pixels; since only half of the rows require simple one-dimensional processing, a simpler algorithm than the conventional technology is achieved without increasing the amount of data.
- Post-processing is performed to remove false colors and return to the RGB color system.
- edge enhancement processing is performed on the luminance component Y plane. In the case of a delta array, exactly the same post-processing may be applied.
- the configuration of the electronic camera 1 of the third embodiment is the same as that of FIG. 1 of the first embodiment, and the description thereof is omitted.
- FIG. 10 is a flowchart illustrating an outline of image processing performed by the image processing unit 11 in the third embodiment. It is assumed that the interpolation processing is performed on a triangular lattice as in the first embodiment. Figure 10 starts when the RGB color image after interpolation processing is input. That is, steps S1 to S5 in FIG. 2 of the first embodiment are completed, and thereafter, the flowchart in FIG. 10 starts.
- step S11 the RGB color image data after the interpolation processing is input.
- step S12 the RGB color system is converted to the YCrCgCb color system unique to the third embodiment.
- step S13 a low-pass filter process is performed on the color difference plane (CrCgCb plane).
- step S14 edge enhancement processing is performed on the luminance plane (Y plane).
- step S15 when the false color on the color difference plane has been removed, conversion is performed to return the YCrCgCb color system to the original RGB color system.
- step S16 the obtained RGB color image is output.
- the RGB color image data output in step S16 is image data obtained on a triangular lattice.
- steps S6 and S7 in FIG. 2 When performing the square processing on the image data obtained on the triangular lattice, the processing of steps S6 and S7 in FIG. 2 is performed as in the first embodiment. Hereinafter, the details of the processing in steps S12 to S15 will be described.
- FIG. 14 illustrates the coefficient values used for the Laplacian processing of equation (36). The Laplacian is not limited to the one shown here; another one may be used. Figures 15 and 16 show examples of other Laplacians.
- K is a value greater than or equal to zero, and is a parameter for adjusting the level of edge enhancement.
- the color system is returned to the original RGB color system.
- This enables color difference correction with extremely high false color suppression capability.
- by performing the color difference correction processing and the luminance correction processing on the triangular lattice, correction processing suited to the directionality of the delta array becomes possible.
- the process of interpolating the color information of the color component missing in each pixel on the triangular lattice has been described.
- the fourth embodiment an example of image restoration processing of a different system from the interpolation processing in the RGB plane of the first embodiment will be described.
- the luminance component and the color difference components are created directly from the delta array without interpolating in the RGB plane.
- the usefulness of separating one luminance plane and three chrominance planes described in the third embodiment is taken over, and the luminance plane maximizes the achromatic luminance resolution.
- the three color difference planes are responsible for maximizing the color resolution of the three primary colors.
- the configuration of the electronic camera 1 according to the fourth embodiment is the same as that of FIG. 1 according to the first embodiment, and a description thereof will be omitted.
- FIG. 18 is a flowchart showing an outline of the image processing performed by the image processing unit 11 in the fourth embodiment.
- step S21 an image obtained by the delta-array image sensor 21 is input.
- step S22 the similarity is calculated.
- step S23 similarity is determined based on the similarity obtained in step S22.
- step S24 a luminance plane (Y0 plane) is generated based on the similarity determination result obtained in step S23 and the delta array image data obtained in step S21.
- step S25 a correction process is performed on the luminance plane (Y0 plane) obtained in step S24.
- step S26 color difference components Cgb, Cbr, and Crg are generated based on the similarity determination result obtained in step S23 and the delta-array image data obtained in step S21.
- at the end of step S26, the color difference components Cgb, Cbr, and Crg have not yet been generated at all pixels.
- step S27 interpolation processing is performed on the color difference components that have not been generated based on the surrounding color difference components. As a result, the color difference plane of Cgb, Cbr, Crg is completed.
- step S28 the generated Y, Cgb, Cbr, and Crg color systems are converted to the RGB color system.
- step S29 the converted RGB color image data is output.
- steps S21 to S29 are all processes on the triangular lattice. Therefore, the RGB image data output in step S29 is triangular-lattice image data.
- if square processing is necessary, the same processing as steps S6 and S7 in FIG. 2 of the first embodiment is performed. 1. Calculation of similarity
- the similarity is calculated.
- the similarity obtained by an arbitrary method may be used.
- here, the most accurate similarity available shall be used.
- the similarity between different colors shown in the first embodiment, the similarity between same colors shown in the second embodiment, a combination thereof, or switching between different-color and same-color similarity based on a color index or the like may be used.
- the determination is made in the same manner as in the first embodiment.
- the range of the weighted addition takes the G component and the B component up to the second adjacent pixel. That is, when the equations (8) and (9) are rewritten using the definition equations of the equations (10) to (15) of the first embodiment, the equations (41) and (42) are obtained.
- since the luminance component generated in this way always includes the center pixel at a constant color-component ratio and is generated with positive, direction-dependent weighting coefficients, its potential for gradation sharpness is extremely high; an image with extremely high spatial resolution is obtained, connected very smoothly to the peripheral pixels without being affected by chromatic aberration.
- the spatial resolution reaches the limit resolution of FIG. 6 as in the first embodiment.
- since the luminance plane Y0 described above is generated using only positive coefficients, a correction process using a Laplacian is performed in order to extract the potential gradation clarity contained in it. Because the luminance plane Y0 is designed to connect extremely smoothly to the peripheral pixels with the directionality already taken into account, there is no need to compute a new direction-dependent correction term; a single process using a fixed band-pass filter may be used. As shown in FIGS. 14 to 16 of the third embodiment, several ways of taking the Laplacian in the triangular arrangement are available. If a slightly higher degree of optimality is sought, note that the luminance plane Y0 is generated by gathering the G and B components only in the 0-, 120-, and 240-degree directions; here, a case where the correction uses the Laplacian of FIG. 15, which takes the independent 30-, 150-, and 270-degree directions, will be described (equation (44)). Let Y be the corrected luminance component.
- k is a positive value and is usually set to 1. However, by setting the value to be larger than 1, the edge enhancement processing as shown in the third embodiment can be provided here.
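The fixed band-pass correction Y = Y0 + k * Laplacian(Y0) can be sketched as below; a four-neighbour Laplacian on a square grid stands in for the triangular-lattice Laplacian of FIG. 15, whose actual coefficients are given in the figure:

```python
def correct_luminance(y0, k=1.0):
    """Y = Y0 + k * Laplacian(Y0), with a square-grid 4-neighbour
    Laplacian standing in for the triangular-lattice one (assumption).
    Border pixels are left uncorrected."""
    h, w = len(y0), len(y0[0])
    y = [row[:] for row in y0]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            lap = (4 * y0[i][j] - y0[i - 1][j] - y0[i + 1][j]
                   - y0[i][j - 1] - y0[i][j + 1])
            y[i][j] = y0[i][j] + k * lap
    return y
```

Setting k larger than 1 gives the edge-enhancement behavior described in the third embodiment, while flat regions pass through unchanged.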
- the three chrominance planes are generated directly from the delta plane independently of the luminance plane Y.
- Cgb = G - B
- Cbr = B - R
- dG and dB are the same as those defined by equations (24) and (25) in the second embodiment, and the weighted averages of G and B are the same as in equations (41) and (42).
- the average information is calculated including pixels up to the second adjacent pixel, which ensures consistency with the luminance component and raises the resolution.
- dG and dB are not necessarily required, but they are added because they have the effect of increasing color resolution and vividness.
- the color difference components Cgb and Crg at the G position and the color difference components Cbr and Cgb at the B position are obtained in the same manner. At this point, the Crg and Cbr components have been obtained at the R position, and the Cgb component has been obtained at the nearest pixel.
- FIG. 19 is a diagram showing this state.
- the Cgb component at the R position is obtained from the equation (48) using the Cgb component of the pixel around the R position (interpolation processing). At this time, the calculation is performed using the direction determination result obtained at the R position.
- Cgb [center] = w000 * (Cgb [nearest000] + Cgb [nearest180]) / 2 + w120 * (Cgb [nearest120] + Cgb [nearest300]) / 2 + w240 * (Cgb [nearest240] + Cgb [nearest060]) / 2 ... (48)
- in this way, the four components of YCgbCbrCrg are obtained for all pixels. If necessary, the color difference planes Cgb, Cbr, and Crg may be subjected to correction processing, such as a color difference low-pass filter similar to that of the third embodiment, to suppress false colors.
- Y = (R + G + B) / 3
- Cgb = G - B
- Cbr = B - R
- since this is a 4-to-3 conversion, the conversion method is not unique; but to suppress color moiré and maximize the luminance resolution and color resolution, all of the Y, Cgb, Cbr, and Crg components are included so that their mutual moiré-canceling effect is used. In this way, all of the highest performance produced by the respective roles of Y, Cgb, Cbr, and Crg can be reflected in each of R, G, and B.
- R [i, j] = (9 * Y [i, j] + Cgb [i, j] - 2 * Cbr [i, j] + 4 * Crg [i, j]) / 9 ... (49)
- G [i, j] = (9 * Y [i, j] + 4 * Cgb [i, j] + Cbr [i, j] - 2 * Crg [i, j]) / 9 ... (50)
- the image restoration method according to the fourth embodiment has extremely high gradation clarity, and simultaneously achieves excellent luminance resolution performance and color resolution performance in the spatial direction, while reducing chromatic aberration. It has the effect of being strong against them.
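With Y = (R+G+B)/3, Cgb = G-B, Cbr = B-R, and Crg = R-G, equations (49) and (50) invert the forward conversion exactly; the B expression below is completed by the same cyclic symmetry (an inference, since it is not quoted above):

```python
def to_ycc(r, g, b):
    # forward 3-to-4 conversion: one luminance, three colour differences
    return ((r + g + b) / 3.0, g - b, b - r, r - g)

def to_rgb(y, cgb, cbr, crg):
    # inverse conversion; (49), (50), and a B term inferred by symmetry
    r = (9 * y +     cgb - 2 * cbr + 4 * crg) / 9.0   # equation (49)
    g = (9 * y + 4 * cgb +     cbr - 2 * crg) / 9.0   # equation (50)
    b = (9 * y - 2 * cgb + 4 * cbr +     crg) / 9.0   # inferred
    return r, g, b
```

Substituting the forward definitions into (49) makes all G and B terms cancel and leaves 9R/9, confirming the coefficients term by term.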
- if square processing is required, it can be performed in the same manner as in the first embodiment.
- in the second embodiment, the similarity was determined by calculating the similarity between same colors; at that time, only similarities in the 0-, 120-, and 240-degree directions were obtained. In the fifth embodiment, however, similarities in the 30-, 150-, and 270-degree directions are also obtained.
- the configuration of the electronic camera 1 according to the fifth embodiment is the same as that of FIG. 1 of the first embodiment, and a description thereof will be omitted.
- the description focuses on the case where G and B components are interpolated at the R pixel position. Also, refer to FIG. 8 of the second embodiment.
- At the R pixel position, there are three nearest neighbors of the G component, at the 0-, 120-, and 240-degree positions, and three second adjacent pixels, at the 60-, 180-, and 300-degree positions two pixels away.
- For the B component, there are three nearest neighbors, at the adjacent 60-, 180-, and 300-degree positions, and three second adjacent pixels, at the 0-, 120-, and 240-degree positions two pixels away.
- for the R component, the nearest-neighbor pixels are six, at the 30-, 90-, 150-, 210-, 270-, and 330-degree positions two pixels away, and the second adjacent pixels are six, at the 0-, 60-, 120-, 180-, 240-, and 300-degree positions three pixels away.
- C000 = ( | R000 - Rctr | + | R180 - Rctr | ) / 2
- C120 = ( | R120 - Rctr | + | R300 - Rctr | ) / 2
- C240 = ( | R240 - Rctr | + | R060 - Rctr | ) / 2
- the similarity between same colors defined in this way checks the directionality that matches the direction in which the G component and the B component are missing at the R pixel position.
- the information is between pixels that are very far from each other at a three-pixel interval.
- the similarities C030, C150, and C270 in the directions of 30 degrees, 150 degrees, and 270 degrees are calculated.
- the similarity between the same colors in these directions can be defined by a shorter two-pixel interval, unlike the 0, 120, and 240 degree directions.
- C030 = ( | R030 - Rctr | + | R210 - Rctr | ) / 2
- C150 = ( | R150 - Rctr | + | R330 - Rctr | ) / 2
- C270 = ( | R270 - Rctr | + | R090 - Rctr | ) / 2
- since these similarities examine directions that do not match the directions in which the G component and the B component are missing at the R pixel position, a technique is needed to utilize them effectively. 3) Peripheral addition of similarity
- Equation (58) is the same as equation (4) in the first embodiment.
- c120, c240, c030, c150, and c270 are obtained in the same manner.
- FIG. 26 shows the azimuth relationship of the similarity described above.
- what is significant in judging similarity is the directions in which the G and B components that do not exist at the processing target pixel do exist, that is, the 0-, 120-, and 240-degree directions; judging similarity in the 30-, 150-, and 270-degree directions is not meaningful in itself. Therefore, it is conceivable to first determine the direction continuously using only the similarities in the 0-, 120-, and 240-degree directions, at their reciprocal ratio; that is, by (1/C000) : (1/C120) : (1/C240).
- since the similarity C000 has the ability to resolve chromatic horizontal lines, the ky-axis direction of the frequency reproduction range of each RGB color component in the delta arrangement of FIG. 20 can be extended to the limit resolution. That is, such a determination method can extend the color resolution of a chromatic image to the limit resolution at the vertices of the hexagon in FIG. 20.
- however, because these are similarities over the long distance of a three-pixel interval, the directionality cannot be determined correctly under the influence of high-frequency wave-number components; the adverse effect is greatest in the 30-, 150-, and 270-degree directions, where only the color resolution near the midpoints of the sides of the hexagon in FIG. 20 can be exhibited.
- therefore, the short-range-correlation similarities C030, C150, and C270 are also effectively used.
- simply taking the reciprocal would only determine the similarity in the 30-, 150-, and 270-degree directions. To convert these to similarities in the 0-, 120-, and 240-degree directions, instead of the reciprocal, the similarity value itself is interpreted as expressing the similarity in the direction orthogonal to the 30-, 150-, and 270-degree directions, that is, the 120-, 240-, and 0-degree directions. The similarity in the 0-, 120-, and 240-degree directions is therefore determined by the following ratio.
- the 0 degree direction and the 270 degree direction, the 120 degree direction and the 30 degree direction, and the 240 degree direction and the 150 degree direction are orthogonal relations. This orthogonal relationship is expressed as a 0-degree direction ⁇ 270-degree direction, a 120-degree direction ⁇ 30-degree direction, and a 240-degree direction ⁇ 150-degree direction.
- the similarity determined continuously using the similarities in all six directions yields a spatial resolution that accurately reproduces the entire hexagon of FIG. 20 for chromatic images.
- the spatial resolution based on the similarity between same colors can always be achieved without being affected by chromatic aberration included in the optical system because similarity between the same color components is observed.
- the fifth embodiment it is possible to extract all the spatial color resolving power of each RGB single color originally included in the delta arrangement for any image.
- clear image restoration in the gradation direction is possible, and it shows strong performance even for systems containing chromatic aberration.
- the calculation of the similarity and the determination of the similarity in the fifth embodiment can be applied to the calculation of the similarity and the determination of the similarity in the fourth embodiment.
- Such an image restoration method has extremely high gradation clarity, achieves excellent luminance resolution performance and color resolution performance in the spatial direction at the same time, and exhibits an effect of being strong against chromatic aberration.
- to reduce false colors remaining in the image after interpolation, the RGB signal is usually converted to YCbCr, consisting of luminance and color difference, and a color difference low-pass filter is applied to the Cb and Cr planes.
- Post-processing is performed to remove false colors by applying a color difference median filter or to return to the RGB color system. Even in the case of the delta arrangement, if the Nyquist frequency cannot be completely reduced in the optical low-pass filter, appropriate false color reduction processing is required to improve the appearance.
- a post-processing method that does not impair the color resolution performance, which is an excellent feature of the delta array, as much as possible will be described.
- the configuration of the electronic camera 1 according to the sixth embodiment is the same as that shown in FIG. 1 of the first embodiment, and a description thereof will be omitted.
- FIG. 21 is a flowchart illustrating an outline of image processing performed by the image processing unit 11 in the sixth embodiment. Interpolation processing is performed on a triangular lattice as in the first, second, fourth, and fifth embodiments.
- Figure 21 starts with the input of an RGB color image after interpolation. For example, steps S1 to S5 in FIG. 2 of the first embodiment are completed, and thereafter, the flowchart in FIG. 21 starts.
- step S31 the RGB color image data after the interpolation processing is input.
- step S32 the RGB color system is converted to the YCgbCbrCrg color system unique to the sixth embodiment.
- step S33 a color determination image is generated.
- step S34 a color index is calculated using the color determination image generated in step S33.
- step S35 a color judgment of low saturation or high saturation is performed based on the color index of step S34.
- step S36 based on the color determination result in step S35, the low-pass filter to be used is switched to perform color difference correction.
- the color difference data to be corrected is generated in step S32.
- step S37 once the false colors on the color difference planes have been removed, conversion is performed to return the YCgbCbrCrg color system to the original RGB color system.
- step S38 the obtained RGB color image data is output.
- the RGB color image data output in step S38 is image data obtained on a triangular lattice.
- When performing the square processing on the image data obtained on the triangular lattice, the processing of steps S6 and S7 in FIG. 2 is performed as in the first embodiment. Hereinafter, the details of the processing in steps S32 to S37 will be described.
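The flow of steps S31 to S38 can be sketched as follows. This is a minimal illustration, not the patent's actual method: the patent uses its own YCgbCbrCrg color system, while plain luminance / color-difference planes are used here, and the box-filter sizes are assumptions chosen only to show the filter switching of step S36.

```python
import numpy as np

def box_filter(plane, k):
    """k x k box filter with edge padding (written for clarity, not speed)."""
    pad = k // 2
    q = np.pad(plane, pad, mode='edge')
    out = np.zeros_like(plane)
    h, w = plane.shape
    for dy in range(k):
        for dx in range(k):
            out += q[dy:dy + h, dx:dx + w]
    return out / (k * k)

def reduce_false_colors(rgb, threshold=30.0):
    """Hedged sketch of steps S31-S38: convert to luminance / color
    difference, evaluate a per-pixel color index, switch the low-pass
    filter by a low/high saturation decision, and convert back."""
    rgb = np.asarray(rgb, dtype=np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = (r + g + b) / 3.0           # luminance, uniform 1:1:1 weighting
    cb, cr = b - y, r - y           # illustrative color-difference planes

    # S33-S35: continuous color index per pixel, then a saturation decision
    cdiff = np.abs(cb) + np.abs(cr) + np.abs(cb - cr)
    low_sat = box_filter(cdiff, 3) < threshold   # discrete color index BW

    # S36: wide filter for low-saturation pixels, narrow for high-saturation
    cb = np.where(low_sat, box_filter(cb, 5), box_filter(cb, 3))
    cr = np.where(low_sat, box_filter(cr, 5), box_filter(cr, 3))

    # S37: return to the RGB color system (inverse of the forward transform)
    return np.stack([y + cr, y - cb - cr, y + cb], axis=-1)
```

On a uniform image the color-difference planes are zero everywhere, so the round trip leaves the data unchanged; only chromatic detail is smoothed.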
- TCbr and TCrg are also calculated in the same way.
- the color index Cdiff is calculated using the image for color determination in which the false color is reduced, and the color evaluation is performed in pixel units.
- The continuous color index Cdiff described above is compared against a threshold value and converted to a discrete color index BW.
- the threshold Th is preferably set to about 30 for 256 gradations.
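The thresholding of the continuous color index into the discrete index BW can be sketched as below; the value Th of about 30 follows the text's recommendation for data scaled to 256 gradations, and the 0/1 labels for low/high saturation are an illustrative convention.

```python
import numpy as np

def discrete_color_index(cdiff, th=30.0):
    """Convert the continuous color index Cdiff to the discrete index BW.
    0 marks low-saturation (achromatic) pixels, 1 marks high-saturation
    pixels; Th ~ 30 assumes 256 gradations, as stated in the text."""
    return np.where(np.asarray(cdiff) < th, 0, 1)
```

A pixel is judged low-saturation only while its color index stays strictly below the threshold.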
- FIG. 27 is a flowchart showing the processing.
- The user gamma correction process converts a linear gradation to an 8-bit gradation suitable for display output, that is, it compresses the dynamic range of the image to within the range of the output display device. Independently of this, if the gradation is first converted to a certain gamma space and the image restoration processing corresponding to the first to sixth embodiments is then performed, a better restoration result can be obtained. The following methods are available for this gradation conversion.
- Forward conversion: input signal x (0 ≤ x ≤ xmax), output signal y (0 ≤ y ≤ ymax)
- Inverse conversion: input signal y (0 ≤ y ≤ ymax), output signal x (0 ≤ x ≤ xmax); the input image is an RGB plane
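One way to realize this gradation conversion pair is a power-law curve, sketched below. The specific ranges xmax, ymax and the exponent are assumptions for illustration; the text only specifies that the forward conversion maps x in [0, xmax] to y in [0, ymax] and that the inverse restores the linear gradation afterwards.

```python
import numpy as np

XMAX, YMAX = 65535.0, 255.0   # example ranges; the text leaves them general

def to_gamma_space(x, gamma=2.0):
    # forward conversion: y = ymax * (x / xmax) ** (1 / gamma)
    return YMAX * (np.asarray(x, dtype=np.float64) / XMAX) ** (1.0 / gamma)

def from_gamma_space(y, gamma=2.0):
    # inverse conversion back to linear gradation: x = xmax * (y / ymax) ** gamma
    return XMAX * (np.asarray(y, dtype=np.float64) / YMAX) ** gamma
```

The restoration processing of the embodiments would run between these two calls, so the round trip must be exact on unmodified data.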
- This technique can be applied not only to the delta arrangement but also to the interpolation processing of the Bayer arrangement and various other color filter arrangements.
- Since the delta array inherently has higher single-color resolution than the Bayer array, inserting this gradation conversion processing before and after the image restoration processing makes it possible to produce even better color clarity.
- the present invention is not necessarily limited to this.
- The processes described in the different embodiments can be appropriately combined with one another. That is, the first to sixth embodiments describe similarity direction determination processing, interpolation processing or direct generation processing of color difference planes, post-processing such as correction, square processing, and the like.
- An optimal image processing method and processing apparatus can be realized by appropriately combining the processes of these embodiments.
- the present invention can be applied to a two-chip image sensor.
- In the two-chip system, for example, one color component is missing at each pixel, but the content of the above embodiments can be applied to the interpolation processing of this one missing color component, and the conversion processing can be performed in the same manner.
- The method of directly generating a luminance component and a color difference component from the delta array without going through interpolation processing, according to the fourth embodiment, can be similarly applied to a two-chip image sensor.
- An example of an electronic camera has been described, but the present invention is not necessarily limited to this content. It may be a video camera for capturing moving images, a personal computer with an image sensor, a mobile phone, or the like. That is, the present invention can be applied to any device that generates color image data with an image sensor.
- FIG. 23 is a diagram showing this state.
- The personal computer 100 is provided with the program via the CD-ROM 104.
- the personal computer 100 has a function of connecting to the communication line 101.
- The computer 102 is a server computer that provides the above program and stores the program on a recording medium such as the hard disk 103.
- the communication line 101 is a communication line such as the Internet, personal computer communication, or a dedicated communication line.
- The computer 102 reads out the program from the hard disk 103 and transmits it to the personal computer 100 via the communication line 101; that is, the program is transmitted as a data signal on a carrier wave via the communication line 101.
- In this way, the program can be supplied as a computer-readable computer program product in various forms, such as a recording medium or a carrier wave.
- the output image data can be output.
- A similarity is calculated for each direction of a first direction group including a plurality of directions, and for each direction of a second direction group including a plurality of directions that are orthogonal to at least one direction of the first direction group and different from the first direction group.
- The similarity is then determined. For example, since the similarity in three directions is determined continuously using the similarities in the six directions of the delta array, spatial resolution can be reproduced in all the hexagonal directions shown in the figure. That is, it is possible to extract all of the spatial color resolving power of each of the R, G, and B single colors originally possessed by the delta array.
- the color information of the first to third color components is always weighted and added at a uniform (1:1:1) color component ratio.
- the color information of a color component different from the color information of the first image is generated.
- Since the color information of the color components generated in this way has extremely high gradation fidelity, an image with high spatial resolution is obtained, and an image that connects to peripheral pixels very smoothly, without being affected by chromatic aberration, is obtained.
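The uniform weighting described above reduces to a one-line operation; this is only the 1:1:1 combination stated in the text, shown explicitly.

```python
import numpy as np

def luminance_uniform(r, g, b):
    """Luminance generated by weighting the three color components at the
    uniform 1:1:1 ratio described in the text: Y = (R + G + B) / 3."""
    return (np.asarray(r, dtype=np.float64)
            + np.asarray(g, dtype=np.float64)
            + np.asarray(b, dtype=np.float64)) / 3.0
```

Because the three components contribute equally regardless of local color, the resulting luminance plane carries the full spatial resolution of each single color.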
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Facsimile Image Signal Circuits (AREA)
- Image Processing (AREA)
Abstract
The invention relates to an image processing method comprising: an image acquisition process for acquiring a first image that is expressed in a color system consisting of a plurality of color components and is made up of a plurality of pixels, each of these pixels containing color information on at least one color component, the pixels being arranged in a delta lattice; a color information generation process for generating, using the acquired color information of the first image, at least one item of new color information at the same pixel positions of the delta lattice as in the first image; a conversion process for converting the color information present at a plurality of pixels, including the generated color information arranged in the delta lattice, into color information at respective inter-pixel positions by performing a one-dimensional displacement process between the pixels arranged in one direction; and finally, an output process for outputting, using the color information whose pixel positions have been converted, a second image in which a plurality of pixels are arranged in a square lattice.
Applications Claiming Priority (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2002150788A JP4239480B2 (ja) | 2002-05-24 | 2002-05-24 | 画像処理方法、画像処理プログラム、画像処理装置 |
JP2002/150788 | 2002-05-24 | ||
JP2002/159228 | 2002-05-31 | ||
JP2002/159229 | 2002-05-31 | ||
JP2002159250A JP4196055B2 (ja) | 2002-05-31 | 2002-05-31 | 画像処理方法、画像処理プログラム、画像処理装置 |
JP2002159228A JP4239483B2 (ja) | 2002-05-31 | 2002-05-31 | 画像処理方法、画像処理プログラム、画像処理装置 |
JP2002/159250 | 2002-05-31 | ||
JP2002159229A JP4239484B2 (ja) | 2002-05-31 | 2002-05-31 | 画像処理方法、画像処理プログラム、画像処理装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2003101119A1 true WO2003101119A1 (fr) | 2003-12-04 |
Family
ID=29587777
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2003/006388 WO2003101119A1 (fr) | 2002-05-24 | 2003-05-22 | Procede de traitement d'images, programme de traitement d'images et processeur d'images |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2003101119A1 (fr) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2004112401A1 (fr) * | 2003-06-12 | 2004-12-23 | Nikon Corporation | Procede de traitement d'images, programme de traitement d'images et processeur d'images |
CN104954767A (zh) * | 2014-03-26 | 2015-09-30 | 联想(北京)有限公司 | 一种信息处理方法和电子设备 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000341701A (ja) * | 1999-05-25 | 2000-12-08 | Nikon Corp | 補間処理装置および補間処理プログラムを記録した記録媒体 |
JP2001016597A (ja) * | 1999-07-01 | 2001-01-19 | Fuji Photo Film Co Ltd | 固体撮像装置および信号処理方法 |
JP2001103295A (ja) * | 1999-07-27 | 2001-04-13 | Fuji Photo Film Co Ltd | 画像変換方法および装置並びに記録媒体 |
JP2001245314A (ja) * | 1999-12-21 | 2001-09-07 | Nikon Corp | 補間処理装置および補間処理プログラムを記録した記録媒体 |
JP2001275126A (ja) * | 2000-01-20 | 2001-10-05 | Nikon Corp | 補間処理装置および補間処理プログラムを記録した記録媒体 |
JP2001292455A (ja) * | 2000-04-06 | 2001-10-19 | Fuji Photo Film Co Ltd | 画像処理方法および装置並びに記録媒体 |
JP2001326942A (ja) * | 2000-05-12 | 2001-11-22 | Fuji Photo Film Co Ltd | 固体撮像装置および信号処理方法 |
2003
- 2003-05-22 WO PCT/JP2003/006388 patent/WO2003101119A1/fr active Application Filing
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000341701A (ja) * | 1999-05-25 | 2000-12-08 | Nikon Corp | 補間処理装置および補間処理プログラムを記録した記録媒体 |
JP2001016597A (ja) * | 1999-07-01 | 2001-01-19 | Fuji Photo Film Co Ltd | 固体撮像装置および信号処理方法 |
JP2001103295A (ja) * | 1999-07-27 | 2001-04-13 | Fuji Photo Film Co Ltd | 画像変換方法および装置並びに記録媒体 |
JP2001245314A (ja) * | 1999-12-21 | 2001-09-07 | Nikon Corp | 補間処理装置および補間処理プログラムを記録した記録媒体 |
JP2001275126A (ja) * | 2000-01-20 | 2001-10-05 | Nikon Corp | 補間処理装置および補間処理プログラムを記録した記録媒体 |
JP2001292455A (ja) * | 2000-04-06 | 2001-10-19 | Fuji Photo Film Co Ltd | 画像処理方法および装置並びに記録媒体 |
JP2001326942A (ja) * | 2000-05-12 | 2001-11-22 | Fuji Photo Film Co Ltd | 固体撮像装置および信号処理方法 |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2004112401A1 (fr) * | 2003-06-12 | 2004-12-23 | Nikon Corporation | Procede de traitement d'images, programme de traitement d'images et processeur d'images |
US7391903B2 (en) | 2003-06-12 | 2008-06-24 | Nikon Corporation | Image processing method, image processing program and image processing processor for interpolating color components |
US7630546B2 (en) | 2003-06-12 | 2009-12-08 | Nikon Corporation | Image processing method, image processing program and image processor |
CN104954767A (zh) * | 2014-03-26 | 2015-09-30 | 联想(北京)有限公司 | 一种信息处理方法和电子设备 |
CN104954767B (zh) * | 2014-03-26 | 2017-08-29 | 联想(北京)有限公司 | 一种信息处理方法和电子设备 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP1289310B1 (fr) | Méthode et dispositif de démosaiquage adaptif | |
JP7646619B2 (ja) | カメラの画像処理方法およびカメラ | |
EP1395041B1 (fr) | Correction de couleurs d'images | |
JP5045421B2 (ja) | 撮像装置、色ノイズ低減方法および色ノイズ低減プログラム | |
JP3985679B2 (ja) | 画像処理方法、画像処理プログラム、画像処理装置 | |
US6724932B1 (en) | Image processing method, image processor, and storage medium | |
JP5574615B2 (ja) | 画像処理装置、その制御方法、及びプログラム | |
US7755670B2 (en) | Tone-conversion device for image, program, electronic camera, and tone-conversion method | |
US7072509B2 (en) | Electronic image color plane reconstruction | |
US8320714B2 (en) | Image processing apparatus, computer-readable recording medium for recording image processing program, and image processing method | |
JP4321064B2 (ja) | 画像処理装置および画像処理プログラム | |
JPWO2006006373A1 (ja) | 画像処理装置およびコンピュータプログラム製品 | |
EP0739571A1 (fr) | Camera couleur a gamme dynamique large utilisant un dispositif a transfert de charge et un filtre mosaique | |
JP4196055B2 (ja) | 画像処理方法、画像処理プログラム、画像処理装置 | |
JP4239483B2 (ja) | 画像処理方法、画像処理プログラム、画像処理装置 | |
JP4239480B2 (ja) | 画像処理方法、画像処理プログラム、画像処理装置 | |
WO2003101119A1 (fr) | Procede de traitement d'images, programme de traitement d'images et processeur d'images | |
JP4239484B2 (ja) | 画像処理方法、画像処理プログラム、画像処理装置 | |
JP4122082B2 (ja) | 信号処理装置およびその処理方法 | |
JP2012100215A (ja) | 画像処理装置、撮像装置および画像処理プログラム | |
JP2004064227A (ja) | 映像信号処理装置 | |
JP2001086523A (ja) | 信号生成方法および装置並びに記録媒体 | |
JP2000050292A (ja) | 信号処理装置およびその信号処理方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): CN US |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
122 | Ep: pct application non-entry in european phase |