
WO2003101119A1 - Image processing method, image processing program, image processor - Google Patents

Image processing method, image processing program, image processor

Info

Publication number
WO2003101119A1
WO2003101119A1 (PCT/JP2003/006388)
Authority
WO
WIPO (PCT)
Prior art keywords
color
image
color information
component
similarity
Prior art date
Application number
PCT/JP2003/006388
Other languages
French (fr)
Japanese (ja)
Inventor
Kenichi Ishiga
Original Assignee
Nikon Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2002150788A external-priority patent/JP4239480B2/en
Priority claimed from JP2002159229A external-priority patent/JP4239484B2/en
Priority claimed from JP2002159250A external-priority patent/JP4196055B2/en
Priority claimed from JP2002159228A external-priority patent/JP4239483B2/en
Application filed by Nikon Corporation filed Critical Nikon Corporation
Publication of WO2003101119A1 publication Critical patent/WO2003101119A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/84Camera processing pipelines; Components thereof for processing colour signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/10Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N25/11Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N25/13Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N25/134Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2209/00Details of colour television systems
    • H04N2209/04Picture signal generators
    • H04N2209/041Picture signal generators using solid-state devices
    • H04N2209/042Picture signal generators using solid-state devices having a single pick-up sensor
    • H04N2209/045Picture signal generators using solid-state devices having a single pick-up sensor using mosaic colour filter
    • H04N2209/046Colour interpolation to calculate the missing colour values

Definitions

  • Japanese Patent Application No. 2002-150788 (filed on May 24, 2002), Japanese Patent Application No. 2002-159228 (filed on May 31, 2002), Japanese Patent Application No. 2002-159229 (filed on May 31, 2002), Japanese Patent Application No. 2002-159250 (filed on May 31, 2002)
  • the present invention relates to an image processing method, an image processing program, and an image processing apparatus for processing image data obtained by a color filter having a delta arrangement.
  • An electronic camera captures an image of a subject using an image sensor such as a CCD.
  • a Bayer arrangement in which three color filters of RGB (red, green, blue) are arranged as shown in FIG. 24 (a) is known.
  • a delta arrangement arranged as shown in Fig. 24 (b) is known.
  • honeycomb arrangements arranged as shown in FIG. 24 (c) are also known.
  • Various image processing methods have been proposed for image data obtained with the Bayer array, for example in US Pat. No. 5,552,827, US Pat. No. 5,629,734, and JP-A-2001-245314.
  • The present invention provides an image processing method, an image processing program, and an image processing device that output high-definition square-grid image data based on image data obtained with a triangular-lattice color filter arrangement such as a delta array.
  • The present invention also provides an image processing method, an image processing program, and an image processing device for outputting image data that realizes the spatial color resolution inherent in a delta array or the like, based on image data obtained with a triangular-lattice color filter arrangement such as a delta array.
  • The present invention also provides an image processing method, an image processing program, and an image processing device for outputting image data obtained by performing high-resolution interpolation processing on image data obtained with a triangular-lattice color filter arrangement such as a delta array, and image data finely converted to another color system.
  • the present invention provides an image processing method and an image processing program for outputting, for example, high-definition image data of a different color system based on image data obtained by a triangular lattice color filter such as a delta arrangement.
  • An image processing device is provided.
  • According to a first image processing method, a first image is acquired that is represented by a color system composed of a plurality of color components, in which one pixel has color information of at least one color component and the plurality of pixels are arranged in a triangular lattice; the color information of the triangular-lattice pixel positions is converted into color information at each pixel position, and a second image in which a plurality of pixels are arranged in a square grid is output using the converted color information.
  • the new color information is preferably color information of a color component missing in each pixel of the first image among color components of a color system of the first image.
  • the new color information is color information of a color system different from the color system of the first image.
  • the one-dimensional displacement process is performed using a one-dimensional filter including positive and negative coefficient values.
  • the one-dimensional displacement processing is performed on the first image every other line in units of a line.
  • a similarity determination procedure for determining the strength of similarity in at least three directions is further provided, and in the color information generation procedure, new color information is generated according to the determined similarity strength.
  • the similarity determination procedure it is preferable to calculate the similarity in at least three directions and determine the strength of the similarity in each direction based on the reciprocal of the similarity.
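The reciprocal rule above can be sketched as follows (a minimal illustration; the patent gives no concrete formula at this point, so the function name, the normalization, and the epsilon guard against division by zero are all assumptions):

```python
import numpy as np

def similarity_weights(dissimilarities, eps=1e-6):
    """Turn per-direction dissimilarity values into blending weights.

    A small dissimilarity means strong similarity along that direction,
    so each direction's weight is proportional to the reciprocal of its
    dissimilarity; the weights are normalized to sum to one.
    """
    inv = 1.0 / (np.asarray(dissimilarities, dtype=float) + eps)
    return inv / inv.sum()
```

For three directions with dissimilarities `[1.0, 4.0, 4.0]`, the first direction receives the largest weight, so new color information there draws mostly on neighbors along that direction.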
  • The method further includes a color difference generating procedure for generating color information of a color difference component at the pixel positions of the triangular lattice.
  • the method further comprises a correction procedure for correcting the color information of the generated color difference component.
  • the method further includes a correction procedure for correcting the generated color information of the luminance component.
  • According to a second image processing method, a first image is acquired in which a plurality of pixels, represented by first to n-th color components (n ≥ 2) and each having color information of one color component, are arranged in a triangular lattice; an image acquisition procedure acquires the first image thus obtained, and an interpolation procedure interpolates the color information of the first color component at pixels where the first color component is missing, using the color information of the acquired first image.
  • In the interpolation procedure, average information of the first color component and curvature information of at least one of the first to n-th color components are obtained by predetermined calculations, and interpolation is performed based on the average information and the curvature information.
  • the image processing method may further include a similarity determination procedure for determining the level of similarity in at least three directions, and the interpolation procedure may include a first procedure based on the similarity determined in the similarity determination procedure. It is preferable that the calculation of the average information of the color components is made variable. It is preferable that the curvature information is obtained by a second derivative calculation.
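A one-dimensional sketch of "average information plus curvature information" (illustrative only: the patent's actual averages and second-derivative coefficients are two-dimensional, per Figs. 4 and 5, and the gain `k` here is an assumption):

```python
def interpolate_with_curvature(g_left, g_right, r_prev, r_here, r_next, k=0.5):
    """Estimate a missing G value at an R pixel (1-D illustration).

    average term  : mean of the neighbouring G samples
    curvature term: second derivative of the co-located R samples,
                    which restores high-frequency detail that the plain
                    average suppresses.
    """
    average = (g_left + g_right) / 2.0
    curvature = (2.0 * r_here - r_prev - r_next) / 2.0
    return average + k * curvature
```

On flat data the curvature term vanishes and the result is the plain average; at an R peak the estimate is pushed up to follow the underlying luminance structure.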
  • a third image processing method records a first image represented by a plurality of color components, in which a plurality of pixels having color information of one color component in one pixel are arranged in a non-rectangular shape.
  • The method includes a first direction group similarity calculation procedure for calculating a similarity for each of a first direction group composed of a plurality of directions; a second direction group similarity calculation procedure for calculating a similarity for each of a second direction group, different from the first direction group and composed of a plurality of directions each orthogonal to at least one direction of the first direction group; and a similarity determination procedure for determining the strength of similarity among the first direction group by using the similarities of the first direction group and the similarities of the second direction group together.
  • the method further includes a color information generating step of generating at least one new color information at a pixel position of the first image based on the determination result of the similarity determining step.
  • In this case, it is preferable that the color information generating procedure generates color information of the second color component and/or the third color component at pixels having the first color component. Further, it is preferable that the color information generating procedure generates color information of a luminance component different from the color information of the first image, and color information of a color difference component different from the color information of the first image.
  • the color information generation procedure includes (1) a color difference component between the first color component and the second color component, and (2) It is preferable to generate color information of three types of color difference components, that is, a color difference component between the second color component and the third color component and (3) a color difference component between the third color component and the first color component.
  • In the similarity determination procedure, it is preferable that the strength of similarity among the first direction group of directions D1, D2, ..., DN (N ≥ 2) is determined using a function based on the ratio (CD1'/CD1) : (CD2'/CD2) : ... : (CDN'/CDN), where CD1, CD2, ..., CDN are the similarities in those directions and CD1', CD2', ..., CDN' are the similarities in the orthogonal directions D1', D2', ..., DN' (Di' being the direction orthogonal to Di, i = 1, 2, ..., N).
  • In this case, the pixels of the first image are arranged in a triangular lattice, and it is preferable to set N = 3 in both the first direction group similarity calculation procedure and the second direction group similarity calculation procedure.
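A sketch of this ratio-based decision (taking `argmax` as the "function based on the ratio" is an assumption; the patent only requires some function of these ratios):

```python
import numpy as np

def dominant_direction(c, c_orth, eps=1e-6):
    """Pick the direction of strongest similarity among N directions.

    c[i]      : dissimilarity along direction D_i
    c_orth[i] : dissimilarity along the orthogonal direction D_i'
    The ratio c_orth[i] / c[i] is large when the image is smooth along
    D_i but varies across it, so the largest ratio marks the direction
    of strongest similarity.
    """
    ratios = (np.asarray(c_orth, dtype=float) + eps) / (np.asarray(c, dtype=float) + eps)
    return int(np.argmax(ratios))
```

With N = 3 as preferred for the triangular lattice, `dominant_direction([1.0, 5.0, 5.0], [5.0, 1.0, 1.0])` selects the first direction, where the image is smooth along D1 and varies across it.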
  • a fourth image processing method records a first image represented by a plurality of color components and in which a plurality of pixels each having color information of one color component are arranged in a non-rectangular shape.
  • the similarity composed of the color information at the second pixel interval is converted into a second direction composed of a plurality of directions different from the first direction group.
  • the similarity between the first direction group and the second direction group is used together with the procedure for calculating the second direction group similarity to be calculated, and the similarity between the first direction groups is determined. And a similarity determination procedure.
  • the method further includes a color information generating step of generating at least one new color information at a pixel position of the first image based on the determination result of the similarity determining step.
  • the first direction group consists of directions in which color information of the same color component is arranged at a first pixel interval
  • the second direction group consists of color information of the same color component arranged at a second pixel interval. It is preferred that they consist of the directions in which they are placed.
  • In this case, the first image is represented by first to third color components, and it is preferable that both the first direction group similarity calculation procedure and the second direction group similarity calculation procedure calculate the similarity using at least two of the following similarity components: (1) a similarity component composed of color information of only the first color component, (2) a similarity component composed of color information of only the second color component, and (3) a similarity component composed of color information of only the third color component.
  • the first pixel interval is preferably longer than the second pixel interval.
  • the first pixel interval is about three pixel intervals, and the second pixel interval is about two pixel intervals.
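The two pixel intervals matter because a similarity measured at one spacing can be blind to structure at another. A sketch of a single directional similarity component (the mean-absolute-difference form is an assumption; the patent only specifies which samples are compared):

```python
import numpy as np

def directional_dissimilarity(values, interval):
    """Dissimilarity of same-color samples taken along one direction,
    comparing samples spaced `interval` pixels apart."""
    v = np.asarray(values, dtype=float)
    return float(np.mean(np.abs(v[interval:] - v[:-interval])))
```

A pattern repeating every two pixels yields zero dissimilarity at the two-pixel interval but a large value at the three-pixel interval, which is one reason the two direction groups use different spacings and are combined in the similarity determination procedure.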
  • It is preferable that both the first direction group similarity calculation procedure and the second direction group similarity calculation procedure calculate the similarity including not only the similarity calculated for the pixel to be processed, but also the similarities calculated for pixels surrounding the pixel to be processed.
  • the first image a plurality of pixels are preferably arranged in a triangular lattice.
  • the first image is represented by first to third color components, and the first to third color components are preferably distributed at a uniform pixel density.
  • a fifth image processing method a plurality of pixels represented by first to n-th color components (n ⁇ 2) and each pixel having color information of one color component are arranged in a triangular lattice shape
  • In the interpolation procedure, the average information of the first color component is obtained using the color information of a region that includes the pixels where the first color component is second closest to the pixel to be processed, and interpolation is performed.
  • the image processing method further includes a similarity determination procedure for determining the strength of the similarity in at least three directions, and the interpolation procedure includes a first procedure based on the similarity determined in the similarity determination procedure.
  • the average information of the color components is obtained.
  • According to a sixth image processing method, a first image is acquired in which a plurality of pixels, represented by a plurality of color components and each having color information of one color component, are arranged in a triangular lattice. For a pixel to be processed of the first image, the color information generation step performs weighted addition of the color information in a region that includes the pixels whose color components, different from that of the processing target pixel, are second closest.
  • the sixth image processing method may further include a similarity determination procedure for determining the strength of similarity in at least three directions, and the color information generation procedure may be performed according to the similarity strength determined in the similarity determination procedure. It is preferable to make the coefficient value of the weighted addition variable. Further, when the first image is represented by first to third color components, and the pixel having the first color component of the first image is the processing target pixel, the color information generating procedure includes the processing target pixel, It is preferable to perform weighted addition of color information in a region including a pixel whose two color components are second closest and a pixel whose third color component is second closest.
  • color information of a color component different from the color information of the first image generated in the color information generation procedure is filtered by a filter process including a predetermined fixed filter coefficient. It is preferable to further include a correction procedure for performing correction. In this case, it is preferable that the filter coefficients include positive and negative values.
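Figures 14–16 show Laplacian coefficient sets for such a correction; the sketch below applies a discrete Laplacian with a fixed gain (the gain value and the border handling are assumptions):

```python
import numpy as np

def correct_plane(plane, gain=0.5):
    """Correct an interpolated plane with a fixed filter whose
    coefficients include positive and negative values (a discrete
    Laplacian).  Because the kernel sums to zero, flat regions are
    left unchanged while attenuated high frequencies are boosted.
    """
    kernel = np.array([[ 0, -1,  0],
                       [-1,  4, -1],
                       [ 0, -1,  0]], dtype=float)
    padded = np.pad(plane.astype(float), 1, mode="edge")
    out = plane.astype(float).copy()
    h, w = plane.shape
    for y in range(h):
        for x in range(w):
            out[y, x] += gain * np.sum(kernel * padded[y:y + 3, x:x + 3])
    return out
```

A uniform plane passes through unchanged, while an isolated peak is amplified, which is the intended contrast-restoring behavior of a zero-sum kernel.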
  • a seventh image processing method a plurality of pixels represented by first to n-th color components (n ⁇ 2) and having color information of one color component in one pixel are arranged in a triangular lattice.
  • the color information of the color difference component is generated using the color information of the pixel whose component is the second closest.
  • the color difference generating procedure includes: (1) color information of a first color component of the pixel, and (2) a color information of a first color component of the first image. It is preferable that the color information of the color difference component is generated based on the average information of the color information of the second color component in a region including the pixel whose second color component is second closest to the pixel. In the color difference generation procedure, it is preferable that color information of a color difference component is further generated based on curvature information of a second color component for the pixel to be processed.
  • a similarity determination procedure for determining the level of similarity in at least three directions is further provided, and the color difference generation procedure generates color information of a color difference component according to the similarity strength.
  • the second image is output to the same pixel position as the first image.
  • An eighth image processing method acquires a first image represented by first to third color components, in which a plurality of pixels, each having color information of one color component, are uniformly distributed, and generates color information of a color component different from the color information of the first image by weighted addition of the acquired color information of the first image with variable coefficient values of zero or more.
  • the color information generation step includes the steps of: first, second, and third color components for all pixels of the first image; The color information is always weighted and added at a uniform color component ratio.
  • the image processing method further includes a similarity determination procedure for determining the strength of the similarity in a plurality of directions, and the color information generation procedure is based on the strength of the similarity determined in the similarity determination procedure. It is preferable to make the coefficient value of the weighted addition variable.
  • a plurality of pixels are preferably arranged in a triangular lattice.
  • color information of a color component different from the color information of the first image generated in the color information generation procedure is filtered by a filter process including a predetermined fixed filter coefficient. It is preferable to further include a correction procedure for performing correction. In this case, it is preferable that the filter coefficients include positive and negative values.
  • The ninth image processing method of the present invention comprises an image acquisition procedure for acquiring a first image composed of a plurality of pixels represented by three or more types of color components, each pixel having color information of one color component; a color information generating procedure for generating color information of a luminance component and color information of at least three types of color difference components using the acquired color information of the first image; and an output procedure for outputting a second image using the generated color information of the luminance component and the color information of the color difference components.
  • the ninth image processing method further includes a conversion procedure of converting the color information of the luminance component and the color information of at least three types of color difference components into color information of three types of color components.
  • the second image is output using the color information of the three types of color components converted by the conversion procedure.
  • the color information of the luminance component and the color information of the color difference component generated in the color information generation procedure are color information of components different from the three or more types of color components of the first image.
  • the first image is represented by first to third color components, a plurality of pixels are uniformly distributed, and the color information generation procedure is as follows: (1) The color component ratio of the first to third color components is Color information of a luminance component composed of 1: 1: 1, (2) color information of a color difference component between a first color component and a second color component, and (3) a second color component and a third color. It is preferable to generate color information of a color difference component between the components and (4) color information of a color difference component between the third color component and the first color component.
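Under the naming of FIG. 17, and assuming the three differences are formed as Cgb = G − B, Cbr = B − R and Crg = R − G (a plausible reading; the text only names the component pairs), the forward and inverse transforms are:

```python
def to_y_and_diffs(r, g, b):
    """Luminance with a 1:1:1 component ratio plus the three colour
    differences Cgb, Cbr, Crg.  Note Cgb + Cbr + Crg == 0, so the three
    differences carry only two independent degrees of freedom."""
    y = (r + g + b) / 3.0
    return y, g - b, b - r, r - g

def to_rgb(y, cgb, cbr, crg):
    """Invert the transform back to the original RGB colour system."""
    r = y + (crg - cbr) / 3.0
    g = y + (cgb - crg) / 3.0
    b = y + (cbr - cgb) / 3.0
    return r, g, b
```

The round trip is exact, which is what allows the method to convert to (Y, Cgb, Cbr, Crg), process there, and return to the original RGB color system.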
  • The image processing method further includes a similarity determination procedure for determining the strength of similarity in a plurality of directions.
  • In this case, it is preferable that the color information generation procedure generates the color information of the luminance component and the color information of the at least three types of color difference components according to the similarity determined in the similarity determination procedure. Further, it is preferable that the first image has a plurality of pixels arranged in a triangular lattice. According to a tenth image processing method of the present invention, there is provided an image acquisition procedure for acquiring a first image composed of a plurality of pixels represented by three or more types of color components, each pixel having color information of one color component,
  • a color difference generation procedure for generating at least three types of color difference component color information using the acquired color information of the first image, and a correction process for performing correction processing on the generated color difference component color information. And an output step of outputting a second image using the corrected color difference component color information.
  • the first image is represented by first to third color components
  • the color difference generation procedure preferably generates (1) color information of a color difference component between the first color component and the second color component, (2) color information of a color difference component between the second color component and the third color component, and (3) color information of a color difference component between the third color component and the first color component.
  • the first image is represented by first to third color components, and the color difference generation procedure uses the color information of the first image to generate color information of a luminance component different from the color information of the first image.
  • In this case, the first to third color components are evenly distributed among a plurality of pixels, and it is preferable that the color difference generation procedure generates color information of a luminance component in which the color component ratio of the first to third color components is 1 : 1 : 1.
  • the second image is output at the same pixel position as the first image.
  • A computer-readable computer program product contains an image processing program for causing a computer to execute the procedures of any of the image processing methods described above.
  • This computer program product is preferably a recording medium on which an image processing program is recorded.
  • FIG. 1 is a functional block diagram of the electronic camera according to the first embodiment.
  • FIG. 2 is a flowchart showing an outline of image processing performed by the image processing unit in the first embodiment.
  • FIG. 3 is a diagram showing a positional relationship between pixels obtained by an image sensor in a delta arrangement.
  • FIG. 4 is a diagram showing coefficients used for peripheral addition.
  • FIG. 5 is a diagram showing coefficient values used when obtaining the curvature information dR.
  • FIG. 6 is a diagram showing an achromatic spatial frequency reproduction region in a delta arrangement.
  • FIG. 7 is a diagram showing coefficient values used for the one-dimensional displacement processing.
  • FIG. 8 is a diagram illustrating pixel positions used in the calculation according to the second embodiment.
  • FIG. 9 is a diagram showing coefficient values used when obtaining the curvature information dR, dG, and dB.
  • FIG. 10 is a flowchart showing an outline of image processing performed by the image processing unit in the third embodiment.
  • FIG. 11 is a diagram illustrating coefficient values of a low-pass filter.
  • FIG. 12 is a diagram showing coefficient values of another low-pass filter.
  • FIG. 13 is a diagram illustrating coefficient values of another low-pass filter.
  • FIG. 14 is a diagram showing Laplacian coefficient values.
  • FIG. 15 is a diagram showing coefficient values of other Laplacians.
  • FIG. 16 is a diagram showing coefficient values of other Laplacians.
  • FIG. 17 shows the concept of generating a luminance plane (Y) and three color difference planes (Cgb, Cbr, Crg) directly from the delta plane of the delta array, and then converting them to the original RGB color system.
  • FIG. 18 is a flowchart showing an outline of the image processing performed by the image processing unit in the fourth embodiment.
  • FIG. 19 is a diagram showing a state where the Crg and Cbr components are obtained at the R position and the Cgb component is obtained at the nearest neighbor pixel.
  • FIG. 20 is a diagram illustrating a spatial frequency reproduction region of each of the RGB components of the delta array.
  • FIG. 21 is a flowchart illustrating an outline of image processing performed by the image processing unit in the sixth embodiment.
  • FIG. 22 is a diagram defining adjacent pixels.
  • FIG. 23 is a diagram showing a state in which the image processing program is provided through a data signal.
  • FIG. 24 is a diagram showing a Bayer array, a delta array, and a honeycomb array of RGB color filters.
  • FIG. 25 is a diagram showing the concept of a process of interpolating image data obtained in a delta array on a triangular lattice and restoring the image data to a square lattice data.
  • FIG. 26 is a diagram illustrating the azimuth relationship of the similarity in the fifth embodiment.
  • FIG. 27 is a flowchart showing the image restoration processing and the gradation processing.

BEST MODE FOR CARRYING OUT THE INVENTION
  • FIG. 1 is a functional block diagram of the electronic camera according to the first embodiment.
  • The electronic camera 1 includes an A/D conversion unit 10, an image processing unit 11, a control unit 12, a memory 13, a compression/decompression unit 14, and a display image generation unit 15. It also includes a memory card interface unit 17 for interfacing with a memory card (card-shaped removable memory) 16, and an external interface unit 19 for interfacing with an external device such as a personal computer (PC) 18 via a predetermined cable or wireless transmission path. These blocks are interconnected via a bus 29.
  • the image processing unit 11 is composed of, for example, a one-chip microprocessor dedicated to image processing.
  • the electronic camera 1 further includes a photographing optical system 20, an image sensor 21, an analog signal processor 22, and a timing controller 23.
  • An optical image of the subject obtained by the imaging optical system 20 is formed on the image sensor 21, and the output of the image sensor 21 is connected to the analog signal processor 22.
  • the output of the analog signal processor 22 is connected to the A / D converter 10.
  • the output of the control unit 12 is connected to the timing control unit 23.
  • the output of the timing control unit 23 is connected to the image sensor 21, the analog signal processing unit 22, the A/D conversion unit 10, and the image processing unit 11.
  • the image sensor 21 is composed of, for example, a CCD or the like.
  • the electronic camera 1 further includes an operation unit 24 and a monitor 25 corresponding to a release button, a selection button for mode switching, and the like.
  • the output of the operation unit 24 is connected to the control unit 12, and the output of the display image generation unit 15 is connected to the monitor 25.
  • the monitor 26 and the printer 27 are connected to the PC 18, and an application program recorded on the CD-ROM 28 is installed in advance.
  • in addition to a CPU, a memory, and a hard disk (none shown), the PC 18 has a memory card interface unit (not shown) for interfacing with the memory card 16 and an external interface unit (not shown) for interfacing with external devices such as the electronic camera 1.
  • the control unit 12 controls the timing control unit 23, and thereby controls the timing of the image sensor 21, the analog signal processor 22, and the A/D converter 10.
  • the image sensor 21 generates an image signal corresponding to the optical image.
  • the image signal is subjected to predetermined signal processing in the analog signal processing unit 22, digitized in the A/D conversion unit 10, and supplied as image data to the image processing unit 11.
  • Since the color filters of R (red), G (green), and B (blue) are arranged in a delta array (described later) in the image sensor 21, the image data supplied to the image processing unit 11 is represented by the RGB color system, and each pixel constituting the image data has color information of only one of the RGB components.
  • the image processing unit 11 performs image processing such as gradation conversion and contour emphasis on such image data in addition to performing image data conversion processing described later.
  • the image data on which such image processing has been completed is subjected to predetermined compression processing by the compression / decompression section 14 as necessary, and is recorded on the memory card 16 via the memory card interface section 17.
  • Image data that has undergone image processing may be recorded on the memory card 16 without compression, or converted into the color system used by the monitor 26 and the printer 27 on the PC 18 side and supplied to the PC 18 via the external interface unit 19.
  • At the time of reproduction, the image data recorded on the memory card 16 is read out via the memory card interface unit 17, decompressed by the compression/decompression unit 14, and displayed on the monitor 25 through the display image generation unit 15.
  • Alternatively, the decompressed image data may be converted into the color system used by the monitor 26 and the printer 27 on the PC 18 side, instead of being displayed on the monitor 25, and supplied to the PC 18 via the external interface unit 19.
  • FIG. 25 is a diagram showing the concept of these processes.
  • the triangular lattice refers to an arrangement in which pixels of the image sensor are arranged with a shift of 1/2 pixel for each row; connecting the centers of adjacent pixels forms triangles. The center point of a pixel may be called a grid point.
  • the delta arrangement in Fig. 24 (b) is arranged in a triangular lattice. An image obtained with the arrangement shown in Fig. 24 (b) may be called an image in which pixels are arranged on a triangular lattice.
  • the square lattice refers to an arrangement in which pixels of the image sensor are arranged without being shifted for each row. The arrangement forms a square when the centers of adjacent pixels are connected.
  • the Bayer array in Fig. 24 (a) is arranged in a square lattice.
  • An image obtained in the arrangement shown in Fig. 24 (a) may be called an image in which pixels are arranged in a rectangular (square) shape.
  • FIG. 2 is a flowchart illustrating an outline of the image processing performed by the image processing unit 11.
  • step S1 an image obtained by the image sensor 21 in the delta arrangement is input.
  • step S2 the similarity is calculated.
  • step S3 similarity is determined based on the similarity obtained in step S2.
  • step S4 an interpolation value of a missing color component in each pixel is calculated based on the similarity determination result obtained in step S3.
  • step S5 the obtained RGB color image is output.
  • the RGB color image output in step S5 is image data obtained on a triangular lattice.
  • step S6 one-dimensional displacement processing is performed on the image data obtained on the triangular lattice.
  • One-dimensional displacement processing is performed on every other row of the image data, as described later.
  • step S7 square lattice image data is output by combining the image data subjected to the one-dimensional displacement processing and the image data not subjected to the one-dimensional displacement processing.
  • Steps S1 to S5 are interpolation processing on a triangular lattice, and steps S6 and S7 are square processing (conversion to a square lattice).
  • FIG. 3 is a diagram showing a positional relationship between pixels obtained by the image sensor 21 in the delta arrangement.
  • the pixels of the image pickup device 21 are arranged with a shift of 1/2 pixel for each row, and the color filters are arranged on each pixel at a ratio of RGB components of 1: 1: 1. That is, the colors are evenly distributed.
  • the color filter in the Bayer array has RGB arranged in the ratio 1:2:1 (Fig. 24 (a)).
  • a pixel having R component color information is called an R pixel
  • a pixel having B component color information is called a B pixel
  • a pixel having G component color information is called a G pixel.
  • the image data obtained by the image sensor 21 has only one color component for each pixel.
  • the interpolation process is a process for calculating color information of other color components missing in each pixel by calculation.
  • The case where the color information of the G and B components is interpolated at the R pixel position will be described.
  • The pixel to be processed, which is the R pixel, is called the pixel to be interpolated and denoted Rctr.
  • each pixel position existing around the pixel Rctr is expressed using an angle.
  • the B pixel existing in the 60-degree direction is expressed as B060
  • the G pixel is expressed as G060.
  • this angle is not exact but approximate.
  • the direction connecting 0 and 180 degrees is called the 0-degree direction
  • the direction connecting 120 and 300 degrees is called the 120-degree direction
  • the direction connecting 240 and 60 degrees is called the 240-degree direction
  • the direction connecting 30 and 210 degrees is called the 30-degree direction
  • the direction connecting 150 and 330 degrees is called the 150-degree direction
  • the direction connecting 270 and 90 degrees is called the 270-degree direction
  • C120 = |G120 - Rctr|
  • C240 = |G240 - Rctr|
  • the strength of the similarity in each direction is determined so as to change continuously at the reciprocal ratio of the similarity. That is, it is determined by (1/C000) : (1/C120) : (1/C240). Specifically, the following weighting coefficients are calculated.
  • the weighting coefficients w000, w120, and w240 are values according to the strength of similarity. For the Bayer arrangement, as shown in U.S. Pat. No. 5,552,827, U.S. Pat. No. 5,629,734, and JP-A-2001-245314, there are two types of methods for determining the weighting coefficients: a continuous determination method and a discrete determination method based on threshold judgement.
  • In the Bayer array, the nearest-neighbor G component exists densely in four directions around the pixel to be interpolated, so either the continuous or the discrete judgment method can be used with almost no problem.
  • In the delta array, however, the nearest G component exists in only three directions (0, 120, and 240 degrees); in this case it is important to determine the direction continuously based on the weighting coefficients.
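The continuous, reciprocal-ratio determination of the weighting coefficients described above can be sketched as follows. The small constant `eps` guarding against division by zero is an added assumption, not part of the original text.

```python
def direction_weights(c000, c120, c240, eps=1e-6):
    """Continuous direction weights at the reciprocal ratio of the
    similarity values (a small similarity value means strong similarity
    in that direction).  The epsilon guard is an added assumption to
    avoid division by zero on perfectly flat image regions."""
    reciprocals = [1.0 / (c + eps) for c in (c000, c120, c240)]
    total = sum(reciprocals)
    # w000, w120, w240: normalized so that they sum to 1
    return [r / total for r in reciprocals]
```

For example, similarities of 1, 2, and 4 yield weights of roughly 4/7, 2/7, and 1/7, so the most similar direction dominates but the others still contribute continuously.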
  • The nearest-neighbor G components are the G pixels adjoining an edge of the pixel to be interpolated; in FIG. 3 they are G000, G120, and G240.
  • the interpolation values of the G component and the B component are calculated using the above weighting coefficients.
  • the interpolation value consists of two terms: average information and curvature information.
  • Gave = w000 * Gave000 + w120 * Gave120 + w240 * Gave240 ... (8)
  • Gave000 = (2 * G000 + G180) / 3 ... (10)
  • Gave120 = (2 * G120 + G300) / 3 ... (12)
  • the first adjacent pixel is a pixel separated by about 1 pixel pitch
  • the second adjacent pixel is a pixel separated by about 2 pixel pitches. It can be said that Rctr and G120 are separated by about 1 pixel pitch, and that Rctr and G300 are separated by about 2 pixel pitches.
  • FIG. 22 is a diagram that defines adjacent pixels: “center” is the pixel to be processed, “nearest” is the nearest-neighbor (first adjacent) pixel, and “2nd” is the second adjacent pixel.
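The average information of equations (8), (10), and (12) can be sketched as below. The defining equation for Gave240 is not reproduced in this excerpt, so it is written here by symmetry with the other two directions; treat that line as an assumption.

```python
def g_average(g, w):
    """Average information of the G component at an R pixel, per
    equations (8), (10), (12).  `g` maps angle -> G value and `w` maps
    direction (0, 120, 240) -> weighting coefficient with
    w[0] + w[120] + w[240] == 1.  The first adjacent pixel (1 pixel
    pitch) counts twice, the second adjacent pixel (2 pitches) once.
    Gave240 is an assumed symmetric form; its equation is not shown."""
    gave000 = (2 * g[0] + g[180]) / 3     # equation (10)
    gave120 = (2 * g[120] + g[300]) / 3   # equation (12)
    gave240 = (2 * g[240] + g[60]) / 3    # assumed by symmetry
    return w[0] * gave000 + w[120] * gave120 + w[240] * gave240  # (8)
```

On a uniform G plane the weighted average reproduces the common value exactly, whatever the direction weights, which is a quick sanity check of the normalization.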
  • a term corresponding to the curvature information dR is generally calculated in consideration of the same directionality as the average information.
  • the directionality of the average information (0, 120, and 240 degrees) and the direction of the curvature information that can be extracted (30, 150, and 270 degrees) do not match.
  • the curvature information in the 30-degree and 150-degree directions could be averaged to define curvature information in the 0-degree direction, and the interpolation value calculated in consideration of the directionality of the curvature information as in the Bayer array.
  • the average information considering the directionality is corrected using the curvature information having no directionality. As a result, it is possible to improve gradation clarity up to a high-frequency region in every direction.
  • Equation (16) for calculating the curvature information dR uses the coefficient values shown in FIG. It obtains the difference between the interpolation target pixel Rctr and a peripheral pixel and the difference between the peripheral pixel on the opposite side and Rctr, and then takes the difference of these; it is therefore a second-derivative operation.
  • the curvature information dR is information indicating the degree of change in the color information of a certain color component.
  • In other words, it is information indicating how the color information curves, that is, a quantity reflecting structural information on the concavity and convexity of the change in the color information of that color component.
  • the curvature information dR of the R pixel position is obtained using the Rctr of the pixel to be interpolated and the color information of the surrounding R pixels.
  • the G component color information is used for the G pixel position
  • the B component color information is used for the B pixel position.
  • Interpolation processing of the G and B components at the R pixel position was performed.
  • “Interpolation processing of the B and R components at the G pixel position” is obtained from “interpolation processing of the G and B components at the R pixel position” by a cyclic replacement of symbols:
  • R with G
  • G with B
  • B with R
  • "interpolation processing of R and G components at B position” is a circular replacement of the symbol "interpolation processing of B and R components at G position" with G for B, B for R, and R for G.
  • the same process may be performed. That is, the same arithmetic routine (subroutine) can be used in each interpolation process.
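The cyclic symbol replacement above means one arithmetic subroutine can serve all three pixel classes; only the roles of the colour planes rotate. A minimal sketch of that structure (the routine body is a placeholder, not the patent's actual arithmetic):

```python
def interpolate_missing(center_plane, first_plane, second_plane):
    """Placeholder for the common interpolation subroutine described in
    the text.  The real routine computes the two missing components from
    weighted averages and curvature information; here it only returns a
    description, to show that a single routine serves all three cases."""
    return f"interpolating {first_plane} and {second_plane} at {center_plane} pixels"

# One routine, three cyclic invocations (R -> G -> B -> R):
calls = [interpolate_missing(*planes) for planes in
         (("R", "G", "B"), ("G", "B", "R"), ("B", "R", "G"))]
```

Because the delta array distributes R, G, and B at a 1:1:1 ratio, the three invocations are fully symmetric, which is what makes the shared subroutine possible.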
  • the image reconstructed as described above can bring out all the limit resolution performance of the delta array in the spatial direction.
  • In the spatial direction, the entire hexagonal achromatic color reproduction region of the delta array is resolved.
  • a clear image can be obtained in the gradation direction. This is an extremely effective method especially for images with many achromatic parts.
  • Japanese Patent Application Laid-Open No. 8-340455 discloses an example in which restored data is generated at pixel positions different from those of the triangular lattice, or on a virtual square lattice having twice the pixel density of the triangular lattice. It also shows an example in which half of the rows of the triangular lattice are restored at the same pixel positions and the other half at positions shifted by 1/2 pixel from the triangular lattice. However, this performs the interpolation processing directly at the square-lattice positions and applies a different interpolation processing when the distance between adjacent pixels of the triangular lattice changes. Meanwhile, Japanese Patent Application Laid-Open No. 2001-103295 forms a square lattice by two-dimensional cubic interpolation, and Japanese Patent Application Laid-Open No. 2000-194386 generates virtual double-density square lattice data.
  • the interpolation data is restored with the triangular lattice at the same pixel position as the delta arrangement. This makes it possible to bring out the spatial frequency limit resolution performance of the delta array. If the color difference correction processing and the edge enhancement processing are also performed in the triangular arrangement, the effect works well isotropically. Therefore, once the image is restored on the triangular grid, an RGB value is generated for each pixel.
  • Next, the image data generated on the triangular lattice is converted to a square lattice. It has been found experimentally that keeping the original data as intact as possible is important for maintaining the resolution performance of the triangular lattice. Therefore, a displacement process that shifts every other row by half a pixel is performed; the remaining half of the rows are not processed, so the Nyquist resolution of the vertical lines of the triangular arrangement is maintained. Experiments have shown that, although there is some influence near the Nyquist frequency of the square lattice, the displacement process can maintain the vertical-line resolution of the triangular lattice with little problem if it is performed by cubic interpolation within the one dimension of the row being processed.
  • tmp_R[x, y] = ( -1 * R[x - (3/2)pixel, y] + 5 * R[x - (1/2)pixel, y] + 5 * R[x + (1/2)pixel, y] - 1 * R[x + (3/2)pixel, y] ) / 8 ... (19)
  • FIG. 7 is a diagram showing coefficient values used in equation (19).
  • the one-dimensional displacement processing by cubic interpolation described above can be said to be a process of applying a one-dimensional filter consisting of positive and negative coefficient values.
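The half-pixel displacement of equation (19) can be sketched as below. The four-tap kernel (-1, 5, 5, -1)/8 is completed by symmetry from the two coefficients visible in the text; the clamped handling of row edges is an added assumption.

```python
def shift_row_half_pixel(row):
    """One-dimensional cubic displacement of one row by half a pixel
    (equation (19)).  Output sample x is taken midway between input
    samples x and x+1 using the kernel (-1, 5, 5, -1)/8, inferred by
    symmetry from the coefficients visible in the text.  Edge samples
    are clamped, which is an added assumption."""
    n = len(row)
    out = []
    for x in range(n):
        a = row[max(x - 1, 0)]
        b = row[x]
        c = row[min(x + 1, n - 1)]
        d = row[min(x + 2, n - 1)]
        out.append((-1 * a + 5 * b + 5 * c - 1 * d) / 8)
    return out
```

Applied to every other row only, as the text specifies. The kernel is exact for linear signals: a ramp comes out shifted by exactly half a pixel, and constant rows are unchanged, consistent with the claim that vertical-line resolution is preserved.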
  • The image restoration method described above not only maintains the limit resolution of the triangular arrangement to the maximum; because the restoration processing on the triangular lattice can use the same processing routine for all pixels, and only half of the rows need simple one-dimensional processing, it also achieves a simpler algorithm than the conventional technology without increasing the amount of data.
  • Post-processing is performed to remove false colors and return to the RGB color system.
  • edge enhancement processing is performed on the luminance component Y plane. In the case of a delta array, exactly the same post-processing may be applied.
  • the configuration of the electronic camera 1 of the third embodiment is the same as that of FIG. 1 of the first embodiment, and its description is omitted.
  • FIG. 10 is a flowchart illustrating an outline of image processing performed by the image processing unit 11 in the third embodiment. It is assumed that the interpolation processing is performed on a triangular lattice as in the first embodiment. Figure 10 starts when the RGB color image after interpolation processing is input. That is, steps S1 to S5 in FIG. 2 of the first embodiment are completed, and thereafter, the flowchart in FIG. 10 starts.
  • step S11 the RGB color image data after the interpolation processing is input.
  • step S12 the RGB color system is converted to the YCrCgCb color system unique to the third embodiment.
  • step S13 a low-pass filter process is performed on the color difference planes (CrCgCb planes).
  • step S14 edge enhancement processing is performed on the luminance plane (Y plane).
  • step S15 when the false color on the color difference plane has been removed, conversion is performed to return the YCrCgCb color system to the original RGB color system.
  • step S16 the obtained RGB color image is output.
  • the RGB color image data output in step S16 is image data obtained on a triangular lattice.
  • When performing the square processing on the image data obtained on the triangular lattice, the processing of steps S6 and S7 in FIG. 2 is performed as in the first embodiment. The details of the processing in steps S12 to S15 are described below.
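The exact definitions of the YCrCgCb color system of step S12 are not reproduced in this excerpt. A plausible form, consistent with the one-luminance-plus-one-chrominance-per-primary structure described later for the fourth embodiment, is sketched below; treat the definitions as assumptions.

```python
def rgb_to_ycrcgcb(r, g, b):
    """Assumed form of the step-S12 conversion: one luminance plane and
    one colour-difference plane per primary.  The patent's exact
    definitions are not reproduced in this excerpt."""
    y = (r + g + b) / 3.0
    return y, r - y, g - y, b - y   # Y, Cr, Cg, Cb

def ycrcgcb_to_rgb(y, cr, cg, cb):
    """Inverse conversion of step S15, under the same assumption.
    Applied after the false colours on the CrCgCb planes are removed."""
    return y + cr, y + cg, y + cb
```

Under these definitions the two conversions are exact inverses, so low-pass filtering the three chrominance planes (step S13) and sharpening the Y plane (step S14) are the only lossy operations in the chain.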
  • FIG. 14 illustrates the coefficient values used for the Laplacian processing of equation (36). The Laplacian is not limited to the one shown here; another one may be used. Figures 15 and 16 show examples of other Laplacians.
  • K is a value greater than or equal to zero and is a parameter for adjusting the level of edge enhancement.
  • the color system is returned to the original RGB color system.
  • This enables color difference correction with extremely high false color suppression capability.
  • by performing the color difference correction processing and the luminance correction processing on the triangular lattice, it is possible to perform correction processing suited to the directionality of the delta array.
  • the process of interpolating the color information of the color component missing in each pixel on the triangular lattice has been described.
  • In the fourth embodiment, an example of image restoration processing of a different type from the interpolation processing in the RGB plane of the first embodiment will be described.
  • the luminance component and the color difference components are created directly from the delta array without interpolating in the RGB plane.
  • the usefulness of separating one luminance plane and three chrominance planes described in the third embodiment is taken over, and the luminance plane maximizes the achromatic luminance resolution.
  • the three color difference planes are responsible for maximizing the color resolution of the three primary colors.
  • the configuration of the electronic camera 1 according to the fourth embodiment is the same as that of FIG. 1 according to the first embodiment, and a description thereof will be omitted.
  • FIG. 18 is a flowchart showing an outline of the image processing performed by the image processing unit 11 in the fourth embodiment.
  • step S21 an image obtained by the delta-array image sensor 21 is input.
  • step S22 the similarity is calculated.
  • step S23 similarity is determined based on the similarity obtained in step S22.
  • step S24 a luminance plane (Y0 plane) is generated based on the similarity determination result obtained in step S23 and the delta array image data obtained in step S21.
  • step S25 a correction process is performed on the luminance plane (Y0 plane) obtained in step S24.
  • step S26 color difference components Cgb, Cbr, and Crg are generated based on the similarity determination result obtained in step S23 and the delta array image data obtained in step S21.
  • At the end of step S26, the color difference components Cgb, Cbr, and Crg have not yet been generated at all pixels.
  • step S27 interpolation processing is performed on the color difference components that have not been generated based on the surrounding color difference components. As a result, the color difference plane of Cgb, Cbr, Crg is completed.
  • step S28 the generated YCgbCbrCrg color system is converted to the RGB color system.
  • step S29 the converted RGB image data is output.
  • Steps S21 to S29 are all processes on a triangular lattice. Therefore, the RGB image data output in step S29 is triangular-lattice image data.
  • If the square processing is necessary, the same processing as steps S6 and S7 in FIG. 2 of the first embodiment is performed.
  1. Calculation of similarity
  • the similarity is calculated.
  • the similarity obtained by an arbitrary method may be used.
  • the most accurate one shall be used.
  • the similarity between different colors shown in the first embodiment, the similarity between same colors shown in the second embodiment, a combination thereof, or a similarity based on a color index or the like may be used; the different-color and same-color similarities may also be switched and used.
  • the determination is made in the same manner as in the first embodiment.
  • the range of the weighted addition takes the G component and the B component up to the second adjacent pixel. That is, when the equations (8) and (9) are rewritten using the definition equations of the equations (10) to (15) of the first embodiment, the equations (41) and (42) are obtained.
  • Since the luminance component generated in this way always includes the center pixel at a constant color-component ratio and is generated using positive-direction weighting coefficients, its potential for gradation sharpness is extremely high; an image with extremely high spatial resolving power is obtained that connects very smoothly to peripheral pixels without being affected by chromatic aberration.
  • the spatial resolution reaches the limit resolution of FIG. 6 as in the first embodiment.
  • Since the luminance plane Y0 described above is generated using only positive coefficients, a correction process using a Laplacian is performed in order to draw out the potential gradation clarity contained in it. Because the luminance plane Y0 is designed to connect extremely smoothly to peripheral pixels, taking directionality into account, there is no need to compute a new direction-dependent correction term; a single process using a fixed band-pass filter may be used. As shown in FIGS. 14 to 16 of the third embodiment, several ways of taking the Laplacian in the triangular arrangement are available. To be slightly more precise, however, the luminance plane Y0 is generated by collecting the G and B components only in the 0-, 120-, and 240-degree directions, so here the correction is performed using the Laplacian of FIG. 15, taken in the independent 30-, 150-, and 270-degree directions (equation (44)). Let Y be the corrected luminance component.
  • k is a positive value and is usually set to 1. By setting it larger than 1, the edge enhancement processing shown in the third embodiment can also be provided here.
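The Y = Y0 + k * (Laplacian term) correction just described can be sketched as follows. Equation (44) and the FIG. 15 coefficients are not reproduced in this excerpt, so the uniform weighting of the neighbours below is an assumed form of the fixed band-pass filter.

```python
def correct_luminance(y0_center, y0_neighbors, k=1.0):
    """Correction of the luminance plane Y0: a Laplacian-style
    band-pass term is added back to recover gradation sharpness.
    `y0_neighbors` holds the Y0 values of the surrounding pixels used
    by the Laplacian (30-, 150-, 270-degree directions per the text);
    their uniform weighting here is an assumption, since the FIG. 15
    coefficients are not reproduced.  k = 1 is the normal setting;
    k > 1 additionally provides edge enhancement."""
    laplacian = y0_center - sum(y0_neighbors) / len(y0_neighbors)
    return y0_center + k * laplacian
```

On a flat region the Laplacian term is zero and Y0 passes through unchanged; at an edge the centre value is pushed away from its neighbourhood mean, which is the sharpening behaviour the text describes.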
  • the three chrominance planes are generated directly from the delta plane independently of the luminance plane Y.
  • Cgb = G - B
  • Cbr = B - R
  • dG and dB are the same as those defined by equations (24) and (25) in the second embodiment, and the weighted averages of the G and B components are the same as those obtained by equations (41) and (42).
  • Average information including pixels up to the second adjacent pixel is calculated, which keeps consistency with the luminance component and raises the resolution.
  • dG and dB are not necessarily required, but they are added because they have the effect of increasing color resolution and vividness.
  • the color difference components Cgb and Crg at the G position and the color difference components Cbr and Cgb at the B position are obtained in the same manner. At this point, the Crg and Cbr components have been obtained at the R position, and the Cgb component has been obtained at the nearest pixel.
  • FIG. 19 is a diagram showing this state.
  • the Cgb component at the R position is obtained from the equation (48) using the Cgb component of the pixel around the R position (interpolation processing). At this time, the calculation is performed using the direction determination result obtained at the R position.
  • Cgb[center] = { w000 * (Cgb[nearest000] + Cgb[nearest180]) + w120 * (Cgb[nearest120] + Cgb[nearest300]) + w240 * (Cgb[nearest240] + Cgb[nearest060]) } / 2 ... (48)
  • the four components of YCgbCbrCrg are obtained for all pixels. If necessary, the color difference planes Cgb, Cbr, and Crg may be subjected to correction processing such as a color-difference low-pass filter similar to that of the third embodiment to suppress false colors.
  • Y = (R + G + B) / 3
  • Cgb = G - B
  • Cbr = B - R
  • Since this is a 4-to-3 conversion, the conversion method is not unique; but to suppress color moiré and maximize both luminance resolution and color resolution, all of the Y, Cgb, Cbr, and Crg components are included so that their mutual moiré-canceling effect is used. In this way, all of the best performance produced by the respective roles of Y, Cgb, Cbr, and Crg can be reflected in each of R, G, and B.
  • R[i, j] = (9 * Y[i, j] + Cgb[i, j] - 2 * Cbr[i, j] + 4 * Crg[i, j]) / 9 ... (49)
  • G[i, j] = (9 * Y[i, j] + 4 * Cgb[i, j] + Cbr[i, j] - 2 * Crg[i, j]) / 9 ... (50)
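Equations (49) and (50) appear in the text; the corresponding B-component equation is inferred below by the same symmetry and can be verified algebraically against the definitions Y = (R + G + B)/3, Cgb = G - B, Cbr = B - R, Crg = R - G.

```python
def ycc_to_rgb(y, cgb, cbr, crg):
    """The 4-to-3 conversion of equations (49), (50) plus an inferred
    third line for B: every output primary mixes all of Y, Cgb, Cbr and
    Crg so that their moire components cancel each other."""
    r = (9 * y + 1 * cgb - 2 * cbr + 4 * crg) / 9   # (49)
    g = (9 * y + 4 * cgb + 1 * cbr - 2 * crg) / 9   # (50)
    b = (9 * y - 2 * cgb + 4 * cbr + 1 * crg) / 9   # inferred by symmetry
    return r, g, b
```

Substituting the definitions of Y, Cgb, Cbr, and Crg into these three lines reproduces R, G, and B exactly (e.g. for R: 3(R+G+B) + (G-B) - 2(B-R) + 4(R-G) = 9R), which is how the inferred B line can be checked.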
  • The image restoration method according to the fourth embodiment has extremely high gradation clarity, simultaneously achieves excellent luminance resolution performance and color resolution performance in the spatial direction, and has the effect of being robust against chromatic aberration.
  • If the square processing is necessary, it can be performed in the same manner as in the first embodiment.
  • The similarity was determined by calculating the similarity between same colors, and at that time only similarities in the 0-, 120-, and 240-degree directions were obtained. In the fifth embodiment, however, similarities in the 30-, 150-, and 270-degree directions are also obtained.
  • the configuration of the electronic camera 1 according to the fifth embodiment is the same as that of FIG. 1 of the first embodiment, and a description thereof will be omitted.
  • the description focuses on the case where G and B components are interpolated at the R pixel position. Also, refer to FIG. 8 of the second embodiment.
  • At the R pixel position, there are three nearest neighbors of the G component at positions in the 0-, 120-, and 240-degree directions, and three second adjacent pixels at positions in the 60-, 180-, and 300-degree directions separated by two pixels.
  • For the B component, there are three nearest neighbors at the adjacent 60-, 180-, and 300-degree positions, and three second adjacent pixels at positions in the 0-, 120-, and 240-degree directions separated by two pixels.
  • The nearest-neighbor pixels of the R component are the six pixels at the 30-, 90-, 150-, 210-, 270-, and 330-degree positions separated by two pixels, and the second adjacent pixels are the six at the 0-, 60-, 120-, 180-, 240-, and 300-degree positions separated by three pixels.
  • C000 = ( |G000 - G180| + |B000 - B180| ) / 2
  • C120 = ( |G120 - G300| + |B120 - B300| ) / 2
  • C240 = ( |G240 - G060| + |B240 - B060| ) / 2
  • the similarity between same colors defined in this way checks the directionality that matches the direction in which the G component and the B component are missing at the R pixel position.
  • the information is between pixels that are very far from each other at a three-pixel interval.
  • the similarities C030, C150, and C270 in the directions of 30 degrees, 150 degrees, and 270 degrees are calculated.
  • the similarity between the same colors in these directions can be defined by a shorter two-pixel interval, unlike the 0, 120, and 240 degree directions.
  • C030 = ( |R030 - Rctr| + |R210 - Rctr| ) / 2
  • C150 = ( |R150 - Rctr| + |R330 - Rctr| ) / 2
  • C270 = ( |R270 - Rctr| + |R090 - Rctr| ) / 2
  • Since these examine similarities in directions that do not match the directions in which the G and B components are missing at the R pixel position, techniques are needed to utilize them effectively.
  3) Peripheral addition of similarity
  • Equation (58) is the same as equation (4) in the first embodiment.
  • c120, c240, c030, c150, and c270 are obtained in the same manner.
  • FIG. 26 shows the azimuth relationship of the similarity described above.
  • The directions in which similarity judgement is meaningful are those in which the G and B components missing at the processing target pixel exist, that is, the 0-, 120-, and 240-degree directions; it is not meaningful to determine similarity in the 30-, 150-, and 270-degree directions themselves. Therefore, it is conceivable to first determine the direction continuously at the reciprocal ratio of the similarities in the 0-, 120-, and 240-degree directions, that is, by (1/c000) : (1/c120) : (1/c240).
  • Since the similarity c000 has the ability to resolve chromatic horizontal lines, the ky-axis direction of the frequency reproduction range of each RGB color component in the delta arrangement of FIG. 20 can be extended to the limit resolution. That is, such a determination method can extend the color resolution of a chromatic image to the limit resolution at the vertices of the hexagon in FIG. 20.
  • However, since these are similarities over the long distance of a three-pixel interval, the directionality cannot be determined when high-frequency components are involved; the adverse effect is greatest in the 30-, 150-, and 270-degree directions, where only the color resolution attainable near the midpoints of the sides of the hexagon in Figure 20 can be exhibited.
  • Therefore, the short-range-correlation similarities c030, c150, and c270 are used effectively.
  • Since simply taking the reciprocal can only determine the similarity in the 30-, 150-, and 270-degree directions, they are converted into similarity in the 0-, 120-, and 240-degree directions: rather than the reciprocal, the value of the similarity itself is interpreted as expressing the similarity in the direction orthogonal to the 30-, 150-, and 270-degree directions, that is, the 120-, 240-, and 0-degree directions, respectively. Therefore, the similarity in the 0-, 120-, and 240-degree directions is determined by the following ratio.
  • the 0 degree direction and the 270 degree direction, the 120 degree direction and the 30 degree direction, and the 240 degree direction and the 150 degree direction are orthogonal relations. This orthogonal relationship is expressed as a 0-degree direction ⁇ 270-degree direction, a 120-degree direction ⁇ 30-degree direction, and a 240-degree direction ⁇ 150-degree direction.
  • the similarity continuously determined using the similarities in the six directions has a spatial resolution that accurately reproduces all the hexagons in FIG. 20 with respect to the chromatic image.
  • the spatial resolution based on the similarity between same colors can always be achieved without being affected by chromatic aberration included in the optical system because similarity between the same color components is observed.
  • the fifth embodiment it is possible to extract all the spatial color resolving power of each RGB single color originally included in the delta arrangement for any image.
  • clear image restoration in the gradation direction is possible, and it shows strong performance even for systems containing chromatic aberration.
  • the calculation of the similarity and the determination of the similarity in the fifth embodiment can be applied to the calculation of the similarity and the determination of the similarity in the fourth embodiment.
  • Such an image restoration method has extremely high gradation clarity, achieves excellent luminance resolution performance and color resolution performance in the spatial direction at the same time, and exhibits an effect of being strong against chromatic aberration.
  • To reduce false colors remaining in the image after interpolation processing, the RGB signal is usually converted to YCbCr, consisting of luminance and color difference, and a color-difference low-pass filter is applied to the Cb and Cr planes.
  • Post-processing is also performed to remove false colors by applying a color-difference median filter, before returning to the RGB color system. Even in the case of the delta arrangement, if Nyquist-frequency components cannot be completely suppressed by the optical low-pass filter, appropriate false-color reduction processing is required to improve the appearance.
  • a post-processing method that does not impair the color resolution performance, which is an excellent feature of the delta array, as much as possible will be described.
  • the configuration of the electronic camera 1 according to the sixth embodiment is the same as that shown in FIG. 1 of the first embodiment, and its description is omitted.
  • FIG. 21 is a flowchart illustrating an outline of image processing performed by the image processing unit 11 in the sixth embodiment. Interpolation processing is performed on a triangular lattice as in the first, second, fourth, and fifth embodiments.
  • Figure 21 starts with the input of an RGB color image after interpolation. For example, steps S1 to S5 in FIG. 2 of the first embodiment are completed, and thereafter, the flowchart in FIG. 21 starts.
  • step S31 the RGB color image data after the interpolation processing is input.
  • step S32 the RGB color system is converted to the YCgbCbrCrg color system unique to the sixth embodiment.
  • step S33 a color determination image is generated.
  • step S34 a color index is calculated using the color determination image generated in step S33.
  • step S35 a color judgment of low saturation or high saturation is performed based on the color index of step S34.
  • step S36 based on the color determination result in step S35, the low-pass filter to be used is switched to perform color difference correction.
  • the color difference data to be corrected is generated in step S32.
  • step S37 conversion is performed to return the YCgbCbrCrg color system to the original RGB color system when the false colors on the color difference plane have been removed.
  • step S38 the obtained RGB color image data is output.
  • the RGB color image data output in step S38 is image data obtained on a triangular lattice.
  • When performing the square processing on the image data obtained on the triangular lattice, the processing of steps S6 and S7 in FIG. 2 is performed as in the first embodiment. The details of the processing in steps S32 to S37 are described below.
  • TCbr and TCrg are also calculated in the same way.
  • the color index Cdiff is calculated using the image for color determination in which the false color is reduced, and the color evaluation is performed in pixel units.
  • The continuous color index Cdiff above is subjected to threshold judgement and converted into a discrete color index BW.
  • the threshold Th is preferably set to about 30 for 256 gradations.
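The threshold judgement just described can be sketched as below. The defining equation for Cdiff is not reproduced in this excerpt; the sum of the absolute colour differences used here is a plausible form and should be treated as an assumption. The Th of about 30 for 256 gradations follows the text.

```python
def color_index(cgb, cbr, crg):
    """Assumed continuous colour index Cdiff: the sum of the absolute
    colour-difference values.  The patent's defining equation is not
    reproduced in this excerpt, so treat this form as an assumption."""
    return abs(cgb) + abs(cbr) + abs(crg)

def discrete_color_index(cdiff, th=30):
    """Threshold judgement converting the continuous Cdiff into the
    discrete index BW: high saturation above Th, low saturation below.
    Th of about 30 for 256 gradations follows the text."""
    return "color" if cdiff > th else "bw"
```

The discrete index BW then selects which low-pass filter is applied in the colour-difference correction of step S36: a weaker one for high-saturation pixels, a stronger one for low-saturation pixels.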
  • FIG. 27 is a flowchart showing the processing.
  • the user gamma correction process converts a linear gradation into an 8-bit gradation suitable for display output, that is, it compresses the dynamic range of the image into the range of the output display device. Independently of this, if the gradation is first converted into a suitable gamma space and the image restoration processing of the first to sixth embodiments is performed in that space, a better restoration result can be obtained. The following methods are available for this gradation conversion.
  • Input signal x (0 ≤ x ≤ xmax)
  • Output signal y (0 ≤ y ≤ ymax)
  • For the inverse conversion: input signal y (0 ≤ y ≤ ymax), output signal x (0 ≤ x ≤ xmax); the input image is an RGB plane
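A minimal sketch of such a gradation conversion pair is given below. The patent only fixes the signal ranges, not the curve; the square-root characteristic used here, and the example values of xmax and ymax, are assumptions for illustration.

```python
import math

XMAX, YMAX = 65535, 255  # assumed example ranges; actual ranges depend on the device

def to_gamma(x):
    # Forward gradation conversion: 0 <= x <= xmax  ->  0 <= y <= ymax.
    # A square-root curve is one common choice; the patent does not fix it.
    return YMAX * math.sqrt(x / XMAX)

def from_gamma(y):
    # Inverse conversion back to the linear space:
    # 0 <= y <= ymax  ->  0 <= x <= xmax.
    return XMAX * (y / YMAX) ** 2

x0 = 16384.0
restored = from_gamma(to_gamma(x0))
```

The restoration processing of the embodiments would run between `to_gamma` and `from_gamma`, so the pair must be mutually inverse, as the round trip above confirms.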
  • This technique can be applied not only to the delta arrangement but also to the interpolation processing of the Bayer arrangement and various other filter arrangements.
  • since the delta arrangement inherently has higher single-color resolution than the Bayer arrangement, inserting this gradation conversion processing before and after the image restoration processing makes even better color clarity possible.
  • the present invention is not necessarily limited to this.
  • the RGB color image data generated in the different embodiments can be appropriately combined with each other. That is, the first through sixth embodiments describe similar-direction determination processing, interpolation processing or direct generation processing of color difference planes, post-processing such as correction, square lattice conversion processing, and the like.
  • An optimal image processing method and processing apparatus can be realized by appropriately combining the processes of the embodiments.
  • the present invention can be applied to a two-chip image sensor.
  • in the two-chip system, for example, one color component is missing at each pixel, and the content of the above embodiments can be applied to the interpolation processing of this one missing color component; conversion processing can be performed in the same manner.
  • the method of directly generating a luminance component and a color difference component from the delta arrangement without going through interpolation processing, according to the fourth embodiment, can be similarly applied to a two-chip image sensor.
  • an example of an electronic camera has been described, but the present invention is not necessarily limited to this content. It may be a video camera for capturing moving images, a personal computer with an image sensor, a mobile phone, or the like. That is, the present invention can be applied to any device that generates color image data with an image sensor.
  • FIG. 23 is a diagram showing this state.
  • the personal computer 100 is provided with the program via the CD-ROM 104.
  • the personal computer 100 has a function of connecting to the communication line 101.
  • the computer 102 is a server computer that provides the above program and stores the program on a recording medium such as the hard disk 103.
  • the communication line 101 is a communication line such as the Internet, personal computer communication, or a dedicated communication line.
  • the computer 102 reads the program from the hard disk 103 and transmits it to the personal computer 100 via the communication line 101; that is, the program is transmitted as a data signal on a carrier wave through the communication line 101.
  • the program can thus be supplied as a computer-readable computer program product in various forms, such as a recording medium or a carrier wave.
  • the image data obtained by the processing can then be output.
  • a similarity degree is calculated for each of a first direction group including a plurality of directions, and for each of a second direction group including a plurality of directions that are orthogonal to at least one direction of the first direction group and different from the first direction group, and the similarity is determined from both together. For example, since the similarity in three directions is determined finely using the similarity degrees in six directions of the delta arrangement, a spatial resolution capable of reproducing all the hexagons in FIG. is obtained. That is, it is possible to extract all of the spatial color resolving power of each of the single RGB colors that the delta arrangement originally possesses.
  • the color information of the first to third color components is always weighted and added at a uniform (1: 1: 1) color component ratio.
  • the color information of a color component different from the color information of the first image is generated.
  • since the color information of the color components generated in this way has an extremely high gradation reproduction potential, an image with high spatial resolution is obtained, and an image that connects to peripheral pixels very smoothly without being affected by chromatic aberration is obtained.
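The uniform (1:1:1) weighted addition described above can be sketched for an R pixel of the delta arrangement as follows. The equal weighting of the neighboring pixels is an assumption for illustration; in the embodiments the individual weights are varied with the judged directional similarity, while the total R:G:B contribution is kept at 1:1:1.

```python
def luminance_delta(center_r, g_neighbors, b_neighbors):
    # Sketch of the uniform 1:1:1 weighted addition at an R pixel: the
    # missing G and B contributions are taken from neighboring pixels so
    # that the total R:G:B weight ratio stays 1:1:1.  Equal neighbor
    # weights are an illustrative assumption.
    y = (center_r
         + sum(g_neighbors) / len(g_neighbors)
         + sum(b_neighbors) / len(b_neighbors)) / 3.0
    return y

y = luminance_delta(90.0, [150.0, 160.0, 170.0], [30.0, 40.0, 50.0])
```

Because every pixel receives the same 1:1:1 color component ratio, the generated luminance plane is free of the color-dependent modulation that would otherwise appear as false structure.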


Abstract

An image processing method comprising: an image acquisition process of acquiring a first image that is expressed in a color system consisting of a plurality of color components and consists of a plurality of pixels, each having color information of at least one color component, arranged in a delta lattice form; a color information generation process of generating, using the color information of the acquired first image, at least one new piece of color information at the same delta-lattice pixel positions as the first image; a pixel position conversion process of converting the color information of a plurality of pixels at the delta-lattice pixel positions, including the generated color information, into color information at respective inter-pixel positions by performing one-dimensional displacement processing between pixels arranged in one direction; and an output process of outputting, using the position-converted color information, a second image in which a plurality of pixels are arranged in a square lattice form.

Description

Image processing method, image processing program, and image processing apparatus. The disclosures of the following priority applications are incorporated herein by reference:
Japanese Patent Application No. 2002-150788 (filed May 24, 2002)
Japanese Patent Application No. 2002-159228 (filed May 31, 2002)
Japanese Patent Application No. 2002-159229 (filed May 31, 2002)
Japanese Patent Application No. 2002-159250 (filed May 31, 2002)
Technical Field

The present invention relates to an image processing method, an image processing program, and an image processing apparatus for processing image data obtained with a color filter in a delta arrangement.

Background Art
An electronic camera captures an image of a subject with an image sensor such as a CCD. For such image sensors, a Bayer arrangement, in which color filters of the three colors RGB (red, green, blue) are arranged as shown in FIG. 24(a), is known. A delta arrangement, arranged as shown in FIG. 24(b), is also known, as is a honeycomb arrangement, arranged as shown in FIG. 24(c). For image data obtained with the Bayer arrangement, various image processing methods have been proposed, for example in U.S. Patent No. 5,552,827, U.S. Patent No. 5,629,734, and Japanese Laid-Open Patent Publication No. 2001-245314.

For image data obtained with the delta arrangement, on the other hand, image processing methods have been proposed in Japanese Laid-Open Patent Publication No. 8-340455, U.S. Patent No. 5,805,217, and elsewhere.

However, U.S. Patent No. 5,805,217 does not perform image processing adapted to the characteristics unique to the delta arrangement, and Japanese Laid-Open Patent Publication No. 8-340455 proposes only a simple image restoration method. Moreover, the various image processing methods proposed for the Bayer arrangement cannot necessarily be applied to the delta arrangement as they are.

Disclosure of the Invention
The present invention provides an image processing method, an image processing program, and an image processing apparatus that output high-definition image data arranged on a square lattice based on image data obtained with a triangular-lattice color filter such as a delta arrangement.

The present invention also provides an image processing method, an image processing program, and an image processing apparatus that, based on image data obtained with a triangular-lattice color filter such as a delta arrangement, output image data capable of extracting the spatial color resolving power that the delta arrangement inherently possesses.

The present invention also provides an image processing method, an image processing program, and an image processing apparatus that, based on image data obtained with a triangular-lattice color filter such as a delta arrangement, output image data interpolated with high definition or converted with high definition into another color system.

The present invention also provides an image processing method, an image processing program, and an image processing apparatus that, based on image data obtained with a triangular-lattice color filter such as a delta arrangement, output high-definition image data of, for example, a different color system.
A first image processing method of the present invention comprises: an image acquisition procedure of acquiring a first image that is expressed in a color system consisting of a plurality of color components, consists of a plurality of pixels each having color information of at least one color component, and whose pixels are arranged on a triangular lattice; a color information generation procedure of generating, using the color information of the acquired first image, at least one piece of new color information at the same triangular-lattice pixel positions as the first image; a pixel position conversion procedure of converting the color information of the plurality of pixels at the triangular-lattice pixel positions, including the generated color information, into color information at inter-pixel positions by performing one-dimensional displacement processing between pixels arranged in one direction; and an output procedure of outputting, using the position-converted color information, a second image in which a plurality of pixels are arranged on a square lattice.

In this first image processing method, the new color information is preferably color information of a color component that is missing at each pixel of the first image among the color components of the color system of the first image. The new color information may also preferably be color information of a color system different from that of the first image.

The one-dimensional displacement processing is preferably performed using a one-dimensional filter consisting of positive and negative coefficient values, and is preferably applied to the first image row by row, on every other row.
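The one-dimensional displacement processing above can be illustrated with a common half-pixel interpolation filter. The four-tap coefficients (-1, 9, 9, -1)/16 are a standard half-pel interpolator used here as an assumption; the method only requires a one-dimensional filter containing both positive and negative coefficient values, applied to every other row.

```python
def half_pixel_shift(row):
    # One-dimensional displacement of a row by half a pixel using a
    # four-tap filter with positive and negative coefficients.
    # The taps (-1, 9, 9, -1)/16 are a standard half-pel interpolator;
    # the method only requires that positive and negative values appear.
    out = []
    for i in range(1, len(row) - 2):
        v = (-row[i - 1] + 9 * row[i] + 9 * row[i + 1] - row[i + 2]) / 16.0
        out.append(v)
    return out

# On a linear ramp the half-pixel values fall exactly between the samples.
shifted = half_pixel_shift([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])
```

Applying such a shift to every other row moves the staggered triangular-lattice rows onto a common square grid, which is the purpose of the pixel position conversion procedure.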
Preferably, the method further comprises a similarity determination procedure of determining the strength of similarity in at least three directions, and the color information generation procedure generates the new color information according to the determined similarity strength. In this case, the similarity determination procedure preferably calculates similarity degrees in at least three directions and determines the strength of similarity in each direction based on the reciprocals of the similarity degrees.
Preferably, the method further comprises: a color difference generation procedure of generating color information of a color difference component at the triangular-lattice pixel positions based on the color information of the plurality of pixels at the triangular-lattice pixel positions, including the generated color information; and a correction procedure of correcting the color information of the color difference component generated in the color difference generation procedure.

Preferably, the method further comprises: a luminance generation procedure of generating color information of a luminance component at the triangular-lattice pixel positions based on the color information of the plurality of pixels at the triangular-lattice pixel positions, including the generated color information; and a correction procedure of correcting the color information of the luminance component generated in the luminance generation procedure.
A second image processing method of the present invention comprises: an image acquisition procedure of acquiring a first image that is expressed by first through n-th color components (n ≥ 2) and in which a plurality of pixels each having color information of one color component are arranged on a triangular lattice; an interpolation procedure of interpolating, using the color information of the acquired first image, color information of the first color component at pixels where the first color component is missing; and an output procedure of outputting a second image based on the color information of the first image and the interpolated color information. For the pixel to be interpolated in the first image, the interpolation procedure 1) obtains average information of the first color component by a variable computation, and 2) obtains curvature information of at least one of the first through n-th color components by a fixed computation, and performs the interpolation based on the average information and the curvature information.

In this second image processing method, preferably, a similarity determination procedure of determining the strength of similarity in at least three directions is further provided, and the interpolation procedure varies the computation of the average information of the first color component according to the similarity strength determined in the similarity determination procedure. The curvature information is preferably obtained by a second-derivative computation. When a first image expressed by first through third color components is input, the interpolation is preferably performed based on the curvature information of all of the first through third color components.
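The "variable average plus fixed curvature" interpolation above can be sketched in one dimension. The choice of a two-sample average, a three-sample second difference, and the gain k are assumptions for illustration; in the method itself the averaging direction is chosen by the similarity determination while the curvature computation stays fixed.

```python
def interpolate(avg_pair, curv_triplet, k=0.25):
    # Sketch of variable average information plus fixed curvature
    # information: the average of the two nearest same-color samples
    # (chosen along the most similar direction) is corrected by the
    # second difference of another color component at the same place.
    # The gain k and the 1-D setting are illustrative assumptions.
    left, right = avg_pair
    a, b, c = curv_triplet
    curvature = 2 * b - a - c   # negative second difference
    return (left + right) / 2.0 + k * curvature

val = interpolate((100.0, 110.0), (60.0, 80.0, 70.0))
```

The curvature term lets the interpolated component follow the local shape of another, fully sampled component, which sharpens edges that a plain average would blur.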
A third image processing method of the present invention comprises: a recording procedure of recording a first image that is expressed by a plurality of color components and in which a plurality of pixels each having color information of one color component are arranged non-rectangularly; a first direction group similarity calculation procedure of calculating, using the color information of the first image, a similarity degree for each of a first direction group consisting of a plurality of directions; a second direction group similarity calculation procedure of calculating, using the color information of the first image, a similarity degree for each of a second direction group consisting of a plurality of directions that are orthogonal to at least one direction of the first direction group and different from the first direction group; and a similarity determination procedure of determining the strength of similarity among the directions of the first direction group by using the similarity degrees of the first direction group and of the second direction group together.

In this third image processing method, preferably, a color information generation procedure of generating at least one piece of new color information at a pixel position of the first image based on the determination result of the similarity determination procedure is further provided. In this case, when the first image is expressed by first through third color components, the color information generation procedure preferably generates color information of the second color component and/or the third color component at pixels having the first color component. The color information generation procedure may also preferably generate color information of a luminance component different from the color information of the first image, or color information of a color difference component different from the color information of the first image. In the latter case, when the first image is expressed by first through third color components, the color information generation procedure preferably generates color information of three kinds of color difference components: (1) the color difference component between the first and second color components, (2) the color difference component between the second and third color components, and (3) the color difference component between the third and first color components.
Preferably, the first direction group similarity calculation procedure calculates similarity degrees C_D1, C_D2, ..., C_DN for N directions (N ≥ 2) denoted D1, D2, ..., DN, and the second direction group similarity calculation procedure calculates similarity degrees C_D1', C_D2', ..., C_DN' for N directions (N ≥ 2) denoted D1', D2', ..., DN' (where Di' is the direction orthogonal to Di, i = 1, 2, ..., N). The similarity determination procedure then preferably determines the strength of similarity among the directions of the first direction group using a function based on the ratios (C_D1' / C_D1) : (C_D2' / C_D2) : ... : (C_DN' / C_DN).

Preferably, the pixels of the first image are arranged on a triangular lattice, and both the first and second direction group similarity calculation procedures are set to N = 3.
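The ratio-based judgment above can be sketched as follows. Normalizing the ratios into weights is an illustrative choice, not the exact function of the embodiments: a small similarity degree C_Di (strong similarity, by the reciprocal convention) combined with a large orthogonal degree C_Di' gives direction Di a large weight.

```python
def direction_weights(first_group, second_group, eps=1e-6):
    # Sketch of judging similarity strength among the first direction
    # group from the ratios (C_Di' / C_Di).  Small C_Di means strong
    # similarity along Di; dividing by it (with eps guarding against
    # zero) makes that direction dominate.  The exact function used in
    # the embodiments is not reproduced here.
    ratios = [cp / (c + eps) for c, cp in zip(first_group, second_group)]
    total = sum(ratios)
    return [r / total for r in ratios]

# Direction 0 is much more similar (small C_D) than the other two (N = 3).
w = direction_weights([2.0, 40.0, 38.0], [36.0, 4.0, 6.0])
```

Using both a direction and its orthogonal counterpart makes the judgment contrast-normalized: a direction is favored not because its similarity degree is small in absolute terms, but because it is small relative to the perpendicular one.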
A fourth image processing method of the present invention comprises: a recording procedure of recording a first image that is expressed by a plurality of color components and in which a plurality of pixels each having color information of one color component are arranged non-rectangularly; a first direction group similarity calculation procedure of calculating, using the color information of the first image, for each of a first direction group consisting of a plurality of directions, a similarity degree composed of color information at a first pixel interval; a second direction group similarity calculation procedure of calculating, using the color information of the first image, for each of a second direction group consisting of a plurality of directions different from the first direction group, a similarity degree composed of color information at a second pixel interval; and a similarity determination procedure of determining the strength of similarity among the directions of the first direction group by using the similarity degrees of the first direction group and of the second direction group together.

In this fourth image processing method, preferably, a color information generation procedure of generating at least one piece of new color information at a pixel position of the first image based on the determination result of the similarity determination procedure is further provided.
Preferably, both the first and second direction group similarity calculation procedures calculate, as the similarity degree, a same-color similarity composed of color information between the same color components. In this case, the first direction group preferably consists of directions in which color information of the same color component is arranged at the first pixel interval, and the second direction group of directions in which color information of the same color component is arranged at the second pixel interval.

Preferably, the first image is expressed by first through third color components, and both similarity calculation procedures calculate the similarity degree using at least two of the following: (1) a similarity component composed of color information of only the first color component, (2) a similarity component composed of color information of only the second color component, and (3) a similarity component composed of color information of only the third color component.

The first pixel interval is preferably longer than the second pixel interval; for example, the first pixel interval is about three pixels and the second pixel interval is about two pixels.
In the third or fourth image processing method above, both the first and second direction group similarity calculation procedures preferably calculate the similarity degree including not only the similarity degree calculated for the pixel being processed but also the similarity degrees calculated for the pixels surrounding it.

Preferably, the first image has its plurality of pixels arranged on a triangular lattice, is expressed by first through third color components, and has those color components distributed at a uniform pixel density.
A fifth image processing method of the present invention comprises: an image acquisition procedure of acquiring a first image that is expressed by first through n-th color components (n ≥ 2) and in which a plurality of pixels each having color information of one color component are arranged on a triangular lattice; an interpolation procedure of interpolating, using the color information of the acquired first image, the first color component at pixels where the first color component is missing; and an output procedure of outputting a second image based on the color information of the first image and the interpolated color information. For the pixel to be interpolated in the first image, the interpolation procedure obtains average information of the first color component using color information of a region including the pixels at which the first color component is second closest, and performs the interpolation. In this fifth image processing method, preferably, a similarity determination procedure of determining the strength of similarity in at least three directions is further provided, and the interpolation procedure obtains the average information of the first color component according to the similarity strength determined in the similarity determination procedure.
A sixth image processing method of the present invention comprises: an image acquisition procedure of acquiring a first image that is expressed by a plurality of color components and in which a plurality of pixels each having color information of one color component are arranged on a triangular lattice; a color information generation procedure of generating color information of a color component different from the color information of the first image by weighted addition of the color information of the acquired first image with variable coefficient values of zero or more; and an output procedure of outputting a second image using the generated color information. For the pixel being processed in the first image, the color information generation procedure performs the weighted addition over color information within a region including the pixels at which a color component different from that pixel's color component is second closest.

In this sixth image processing method, preferably, a similarity determination procedure of determining the strength of similarity in at least three directions is further provided, and the color information generation procedure varies the coefficient values of the weighted addition according to the determined similarity strength. When the first image is expressed by first through third color components and the pixel being processed has the first color component, the weighted addition preferably covers a region including the pixel being processed, the pixels at which the second color component is second closest, and the pixels at which the third color component is second closest. Preferably, after the color information generation procedure and before the output procedure, a correction procedure is further provided that corrects the generated color information of the color component different from the color information of the first image by filter processing with predetermined fixed filter coefficients; in this case, the filter coefficients preferably include both positive and negative values.
A seventh image processing method of the present invention comprises: an image acquisition procedure of acquiring a first image that is expressed by first through n-th color components (n ≥ 2) and in which a plurality of pixels each having color information of one color component are arranged on a triangular lattice; a color difference generation procedure of generating, using the color information of the first image, color information of a color difference component between the first and second color components; and an output procedure of outputting a second image using the generated color information of the color difference component. For a pixel of the first image having the first color component, the color difference generation procedure generates the color information of the color difference component using the color information of at least the pixels at which the second color component is second closest.

In this seventh image processing method, for a pixel being processed that has the first color component, the color difference generation procedure preferably generates the color information of the color difference component based on (1) the color information of the first color component of that pixel and (2) average information of the color information of the second color component within a region including the pixels at which the second color component is second closest to that pixel. The color difference generation procedure preferably further generates the color information of the color difference component based on curvature information of the second color component for the pixel being processed.

Preferably, a similarity determination procedure of determining the strength of similarity in at least three directions is further provided, and the color difference generation procedure generates the color information of the color difference component according to the similarity strength.
上記第 5〜第 7の画像処理方法において、 出力手順は、 第 1の画像と同じ画素 位置に、 第 2画像を出力するのが好ましい。  In the fifth to seventh image processing methods, it is preferable that, in the output procedure, the second image is output to the same pixel position as the first image.
本発明の第 8の画像処理方法は、 第 1〜第 3色成分で表され、 1つの画素に 1 つの色成分の色情報を有する複数の画素が均等色配分された第 1の画像を取得す る画像取得手順と、 取得した第 1の画像の色情報を零以上の可変な係数値で加重 加算することによって、 第 1の画像の色情報と異なる色成分の色情報を生成する 色情報生成手順と、 生成された色情報を使用して第 2の画像を出力する出力手順 を備え、 色情報生成手順は、 第 1の画像の全ての画素において、 第 1〜第 3色成 分の色情報を常に均等な色成分比率で加重加算する。 この第 8の画像処理方法において、 複数の方向に対する類似性の強弱を判定す る類似性判定手順をさらに備え、 色情報生成手順は、 類似性判定手順で判定され た類似性の強さに応じて加重加算の係数値を可変にするのが好ましい。 An eighth image processing method according to the present invention obtains a first image represented by first to third color components and in which a plurality of pixels having color information of one color component are uniformly distributed to one pixel. Image information, and color information of a color component different from the color information of the first image by weighting and adding the acquired color information of the first image with a variable coefficient value of zero or more. A generation step, and an output step of outputting a second image using the generated color information.The color information generation step includes the steps of: first, second, and third color components for all pixels of the first image; The color information is always weighted and added at a uniform color component ratio. In the eighth image processing method, the image processing method further includes a similarity determination procedure for determining the strength of the similarity in a plurality of directions, and the color information generation procedure is based on the strength of the similarity determined in the similarity determination procedure. It is preferable to make the coefficient value of the weighted addition variable.
また、 第 1の画像は、 複数の画素が三角格子状に配置されているのが好ましい。 また、 色情報生成手順後出力手順前に、 色情報生成手順で生成された第 1の画 像の色情報と異なる色成分の色情報を、 予め決められた固定のフィルタ係数から なるフィルタ処理により、 補正する補正手順をさらに備えるのが好ましい。 この 場合、 フィルタ係数の中に、 正および負の値を含むのが好ましい。  In the first image, a plurality of pixels are preferably arranged in a triangular lattice. Also, after the color information generation procedure and before the output procedure, color information of a color component different from the color information of the first image generated in the color information generation procedure is filtered by a filter process including a predetermined fixed filter coefficient. It is preferable to further include a correction procedure for performing correction. In this case, it is preferable that the filter coefficients include positive and negative values.
本発明の第 9の画像処理方法は、 3種類以上の色成分で表され、 1つの画素に 1つの色成分の色情報を有する複数の画素からなる第 1の画像を取得する画像取得手順と、 取得した第 1の画像の色情報を用いて、 輝度成分の色情報と少なくとも 3種類の色差成分の色情報とを生成する色情報生成手順と、 色情報生成手順で生成された輝度成分の色情報と色差成分の色情報とを使用して第 2の画像を出力する出力手順とを備える。  The ninth image processing method of the present invention includes an image acquisition procedure for acquiring a first image composed of a plurality of pixels represented by three or more types of color components, each pixel having color information of one color component; a color information generation procedure for generating color information of a luminance component and color information of at least three types of color difference components using the acquired color information of the first image; and an output procedure for outputting a second image using the color information of the luminance component and the color information of the color difference components generated in the color information generation procedure.
この第 9の画像処理方法において、 輝度成分の色情報と少なくとも 3種類の色 差成分の色情報とを用いて、 3種類の色成分の色情報に変換する変換手順をさら に備え、 出力手順は、 変換手順で変換された 3種類の色成分の色情報を使用して 第 2の画像を出力するのが好ましい。  The ninth image processing method further includes a conversion procedure of converting the color information of the luminance component and the color information of at least three types of color difference components into color information of three types of color components. Preferably, the second image is output using the color information of the three types of color components converted by the conversion procedure.
また、 色情報生成手順で生成された輝度成分の色情報と色差成分の色情報は、 第 1の画像の 3種類以上の色成分とは異なる成分の色情報であるのが好ましい。 また、 第 1の画像は、 第 1〜第 3色成分で表され、 複数の画素が均等色配分さ れ、 色情報生成手順は、 ( 1 ) 第 1〜第 3色成分の色成分比率が 1 : 1 : 1で構 成される輝度成分の色情報と、 (2 ) 第 1色成分と第 2色成分の間の色差成分の 色情報と、 ( 3 ) 第 2色成分と第 3色成分の間の色差成分の色情報と、 (4 ) 第 3色成分と第 1色成分の間の色差成分の色情報とを生成するのが好ましい。  In addition, it is preferable that the color information of the luminance component and the color information of the color difference component generated in the color information generation procedure are color information of components different from the three or more types of color components of the first image. Further, the first image is represented by first to third color components, a plurality of pixels are uniformly distributed, and the color information generation procedure is as follows: (1) The color component ratio of the first to third color components is Color information of a luminance component composed of 1: 1: 1, (2) color information of a color difference component between a first color component and a second color component, and (3) a second color component and a third color. It is preferable to generate color information of a color difference component between the components and (4) color information of a color difference component between the third color component and the first color component.
また、 複数の方向に対する類似性の強弱を判定する類似性判定手順をさらに備え、 色情報生成手順は、 類似性判定手順で判定された類似性の強さに応じて輝度成分の色情報と少なくとも 3種類の色差成分の色情報とを生成するのが好ましい。 また、 第 1の画像は、 複数の画素が三角格子状に配置されているのが好ましい。 本発明の第 1 0の画像処理方法は、 3種類以上の色成分で表され、 1つの画素に 1つの色成分の色情報を有する複数の画素からなる第 1の画像を取得する画像取得手順と、 取得した第 1の画像の色情報を用いて、 少なくとも 3種類の色差成分の色情報を生成する色差生成手順と、 生成した各々の色差成分の色情報に対して補正処理を行う補正手順と、 補正した色差成分の色情報を使用して第 2の画像を出力する出力手順とを備える。  It is preferable that the method further include a similarity determination procedure for determining the strength of similarity in a plurality of directions, and that the color information generation procedure generate the color information of the luminance component and the color information of the at least three types of color difference components according to the similarity strength determined in the similarity determination procedure. It is also preferable that the first image have a plurality of pixels arranged in a triangular lattice. A tenth image processing method of the present invention includes an image acquisition procedure for acquiring a first image composed of a plurality of pixels represented by three or more types of color components, each pixel having color information of one color component; a color difference generation procedure for generating color information of at least three types of color difference components using the acquired color information of the first image; a correction procedure for performing correction processing on the generated color information of each color difference component; and an output procedure for outputting a second image using the corrected color information of the color difference components.
この画像処理方法において、 第 1の画像は第 1〜第 3色成分で表され、 色差生 成手順は、 1 ) 第 1色成分と第 2色成分の間の色差成分の色情報と、 2 ) 第 2色 成分と第 3色成分の間の色差成分の色情報と、 3 ) 第 3色成分と第 1色成分の間 の色差成分の色情報とを生成するのが好ましい。  In this image processing method, the first image is represented by first to third color components, and the color difference generation procedure includes: 1) color information of a color difference component between the first color component and the second color component; It is preferable to generate color information of a color difference component between the second color component and the third color component, and 3) color information of a color difference component between the third color component and the first color component.
また、 第 1の画像は第 1〜第 3色成分で表され、 色差生成手順は、 第 1の画像 の色情報を用いて、 第 1の画像の色情報と異なる輝度成分の色情報を生成し、 Also, the first image is represented by first to third color components, and the color difference generation procedure uses the color information of the first image to generate color information of a luminance component different from the color information of the first image. And
1 ) 第 1色成分と輝度成分の間の色差成分の色情報と、 2 ) 第 2色成分と輝度成 分の間の色差成分の色情報と、 3 ) 第 3色成分と輝度成分の間の色差成分の色情 報とを生成するのが好ましい。 この場合、 第 1の画像は、 第 1〜第 3色成分が複 数の画素に均等色配分され、 色差生成手順は、 輝度成分として、 第 1〜第 3色成 分の色成分比率が 1 : 1 : 1で構成される輝度成分の色情報を生成するのが好ま しい。 1) color information of the color difference component between the first color component and the luminance component, 2) color information of the color difference component between the second color component and the luminance component, and 3) between the third color component and the luminance component. It is preferable to generate the color information of the color difference component. In this case, in the first image, the first to third color components are evenly distributed to a plurality of pixels, and the color difference generation procedure determines that the color component ratio of the first to third color components is 1 as a luminance component. It is preferable to generate color information of a luminance component composed of 1: 1.
上記第 8 ~ 1 0の画像処理方法において、 出力手順は、 第 1の画像と同じ画素 位置に、 第 2画像を出力するのが好ましい。  In the eighth to tenth image processing methods, it is preferable that, in the output procedure, the second image is output at the same pixel position as the first image.
本発明のコンピュータ読み込み可能なコンピュー夕プログラム製品は、 上記の いずれかに記載の画像処理方法の手順をコンピュータに実行させるための画像処 理プログラムを有する。  A computer-readable computer program product according to the present invention has an image processing program for causing a computer to execute the procedure of the image processing method described in any of the above.
このコンピュータプログラム製品は、 画像処理プログラムが記録された記録媒 体であるのが好ましい。  This computer program product is preferably a recording medium on which an image processing program is recorded.
本発明の画像処理装置は、 上記のいずれかに記載の画像処理方法の手順を実行 する制御装置を備える。 図面の簡単な説明 図 1は、 第 1の実施の形態における電子カメラの機能ブロック図である。 An image processing device according to the present invention includes a control device that executes a procedure of the image processing method according to any one of the above. BRIEF DESCRIPTION OF THE FIGURES FIG. 1 is a functional block diagram of the electronic camera according to the first embodiment.
図 2は、 第 1の実施の形態において、 画像処理部が行う画像処理の概要を示す フローチャー卜である。  FIG. 2 is a flowchart showing an outline of image processing performed by the image processing unit in the first embodiment.
図 3は、 デルタ配列の撮像素子により得られた画素の位置関係を示す図である。 図 4は、 周辺加算に使用する係数を示す図である。  FIG. 3 is a diagram showing a positional relationship between pixels obtained by an image sensor in a delta arrangement. FIG. 4 is a diagram showing coefficients used for peripheral addition.
図 5は、 曲率情報 dRを求めるときに使用する係数値を示す図である。  FIG. 5 is a diagram showing coefficient values used when obtaining the curvature information dR.
図 6は、 デルタ配列の無彩色の空間周波数再現領域を示す図である。  FIG. 6 is a diagram showing an achromatic spatial frequency reproduction region in a delta arrangement.
図 7は、 1次元変位処理に使用する係数値を示す図である。  FIG. 7 is a diagram showing coefficient values used for the one-dimensional displacement processing.
図 8は、 第 2の実施の形態の演算で使用する画素位置を示す図である。  FIG. 8 is a diagram illustrating pixel positions used in the calculation according to the second embodiment.
図 9は、 曲率情報 dR、 dG、 dBを求めるときに使用する係数値を示す図である。 図 1 0は、 第 3の実施の形態において、 画像処理部が行う画像処理の概要を示 すフローチャートである。  FIG. 9 is a diagram showing coefficient values used when obtaining the curvature information dR, dG, and dB. FIG. 10 is a flowchart showing an outline of image processing performed by the image processing unit in the third embodiment.
図 1 1は、 ローパスフィル夕の係数値を示す図である。  FIG. 11 is a diagram illustrating coefficient values of a low-pass filter.
図 1 2は、 他のローパスフィル夕の係数値を示す図である。  FIG. 12 is a diagram showing coefficient values of another low-pass filter.
図 1 3は、 他のローパスフィルタの係数値を示す図である。  FIG. 13 is a diagram illustrating coefficient values of another low-pass filter.
図 1 4は、 ラプラシアンの係数値を示す図である。  FIG. 14 is a diagram showing Laplacian coefficient values.
図 1 5は、 他のラプラシアンの係数値を示す図である。  FIG. 15 is a diagram showing coefficient values of other Laplacians.
図 1 6は、 他のラプラシアンの係数値を示す図である。  FIG. 16 is a diagram showing coefficient values of other Laplacians.
図 1 7は、 デルタ配列のデルタ面から、 直接、 輝度面(Y)と 3つの色差面(Cgb, Cbr, Crg)を生成し、 その後、 元の R G Bの表色系に変換する概念を示す図である。 図 1 8は、 第 4の実施の形態において、 画像処理部が行う画像処理の概要を示すフローチャートである。  Fig. 17 shows the concept of generating a luminance plane (Y) and three color difference planes (Cgb, Cbr, Crg) directly from the delta plane of the delta array, and then converting them to the original RGB color system. FIG. 18 is a flowchart showing an outline of the image processing performed by the image processing unit in the fourth embodiment.
図 1 9は、 R位置には Crg, Cbr成分がその最隣接画素には Cgb成分が求まっている 様子を示す図である。  FIG. 19 is a diagram showing a state where the Crg and Cbr components are obtained at the R position and the Cgb component is obtained at the nearest neighbor pixel.
図 2 0は、 デルタ配列の R G B各色成分の空間周波数再現領域を示す図である。 図 2 1は、 第 6の実施の形態において、 画像処理部が行う画像処理の概要を示 すフローチャートである。  FIG. 20 is a diagram illustrating a spatial frequency reproduction region of each of the RGB components of the delta array. FIG. 21 is a flowchart illustrating an outline of image processing performed by the image processing unit in the sixth embodiment.
図 2 2は、 隣接画素について定義する図である。  FIG. 22 is a diagram defining adjacent pixels.
図 2 3は、 プログラムを、 C D— R O Mなどの記録媒体やインターネットなどのデータ信号を通じて提供する様子を示す図である。  FIG. 23 is a diagram showing how the program is provided via a recording medium such as a CD-ROM or via a data signal such as the Internet.
図 2 5は、 デルタ配列で得られた画像データについて、 三角格子上で補間処理 し、 正方格子データに復元する処理の概念を示す図である。  FIG. 25 is a diagram showing the concept of a process of interpolating image data obtained in a delta array on a triangular lattice and restoring the image data to a square lattice data.
図 2 6は、 第 5の実施の形態において、 類似度の方位関係を示す図である。 図 2 7は、 画像復元処理と階調処理を示すフローチャートである。 発明を実施するための最良の形態  FIG. 26 is a diagram illustrating the azimuth relationship of the similarity in the fifth embodiment. FIG. 27 is a flowchart showing the image restoration processing and the gradation processing. BEST MODE FOR CARRYING OUT THE INVENTION
―第 1の実施の形態―  —First Embodiment—
(電子カメラの構成)  (Configuration of electronic camera)
図 1は、 第 1の実施の形態における電子カメラの機能ブロック図である。 電子カメラ 1は、 A / D変換部 1 0、 画像処理部 1 1、 制御部 1 2、 メモリ 1 3、 圧縮/伸長部 1 4、 表示画像生成部 1 5を備える。 また、 メモリカード (カード状のリムーバブルメモリ) 1 6とのインタフェースをとるメモリカード用インタフェース部 1 7および所定のケーブルや無線伝送路を介して P C (パーソナルコンピュータ) 1 8等の外部装置とのインタフェースをとる外部インタフェース部 1 9を備える。 これらの各ブロックはバス 2 9を介して相互に接続される。 画像処理部 1 1は、 例えば、 画像処理専用の 1チップ · マイクロプロセッサで構成される。  FIG. 1 is a functional block diagram of the electronic camera according to the first embodiment. The electronic camera 1 includes an A/D conversion unit 10, an image processing unit 11, a control unit 12, a memory 13, a compression/decompression unit 14, and a display image generation unit 15. It also includes a memory card interface unit 17 for interfacing with a memory card (card-shaped removable memory) 16, and an external interface unit 19 for interfacing with an external device such as a PC (personal computer) 18 via a predetermined cable or wireless transmission path. These blocks are interconnected via a bus 29. The image processing unit 11 is composed of, for example, a one-chip microprocessor dedicated to image processing.
電子カメラ 1は、 さらに、 撮影光学系 2 0、 撮像素子 2 1、 アナログ信号処理部 2 2、 タイミング制御部 2 3を備える。 撮像素子 2 1には撮影光学系 2 0で取得された被写体の光学像が結像し、 撮像素子 2 1の出力はアナログ信号処理部 2 2に接続される。 アナログ信号処理部 2 2の出力は、 A / D変換部 1 0に接続される。 タイミング制御部 2 3には制御部 1 2の出力が接続され、 タイミング制御部 2 3の出力は、 撮像素子 2 1、 アナログ信号処理部 2 2、 A / D変換部 1 0、 画像処理部 1 1に接続される。 撮像素子 2 1は例えば C C Dなどで構成される。 電子カメラ 1は、 さらに、 レリーズボタンやモード切り換え用の選択ボタン等に相当する操作部 2 4およびモニタ 2 5を備える。 操作部 2 4の出力は制御部 1 2に接続され、 モニタ 2 5には表示画像生成部 1 5の出力が接続される。  The electronic camera 1 further includes a photographing optical system 20, an image sensor 21, an analog signal processing unit 22, and a timing control unit 23. An optical image of the subject obtained by the photographing optical system 20 is formed on the image sensor 21, and the output of the image sensor 21 is connected to the analog signal processing unit 22. The output of the analog signal processing unit 22 is connected to the A/D conversion unit 10. The output of the control unit 12 is connected to the timing control unit 23, and the output of the timing control unit 23 is connected to the image sensor 21, the analog signal processing unit 22, the A/D conversion unit 10, and the image processing unit 11. The image sensor 21 is composed of, for example, a CCD or the like. The electronic camera 1 further includes an operation unit 24, corresponding to a release button, a selection button for mode switching, and the like, and a monitor 25. The output of the operation unit 24 is connected to the control unit 12, and the output of the display image generation unit 15 is connected to the monitor 25.
なお、 P C 1 8には、 モニタ 2 6やプリンタ 2 7等が接続されており、 C D— R O M 2 8に記録されたアプリケーションプログラムが予めインストールされている。 また、 P C 1 8は、 不図示の C P U、 メモリ、 ハードディスクの他に、 メモリカード 1 6とのインタフェースをとるメモリカード用インタフェース部 (不図示) や所定のケーブルや無線伝送路を介して電子カメラ 1等の外部装置とのインタフェースをとる外部インタフェース部 (不図示) を備える。  A monitor 26, a printer 27, and the like are connected to the PC 18, and an application program recorded on the CD-ROM 28 is installed in advance. In addition to a CPU, a memory, and a hard disk (not shown), the PC 18 includes a memory card interface unit (not shown) for interfacing with the memory card 16 and an external interface unit (not shown) for interfacing with an external device such as the electronic camera 1 via a predetermined cable or wireless transmission path.
図 1のような構成の電子カメラ 1において、 操作部 2 4を介し、 操作者によって撮影モードが選択されてレリーズボタンが押されると、 制御部 1 2は、 タイミング制御部 2 3を介して、 撮像素子 2 1、 アナログ信号処理部 2 2、 A / D変換部 1 0に対するタイミング制御を行う。 撮像素子 2 1は、 光学像に対応する画像信号を生成する。 その画像信号は、 アナログ信号処理部 2 2で所定の信号処理が行われ、 A / D変換部 1 0でディジタル化され、 画像データとして、 画像処理部 1 1に供給される。  In the electronic camera 1 configured as shown in FIG. 1, when the operator selects a shooting mode and presses the release button via the operation unit 24, the control unit 12 performs timing control of the image sensor 21, the analog signal processing unit 22, and the A/D conversion unit 10 via the timing control unit 23. The image sensor 21 generates an image signal corresponding to the optical image. The image signal is subjected to predetermined signal processing in the analog signal processing unit 22, digitized in the A/D conversion unit 10, and supplied as image data to the image processing unit 11.
本実施の形態の電子カメラ 1では、 撮像素子 2 1において、 R (赤) 、 G (緑) 、 B (青) のカラーフィルタがデルタ配列 (後述) されているので、 画像処理部 1 1に供給される画像データは R G B表色系で示される。 画像データを構成する各々の画素には、 R G Bの何れか 1つの色成分の色情報が存在する。 画像処理部 1 1は、 このような画像データに対し、 後述する画像データ変換処理を行う他に、 階調変換や輪郭強調などの画像処理を行う。 このような画像処理が完了した画像データは、 必要に応じて、 圧縮/伸長部 1 4で所定の圧縮処理が施され、 メモリカード用インタフェース部 1 7を介してメモリカード 1 6に記録される。  In the electronic camera 1 of the present embodiment, since the R (red), G (green), and B (blue) color filters are arranged in a delta array (described later) on the image sensor 21, the image data supplied to the image processing unit 11 is represented by the RGB color system. Each pixel constituting the image data has color information of one of the RGB color components. In addition to the image data conversion processing described later, the image processing unit 11 performs image processing such as gradation conversion and contour emphasis on such image data. The image data on which such image processing has been completed is subjected to predetermined compression processing by the compression/decompression unit 14 as necessary, and is recorded on the memory card 16 via the memory card interface unit 17.
なお、 画像処理が完了した画像データは、 圧縮処理を施さずにメモリカード 1 6に記録したり、 P C 1 8側のモニタ 2 6やプリンタ 2 7で採用されている表色系に変換して、 外部インタフェース部 1 9を介して P C 1 8に供給しても良い。 また、 操作部 2 4を介し、 操作者によって再生モードが選択されると、 メモリカード 1 6に記録されている画像データは、 メモリカード用インタフェース部 1 7を介して読み出されて圧縮/伸長部 1 4で伸長処理が施され、 表示画像生成部 1 5を介してモニタ 2 5に表示される。  Image data for which image processing has been completed may be recorded on the memory card 16 without compression, or may be converted into the color system adopted by the monitor 26 or the printer 27 on the PC 18 side and supplied to the PC 18 via the external interface unit 19. When the playback mode is selected by the operator via the operation unit 24, the image data recorded on the memory card 16 is read out via the memory card interface unit 17, decompressed by the compression/decompression unit 14, and displayed on the monitor 25 via the display image generation unit 15.
なお、 伸長処理が施された画像データは、 モニタ 2 5に表示せず、 P C 1 8側のモニタ 2 6やプリンタ 2 7で採用されている表色系に変換して、 外部インタフェース部 1 9を介して P C 1 8に供給しても良い。  The decompressed image data may be converted into the color system adopted by the monitor 26 or the printer 27 on the PC 18 side without being displayed on the monitor 25, and may be supplied to the PC 18 via the external interface unit 19.
(画像処理)  (Image processing)
次に、 デルタ配列で得られた画像データについて、 三角格子上で補間処理し、 コンピュータ上の取り扱いが容易な正方格子データに復元する処理について説明する。 図 2 5は、 これらの処理の概念を示す図である。 三角格子とは、 撮像素子の画素が 1行ごとに 1 / 2画素ずれて配列された並びを言う。 隣接する各画素の中心を結ぶと三角形を形成する並びである。 画素の中心点を格子点と言ってもよい。 図 2 4 ( b ) のデルタ配列は、 三角格子状に配列されたものである。 図 2 4 ( b ) のような配列で得られる画像は、 画素が三角配置された画像と言ってもよい。 正方格子とは、 撮像素子の画素が 1行ごとにずれないで配列された並びを言う。 隣接する各画素の中心を結ぶと四角形を形成する並びである。 図 2 4 ( a ) のベイヤ配列は正方格子状に配列されたものである。 図 2 4 ( a ) のような配列で得られる画像は、 画素が矩形 (四角) 配置された画像と言ってもよい。  Next, the process of performing interpolation processing on the image data obtained in the delta array on a triangular lattice and restoring it to square lattice data that is easy to handle on a computer will be described. FIG. 25 is a diagram showing the concept of these processes. The triangular lattice refers to an arrangement in which pixels of the image sensor are arranged with a shift of 1/2 pixel for each row. It is an arrangement that forms triangles when the centers of adjacent pixels are connected. The center point of a pixel may be called a lattice point. The delta arrangement in Fig. 24 (b) is arranged in a triangular lattice. An image obtained with the arrangement shown in Fig. 24 (b) may be called an image in which pixels are arranged in a triangular arrangement. The square lattice refers to an arrangement in which pixels of the image sensor are arranged without being shifted for each row. It is an arrangement that forms squares when the centers of adjacent pixels are connected. The Bayer array in Fig. 24 (a) is arranged in a square lattice. An image obtained in the arrangement shown in Fig. 24 (a) may be called an image in which pixels are arranged in a rectangular (square) shape.
また、 後述するように、 デルタ配列の画像データを 1行おきに 1 / 2画素ずら す処理をすると、 正方格子データが生成される。 三角格子上で補間処理をすると は、 デルタ配列で得られた画像データの状態で補間処理をすることを言う。 すな わち、 デルタ配列の画素位置に欠落する色成分の色情報を補間することを言う。 図 2は、 画像処理部 1 1が行う画像処理の概要を示すフローチャートである。 ステップ S 1では、 デルタ配列の撮像素子 2 1で得られた画像を入力する。 ステ ップ S 2において、 類似度の算出を行う。 ステップ S 3では、 ステップ S 2で得 られた類似度に基づき類似性を判定する。 ステップ S 4では、 ステップ S 3で得 られた類似性の判定結果に基づいて、 各画素において欠落する色成分の補間値を 算出する。 ステップ S 5では、 得られた R G Bカラ一画像を出力する。 ステップ S 5で出力される R G Bカラ一画像は三角格子上で得られた画像データである。 次に、 ステップ S 6で、 三角格子上で得られた画像データに対して 1次元変位 処理を行う。 1次元変位処理は、 後述するように、 1行おきの画像データに対し て行う。 ステップ S 7で、 1次元変位処理を行った画像データと行わなかった画 像データを合わせて正方格子画像データを出力する。 ステップ S 1〜S 5は、 三 角格子上での補間処理であり、 ステップ S 6、 S 7は正方化処理である。 以下、 これらの処理の詳細について説明する。 As will be described later, a process of shifting the delta array image data by 1/2 pixel every other line generates square grid data. Interpolating on a triangular lattice means performing interpolation on the state of image data obtained in a delta array. That is, it means to interpolate the color information of the color component missing at the pixel position of the delta array. FIG. 2 is a flowchart illustrating an outline of the image processing performed by the image processing unit 11. In step S1, an image obtained by the image sensor 21 in the delta arrangement is input. In step S2, the similarity is calculated. In step S3, similarity is determined based on the similarity obtained in step S2. In step S4, an interpolation value of a missing color component in each pixel is calculated based on the similarity determination result obtained in step S3. In step S5, the obtained RGB color image is output. The RGB color image output in step S5 is image data obtained on a triangular lattice. Next, in step S6, one-dimensional displacement processing is performed on the image data obtained on the triangular lattice. One-dimensional displacement processing is performed on every other row of image data, as described later. Do it. 
In step S7, square lattice image data is output by combining the image data subjected to the one-dimensional displacement processing and the image data not subjected to the one-dimensional displacement processing. Steps S1 to S5 are interpolation processing on a triangular lattice, and steps S6 and S7 are square processing. Hereinafter, details of these processes will be described.
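As a hedged illustration of the squaring stage (steps S6 and S7), the sketch below shifts every other row of the triangular-lattice data by 1/2 pixel so that all rows line up on a square lattice. The actual one-dimensional displacement filter is defined by the coefficient values of Fig. 7, which are not reproduced here, so the two-point average used below is only an assumed stand-in, and the function name is a hypothetical convenience.

```python
def square_from_delta(rows):
    """Hedged sketch of the squaring stage (S6-S7): every other row of
    the triangular-lattice data is displaced by 1/2 pixel so that all
    rows align on a square lattice.  A simple two-point average stands
    in for the document's one-dimensional displacement filter (Fig. 7),
    whose actual coefficients are given later in the document."""
    out = []
    for y, row in enumerate(rows):
        if y % 2 == 0:
            out.append(list(row))  # even rows are kept as-is (S7)
        else:
            # Half-pixel shift by averaging horizontally adjacent
            # samples; the last sample is repeated at the border.
            shifted = [(row[x] + row[min(x + 1, len(row) - 1)]) / 2
                       for x in range(len(row))]
            out.append(shifted)
    return out
```

Applied to two identical rows `[1, 2, 3]`, the even row is returned unchanged while the odd row becomes the half-pixel-shifted `[1.5, 2.5, 3.0]`.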
―三角格子上補間処理―  —Interpolation on the triangular lattice—
以下の説明では、 代表して R画素位置での補間処理について説明する。 図 3は、 デルタ配列の撮像素子 2 1により得られた画素の位置関係を示す図である。 撮像素子 2 1の画素は 1行ごとに 1 / 2画素ずれて配置され、 カラーフィルタは R G Bの色成分が 1 : 1 : 1の割合で各画素上に配列されている。 すなわち、 均等に色配分されている。 参考に、 ベイヤ配列のカラーフィルタは R G Bが 1 : 2 : 1の割合で配列されている (図 2 4 ( a ) ) 。  In the following description, the interpolation process at the R pixel position will be described as a representative. FIG. 3 is a diagram showing the positional relationship between pixels obtained by the image sensor 21 in the delta arrangement. The pixels of the image sensor 21 are arranged with a shift of 1/2 pixel for each row, and the color filters are arranged on the pixels at a ratio of RGB components of 1 : 1 : 1. That is, the colors are evenly distributed. For reference, the color filters of the Bayer array have RGB arranged in the ratio of 1 : 2 : 1 (Fig. 24 (a)).
R成分の色情報を有する画素を R画素、 B成分の色情報を有する画素を B画素、 G成分の色情報を有する画素を G画素と言う。 撮像素子 2 1で得られた画像データは、 各画素に 1つの色成分の色情報しか有しない。 補間処理は、 各画素に欠落する他の色成分の色情報を計算により求める処理である。 以下、 R画素位置に G、 B成分の色情報を補間する場合について説明する。  A pixel having R component color information is called an R pixel, a pixel having B component color information is called a B pixel, and a pixel having G component color information is called a G pixel. The image data obtained by the image sensor 21 has color information of only one color component per pixel. The interpolation process is a process for obtaining, by calculation, the color information of the other color components missing at each pixel. Hereinafter, a case where the color information of the G and B components is interpolated at the R pixel position will be described.
図 3において、 R画素である処理対象画素 (補間対象画素) を Rctrと呼ぶ。 ま た、 画素 Rctrの周辺に存在する各画素位置を角度を用いて表現する。 例えば、 60 度方向に存在する B画素を B060、 G画素を G060と表現する。 ただし、 この角度は 厳密な角度ではなく近似的なものである。 また、 0度- 180度を結ぶ方向を 0度方向、 120度- 300度を結ぶ方向を 120度方向、 240度- 60度を結ぶ方向を 240度方向、 30度 - 210度を結ぶ方向を 30度方向、 150度- 330度を結ぶ方向を 150度方向、 270度- 90度を 結ぶ方向を 270度方向と呼ぶことにする。  In FIG. 3, the pixel to be processed (pixel to be interpolated), which is the R pixel, is called Rctr. In addition, each pixel position existing around the pixel Rctr is expressed using an angle. For example, the B pixel existing in the 60-degree direction is expressed as B060, and the G pixel is expressed as G060. However, this angle is not exact but approximate. Also, the direction connecting 0 ° -180 ° is 0 ° direction, the direction connecting 120 ° -300 ° is 120 ° direction, the direction connecting 240 ° -60 ° is 240 ° direction, and the direction connecting 30 ° -210 ° is The direction connecting 30 degrees, the direction connecting 150-330 degrees is called the 150-degree direction, and the direction connecting 270-90 degrees is called the 270-degree direction.
1. 類似度の算出  1. Calculation of similarity
0度、 120度、 240度方向の類似度 C000、 C120、 C240を算出する。 第 1の実施の形 態では、 式(1) (2) (3)に示すように、 異なる色成分間で構成される異色間類似度を 求める。  Calculate the similarities C000, C120, and C240 in the directions of 0, 120, and 240 degrees. In the first embodiment, as shown in Expressions (1), (2), and (3), the similarity between different colors composed of different color components is obtained.
C000 = (|G000-Rctr| + |B180-Rctr| + |(G000+B180)/2-Rctr|)/3 ... (1)
C120 = (|G120-Rctr| + |B300-Rctr| + |(G120+B300)/2-Rctr|)/3 ... (2) C240 = (|G240-Rctr| + |B060-Rctr| + |(G240+B060)/2-Rctr|)/3 ... (3) このように隣接画素間で定義される異色間類似度は、 デルタ配列の三角格子で規定されるナイキスト周波数の画像構造を空間的に解像させる能力を持つ。  The inter-color similarity defined in this way between adjacent pixels has the ability to spatially resolve image structures at the Nyquist frequency defined by the triangular lattice of the delta arrangement.
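As a non-authoritative sketch, equations (1) to (3) can be written out as follows; the function name and the angle-labelled arguments (G000, B180, and so on, following the position labels of Fig. 3) are hypothetical conveniences introduced here, not part of the original disclosure.

```python
def inter_color_similarities(Rctr, G000, B180, G120, B300, G240, B060):
    """Inter-color similarity values for the 0-, 120- and 240-degree
    directions around an R pixel (equations (1)-(3)).
    Smaller values indicate stronger similarity."""
    C000 = (abs(G000 - Rctr) + abs(B180 - Rctr)
            + abs((G000 + B180) / 2 - Rctr)) / 3
    C120 = (abs(G120 - Rctr) + abs(B300 - Rctr)
            + abs((G120 + B300) / 2 - Rctr)) / 3
    C240 = (abs(G240 - Rctr) + abs(B060 - Rctr)
            + abs((G240 + B060) / 2 - Rctr)) / 3
    return C000, C120, C240
```

On a flat region all three similarities are zero, which is the strongest possible similarity in every direction.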
次に、 類似度の周辺加算を行って周辺画素との連続性を考慮することにより、 類似度の精度を上げる。 ここでは各画素について求めた上記類似度を、 R位置に ついて図 4に示す係数を使用して周辺加算を行う。 周辺加算を行った類似度を小 文字で表記する。 []は、 処理対象画素から見たデルタ配列上の画素位置を表す。 式(4)は 0度方向の類似度の周辺加算を示す。  Next, the accuracy of the similarity is increased by performing the peripheral addition of the similarity and considering the continuity with the peripheral pixels. Here, the above similarity obtained for each pixel is subjected to peripheral addition using the coefficients shown in FIG. 4 for the R position. The similarity with margin addition is written in small letters. [] Indicates a pixel position on the delta array viewed from the processing target pixel. Equation (4) shows the peripheral addition of the similarity in the 0-degree direction.
c000 = (6*C000[Rctr]
+C000[R030]+C000[R150]+C000[R270]
+C000[R210]+C000[R330]+C000[R090])/12 ... (4)
120度方向の c120、 240度方向の c240も同様にして求める。  c120 in the 120-degree direction and c240 in the 240-degree direction are obtained in the same manner.
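The peripheral addition of equation (4) can be sketched as below; the dictionary `C000_at`, keyed by the position labels of Fig. 3, is a hypothetical convenience, and the six surrounding positions are assumed to be the same six R pixels (R030 to R330) that appear in equation (16).

```python
def peripheral_add_c000(C000_at):
    """Peripheral addition of the 0-degree similarity (equation (4)).
    `C000_at` maps a Fig. 3 position label to the similarity C000
    computed at that pixel; the centre carries weight 6 and each of
    the six surrounding R pixels weight 1, normalised by 12."""
    return (6 * C000_at["Rctr"]
            + C000_at["R030"] + C000_at["R150"] + C000_at["R270"]
            + C000_at["R210"] + C000_at["R330"] + C000_at["R090"]) / 12
```

Because the weights sum to 12, a constant similarity field passes through unchanged, while isolated values are smoothed toward their neighbourhood.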
2. 類似性判定  2. Similarity judgment
上述した類似度は値が小さいほど大きな類似性を示すので、 各方向の類似性の強弱を、 類似度の逆数比で連続的に変化するように判定する。 すなわち (1/c000) : (1/c120) : (1/c240) で判定する。 具体的には、 次の加重係数を演算する。  The smaller the similarity value, the greater the similarity. Therefore, the strength of similarity in each direction is determined so as to change continuously at the reciprocal ratio of the similarities, that is, at (1/c000) : (1/c120) : (1/c240). Specifically, the following weighting coefficients are calculated.
各方向の類似性を 1で規格化された加重係数 w000、 w120、 w240として表すと、 w000=(c120*c240+Th)/(c120*c240+c240*c000+c000*c120+3*Th) ... (5) w120=(c240*c000+Th)/(c120*c240+c240*c000+c000*c120+3*Th) ... (6) w240=(c000*c120+Th)/(c120*c240+c240*c000+c000*c120+3*Th) ... (7) により求まる。 ただし、 閾値 Thは発散を防ぐための項で正の値をとる。 通常 Th=1 とすればよいが、 高感度撮影画像などノイズの多い画像に対してはこの閾値を上げるとよい。 加重係数 w000、 w120、 w240は、 類似性の強弱に応じた値となる。 ベイヤ配列では、 米国特許 5, 552, 827号、 米国特許 5, 629, 734号、 特開 2001-245314号に示されるように、 G補間における方向判定法として、 加重係数による連続的判定法と閾値判定による離散的判定法の 2通りがある。 ベイヤ配列では、 最隣接 G成分が補間対象画素に対して 4方向と密に存在するため、 連続的判定法と離散的判定法のどちらを使用してもおおよそ問題なく使える。 しかし、 最隣接 G成分が 0度方向、 120度方向、 240度方向の 3方向にしか存在しないデルタ配列においては加重係数による連続的な方向判定が重要となる。 最隣接 G成分とは、 補間対象画素と辺を接する画素で G成分を有するものである。 図 3では、 G000, G120, G240である。  When the similarity in each direction is expressed as weighting coefficients w000, w120, w240 normalized to 1, they are obtained by w000 = (c120*c240+Th)/(c120*c240+c240*c000+c000*c120+3*Th) ... (5), w120 = (c240*c000+Th)/(c120*c240+c240*c000+c000*c120+3*Th) ... (6), w240 = (c000*c120+Th)/(c120*c240+c240*c000+c000*c120+3*Th) ... (7). Here the threshold Th is a term to prevent divergence and takes a positive value. Normally it is sufficient to set Th=1, but it is better to raise this threshold for noisy images such as high-sensitivity images. The weighting coefficients w000, w120, and w240 take values according to the strength of similarity. For the Bayer array, as shown in U.S. Patent 5,552,827, U.S. Patent 5,629,734, and JP-A-2001-245314, there are two direction determination methods for G interpolation: a continuous determination method using weighting coefficients and a discrete determination method based on threshold determination. In the Bayer array, the nearest-neighbor G components exist densely in four directions around the pixel to be interpolated, so either the continuous determination method or the discrete determination method can be used with almost no problem.
However, in a delta array where the nearest G component exists only in three directions: 0, 120, and 240 degrees In this case, it is important to determine the direction continuously based on the weighting factor. The nearest neighbor G component is a pixel which has a G component at a side of the edge of the pixel to be interpolated. In FIG. 3, they are G0OO, G12O, and G240.
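As a minimal sketch of equations (5)-(7), the continuous direction judgment can be written as follows. The function name and the default Th = 1 are illustrative choices, not taken from the patent text.

```python
def direction_weights(c000, c120, c240, th=1.0):
    """Continuous direction weights of eqs. (5)-(7).

    A smaller similarity value c means stronger similarity, so each weight
    is driven by the product of the OTHER two directions' values; th > 0
    prevents divergence when all three similarity values approach zero.
    """
    denom = c120 * c240 + c240 * c000 + c000 * c120 + 3.0 * th
    w000 = (c120 * c240 + th) / denom
    w120 = (c240 * c000 + th) / denom
    w240 = (c000 * c120 + th) / denom
    return w000, w120, w240
```

With equal similarity values the three weights are each 1/3; as c000 falls toward zero (strong 0-degree similarity) w000 approaches 1. For noisy, high-sensitivity images the text suggests raising th.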
3. Interpolation value calculation
The interpolated G and B values are calculated using the weighting coefficients above. Each interpolated value consists of two terms: average information and curvature information.
<Average information>
Gave = w000*Gave000 + w120*Gave120 + w240*Gave240 ... (8)
Bave = w000*Bave000 + w120*Bave120 + w240*Bave240 ... (9)
where
Gave000 = (2*G000 + G180)/3 ... (10)
Bave000 = (2*B180 + B000)/3 ... (11)
Gave120 = (2*G120 + G300)/3 ... (12)
Bave120 = (2*B300 + B120)/3 ... (13)
Gave240 = (2*G240 + G060)/3 ... (14)
Bave240 = (2*B060 + B240)/3 ... (15)
<Curvature information>
dR = (6*Rctr - R030 - R090 - R150 - R210 - R270 - R330)/12 ... (16)
<Interpolated values>
Gctr = Gave + dR ... (17)
Bctr = Bave + dR ... (18)
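The whole of step 3 at an R pixel can be sketched as below. The dictionary-by-angle indexing is an assumption made for illustration; the patent addresses pixels by direction labels such as G000 and R030.

```python
def interpolate_at_r(w, g, b, r_ring, r_ctr):
    """Eqs. (8)-(18): interpolated G and B values at an R pixel.

    w      -- (w000, w120, w240) direction weights from eqs. (5)-(7)
    g, b   -- surrounding G/B samples keyed by direction angle in degrees
    r_ring -- the six surrounding R samples at 30, 90, 150, 210, 270, 330 deg
    r_ctr  -- the R value of the interpolation target pixel
    """
    w000, w120, w240 = w
    # Average information over first- and second-adjacent pixels,
    # weighted 2:1 by distance from the target pixel (eqs. (8)-(15)).
    gave = (w000 * (2 * g[0] + g[180])
            + w120 * (2 * g[120] + g[300])
            + w240 * (2 * g[240] + g[60])) / 3
    bave = (w000 * (2 * b[180] + b[0])
            + w120 * (2 * b[300] + b[120])
            + w240 * (2 * b[60] + b[240])) / 3
    # Direction-free curvature information: a second-derivative term (eq. (16)).
    d_r = (6 * r_ctr - sum(r_ring)) / 12
    return gave + d_r, bave + d_r  # eqs. (17) and (18)
```

On a flat patch the curvature term vanishes and the result is the weighted average itself; a bright center R raises both interpolated values through dR, which is how the direction-free curvature restores gradation sharpness.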
Normally, in G interpolation for the Bayer array, the average information is obtained from the nearest-neighbor G components alone. It was found experimentally, however, that doing the same for the delta array causes vertical lines and boundaries in the 30-degree and 150-degree directions to break up into a ragged, blotchy pattern. For a vertical-line boundary, for example, the weighting coefficients are expected to be about w000 = 0 and w120 = w240 = 0.5, corresponding to averaging between two directions; with nearest-neighbor averages alone, at one point along the vertical line the average is taken from the two pixels on the left, while at the next point, diagonally above-right or below-right, it is taken from the two pixels on the right. This is considered to be the cause.
Therefore, when the averaging takes the second-adjacent pixels into account, weighted according to the distance ratio from the interpolation target pixel, the average for a vertical line, for example, is always taken from both the left and right sides, and the raggedness of vertical lines and of the 30-degree and 150-degree boundaries improves dramatically. Hence, as shown in equations (10) to (15), the averaging includes pixels up to the second-adjacent pixels, which improves the spatial resolution in the 30-degree, 150-degree, and 270-degree directions. In equation (10), for example, G000 is a nearest-neighbor (first-adjacent) pixel and G180 is a second-adjacent pixel; in equation (12), G120 is a nearest-neighbor (first-adjacent) pixel and G300 is a second-adjacent pixel, and likewise for the other equations. A first-adjacent pixel lies about one pixel pitch away and a second-adjacent pixel about two pixel pitches away; Rctr and G120 are about one pixel pitch apart, while Rctr and G300 are about two pixel pitches apart. FIG. 22 defines the adjacent pixels: "center" is the pixel being processed, "nearest" is the nearest (first-adjacent) pixel, and "2nd" is the second-adjacent pixel.
Also, in the prior art of Bayer-array interpolation, the term corresponding to the curvature information dR is usually calculated with the same directionality as the average information. In the delta array, however, the directionality of the average information (the 0-, 120-, and 240-degree directions) does not coincide with the directionality of the extractable curvature information (the 30-, 150-, and 270-degree directions). In such a case it would be possible to define 0-degree curvature information by averaging the 30-degree and 150-degree curvature information and to compute the interpolated value with directional curvature information as in the Bayer array, but it was found experimentally that extracting the curvature information uniformly, without regard to direction, is more effective. In the present embodiment, therefore, the direction-aware average information is corrected with direction-free curvature information. This makes it possible to improve gradation sharpness up to the high-frequency region in every direction.
Equation (16), which obtains the curvature information dR, uses the coefficient values shown in FIG. 5. It takes the difference between the interpolation target pixel Rctr and a surrounding pixel, the difference between the surrounding pixel on the opposite side and Rctr, and then the difference of these; it is therefore obtained by a second-derivative operation.
The curvature information dR indicates the degree to which the color information of a given color component changes. If the values of the color information of a color component are plotted as a curve, it expresses how the curve bends and how that bending changes; that is, it is a quantity reflecting structural information about the concavity and convexity of the change in the color information. In the present embodiment, the curvature information dR at an R pixel position is obtained from the Rctr of the interpolation target pixel and the color information of the surrounding R pixels; at a G pixel position the G-component color information is used, and at a B pixel position the B-component color information.
The "interpolation of the G and B components at an R pixel position" has thus been performed as described above. The "interpolation of the B and R components at a G pixel position" is obtained by exactly the same processing after cyclically replacing its symbols: R with G, G with B, and B with R. Likewise, the "interpolation of the R and G components at a B pixel position" is obtained by exactly the same processing after cyclically replacing the symbols of the "interpolation of the B and R components at a G pixel position": G with B, B with R, and R with G. In other words, the same arithmetic routine (subroutine) can be used for each interpolation process.
An image restored as described above draws out, in the spatial direction, all of the limiting resolution performance that the delta array possesses. That is, viewed in frequency space (k-space), the entire hexagonal achromatic reproduction region of the delta array shown in FIG. 6 is resolved. A sharp image is also obtained in the gradation direction. The method is particularly effective for images containing many achromatic areas.
—Square-grid conversion processing—
Next, the processing that restores the image data interpolated on the triangular grid as described above into square-grid data, which is easier to handle on a computer, will be described. As prior art that restores the data of a single-chip image sensor, whose rows are offset from one another by 1/2 pixel, into square-grid data with all three colors present, there are JP-A-8-340455 for the delta array and JP-A-2001-103295 and JP-A-2000-194386 for the honeycomb array. JP-A-8-340455 shows examples of generating restored data at pixel positions different from the triangular grid, and of generating restored data on a virtual square grid with twice the pixel density of the triangular grid. It also shows an example in which half of the rows are restored at the same pixel positions as the triangular grid and the remaining half at positions shifted by 1/2 pixel from it; however, this performs the interpolation directly at the square-grid positions and applies separate interpolation processes as the distance to the adjacent pixels of the triangular grid changes. JP-A-2001-103295, on the other hand, squares the grid by two-dimensional cubic interpolation, and JP-A-2000-194386 generates virtual double-density square-grid data.
Various approaches thus exist, but the first embodiment shows a method that maintains the performance of the delta array to the maximum.
In the first embodiment, as described above, the interpolated data are first restored on the triangular grid at the same pixel positions as the delta array. This draws out the limiting spatial-frequency resolution performance of the delta array. Moreover, if the color-difference correction and edge-enhancement processing are also performed on the triangular arrangement, their effects act well isotropically. The image is therefore first restored on the triangular grid, generating RGB values at every pixel.
Next, the image data generated on the triangular grid are converted to a square grid. It was found experimentally that keeping as much of the original data as possible is important for preserving the resolution performance of the triangular grid. Accordingly, a displacement process that shifts the pixels by 1/2 pixel is applied to every other row only; the remaining half of the rows are left untouched, preserving the Nyquist resolution of vertical lines that the triangular arrangement possesses. Experiments confirmed that if the displacement is self-estimated by cubic interpolation within the one dimension of the row being processed, the vertical-line resolution of the triangular grid is maintained with almost no problem, although it is somewhat affected by the Nyquist frequency of the square grid.
That is, for the even rows, the processing of equation (19) below is performed, followed by the substitution of equation (20). This yields the R-component color information at pixel position [x, y].
tmp_R[x, y] = (-1*R[x-(3/2)pixel, y] + 5*R[x-(1/2)pixel, y]
             + 5*R[x+(1/2)pixel, y] - 1*R[x+(3/2)pixel, y])/8 ... (19)
R[x, y] = tmp_R[x, y] ... (20)
The G and B components are obtained in the same way. The processing of equation (19) is called cubic displacement processing; since it displaces positions along one dimension, it is also called one-dimensional displacement processing. FIG. 7 shows the coefficient values used in equation (19). This one-dimensional cubic displacement can be regarded as applying a one-dimensional filter consisting of positive and negative coefficient values.
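A sketch of the cubic displacement of equations (19)-(20) for one row follows. The clamped border handling and the discrete index mapping (original samples at x-1, x, x+1, x+2 producing the value half a pixel to the right) are assumptions, since the patent only gives the half-pixel-offset kernel.

```python
def cubic_displace_row(row):
    """Half-pixel shift of one row with the 1-D cubic kernel (-1, 5, 5, -1)/8.

    This is the 'cubic displacement processing' of eq. (19): a 1-D filter
    with positive and negative coefficients that re-samples every other row
    so the triangular grid becomes a square grid.
    """
    n = len(row)
    out = []
    for x in range(n):
        a = row[max(x - 1, 0)]      # sample at offset -3/2 (clamped at border)
        b = row[x]                  # sample at offset -1/2
        c = row[min(x + 1, n - 1)]  # sample at offset +1/2
        d = row[min(x + 2, n - 1)]  # sample at offset +3/2
        out.append((-a + 5 * b + 5 * c - d) / 8)
    return out
```

The kernel weights sum to 1, so flat areas pass through unchanged, and on a linear ramp the interior output is exactly the midpoint of the two central samples, as cubic interpolation requires.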
The image restoration method described above not only maintains the limiting resolution of the triangular arrangement to the maximum, but the restoration on the triangular grid can also use the same processing routine for every pixel, and in the end only half of the rows need a simple one-dimensional process; a simpler algorithm than the prior art is thus achieved without any increase in the amount of data.
In conventional processing, post-processing is performed that removes the false colors and returns the data to the RGB colorimetric system. In addition, edge-enhancement processing is applied to the luminance component Y plane to compensate for the loss of sharpness caused by the optical low-pass filter and the like. Exactly the same post-processing may also be applied in the case of the delta array.
In the third embodiment, however, a post-processing method suited to the delta array is shown. The configuration of the electronic camera 1 of the third embodiment is the same as in FIG. 1 of the first embodiment, and its description is therefore omitted.
FIG. 10 is a flowchart showing the outline of the image processing performed by the image processing unit 11 in the third embodiment. Interpolation is assumed to be performed on the triangular grid as in the first embodiment. The flowchart of FIG. 10 starts at the point where the interpolated RGB color image is input; that is, steps S1 to S5 of FIG. 2 of the first embodiment have finished, after which the flowchart of FIG. 10 begins.
In step S11, the interpolated RGB color image data are input. In step S12, the RGB colorimetric system is converted to the YCrCgCb colorimetric system peculiar to the third embodiment. In step S13, low-pass filter processing is applied to the color-difference planes (the CrCgCb planes). In step S14, edge-enhancement processing is applied to the luminance plane (the Y plane). In step S15, once the false colors on the color-difference planes have been removed, the YCrCgCb colorimetric system is converted back to the original RGB colorimetric system. In step S16, the resulting RGB color image data are output; these are image data obtained on the triangular grid.
When square-grid conversion is to be applied to the image data obtained on the triangular grid, the processing of steps S6 and S7 of FIG. 2 is performed as in the first embodiment. The details of steps S12 to S15 are described below.
1. Color system conversion
The RGB colorimetric system is converted to the YCrCgCb colorimetric system, where the YCrCgCb colorimetric system is defined by equations (28) to (31):
Y[i, j] = (R[i, j] + G[i, j] + B[i, j])/3 ... (28)
Cr[i, j] = R[i, j] - Y[i, j] ... (29)
Cg[i, j] = G[i, j] - Y[i, j] ... (30)
Cb[i, j] = B[i, j] - Y[i, j] ... (31)
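Equations (28)-(31) and their inverse, equations (38)-(40) used later, form an exact round trip, which can be checked directly; the function names are illustrative.

```python
def rgb_to_ycrcgcb(r, g, b):
    """Eqs. (28)-(31): symmetric luminance Y = (R+G+B)/3 and three
    equally treated chrominance components Cr, Cg, Cb."""
    y = (r + g + b) / 3.0
    return y, r - y, g - y, b - y

def ycrcgcb_to_rgb(y, cr, cg, cb):
    """Eqs. (38)-(40): inverse conversion back to RGB."""
    return cr + y, cg + y, cb + y
```

By construction Cr + Cg + Cb = 0 at every pixel; it is this symmetric split, treating R, G, and B equally in Y, that keeps the delta array's false-color elements out of the luminance plane.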
When a circular zone plate is imaged and interpolated on the triangular grid, and the Y plane is then generated by a conversion that treats R, G, and B equally as in equation (28), the color moiré that arises mainly around the corners of the hexagon in FIG. 6 disappears completely from the luminance plane Y. As a result, all false-color elements can be confined to the color-difference components Cr, Cg, and Cb. Moreover, since the third embodiment prepares and handles three kinds of color-difference components rather than the two conventionally used, every false-color element peculiar to the delta array can be extracted.
2. Color difference correction
Here, an example is shown in which low-pass processing by equation (32) is applied to the color-difference planes as the color-difference correction. When a color-difference low-pass filter is applied on the triangular grid (triangular arrangement), its false-color reduction effect is obtained favorably in all directions. Median filtering on the triangular grid may also be used. Here [] denotes, as shown in FIG. 22, a pixel position on the triangular grid as seen from the pixel being processed. FIG. 11 illustrates the coefficient values used in equation (32). The low-pass filter is not limited to the one shown here, and others may be used; FIGS. 12 and 13 show examples of other low-pass filters.
<Low-pass processing>
tmp_Cr[center] = (9*Cr[center]
 + 2*(Cr[nearest000] + Cr[nearest120] + Cr[nearest240] + Cr[nearest180] + Cr[nearest300] + Cr[nearest060])
 + Cr[2nd000] + Cr[2nd120] + Cr[2nd240]
 + Cr[2nd180] + Cr[2nd300] + Cr[2nd060])/27 ... (32)
tmp_Cg and tmp_Cb are processed in the same way.
<Substitution>
Cr[i, j] = tmp_Cr[i, j] ... (33)
Cg[i, j] = tmp_Cg[i, j] ... (34)
Cb[i, j] = tmp_Cb[i, j] ... (35)
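Equation (32) can be sketched as a weighted sum over the triangular-grid neighborhood. Passing the six nearest and six second-adjacent chrominance values as plain sequences is an assumption made to keep the sketch free of any particular grid data structure.

```python
def chroma_lowpass(center, nearest6, second6):
    """Triangular-grid low-pass filter in the manner of eq. (32).

    Weights: 9 for the center, 2 for each of the six nearest neighbours,
    1 for each of the six second-adjacent neighbours; total 27, so flat
    (DC) regions pass through unchanged.
    """
    assert len(nearest6) == 6 and len(second6) == 6
    return (9.0 * center + 2.0 * sum(nearest6) + sum(second6)) / 27.0
```

The same function is applied to Cr, Cg, and Cb alike, after which the filtered values are written back as in equations (33)-(35).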
3. Edge enhancement
Next, if edge enhancement of the luminance plane Y is required, the following processing is performed on the triangular grid.
<Band-pass processing>
Edges are extracted by Laplacian processing on the triangular grid. FIG. 14 illustrates the coefficient values used in the Laplacian of equation (36). The Laplacian is not limited to the one shown here, and others may be used; FIGS. 15 and 16 show other examples.
YH[center] = (6*Y[center]
 - (Y[nearest000] + Y[nearest120] + Y[nearest240]
 + Y[nearest180] + Y[nearest300] + Y[nearest060]))/12 ... (36)
<Edge enhancement>
Y[center] = Y[center] + K*YH[center] ... (37)
Here K is a value of zero or more, a parameter that adjusts the strength of the edge enhancement.
4. Color system conversion
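A sketch of the band-pass and enhancement step; the center coefficient is written here as 6 so that the Laplacian output vanishes on flat areas (a band-pass filter must reject DC), and the default gain is an arbitrary example value.

```python
def enhance_luminance(y_center, y_nearest6, gain=0.5):
    """Triangular-grid Laplacian band-pass plus edge enhancement
    in the manner of eqs. (36)-(37).

    The center weight 6 balances the six nearest neighbours, so yh is
    zero on flat areas; gain (the parameter K >= 0) controls the strength.
    """
    yh = (6.0 * y_center - sum(y_nearest6)) / 12.0  # band-pass term
    return y_center + gain * yh                     # enhanced luminance
```

A pixel brighter than its six neighbours is pushed further up, a darker one further down, while uniform regions are left untouched.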
As described above, once the color-difference correction has been performed and the false colors on the color-difference planes have been removed, the data are returned to the original RGB colorimetric system:
R[i, j] = Cr[i, j] + Y[i, j] ... (38)
G[i, j] = Cg[i, j] + Y[i, j] ... (39)
B[i, j] = Cb[i, j] + Y[i, j] ... (40)
In this way, by sheltering the luminance resolution in a Y component generated at the ratio R:G:B = 1:1:1 and introducing three equally treated color-difference components Cr, Cg, and Cb, color-difference correction with an extremely high false-color suppression capability becomes possible. In addition, performing the color-difference correction and luminance correction on the triangular grid enables correction suited to the directionality of the delta array.
—Fourth embodiment—
The first embodiment described the processing that interpolates, on the triangular grid, the color information of the color components missing at each pixel. The fourth embodiment shows an image restoration process of a different lineage from the interpolation within the RGB planes of the first embodiment: a scheme that produces the luminance component and the color-difference components directly from the delta array, without going through interpolation within the RGB planes. It conceptually inherits the usefulness, shown in the third embodiment, of separating the data into one luminance plane and three color-difference planes: the luminance plane is given the role of drawing out the achromatic luminance resolution to the maximum, and the three color-difference planes the role of drawing out the color resolution of the three primary colors to the maximum.
The configuration of the electronic camera 1 of the fourth embodiment is the same as in FIG. 1 of the first embodiment, and its description is therefore omitted.
(Image processing)
FIG. 17 is a diagram showing the concept of generating the luminance plane (Y) and the three color-difference planes (Cgb, Cbr, Crg) directly from the delta plane of the delta array and then converting them to the original RGB colorimetric system. FIG. 18 is a flowchart showing the outline of the image processing performed by the image processing unit 11 in the fourth embodiment.
In step S21, the image obtained by the delta-array image sensor 21 is input. In step S22, the similarity values are calculated. In step S23, the similarity is judged on the basis of the similarity values obtained in step S22. In step S24, the luminance plane (Y0 plane) is generated on the basis of the similarity judgment result of step S23 and the delta-array image data input in step S21. In step S25, correction processing is applied to the luminance plane (Y0 plane) obtained in step S24.
Meanwhile, in step S26, the color-difference components Cgb, Cbr, and Crg are generated on the basis of the similarity judgment result of step S23 and the delta-array image data input in step S21. At the end of step S26 not all of the color-difference components Cgb, Cbr, and Crg have yet been generated at every pixel. In step S27, the color-difference components not yet generated are interpolated from the surrounding color-difference components. As a result, the Cgb, Cbr, and Crg color-difference planes are completed.
In step S28, the generated Y, Cgb, Cbr, Crg colorimetric system is converted to the RGB colorimetric system. In step S29, the converted RGB color image data are output. Steps S21 to S29 are all processes on the triangular grid, so the RGB color image data output in step S29 are triangular-grid image data. The details of these processes are described below. When square-grid conversion is necessary, the same processing as steps S6 and S7 of FIG. 2 of the first embodiment is performed.
1. Calculation of similarity
First, the similarity values are calculated. Any method of obtaining the similarity may be used here, although the most accurate one available should be used: for example, the different-color similarity shown in the first embodiment, the same-color similarity shown in the second embodiment, a combination of the two, or a scheme that switches between the different-color and same-color similarities according to a color index or the like.
2. Similarity judgment
The judgment is made in the same manner as in the first embodiment.
3. Restoration value calculation
1) Luminance component generation
First, the luminance plane is generated by a weighted addition of the delta plane, with positive coefficients that vary with the direction similarity, such that the ratio is always R:G:B = 1:1:1. For the same reason as in the first embodiment, the range of the weighted addition takes the G and B components up to the second-adjacent pixels. That is, rewriting equations (8) and (9) using the definitions of equations (10) to (15) of the first embodiment gives equations (41) and (42).
<G>2nd = Gave = w000*Gave000 + w120*Gave120 + w240*Gave240 ... (41)
<B>2nd = Bave = w000*Bave000 + w120*Bave120 + w240*Bave240 ... (42)
Using these and the central R component, the luminance component Y0 is generated by equation (43):
Y0 = (Rctr + <G>2nd + <B>2nd)/3 ... (43)
The luminance component generated in this way is always produced with positive direction-weighting coefficients at a fixed color-component ratio while including the center pixel, so its potential for gradation sharpness is extremely high; an image with high spatial resolution is obtained, one that connects extremely smoothly with the surrounding pixels without being affected by chromatic aberration. For an achromatic circular zone plate, for example, when the different-color similarity is adopted as the similarity, the spatial resolution reaches the limiting resolution of FIG. 6, as in the first embodiment.
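Equations (41)-(43) can be sketched as follows. Since equation (43) survives only as an image in this text, the final 1:1:1 combination (Rctr + <G>2nd + <B>2nd)/3 is a reconstruction from the stated R:G:B = 1:1:1 ratio and the Y definition of equation (28); the angle-keyed containers are likewise illustrative.

```python
def luminance_y0(r_ctr, w, g, b):
    """Direction-weighted luminance generation at an R pixel (eqs. (41)-(43)).

    r_ctr -- center R value; w -- (w000, w120, w240) direction weights;
    g, b  -- surrounding G/B samples keyed by direction angle in degrees.
    """
    w000, w120, w240 = w
    g2nd = (w000 * (2 * g[0] + g[180])
            + w120 * (2 * g[120] + g[300])
            + w240 * (2 * g[240] + g[60])) / 3   # eq. (41)
    b2nd = (w000 * (2 * b[180] + b[0])
            + w120 * (2 * b[300] + b[120])
            + w240 * (2 * b[60] + b[240])) / 3   # eq. (42)
    # Eq. (43), reconstructed: fixed 1:1:1 mix of the center R and the
    # direction-weighted G and B estimates.
    return (r_ctr + g2nd + b2nd) / 3
```

Only positive weights appear, so Y0 connects smoothly to its neighbours; sharpening is deferred to the Laplacian correction of equations (44)-(45).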
2) Luminance component correction
Since the luminance plane described above is generated using only positive coefficients, a correction by a Laplacian is applied in order to draw out the potential gradation sharpness contained in it. Because the luminance plane Y0 is built, with directionality taken into account, so as to connect extremely smoothly with the surrounding pixels, this correction needs no correction term newly computed according to direction; a single fixed band-pass filter suffices. Several ways of taking the Laplacian on the triangular arrangement are possible, as shown in FIGS. 14 to 16 of the third embodiment. To raise the optimality slightly, however, note that Y0 can only be generated by gathering G and B components in the 0-, 120-, and 240-degree directions; to fill in the intervening directions, the case of correcting with the independent Laplacian of FIG. 15, taken in the 30-, 150-, and 270-degree directions, is shown here (equation (44)). The corrected luminance component is denoted Y.
ぐバンドパス処理 >  Band pass processing>
YH[center] = {6*Y0[center]
        - (Y0[2nd030]+Y0[2nd150]+Y0[2nd270]
          +Y0[2nd210]+Y0[2nd330]+Y0[2nd090])}/12 ... (44)
<補正処理 > <Correction processing>
Y[center] = Y0[center] + k*YH[center] ... (45)
ここで、 kは正の値で通常 1とする。 ただし、 1より大きい値に設定することで、 第 3の実施の形態に示したようなエッジ強調処理をここで兼ね備えることもでき る。  Here, k is a positive value and is usually set to 1. However, by setting the value to be larger than 1, the edge enhancement processing as shown in the third embodiment can be provided here.
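The band-pass correction of equations (44)-(45) can be sketched as follows. This is an illustrative reading, not the patent's implementation; the function name and the list-based neighbour access are assumptions.

```python
def correct_luminance(y0_center, y0_2nd_neighbors, k=1.0):
    """Sketch of equations (44)-(45): Laplacian-type band-pass correction.

    y0_2nd_neighbors: the six Y0 values at the second-neighbour positions
    (30, 150, 270, 210, 330 and 90 degrees).  k=1 is the normal setting;
    k>1 additionally performs edge enhancement.
    """
    yh = (6.0 * y0_center - sum(y0_2nd_neighbors)) / 12.0  # (44)
    return y0_center + k * yh                               # (45)
```

On a perfectly flat plane YH vanishes, so the correction leaves the luminance value unchanged, which is consistent with the filter being a band-pass rather than a smoothing operator.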
3 ) 色差成分生成  3) Color difference component generation
3つの色差面は第 3の実施の形態の定義と異なり、 輝度面 Yとは独立にデルタ面 から直接生成する。 3つの色差成分 Cgb, Cbr, Crgを求める。 ただし、 Cgb=G-B、 Cbr =B - R、 Crg=R-Gと定義する。  Unlike the definition of the third embodiment, the three chrominance planes are generated directly from the delta plane independently of the luminance plane Y. Find three color difference components Cgb, Cbr, Crg. Here, Cgb = G-B, Cbr = B-R, and Crg = R-G are defined.
まず、 R位置において Crgと Cbrを求める。 中心画素の R成分と周辺画素の G成分も しくは B成分との差分値を方向性を考慮しながら算出する。
First, find Crg and Cbr at the R position. The difference between the R component of the central pixel and the G or B components of the peripheral pixels is calculated while taking the directionality into account. (Equations (46) and (47) appear only as an image in the original.)
ここに、 dG, dBは、 第 2の実施の形態の式(24) (25)で定義されたものと同じで、 〈G〉2d、〈B〉2dは式(41) (42)と同じである。 色差成分の生成においても第 1の実施の形態と同様に、 第 2隣接画素までを含む平均情報を算出し、 輝度成分との整合性をとることにより、 30度、 150度、 270度方向の解像力を上げている。 また、 dGと dBは必ずしも必要な訳ではないが、 色解像力と色鮮やかさを上げる効果があるので付加している。 G位置における色差成分 Cgb, Crg、 および B位置における色差成分 Cbr, Cgbも同様にして求める。 この時点で R位置には Crg, Cbr成分が、 その最隣接画素には Cgb成分が求まっている。 図 1 9は、 その様子を示す図である。  Here, dG and dB are the same as those defined by equations (24) and (25) of the second embodiment, and 〈G〉2d and 〈B〉2d are the same as in equations (41) and (42). As in the first embodiment, the color difference components are also generated by computing average information that includes pixels up to the second-nearest neighbours and keeping it consistent with the luminance component, which raises the resolving power in the 30-, 150-, and 270-degree directions. The terms dG and dB are not strictly necessary, but they are added because they raise the color resolving power and color vividness. The color difference components Cgb, Crg at the G position and Cbr, Cgb at the B position are obtained in the same way. At this point the Crg and Cbr components have been obtained at the R position and the Cgb component at its nearest-neighbour pixels. FIG. 19 illustrates this state.
次に、 R位置周辺画素の Cgb成分を用いて式(48)より R位置に Cgb成分を求める (補間処理) 。 このとき、 R位置で求まっている方向判定結果を用いて算出する。  Next, the Cgb component at the R position is obtained by equation (48) using the Cgb components of the pixels around the R position (interpolation). The calculation uses the direction determination result already obtained at the R position.
Cgb[center] = w000*(Cgb[nearest000]+Cgb[nearest180])/2
            + w120*(Cgb[nearest120]+Cgb[nearest300])/2
            + w240*(Cgb[nearest240]+Cgb[nearest060])/2 ... (48)
以上のようにして、 全ての画素に YCgbCbrCrgの 4成分が求まる。 必要に応じて、 色差面 Cgb, Cbr, Crgに対しては第 3の実施の形態と同様の色差ローパスフィルタ等の補正処理を行って、 偽色抑制を図ってもよい。  As described above, the four components YCgbCbrCrg are obtained for every pixel. If necessary, the color difference planes Cgb, Cbr, and Crg may be subjected to correction such as the chrominance low-pass filter of the third embodiment to suppress false colors.
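The interpolation of equation (48) can be sketched as below. The function name and the angle-keyed dictionary are assumptions made for readability; each opposite pair of neighbours is averaged and the three pairs are blended with the direction weights.

```python
def interpolate_cgb_at_r(cgb, w000, w120, w240):
    """Sketch of equation (48): fill in Cgb at an R pixel from the six
    nearest Cgb-bearing neighbours, weighted by the direction judgment
    (w000, w120, w240) already obtained at the R position.

    cgb: dict mapping angle in degrees -> Cgb value at that neighbour.
    """
    return (w000 * (cgb[0]   + cgb[180]) / 2.0
          + w120 * (cgb[120] + cgb[300]) / 2.0
          + w240 * (cgb[240] + cgb[60])  / 2.0)
```

Because the weights are normalized to sum to 1, a locally uniform Cgb plane is reproduced exactly by the interpolation.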
4) 表色系変換 4) Color system conversion
各画素に生成された YCgbCbrCrgを RGB表色系に変換する。 Y=(R+G+B)/3, Cgb=G-B, Cbr=B-R, Crg=R-Gは独立な 4式ではなく、 Cgb+Cbr+Crg=0の関係を満たす。 すなわち、 4対 3変換なので変換方法は一意的ではないが、 色モアレを抑制し、 輝度解像度と色解像度を最高にするため、 R, G, Bそれぞれの変換式に全ての Y, Cgb, Cbr, Crg成分が含まれるようにし、 相互のモアレ相殺効果を利用する。 こうして Y, Cgb, Cbr, Crg各面がそれぞれに役割分担して生成されてきた最高性能の全てを、 R, G, Bの各々に反映させることができる。  The YCgbCbrCrg values generated at each pixel are converted to the RGB color system. Y=(R+G+B)/3, Cgb=G-B, Cbr=B-R, Crg=R-G are not four independent equations but satisfy the relationship Cgb+Cbr+Crg=0. That is, since this is a 4-to-3 conversion, the conversion is not unique; but to suppress color moire and maximize the luminance and color resolution, the conversion formula for each of R, G, and B is made to contain all of the Y, Cgb, Cbr, and Crg components, exploiting their mutual moire-cancelling effect. In this way all of the best performance generated by the Y, Cgb, Cbr, and Crg planes, each with its own role, can be reflected in each of R, G, and B.
R[i, j] = (9*Y[i, j]+Cgb[i, j]-2*Cbr[i, j]+4*Crg[i, j])/9 ... (49)  R [i, j] = (9 * Y [i, j] + Cgb [i, j] -2 * Cbr [i, j] + 4 * Crg [i, j]) / 9 ... (49)
G[i, j] = (9*Y[i, j]+4*Cgb[i, j]+Cbr [i, j]-2*Crg[i, j])/9 ... (50)  G [i, j] = (9 * Y [i, j] + 4 * Cgb [i, j] + Cbr [i, j] -2 * Crg [i, j]) / 9 ... (50)
B[i, j] = (9*Y[i, j]-2*Cgb[i, j]+4*Cbr[i, j]+Crg[i, j])/9 ... (51)  B [i, j] = (9 * Y [i, j] -2 * Cgb [i, j] + 4 * Cbr [i, j] + Crg [i, j]) / 9 ... (51)
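The 4-to-3 conversion of equations (49)-(51) is the exact inverse of the definitions Y=(R+G+B)/3, Cgb=G-B, Cbr=B-R, Crg=R-G, which can be checked numerically with a short sketch (the function names are illustrative, not from the patent):

```python
def rgb_to_yccc(r, g, b):
    """Forward definitions: Y=(R+G+B)/3, Cgb=G-B, Cbr=B-R, Crg=R-G."""
    return (r + g + b) / 3.0, g - b, b - r, r - g

def yccc_to_rgb(y, cgb, cbr, crg):
    """Equations (49)-(51): every output channel uses all four
    components, so each of Y, Cgb, Cbr, Crg contributes its
    moire-cancelling effect to each of R, G, B."""
    r = (9.0*y +     cgb - 2.0*cbr + 4.0*crg) / 9.0
    g = (9.0*y + 4.0*cgb +     cbr - 2.0*crg) / 9.0
    b = (9.0*y - 2.0*cgb + 4.0*cbr +     crg) / 9.0
    return r, g, b
```

A round trip through both functions reproduces the input RGB values, and the generated color differences satisfy Cgb+Cbr+Crg=0 as stated in the text.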
以上説明したように、 第 4の実施の形態の画像復元方法は、 極めて階調鮮明性 が高く、 空間方向にも優れた輝度解像性能と色解像性能を同時に達成しつつ、 色 収差に対しても強いという効果を発揮する。 正方化処理が必要な場合は、 第 1の 実施の形態と同様に行うことができる。  As described above, the image restoration method according to the fourth embodiment has extremely high gradation clarity, and simultaneously achieves excellent luminance resolution performance and color resolution performance in the spatial direction, while reducing chromatic aberration. It has the effect of being strong against them. When the square processing is required, it can be performed in the same manner as in the first embodiment.
—第 5の実施の形態一 —Fifth Embodiment I
第 2の実施の形態では、 同色間類似度を求めて類似性を判定した。 このとき、 0度、 120度、 240度方向の類似度のみを求めるものであった。 しかし、 第 5の実施の形態では、 30度、 150度、 270度方向の類似度も求めるようにしたものである。 第 5の実施の形態の電子カメラ 1の構成は、 第 1の実施の形態の図 1 と同様であるのでその説明を省略する。  In the second embodiment, similarity was judged by computing same-color similarities, and only the similarities in the 0-, 120-, and 240-degree directions were obtained. In the fifth embodiment, however, the similarities in the 30-, 150-, and 270-degree directions are obtained as well. The configuration of the electronic camera 1 of the fifth embodiment is the same as in FIG. 1 of the first embodiment, and its description is omitted.
R画素位置に G、 B成分を補間する場合を中心に述べる。 また、 第 2の実施の 形態の図 8を参照する。 R画素位置において G成分の最隣接画素は隣接して 0度、 120度、 240度を指す位置に 3つあり、 第 2隣接画素は 2画素分離れた 60度、 180度、 300度を指す位置に 3つある。 B成分も同様に、 最隣接画素は隣接して 60度、 180 度、 300度を指す位置に 3つあり、 第 2隣接画素は 2画素分離れた 0度、 120度、 1 40度を指す位置に 3つある。 また、 R成分の最隣接画素は 2画素分離れた 30度、 9 0度、 150度、 210度、 270度、 330度を指す位置に 6つあり、 第 2隣接画素は 3画素 分離れた 0度、 60度、 120度、 180度、 240度、 300度を指す位置に 6つある。  The description focuses on the case where G and B components are interpolated at the R pixel position. Also, refer to FIG. 8 of the second embodiment. At the R pixel position, there are three nearest neighbors of the G component at positions that point to 0, 120, and 240 degrees, and the second adjacent pixel points to 60, 180, and 300 degrees separated by two pixels. There are three in position. Similarly, for the B component, there are three nearest neighbors at 60, 180, and 300 degrees adjacent to each other, and the second adjacent pixel points to 0, 120, and 140 degrees separated by two pixels. There are three in position. In addition, the nearest neighbor pixel of the R component is 6 pixels at positions 30 degrees, 90 degrees, 150 degrees, 210 degrees, 270 degrees, and 330 degrees separated by 2 pixels, and the second adjacent pixel is separated by 3 pixels There are six positions at 0, 60, 120, 180, 240, and 300 degrees.
1. 類似度の算出  1. Calculation of similarity
1 ) 3画素間隔同色間類似度の算出  1) Calculate the similarity between same colors at 3 pixel intervals
0度、 120度、 240度方向の類似度 C000、 C120、 C240を算出する。 これらの方向の 同じ色成分間で構成される同色間類似度は、 3画素間隔より短く定義することが できない。  Calculate the similarities C000, C120, and C240 in the directions of 0, 120, and 240 degrees. The same-color similarity between the same color components in these directions cannot be defined shorter than three pixel intervals.
C000 = {(|R000-Rctr|+|R180-Rctr|)/2+|G000-G180|+|B000-B180|}/3 ... (52)
C120 = {(|R120-Rctr|+|R300-Rctr|)/2+|G120-G300|+|B120-B300|}/3 ... (53)
C240 = {(|R240-Rctr|+|R060-Rctr|)/2+|G240-G060|+|B240-B060|}/3 ... (54)
このように定義される同色間類似度は、 R画素位置に欠落する G成分と B成分が存在する方向と一致する方向性を調べているので、 彩色部における 0度方向の横線や 120度方向、 240度方向の画像構造を空間的に解像させる能力を持つと考えられる。 しかし、 べィァ配列における 2画素間隔の同色間類似度と違って、 3画素間隔と極めて離れた画素間の情報であることに注意を要する。  The same-color similarities defined in this way examine directionality that coincides with the directions in which the G and B components missing at the R pixel position exist, so they are considered able to spatially resolve horizontal lines in the 0-degree direction and image structures in the 120- and 240-degree directions in chromatic areas. Note, however, that unlike the two-pixel-interval same-color similarity of the Bayer array, this is information between pixels as far apart as three pixel intervals.
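Equations (52)-(54) can be sketched directly; the label-keyed dictionary is an assumption made for readability, and smaller returned values mean stronger similarity along that direction.

```python
def same_color_similarities(px):
    """Sketch of equations (52)-(54): 3-pixel-interval same-colour
    similarities at an R pixel.  px maps labels such as 'Rctr', 'R000',
    'G180' to pixel values.
    """
    def c(a, b):  # direction a paired with its opposite b, in degrees
        return ((abs(px['R%03d' % a] - px['Rctr'])
                 + abs(px['R%03d' % b] - px['Rctr'])) / 2.0
                + abs(px['G%03d' % a] - px['G%03d' % b])
                + abs(px['B%03d' % a] - px['B%03d' % b])) / 3.0
    return c(0, 180), c(120, 300), c(240, 60)  # C000, C120, C240
```

On a locally uniform patch all three similarities are zero, the strongest possible similarity in every direction.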
2) 2画素間隔同色間類似度の算出.  2) Calculate the similarity between two pixels at the same color.
次に、 30度、 150度、 270度方向の類似度 C030、 C150, C270を算出する。 これら の方向の同色間類似度は、 0度、 120度、 240度方向と異なりより短い 2画素間隔で 定義することができる。  Next, the similarities C030, C150, and C270 in the directions of 30 degrees, 150 degrees, and 270 degrees are calculated. The similarity between the same colors in these directions can be defined by a shorter two-pixel interval, unlike the 0, 120, and 240 degree directions.
C030 = {(|R030-Rctr|+|R210-Rctr|)/2+|G240-G000|+|B060-B180|}/3 ... (55)
C150 = {(|R150-Rctr|+|R330-Rctr|)/2+|G000-G120|+|B180-B300|}/3 ... (56)
C270 = {(|R270-Rctr|+|R090-Rctr|)/2+|G120-G240|+|B300-B060|}/3 ... (57)
ただし、 R画素位置に欠落する G成分と B成分が存在する方向と一致しない方向の類似性を調べているので、 これらを有効に活用するにはテクニックを要する。  However, since these examine similarity in directions that do not coincide with the directions in which the G and B components missing at the R pixel position exist, a technique is needed to use them effectively.
3) 類似度の周辺加算  3) Peripheral addition of similarities
更に、 上述した類似度の各々について周辺加算を行って周辺画素との連続性を考慮することにより、 類似度の精度を上げる。 ここでは各画素について求めた上記 6つの類似度それぞれに対し、 R画素位置について周辺加算を行う。 周辺加算を行った類似度を小文字で表記する。 ただし、 []は、 処理対象画素から見たデルタ配列上の画素位置を表すものとする。 式(58)は第 1の実施の形態の式(4)と同様である。  Further, the precision of the similarities is raised by performing peripheral addition on each of the similarities described above so as to take continuity with the surrounding pixels into account. Here, peripheral addition around the R pixel position is performed for each of the six similarities obtained at each pixel. Similarities after peripheral addition are written in lower case, where [] denotes the pixel position on the delta array viewed from the pixel being processed. Equation (58) is analogous to equation (4) of the first embodiment.
c000 = {6*C000[Rctr]
      + C000[R030]+C000[R150]+C000[R270]
      + C000[R210]+C000[R330]+C000[R090]}/12 ... (58)
c120, c240, c030, c150, c270も同様にして求める。  c120, c240, c030, c150, and c270 are obtained in the same manner.
ここに、 上述した類似度の方位関係を図 26に示す。 FIG. 26 shows the azimuth relationship of the similarity described above.
2. 類似性判定 2. Similarity judgment
上述した類似度は値が小さいほどその方向に対して大きな類似性を示す。 ただし、 類似性を判定して意味があるのは、 処理対象画素に存在しない G成分と B成分が存在する方向、 すなわち 0度、 120度、 240度方向であり、 30度、 150度、 270度方向の類似性が判定できてもあまり意味がない。 そこで、 まず考えられるのは、 0度、 120度、 240度方向の類似性の強弱を、 0度、 120度、 240度方向の類似度を用いて、 その逆数比で連続的に判定することである。 すなわち(1/C000): (1/C120): (1/C240)で判定する。  The smaller the value of a similarity described above, the stronger the similarity in that direction. However, what is meaningful to judge is the similarity in the directions in which the G and B components absent at the pixel being processed exist, that is, the 0-, 120-, and 240-degree directions; judging the similarity in the 30-, 150-, and 270-degree directions is of little use by itself. A first idea, then, is to judge the strength of similarity in the 0-, 120-, and 240-degree directions continuously by the reciprocal ratio of the similarities in those directions, that is, by (1/C000):(1/C120):(1/C240).
例えば、 類似度 c000は有彩色の横線を解像する能力を有するので、 図 2 0のデルタ配列の R G B各色成分の周波数再現域の ky軸方向を限界解像まで伸ばすことができる。 すなわち、 このような判定方法は、 有彩色の画像に対して、 図 2 0の 6角形の頂点の限界解像まで色解像度を引き伸ばすことができる。 しかしながら、 3画素間隔という長距離間の類似度であるため、 その間の角度となると折り返し周波数成分の影響を受けて方向性を判別することができず、 特に 30度、 150度、 270度方向に対しては最もその悪影響が生じ、 図 2 0の 6角形の各辺の中点付近がえぐれるような色解像力しか発揮することができない。  For example, since the similarity c000 has the ability to resolve chromatic horizontal lines, the ky-axis direction of the frequency reproduction range of each RGB color component of the delta array in FIG. 20 can be extended to the limit resolution. That is, such a judgment method can stretch the color resolution of a chromatic image up to the limit resolution at the vertices of the hexagon in FIG. 20. However, because these are similarities over the long distance of three pixel intervals, for intermediate angles the directionality cannot be discriminated owing to the influence of aliased (folded) frequency components; the adverse effect is worst in the 30-, 150-, and 270-degree directions, and only a color resolving power with the region near the midpoint of each side of the hexagon in FIG. 20 gouged out can be attained.
そこで、 このような長距離相関の悪影響を防ぐため、 短距離相関の類似度 c030, c150, c270を有効活用する。 ただし、 逆数をとるだけでは 30度、 150度、 270度方向の類似性が判別できるようになるだけなので、 それを 0度、 120度、 240度方向の類似性に変換するために、 逆数ではなく類似度の値自体が 30度、 150度、 270度方向に直交する方向、 すなわち 120度、 240度、 0度方向の類似性を表しているものと解釈する。 故に 0度、 120度、 240度方向の類似性を以下の比率で判定する。  Therefore, to prevent this adverse effect of long-range correlation, the short-range-correlation similarities c030, c150, and c270 are put to effective use. Taking their reciprocals would only allow the similarity in the 30-, 150-, and 270-degree directions to be judged; to convert this into similarity in the 0-, 120-, and 240-degree directions, the similarity value itself (not its reciprocal) is interpreted as expressing the similarity in the direction orthogonal to the 30-, 150-, and 270-degree directions, that is, the 120-, 240-, and 0-degree directions. Hence the similarity in the 0-, 120-, and 240-degree directions is judged by the following ratio.
(c270/c000): (c030/cl20): (cl50/c240)  (c270 / c000): (c030 / cl20): (cl50 / c240)
0度、 120度、 240度方向の類似性を 1で規格化された加重係数 w000, w120, w240として表すと、  Expressing the similarity in the 0-, 120-, and 240-degree directions as weighting coefficients w000, w120, and w240 normalized to 1,
w000 = (c120*c240*c270+Th)
     /(c120*c240*c270+c240*c000*c030+c000*c120*c150+3*Th) ... (59)
w120 = (c240*c000*c030+Th)
     /(c120*c240*c270+c240*c000*c030+c000*c120*c150+3*Th) ... (60)
w240 = (c000*c120*c150+Th)
     /(c120*c240*c270+c240*c000*c030+c000*c120*c150+3*Th) ... (61)
により求まる。 ただし、 定数 Thは発散を防ぐための項で正の値をとる。 通常 Th=1とすればよいが、 高感度撮影画像などノイズの多い画像に対してはこの閾値を上げるとよい。  These are obtained as above, where the constant Th is a positive-valued term that prevents divergence. Normally Th=1 suffices, but this threshold should be raised for noisy images such as high-sensitivity shots.
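The weights of equations (59)-(61) are the ratio (c270/c000):(c030/c120):(c150/c240) multiplied out to a common denominator and normalized; a sketch (the function name is an assumption):

```python
def direction_weights(c000, c120, c240, c030, c150, c270, th=1.0):
    """Sketch of equations (59)-(61): weights for the 0/120/240-degree
    directions.  Each numerator pairs the two 'other' long-range
    similarities with the short-range similarity orthogonal to this
    direction; th>0 prevents divergence and should be raised for noisy
    images.
    """
    n000 = c120 * c240 * c270 + th
    n120 = c240 * c000 * c030 + th
    n240 = c000 * c120 * c150 + th
    denom = n000 + n120 + n240  # equals the printed common denominator
    return n000 / denom, n120 / denom, n240 / denom
```

The three weights always sum to 1, and a small c000 (strong 0-degree similarity) pushes w000 above the other two, as the judgment requires.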
なお、 0度方向と 270度方向、 120度方向と 30度方向、 240度方向と 150度方向は直交した関係である。 この直交した関係を 0度方向⊥270度方向、 120度方向⊥30度方向、 240度方向⊥150度方向と表す。  The 0-degree and 270-degree directions, the 120-degree and 30-degree directions, and the 240-degree and 150-degree directions are mutually orthogonal. This orthogonal relationship is written as 0-degree direction ⊥ 270-degree direction, 120-degree direction ⊥ 30-degree direction, and 240-degree direction ⊥ 150-degree direction.
こうして 6方向の類似度を用いて連続的に判定された類似性は、 有彩色画像に 対して図 2 0の六角形の全てを正確に再現する空間解像力を有する。 また、 この 同色間類似度による空間解像力は、 同じ色成分間で類似性を見ているため、 光学 系に含まれる色収差の影響を受けずに常に達成することができる。  The similarity continuously determined using the similarities in the six directions has a spatial resolution that accurately reproduces all the hexagons in FIG. 20 with respect to the chromatic image. In addition, the spatial resolution based on the similarity between same colors can always be achieved without being affected by chromatic aberration included in the optical system because similarity between the same color components is observed.
3. 補間値算出 補間値は、 第 2の実施の形態と同様にして求める。 3. Interpolation value calculation The interpolation value is obtained in the same manner as in the second embodiment.
以上のようにして、 第 5の実施の形態では如何なる画像に対してもデルタ配列 が元来有する各 RGB単色の空間色解像力を全て引き出すことができる。 また、 階調 方向に鮮明な画像復元が可能で、 色収差を含む系に対しても強い性能を発揮する。 なお、 第 5の実施の形態の類似度の算出および類似性の判定を第 4の実施の形 態の類似度の算出および類似性の判定にも適用できる。 このような画像復元方法 は、 極めて階調鮮明性が高く、 空間方向にも優れた輝度解像性能と色解像性能を 同時に達成しつつ、 色収差に対しても強いという効果を発揮する。 特に、 鮮明な 階調を達成しつつデルタ配列の最高の色解像性能を導き出すことができる。 通常 の画像処理では、 階調鮮明性を上げようとエッジ強調処理を行ったりして、 その 分色解像力が落ちて色褪せたり、 立体感が損なわれたりしてしまう トレードオフ が生じるが、 この場合、 4成分の復元値を導入することによりこの相反する課題 を同時に解決することができるようになる。  As described above, in the fifth embodiment, it is possible to extract all the spatial color resolving power of each RGB single color originally included in the delta arrangement for any image. In addition, clear image restoration in the gradation direction is possible, and it shows strong performance even for systems containing chromatic aberration. The calculation of the similarity and the determination of the similarity in the fifth embodiment can be applied to the calculation of the similarity and the determination of the similarity in the fourth embodiment. Such an image restoration method has extremely high gradation clarity, achieves excellent luminance resolution performance and color resolution performance in the spatial direction at the same time, and exhibits an effect of being strong against chromatic aberration. In particular, it is possible to derive the highest color resolution performance of the delta arrangement while achieving clear gradation. In normal image processing, there is a trade-off in which edge enhancement processing is performed to increase gradation clarity, and the color resolution is reduced by that amount, resulting in fading or a loss of three-dimensional appearance. By introducing restoration values of the four components, these conflicting problems can be solved simultaneously.
—第 6の実施の形態—  —Sixth Embodiment—
べィァ配列においては通常、 補間処理後に画像に残っている偽色を低減するために RGB信号を、 輝度と色差からなる YCbCrに変換し、 Cb、 Cr面で色差ローパスフィルタを掛けたり、 色差メディアンフィルタを掛けたりして偽色を除去し、 RGB表色系に戻す事後処理が行われる。 デルタ配列の場合も、 完全に光学ローパスフィルタでナイキスト周波数を落とせなければ、 見栄えを良くするため適度な偽色低減処理を必要とする。 第 6の実施の形態では、 デルタ配列の優れた特徴である色解像性能をできるだけ損なわないような事後処理の方法を示す。 第 6の実施の形態の電子カメラ 1の構成は、 第 1の実施の形態の図 1 と同様であるのでその説明を省略する。  In the Bayer array, post-processing is usually performed in which, to reduce the false colors remaining in the image after interpolation, the RGB signal is converted to YCbCr consisting of luminance and color difference, false colors are removed by applying a chrominance low-pass filter or a chrominance median filter on the Cb and Cr planes, and the result is converted back to the RGB color system. The delta array likewise needs moderate false-color reduction for good appearance unless the Nyquist frequency can be completely suppressed by the optical low-pass filter. The sixth embodiment shows a post-processing method that impairs as little as possible the color resolution performance that is the outstanding feature of the delta array. The configuration of the electronic camera 1 of the sixth embodiment is the same as in FIG. 1 of the first embodiment, and its description is omitted.
図 2 1は、 第 6の実施の形態において、 画像処理部 1 1が行う画像処理の概要を示すフローチャートである。 第 1 、 2、 4、 5の実施の形態と同様に三角格子上で補間処理が行われるものとする。 図 2 1は、 補間処理後の R G Bカラー画像を入力するところからスタートする。 例えば、 第 1の実施の形態の図 2のステップ S 1 ~ S 5が終了し、 その後、 図 2 1のフローチャートが開始する。  FIG. 21 is a flowchart illustrating an outline of the image processing performed by the image processing unit 11 in the sixth embodiment. Interpolation is performed on the triangular lattice as in the first, second, fourth, and fifth embodiments. FIG. 21 starts with the input of the RGB color image after interpolation; for example, steps S1 to S5 of FIG. 2 of the first embodiment are completed, and thereafter the flowchart of FIG. 21 starts.
ステップ S 3 1では、 補間処理後の R G Bカラー画像データを入力する。 ステ ップ S 3 2において、 RGB表色系から本第 6の実施の形態特有の YCgbCbrCrg表 色系に変換する。 ステップ S 3 3では、 色判定用画像を生成する。 ステップ S 3 4では、 ステップ S 3 3で生成した色判定用画像を使用して色指標を算出する。 ステップ S 3 5では、 ステップ S 3 4の色指標に基づき低彩度か高彩度かの色判 定を行う。 In step S31, the RGB color image data after the interpolation processing is input. Stay In step S32, the RGB color system is converted to the YCgbCbrCrg color system unique to the sixth embodiment. In step S33, a color determination image is generated. In step S34, a color index is calculated using the color determination image generated in step S33. In step S35, a color judgment of low saturation or high saturation is performed based on the color index of step S34.
ステップ S 3 6において、 ステップ S 3 5の色判定結果に基づき、 使用するローパスフィルタを切り換えて色差補正を行う。 補正の対象となる色差データはステップ S 3 2で生成されたものである。 ステップ S 3 7では、 色差面の偽色が除去された段階で、 YCgbCbrCrg表色系を元の RGB表色系に戻す変換をする。 ステップ S 3 8において、 得られた RGBカラー画像データを出力する。 ステップ S 3 8で出力される R G Bカラー画像データは三角格子上で得られた画像データである。  In step S36, color difference correction is performed by switching the low-pass filter to be used, based on the color judgment result of step S35. The color difference data to be corrected are those generated in step S32. In step S37, once the false colors of the color difference planes have been removed, the YCgbCbrCrg color system is converted back to the original RGB color system. In step S38, the obtained RGB color image data are output; these are image data obtained on the triangular lattice.
三角格子上で得られた画像データに対して正方化処理をする場合は、 第 1の実 施の形態と同様に、 図 2のステップ S 6、 S 7の処理を行う。 以下、 上述のステ ップ S 3 2〜 3 7の処理の詳細について説明する。  When performing the square processing on the image data obtained on the triangular lattice, the processing of steps S6 and S7 in FIG. 2 is performed as in the first embodiment. Hereinafter, the details of the processing in steps S32 to S37 will be described.
1. 表色系変換  1. Color system conversion
RGB表色系から YCgbCbrCrg表色系に変換する。 ただし、 YCgbCbrCrg表色系は次の 式(62)〜(65)で定義される。  Convert from RGB color system to YCgbCbrCrg color system. However, the YCgbCbrCrg color system is defined by the following equations (62) to (65).
Y[i, j] = (R[i, j]+G[i, j]+B[i, j])/3 ... (62)  Y [i, j] = (R [i, j] + G [i, j] + B [i, j]) / 3 ... (62)
Cgb[i, j] = G[i, j]-B[i, j] ... (63)  Cgb [i, j] = G [i, j] -B [i, j] ... (63)
Cbr[i, j] = B[i, j]-R[i, j] ... (64)  Cbr [i, j] = B [i, j] -R [i, j] ... (64)
Crg[i, j] = R[i, j]-G[i, j] ... (65)  Crg [i, j] = R [i, j] -G [i, j] ... (65)
このように RGBを均等に扱う変換を行うと、 例えばサーキュラーゾーンプレートで、 図 6の六角形の角の部分を中心に発生するような色モアレが、 完全に輝度面 Yからは消え去り、 全ての偽色要素を色差成分 Cgb, Cbr, Crgに含ませることができる。  With a conversion that treats R, G, and B equally in this way, the color moire that arises, for example on a circular zone plate, centered on the corners of the hexagon of FIG. 6 disappears completely from the luminance plane Y, and all false-color elements can be confined to the color difference components Cgb, Cbr, and Crg.
2. 色評価  2. Color evaluation
1 ) 色判定用画像の生成  1) Generation of color judgment image
無彩色部に於ける偽色をできるだけ彩色部と区別して色評価するために、 全面に強力な色差ローパスフィルタをかけて偽色を一旦低減する。 ただし、 これは一時的な色判定用画像なので、 実際の画像には影響を与えない。  In order to evaluate color while distinguishing false colors in achromatic areas from genuinely colored areas as far as possible, a strong chrominance low-pass filter is first applied to the entire image to temporarily reduce the false colors. Since this is only a temporary color-judgment image, it does not affect the actual image.
TCgb[center] = {9*Cgb[center]
    + 2*(Cgb[nearest000]+Cgb[nearest120]+Cgb[nearest240]
        +Cgb[nearest180]+Cgb[nearest300]+Cgb[nearest060])
    + Cgb[2nd000]+Cgb[2nd120]+Cgb[2nd240]
    + Cgb[2nd180]+Cgb[2nd300]+Cgb[2nd060]}/27 ... (66)
TCbr, TCrgについても同様に求める。  TCbr and TCrg are also calculated in the same way.
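The strong chrominance low-pass filter of equation (66) is a weighted average with weights 9 (centre), 2 (each of the six nearest neighbours), and 1 (each of the six second-nearest neighbours), normalized by 27; a sketch (function name assumed):

```python
def strong_cdiff_lpf(center, nearest6, second6):
    """Sketch of equation (66): temporary colour-judgement plane TCgb
    (TCbr and TCrg are computed the same way).  nearest6/second6 hold
    the six nearest and six second-nearest Cgb values; 9+2*6+1*6 = 27,
    so the filter has unit DC gain.
    """
    return (9.0 * center + 2.0 * sum(nearest6) + sum(second6)) / 27.0
```

Unit DC gain means a constant color difference plane passes through unchanged; only high-frequency false-color components are attenuated.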
2) 色指標の算出  2) Calculation of color index
次に、 偽色の低減された色判定用画像を用いて色指標 Cdiffを算出し、 画素単位 の色評価を行う。  Next, the color index Cdiff is calculated using the image for color determination in which the false color is reduced, and the color evaluation is performed in pixel units.
Cdiff[i, j] = (|TCgb[i, j]|+|TCbr[i, j]|+|TCrg[i, j]|)/3 ... (67)
3 ) 色判定  3) Color judgment
上記連続的色指標 Cdiffを閾値判定し、 離散的な色指標 BWに変換する。  The above continuous color index Cdiff is judged as a threshold value and converted to a discrete color index BW.
if Cdiff [i, j]≤Th then BW[i, j]='a' (低彩度部)  if Cdiff [i, j] ≤Th then BW [i, j] = 'a' (low saturation)
else then BW[i, j]='c' (高彩度部)  else then BW [i, j] = 'c' (high saturation part)
ここで閾値 Thは 256階調の場合 30程度に設定するのがよい。 Here, the threshold Th is preferably set to about 30 for 256 gradations.
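Equation (67) and the threshold test together map each pixel to a discrete colour index; an illustrative sketch (the use of the smoothed colour-judgement planes follows the text above, and Th of about 30 assumes 8-bit, 256-gradation values):

```python
def judge_color(tcgb, tcbr, tcrg, th=30.0):
    """Sketch of equation (67) plus the threshold test: continuous
    colour index Cdiff, then discrete index BW ('a' = low saturation,
    'c' = high saturation)."""
    cdiff = (abs(tcgb) + abs(tcbr) + abs(tcrg)) / 3.0  # (67)
    return 'a' if cdiff <= th else 'c'
```

An achromatic pixel (all color differences near zero) is classified 'a' and will receive the strong false-color suppression, while a vividly colored pixel is classified 'c' and keeps its color resolution.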
3. 適応的色差補正  3. Adaptive color difference correction
色判定により画像を 2つの領域に分割することができたので、 無彩色部の偽色は強力に消す一方、 彩色部の色解像はできるだけ温存する色差補正処理を加える。 ここでは式(68)および式(69)のローパス処理を行うが、 メディアンフィルタ処理を用いてもよい。 []は、 処理対象画素から見た三角格子上の画素位置を表すものとする。 式(68)のローパスフィルタは、 第 3の実施の形態の図 1 1に示す係数値を使用し、 式(69)のローパスフィルタは図 1 3に示す係数値を使用する。  Since the image could be divided into two regions by the color judgment, a color difference correction is applied that strongly erases false colors in the achromatic region while preserving the color resolution of the chromatic region as far as possible. Here the low-pass processing of equations (68) and (69) is performed, but median filtering may be used instead. [] denotes the pixel position on the triangular lattice viewed from the pixel being processed. The low-pass filter of equation (68) uses the coefficient values shown in FIG. 11 of the third embodiment, and that of equation (69) uses the coefficient values shown in FIG. 13.
<ローパス処理 >  <Low-pass processing>
if BW[center]=='a'
  tmp_Cgb[center] = {9*Cgb[center]
      + 2*(Cgb[nearest000]+Cgb[nearest120]+Cgb[nearest240]
          +Cgb[nearest180]+Cgb[nearest300]+Cgb[nearest060])
      + ... ... (68)
(式(68)の残りの項と、 高彩度部 'c' 用の弱いローパスフィルタの式(69)は、 原文では画像として掲載。  The remaining terms of equation (68), and equation (69), the weaker low-pass filter for the high-saturation case 'c', appear only as images in the original.)
B方式も第 4の実施の形態のような YCCC方式も含む) と階調処理の関係をここに示 す。 図 2 7は、 その処理を示すフローチャートである。 The relationship between the B method and the YCCC method as in the fourth embodiment) and the gradation processing are shown here. FIG. 27 is a flowchart showing the processing.
1 ) 線形階調デルタ配列データ入力 (ステップ S 4 1 )  1) Linear gradation delta array data input (step S 4 1)
2 ) ガンマ補正処理 (ステップ S 4 2 )  2) Gamma correction processing (Step S 42)
3 ) 画像復元処理 (ステップ S 4 3 )  3) Image restoration processing (Step S43)
4 ) 逆ガンマ補正処理 (ステップ S 4 4 )  4) Inverse gamma correction processing (Step S44)
5 ) ユーザーガンマ補正処理 (ステップ S 4 5 )  5) User gamma correction processing (step S45)
ユーザーガンマ補正処理は、 線形階調からディスプレイ出力に適した 8ビッ ト階 調に変換する処理、 すなわち、 画像のダイナミックレンジを出力表示機器の範囲 内に圧縮する処理である。 これとは独立に、 一旦あるガンマ空間に階調を変換し て、 第 1の実施の形態〜第 6の実施の形態に相当する画像復元処理を行うと、 よ り優れた復元結果が得られる。 この階調変換方法として以下のようなものがある。  The user gamma correction process is a process of converting a linear gradation to an 8-bit gradation suitable for display output, that is, a process of compressing the dynamic range of an image to within the range of the output display device. Independently of this, once the gradation is converted to a certain gamma space and the image restoration processing corresponding to the first to sixth embodiments is performed, a better restoration result can be obtained. . There are the following methods for this gradation conversion.
<ガンマ補正処理 >  <Gamma correction processing>
入力信号 x (0≤x≤xmax)、 出力信号 y (0≤y≤ymax)、 入力画像はデルタ面  Input signal x (0≤x≤xmax), output signal y (0≤y≤ymax); the input image is the delta plane.
y = ymax*√(x/xmax)
入力信号が 12ビットの場合は xmax=4095で、 出力信号は例えば 16ビットの ymax=65535に設定するとよい。  If the input signal is 12 bits, xmax=4095; the output signal may be set, for example, to 16-bit ymax=65535.
<逆ガンマ補正処理 >  <Reverse gamma correction processing>
入力信号 y (0≤y≤ymax)、 出力信号 x (0≤x≤xmax)、 入力画像は RGB面  Input signal y (0≤y≤ymax), output signal x (0≤x≤xmax); the input image is the RGB planes.
x = xmax*(y/ymax)^2
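The gamma pair above can be written down directly (a sketch; the 12-bit/16-bit ranges follow the text, and the function names are illustrative):

```python
import math

def gamma(x, xmax=4095.0, ymax=65535.0):
    """Square-root gamma: y = ymax * sqrt(x / xmax)."""
    return ymax * math.sqrt(x / xmax)

def inverse_gamma(y, xmax=4095.0, ymax=65535.0):
    """Inverse gamma: x = xmax * (y / ymax)**2."""
    return xmax * (y / ymax) ** 2
```

Applying `inverse_gamma` after `gamma` recovers the linear input up to floating-point error, so the round trip around the image restoration step is lossless in principle.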
このように平方根型のガンマ空間で画像復元処理を行うと、 画像復元処理のテ クニックとは別の観点から次のような利点が同時に得られる。  When the image restoration processing is performed in the square root type gamma space as described above, the following advantages can be simultaneously obtained from a viewpoint different from the technique of the image restoration processing.
1 ) 色境界部の鮮明化が可能  1) Color boundary can be sharpened
(例えば、 赤白境界の色にじみ抑制や、 黒縁発生の抑制等)  (For example, suppression of color fringing at the red-white border, suppression of black border generation, etc.)
2 ) 輝点 (極めて明るい部分) 周辺の偽色抑制が可能  2) Suppress false colors around bright spots (extremely bright areas)
3 ) 方向判定精度が向上  3) Improved direction determination accuracy
1 ) と 2 ) は画像復元処理の RGB方式では 「補間値算出部」 、 Y方式では 「復元値算出部」 が関与して生み出される効果である。 また、 3 ) は画像復元処理の 「類似度算出部」 と 「類似性判定部」 が関与して生み出される効果である。 つまり、 入力信号 xには量子論的揺らぎの誤差 dx=k√x ( kは ISO感度で決まる定数) が含まれており、 平方根のガンマ空間に変換すると誤差伝搬則によりこの誤差が全階調 0≤y≤ymaxに渡って均一の誤差 dy=constantで扱えるようになるため、 方向判定精度が向上する。 この技術はデルタ配列に限らずべィァ配列やその他諸々のフィルタ配列の補間処理にも応用することができる。 上記実施の形態においては、 デルタ配列は元々べィァ配列よりも単色の色解像性能が高いため、 この階調変換処理を画像復元処理の前後に入れることで、 更に優れた色鮮明性を生み出すことが可能となる。  Effects 1) and 2) are produced with the involvement of the "interpolation value calculation unit" in the RGB method of the image restoration processing and the "restoration value calculation unit" in the Y method, and effect 3) with the involvement of the "similarity calculation unit" and "similarity judgment unit". That is, the input signal x contains a quantum-fluctuation (shot-noise) error dx=k√x (k is a constant determined by the ISO sensitivity); converting to the square-root gamma space makes this error, by the law of error propagation, a uniform error dy=constant over the entire gradation range 0≤y≤ymax, which improves the direction judgment accuracy. This technique is applicable not only to the delta array but also to interpolation of the Bayer array and various other filter arrays. In the embodiments above, since the delta array inherently has higher single-color resolution performance than the Bayer array, inserting this gradation conversion before and after the image restoration processing can produce even better color clarity.
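The constant-error claim can be checked numerically: with y = ymax·√(x/xmax) and shot-noise error dx = k·√x, first-order error propagation gives dy = (dy/dx)·dx = ymax·k/(2·√xmax), independent of x. A sketch (the value of k here is purely illustrative):

```python
import math

def propagated_error(x, k=1.0, xmax=4095.0, ymax=65535.0):
    """First-order error propagation dy = (dy/dx) * dx for the
    square-root gamma y = ymax*sqrt(x/xmax) with dx = k*sqrt(x)."""
    dydx = ymax / (2.0 * math.sqrt(x * xmax))  # derivative of the gamma curve
    return dydx * k * math.sqrt(x)             # the sqrt(x) factors cancel
```

Evaluating this at widely separated signal levels returns the same value, which is why direction judgment on the gamma-converted data behaves uniformly across the tonal range.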
In the first embodiment above, an example was shown in which image data interpolated on the triangular lattice is subjected to squarization processing, and the other embodiments likewise state that squarization is performed where needed. However, the image data that has undergone interpolation or colorimetric-system conversion on the triangular lattice may also be used as it is.
In the third embodiment above, an example was shown in which correction processing is applied to RGB color image data generated as in the first embodiment, but the invention need not be limited to this; the same processing can be applied to RGB color image data generated in the other embodiments. In this way, the first to sixth embodiments can be combined as appropriate. That is, the first to sixth embodiments describe similar directionality-determination processing, interpolation processing or direct generation of color-difference planes, post-processing such as correction, squarization processing, and so on, and an optimal image processing method and apparatus can be realized by combining the processes of the individual embodiments as appropriate.
The above embodiments were described on the assumption of a single-chip image sensor, but the invention need not be limited to this; it can also be applied to a two-chip image sensor. With a two-chip sensor, for example, only one color component is missing at each pixel, and the content of the above embodiments can be applied to the interpolation of that single missing component; squarization can likewise be performed if needed. The method of the fourth embodiment, which produces luminance and color-difference components directly from the delta array without interpolation, can similarly be applied to a two-chip sensor.
The above embodiments show various formulas for determining similarity, but the invention is not necessarily limited to those formulas; similarity may be determined by other suitable formulas. Likewise, various formulas were shown for computing luminance information, but luminance information may also be generated by other suitable formulas.
The above embodiments show an example of a low-pass filter for color-difference correction and the like and a band-pass filter for edge enhancement, but the invention is not necessarily limited to these; low-pass and band-pass filters of other configurations may be used.
The above embodiments use an electronic camera as the example, but the invention is not necessarily limited to this content. It may also be a video camera that captures moving images, a personal computer equipped with an image sensor, a mobile phone, and so on; that is, the invention can be applied to any device that generates color image data with an image sensor.
When the invention is applied to a personal computer or the like, the program for the processing described above can be provided through a recording medium such as a CD-ROM or through a data signal such as the Internet. Fig. 23 illustrates this. The personal computer 100 receives the program via the CD-ROM 104. The personal computer 100 also has a function for connecting to the communication line 101. The computer 102 is a server computer that provides the program and stores it on a recording medium such as the hard disk 103. The communication line 101 is the Internet, a personal-computer communication line, a dedicated communication line, or the like. The computer 102 reads the program from the hard disk 103 and transmits it to the personal computer 100 via the communication line 101; that is, the program is carried as a data signal on a carrier wave and transmitted via the communication line 101. In this way, the program can be supplied as a computer-readable computer program product in various forms such as a recording medium or a carrier wave. The main advantages of the above embodiments are summarized as follows.
Since interpolation and related processing are performed at the triangular-lattice pixel positions of the acquired image before conversion to square-lattice pixel positions, image data arranged on a square lattice can be output while the limiting resolution of the triangular arrangement is preserved to the maximum extent.
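The squarization step (claims 4 and 5: a one-dimensional filter with positive and negative coefficient values, applied to every other row) can be sketched as a half-pixel horizontal shift of alternating rows. The 4-tap kernel below is a common half-pel interpolator chosen purely for illustration; the patent does not fix the exact coefficients.

```python
def shift_row_half_pixel(row):
    """Shift one scan line by half a pixel using a 4-tap filter whose
    coefficients include positive and negative values (illustrative
    kernel (-1, 9, 9, -1)/16; edges are clamped)."""
    n = len(row)
    out = []
    for i in range(n):
        a = row[max(i - 1, 0)]
        b = row[i]
        c = row[min(i + 1, n - 1)]
        d = row[min(i + 2, n - 1)]
        out.append((-a + 9.0 * b + 9.0 * c - d) / 16.0)
    return out

def squarize(image):
    """Convert a triangular-lattice image (list of rows, with odd rows
    offset by half a pixel) to a square lattice by applying the 1-D
    displacement to every other row, as in claim 5."""
    return [shift_row_half_pixel(r) if y % 2 else list(r)
            for y, r in enumerate(image)]
```

On a linear ramp the kernel lands exactly on the midpoint values, and flat regions pass through unchanged, which is why a purely one-dimensional displacement can preserve the triangular arrangement's limiting resolution.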
When interpolating the color information of a missing color component of the first image, whose pixels are arranged on a triangular lattice, curvature information of a color component is obtained by a fixed computation and used, so gradation sharpness can be improved up to the high-frequency region in every direction. That is, high-definition interpolation is possible while the effectiveness of image data whose pixels are arranged on a triangular lattice is exploited to the full.
A similarity factor is computed for each direction of a first direction group consisting of a plurality of directions, and for each direction of a second direction group consisting of a plurality of directions that are orthogonal to at least one direction of the first group and differ from it, and similarity is judged from both. For example, in the delta array, similarity among three directions is judged continuously using similarity factors in six directions, so for chromatic images the method has the spatial resolving power to reproduce accurately all of the hexagons of Fig. 20 that the delta array possesses. That is, the full spatial color resolving power of each RGB primary that the delta array inherently has can be drawn out.
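The six-direction judgment can be sketched as follows, assuming difference-based similarity factors (a smaller value means stronger similarity, matching the reciprocal convention in claim 7) and the orthogonal-ratio criterion of claim 16. The epsilon regularizer and the normalization are illustrative choices, not taken from the patent.

```python
def direction_weights(c, c_perp, eps=1.0):
    """c[i]: similarity factor along direction Di (smaller = more similar);
    c_perp[i]: similarity factor along the orthogonal direction Di'.
    Returns weights proportional to the ratios C_Di'/C_Di of claim 16,
    normalized to sum to 1."""
    ratios = [(cp + eps) / (ci + eps) for ci, cp in zip(c, c_perp)]
    total = sum(ratios)
    return [r / total for r in ratios]

# A strong edge along direction 0 of a delta array: small difference
# along D0, large difference along the orthogonal direction D0'.
w = direction_weights([1.0, 20.0, 20.0], [20.0, 1.0, 1.0])
```

Using the orthogonal factor in the numerator is what makes the judgment continuous rather than a hard three-way choice: a direction wins in proportion to how dissimilar its perpendicular is.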
When the first color component is interpolated at a pixel lacking it, the interpolation uses the color information of a region that includes the pixels whose first color component is second closest to the interpolation target pixel, so, for example, the rendering of boundary lines in the 30-degree, 150-degree, and 270-degree (vertical-line) directions of Fig. 3 improves dramatically. That is, spatial resolving power in the 30-, 150-, and 270-degree directions is improved.
At every pixel of the first image represented by the first to third color components, the color information of the first to third components is always weighted and added at an equal (1:1:1) component ratio to generate color information of a component different from the color information of the first image. The component color information generated in this way has an extremely high potential for gradation sharpness; an image with high spatial resolving power is obtained, and one that connects to surrounding pixels extremely smoothly without being affected by chromatic aberration.
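The equal-ratio weighting in the last point reduces to a simple mean; the contrast with the conventional unequal luma weighting is worth making explicit:

```python
def equal_weight_luminance(r, g, b):
    """1:1:1 weighted addition of the three color components, as in the
    text above. Unlike the conventional Y = 0.299R + 0.587G + 0.114B,
    every component contributes equally, so no single primary dominates
    the generated component on the delta array."""
    return (r + g + b) / 3.0
```

Because each RGB primary occupies the delta array at equal pixel density, equal weights keep the generated component's spatial sampling uniform, which is the source of the gradation-sharpness potential claimed above.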

Claims

1. An image processing method comprising:
an image acquisition procedure of acquiring a first image that is expressed in a colorimetric system consisting of a plurality of color components and consists of a plurality of pixels each having color information of at least one color component, the plurality of pixels being arranged on a triangular lattice;
a color information generation procedure of generating, using the color information of the acquired first image, at least one piece of new color information at the same triangular-lattice pixel positions as the first image;
a pixel position conversion procedure of converting the color information of the plurality of pixels at the triangular-lattice pixel positions, including the generated color information, into color information at inter-pixel positions by performing one-dimensional displacement processing between pixels aligned in one direction; and
an output procedure of outputting, using the color information whose pixel positions have been converted, a second image in which a plurality of pixels are arranged on a square lattice.
2. The image processing method according to claim 1, wherein the new color information is color information of a color component that is missing at each pixel of the first image among the color components of the colorimetric system of the first image.
3. The image processing method according to claim 1, wherein the new color information is color information of a colorimetric system different from that of the first image.
4. The image processing method according to any one of claims 1 to 3, wherein the one-dimensional displacement processing is performed using a one-dimensional filter consisting of positive and negative coefficient values.
5. The image processing method according to any one of claims 1 to 4, wherein the one-dimensional displacement processing is performed on the first image row by row, on every other row.
6. The image processing method according to any one of claims 1 to 5, further comprising a similarity determination procedure of determining the strength of similarity in at least three directions, wherein the color information generation procedure generates the new color information according to the determined strength of similarity.
7. The image processing method according to claim 6, wherein the similarity determination procedure computes similarity factors for at least three directions and determines the strength of similarity in each direction based on the reciprocals of the similarity factors.
8. The image processing method according to any one of claims 1 to 7, further comprising:
a color difference generation procedure of generating color information of a color-difference component at the triangular-lattice pixel positions based on the color information of the plurality of pixels at the triangular-lattice pixel positions including the generated color information; and
a correction procedure of correcting the color information of the color-difference component generated in the color difference generation procedure.
9. The image processing method according to any one of claims 1 to 7, further comprising:
a luminance generation procedure of generating color information of a luminance component at the triangular-lattice pixel positions based on the color information of the plurality of pixels at the triangular-lattice pixel positions including the generated color information; and
a correction procedure of correcting the color information of the luminance component generated in the luminance generation procedure.
10. An image processing method comprising:
an image acquisition procedure of acquiring a first image expressed by first to n-th color components (n ≥ 2), in which a plurality of pixels each having color information of one color component are arranged on a triangular lattice;
an interpolation procedure of interpolating, using the color information of the acquired first image, color information of the first color component at pixels where the first color component is missing; and
an output procedure of outputting a second image based on the color information of the first image and the interpolated color information,
wherein the interpolation procedure, for an interpolation target pixel of the first image,
1) obtains average information of the first color component by a variable computation, and
2) obtains curvature information of at least one of the first to n-th color components by a fixed computation,
and performs the interpolation based on the average information and the curvature information.
11. The image processing method according to claim 10, further comprising a similarity determination procedure of determining the strength of similarity in at least three directions, wherein the interpolation procedure makes the computation of the average information of the first color component variable according to the strength of similarity determined in the similarity determination procedure.
12. The image processing method according to claim 10 or 11, wherein the curvature information is obtained by a second-derivative computation.
13. The image processing method according to any one of claims 10 to 12, wherein, when a first image expressed by first to third color components is input, the interpolation is performed based on curvature information of all of the first to third color components.
14. An image processing method comprising:
a recording procedure of recording a first image expressed by a plurality of color components, in which a plurality of pixels each having color information of one color component are arranged in a non-rectangular pattern;
a first direction group similarity calculation procedure of computing, using the color information of the first image, a similarity factor for each direction of a first direction group consisting of a plurality of directions;
a second direction group similarity calculation procedure of computing, using the color information of the first image, a similarity factor for each direction of a second direction group consisting of a plurality of directions that are orthogonal to at least one direction of the first direction group and differ from the first direction group; and
a similarity determination procedure of determining the strength of similarity among the first direction group using the similarity factors of the first direction group and of the second direction group together.
15. The image processing method according to claim 14, further comprising a color information generation procedure of generating at least one piece of new color information at pixel positions of the first image based on the determination result of the similarity determination procedure.
16. The image processing method according to claim 14 or 15, wherein:
the first direction group similarity calculation procedure computes similarity factors C_D1, C_D2, ..., C_DN for N directions (N ≥ 2) denoted D1, D2, ..., DN;
the second direction group similarity calculation procedure computes similarity factors C_D1', C_D2', ..., C_DN' for N directions (N ≥ 2) denoted D1', D2', ..., DN' (where Di' is the direction orthogonal to Di, i = 1, 2, ..., N); and
the similarity determination procedure determines the strength of similarity among the first direction group using a function based on the ratio
(C_D1'/C_D1) : (C_D2'/C_D2) : ... : (C_DN'/C_DN).
17. The image processing method according to any one of claims 14 to 16, wherein the pixels of the first image are arranged on a triangular lattice, and N is set to 3 in both the first direction group similarity calculation procedure and the second direction group similarity calculation procedure.
18. An image processing method comprising:
a recording procedure of recording a first image expressed by a plurality of color components, in which a plurality of pixels each having color information of one color component are arranged in a non-rectangular pattern;
a first direction group similarity calculation procedure of computing, using the color information of the first image, for each direction of a first direction group consisting of a plurality of directions, a similarity factor made up of color information at a first pixel interval;
a second direction group similarity calculation procedure of computing, using the color information of the first image, for each direction of a second direction group consisting of a plurality of directions different from the first direction group, a similarity factor made up of color information at a second pixel interval; and
a similarity determination procedure of determining the strength of similarity among the first direction group using the similarity factors of the first direction group and of the second direction group together.
19. The image processing method according to claim 18, further comprising a color information generation procedure of generating at least one piece of new color information at pixel positions of the first image based on the determination result of the similarity determination procedure.
20. The image processing method according to claim 18 or 19, wherein both the first direction group similarity calculation procedure and the second direction group similarity calculation procedure compute, as the similarity factor, a same-color similarity factor made up of color information of the same color component.
21. The image processing method according to any one of claims 18 to 20, wherein the first image is expressed by first to third color components, and both the first direction group similarity calculation procedure and the second direction group similarity calculation procedure compute the similarity factor using at least two of the following similarity components:
1) a similarity component made up of color information of only the first color component;
2) a similarity component made up of color information of only the second color component;
3) a similarity component made up of color information of only the third color component.
22. The image processing method according to claim 20, wherein the first direction group consists of directions in which color information of the same color component is arranged at the first pixel interval, and the second direction group consists of directions in which color information of the same color component is arranged at the second pixel interval.
23. The image processing method according to any one of claims 18 to 22, wherein the first pixel interval is longer than the second pixel interval.
24. The image processing method according to any one of claims 18 to 22, wherein the first pixel interval is approximately a 3-pixel interval and the second pixel interval is approximately a 2-pixel interval.
25. The image processing method according to any one of claims 14 to 24, wherein both the first direction group similarity calculation procedure and the second direction group similarity calculation procedure compute the similarity factor including not only the similarity computed for the pixel subject to the image processing but also the similarities computed for pixels surrounding that pixel.
26. The image processing method according to any one of claims 14 to 25, wherein, in the first image, a plurality of pixels are arranged on a triangular lattice.
27. The image processing method according to any one of claims 14 to 25, wherein the first image is expressed by first to third color components, and the first to third color components are allocated at equal pixel densities.
28. The image processing method according to claim 15 or 19, wherein, when the first image is expressed by first to third color components, the color information generation procedure generates color information of the second color component and/or the third color component at pixels having the first color component.
29. The image processing method according to claim 15 or 19, wherein the color information generation procedure generates color information of a luminance component different from the color information of the first image.
30. The image processing method according to claim 15 or 19, wherein the color information generation procedure generates color information of a color-difference component different from the color information of the first image.
31. The image processing method according to claim 30, wherein, when the first image is expressed by first to third color components, the color information generation procedure generates color information of three kinds of color-difference components:
1) the color-difference component between the first color component and the second color component;
2) the color-difference component between the second color component and the third color component; and
3) the color-difference component between the third color component and the first color component.
32. An image processing method comprising:
an image acquisition procedure of acquiring a first image expressed by first to n-th color components (n ≥ 2), in which a plurality of pixels each having color information of one color component are arranged on a triangular lattice;
an interpolation procedure of interpolating, using the color information of the acquired first image, the first color component at pixels where the first color component is missing; and
an output procedure of outputting a second image based on the color information of the first image and the interpolated color information,
wherein the interpolation procedure obtains average information of the first color component using color information of a region that includes the pixels whose first color component is second closest to the interpolation target pixel of the first image, and performs the interpolation.
33. The image processing method according to claim 32, further comprising a similarity determination procedure of determining the strength of similarity in at least three directions, wherein the interpolation procedure obtains the average information of the first color component according to the strength of similarity determined in the similarity determination procedure.
34. An image processing method comprising:
an image acquisition procedure of acquiring a first image expressed by a plurality of color components, in which a plurality of pixels each having color information of one color component are arranged on a triangular lattice;
a color information generation procedure of generating color information of a color component different from the color information of the first image by weighted addition of the acquired color information of the first image with variable coefficient values of zero or more; and
an output procedure of outputting a second image using the generated color information,
wherein the color information generation procedure performs the weighted addition on color information within a region that includes, for the pixel of the first image being processed, the pixels whose color component different from that of the pixel is second closest.
35. The image processing method according to claim 34, further comprising a similarity determination procedure of determining the strength of similarity in at least three directions, wherein the color information generation procedure makes the coefficient values of the weighted addition variable according to the strength of similarity determined in the similarity determination procedure.
36. The image processing method according to claim 34 or 35, wherein, when the first image is expressed by first to third color components and a pixel having the first color component of the first image is the pixel being processed, the color information generation procedure performs the weighted addition on color information within a region that includes the pixel being processed, the pixels whose second color component is second closest, and the pixels whose third color component is second closest.
37. The image processing method according to any one of claims 34 to 36, further comprising a correction procedure of correcting, after the color information generation procedure and before the output procedure, the color information of the color component different from the color information of the first image generated in the color information generation procedure, by filter processing with predetermined fixed filter coefficients.
38. The image processing method according to claim 37, wherein the filter coefficients include positive and negative values.
39. An image processing method comprising:
an image acquisition procedure of acquiring a first image expressed by first to n-th color components (n ≥ 2), in which a plurality of pixels each having color information of one color component are arranged on a triangular lattice;
a color difference generation procedure of generating, using the color information of the first image, color information of a color-difference component between the first color component and the second color component; and
an output procedure of outputting a second image using the generated color information of the color-difference component,
wherein the color difference generation procedure generates the color information of the color-difference component for a pixel having the first color component of the first image using at least the color information of the pixels whose second color component is second closest.
40. The image processing method according to claim 39,
wherein the color difference generation procedure generates the color information of the color difference component, for a pixel to be processed having the first color component of the first image, based on:
1) the color information of the first color component of that pixel, and
2) average information of the color information of the second color component within an area including the pixels at which the second color component is second closest to that pixel.
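The combination in claim 40 — the pixel's own first-color value against the average of surrounding second-color samples — can be sketched minimally as follows (illustrative only; names are hypothetical and the claim does not restrict how the averaging area is chosen):

```python
# Hypothetical sketch of claim 40: the color difference at a
# first-color pixel is its own value minus the average of the
# second-color samples in the surrounding (second-closest) area.

def color_difference(center_value, second_color_samples):
    """Color-difference component at a first-color pixel:
    own color information minus the average information of the
    second color component in the surrounding area."""
    average = sum(second_color_samples) / len(second_color_samples)
    return center_value - average
```

The subtraction removes the shared luminance-like part, leaving a slowly varying chrominance signal that interpolates well.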
41. The image processing method according to claim 39 or 40,
wherein the color difference generation procedure further generates the color information of the color difference component based on curvature information of the second color component for the pixel to be processed.
42. The image processing method according to any one of claims 39 through 41, further comprising a similarity determination procedure of determining the strength of similarity in at least three directions,
wherein the color difference generation procedure generates the color information of the color difference component according to the strength of similarity.
43. The image processing method according to any one of claims 32 through 42, wherein the output procedure outputs the second image at the same pixel positions as the first image.
44. An image processing method comprising:
an image acquisition procedure of acquiring a first image represented by first through third color components, in which a plurality of pixels, each having color information of one color component per pixel, are arranged with an even color distribution;
a color information generation procedure of generating color information of a color component different from the color information of the first image by weighted addition of the acquired color information of the first image with variable coefficient values of zero or greater; and
an output procedure of outputting a second image using the generated color information,
wherein the color information generation procedure always performs the weighted addition of the color information of the first through third color components at a uniform color component ratio at every pixel of the first image.
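The "uniform color component ratio" constraint of claim 44 can be illustrated with a sketch (not part of the claims; names hypothetical): however many samples of each component fall inside the window, each of the three components contributes exactly one third of the total weight.

```python
# Hypothetical sketch of claim 44's constraint: the weighted
# addition gives R, G and B equal total weight (1/3 each),
# regardless of how many samples of each component are available.

def uniform_ratio_luminance(r_samples, g_samples, b_samples):
    """Weighted addition at a constant 1:1:1 color component ratio.
    Within each component the samples are averaged (equal weights
    are assumed here for simplicity; the claim allows them to vary),
    then the three component means are combined equally."""
    def mean(samples):
        return sum(samples) / len(samples)
    return (mean(r_samples) + mean(g_samples) + mean(b_samples)) / 3.0
```

Keeping the component ratio fixed at every pixel means the generated plane has a spatially constant spectral makeup, avoiding the pixel-to-pixel color-ratio fluctuation that causes false structure in a luminance plane.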
45. The image processing method according to claim 44,
further comprising a similarity determination procedure of determining the strength of similarity in a plurality of directions,
wherein the color information generation procedure makes the coefficient values of the weighted addition variable according to the similarity strength determined in the similarity determination procedure.
46. The image processing method according to claim 44 or 45,
wherein, in the first image, the plurality of pixels are arranged on a triangular lattice.
47. The image processing method according to any one of claims 44 through 46, further comprising a correction procedure of correcting, after the color information generation procedure and before the output procedure, the color information of a color component different from the color information of the first image generated in the color information generation procedure, through filter processing with predetermined fixed filter coefficients.
48. The image processing method according to claim 47, wherein the filter coefficients include both positive and negative values.
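A fixed filter mixing positive and negative coefficients, as in claims 47 and 48, can be sketched in one dimension (illustrative only; the kernel values below are an assumption, not taken from the patent):

```python
# Hypothetical sketch of claims 47/48: correction of a generated
# color plane by a fixed filter whose taps include both positive
# and negative values. 1-D "valid" convolution for brevity.

def apply_fixed_filter(row, kernel):
    """Convolve a row of color information with a fixed kernel."""
    k = len(kernel)
    return [sum(kernel[j] * row[i + j] for j in range(k))
            for i in range(len(row) - k + 1)]

# Example kernel (assumed, taps sum to 1): negative side taps with
# a positive center pass flat regions through unchanged while
# boosting local variation lost in the weighted averaging.
KERNEL = [-0.25, 1.5, -0.25]
```

Because the taps sum to one, uniform areas are preserved; the negative taps restore high-frequency detail, which is the usual purpose of such a fixed correction filter after interpolation.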
49. An image processing method comprising:
an image acquisition procedure of acquiring a first image consisting of a plurality of pixels represented by three or more types of color components, each pixel having color information of one color component;
a color information generation procedure of generating, using the acquired color information of the first image, color information of a luminance component and color information of at least three types of color difference components; and
an output procedure of outputting a second image using the color information of the luminance component and the color information of the color difference components generated in the color information generation procedure.
50. The image processing method according to claim 49,
further comprising a conversion procedure of converting the color information of the luminance component and the color information of the at least three types of color difference components into color information of three types of color components,
wherein the output procedure outputs the second image using the color information of the three types of color components converted in the conversion procedure.
51. The image processing method according to claim 49 or 50,
wherein the color information of the luminance component and the color information of the color difference components generated in the color information generation procedure are color information of components different from the three or more types of color components of the first image.
52. The image processing method according to any one of claims 49 through 51,
wherein the first image is represented by first through third color components with a plurality of pixels arranged in an even color distribution, and
the color information generation procedure generates:
1) color information of a luminance component in which the color component ratio of the first through third color components is 1:1:1,
2) color information of a color difference component between the first color component and the second color component,
3) color information of a color difference component between the second color component and the third color component, and
4) color information of a color difference component between the third color component and the first color component.
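The four components of claim 52 — one 1:1:1 luminance and the three pairwise color differences — can be sketched for a single pixel position where all three component values are known (illustrative only; names hypothetical):

```python
# Hypothetical sketch of claim 52's component set: a 1:1:1
# luminance plus the three pairwise color-difference components.

def to_luma_and_chroma(r, g, b):
    """Return (Y, Cr-g, Cg-b, Cb-r): luminance at a 1:1:1 component
    ratio and the three color differences between component pairs."""
    y = (r + g + b) / 3.0
    return y, (r - g), (g - b), (b - r)
```

Note the three pairwise differences always sum to zero, so the set carries two independent chrominance degrees of freedom plus one redundant component — redundancy that allows each difference to be generated and corrected on its own before conversion back to three color components (claim 50).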
53. The image processing method according to any one of claims 49 through 52, further comprising a similarity determination procedure of determining the strength of similarity in a plurality of directions,
wherein the color information generation procedure generates the color information of the luminance component and the color information of the at least three types of color difference components according to the similarity strength determined in the similarity determination procedure.
54. The image processing method according to any one of claims 49 through 53, wherein, in the first image, a plurality of pixels are arranged on a triangular lattice.
55. An image processing method comprising:
an image acquisition procedure of acquiring a first image consisting of a plurality of pixels represented by three or more types of color components, each pixel having color information of one color component;
a color difference generation procedure of generating color information of at least three types of color difference components using the acquired color information of the first image;
a correction procedure of performing correction processing on the generated color information of each of the color difference components; and
an output procedure of outputting a second image using the corrected color information of the color difference components.
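Claim 55 leaves the concrete correction processing open. One common choice for correcting a color-difference plane — shown here purely as an assumed example, not as the patented method — is a small median filter, which suppresses isolated color-artifact spikes while leaving edges intact:

```python
# Assumed example of a correction applied to a color-difference
# plane (claim 55 does not specify the correction; a 3-tap median
# is one plausible choice). 1-D row for brevity; endpoints kept.

def median3(row):
    """Replace each interior sample with the median of itself and
    its two neighbors, removing single-sample color spikes."""
    out = list(row)
    for i in range(1, len(row) - 1):
        out[i] = sorted(row[i - 1:i + 2])[1]
    return out
```

A single outlier in an otherwise flat chrominance row is removed, while a monotone ramp passes through unchanged.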
56. The image processing method according to claim 55,
wherein the first image is represented by first through third color components, and
the color difference generation procedure generates:
1) color information of a color difference component between the first color component and the second color component,
2) color information of a color difference component between the second color component and the third color component, and
3) color information of a color difference component between the third color component and the first color component.
57. The image processing method according to claim 55,
wherein the first image is represented by first through third color components, and
the color difference generation procedure generates, using the color information of the first image, color information of a luminance component different from the color information of the first image, and generates:
1) color information of a color difference component between the first color component and the luminance component,
2) color information of a color difference component between the second color component and the luminance component, and
3) color information of a color difference component between the third color component and the luminance component.
58. The image processing method according to claim 57,
wherein, in the first image, the first through third color components are evenly distributed among a plurality of pixels, and
the color difference generation procedure generates, as the luminance component, color information of a luminance component in which the color component ratio of the first through third color components is 1:1:1.
59. The image processing method according to any one of claims 44 through 58, wherein the output procedure outputs the second image at the same pixel positions as the first image.
60. A computer-readable computer program product comprising an image processing program for causing a computer to execute the procedures of the image processing method according to any one of claims 1 through 59.
61. The computer program product according to claim 60, which is a recording medium on which the image processing program is recorded.
62. An image processing apparatus comprising:
a control device that executes the procedures of the image processing method according to any one of claims 1 through 59.
PCT/JP2003/006388 2002-05-24 2003-05-22 Image processing method, image processing program, image processor WO2003101119A1 (en)

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
JP2002/150788 2002-05-24
JP2002150788A JP4239480B2 (en) 2002-05-24 2002-05-24 Image processing method, image processing program, and image processing apparatus
JP2002/159229 2002-05-31
JP2002159229A JP4239484B2 (en) 2002-05-31 2002-05-31 Image processing method, image processing program, and image processing apparatus
JP2002/159250 2002-05-31
JP2002159250A JP4196055B2 (en) 2002-05-31 2002-05-31 Image processing method, image processing program, and image processing apparatus
JP2002159228A JP4239483B2 (en) 2002-05-31 2002-05-31 Image processing method, image processing program, and image processing apparatus
JP2002/159228 2002-05-31

Publications (1)

Publication Number Publication Date
WO2003101119A1 true WO2003101119A1 (en) 2003-12-04

Family

ID=29587777

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2003/006388 WO2003101119A1 (en) 2002-05-24 2003-05-22 Image processing method, image processing program, image processor

Country Status (1)

Country Link
WO (1) WO2003101119A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004112401A1 (en) * 2003-06-12 2004-12-23 Nikon Corporation Image processing method, image processing program, image processor
CN104954767A (en) * 2014-03-26 2015-09-30 联想(北京)有限公司 Information processing method and electronic equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000341701A (en) * 1999-05-25 2000-12-08 Nikon Corp Interpolation processor and storage medium recording interpolation processing program
JP2001016597A (en) * 1999-07-01 2001-01-19 Fuji Photo Film Co Ltd Solid-state image pickup device and signal processing method
JP2001103295A (en) * 1999-07-27 2001-04-13 Fuji Photo Film Co Ltd Image conversion method and device and recording medium
JP2001245314A (en) * 1999-12-21 2001-09-07 Nikon Corp Interpolation processing apparatus and recording medium recording interpolation processing program
JP2001275126A (en) * 2000-01-20 2001-10-05 Nikon Corp Interpolation processor and recording medium recorded with interpolation processing program
JP2001292455A (en) * 2000-04-06 2001-10-19 Fuji Photo Film Co Ltd Image processing method and unit, and recording medium
JP2001326942A (en) * 2000-05-12 2001-11-22 Fuji Photo Film Co Ltd Solid-state image pickup device and signal processing method

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004112401A1 (en) * 2003-06-12 2004-12-23 Nikon Corporation Image processing method, image processing program, image processor
US7391903B2 (en) 2003-06-12 2008-06-24 Nikon Corporation Image processing method, image processing program and image processing processor for interpolating color components
US7630546B2 (en) 2003-06-12 2009-12-08 Nikon Corporation Image processing method, image processing program and image processor
CN104954767A (en) * 2014-03-26 2015-09-30 联想(北京)有限公司 Information processing method and electronic equipment
CN104954767B (en) * 2014-03-26 2017-08-29 联想(北京)有限公司 A kind of information processing method and electronic equipment

Similar Documents

Publication Publication Date Title
EP1289310B1 (en) Method and system for adaptive demosaicing
JP7646619B2 (en) Camera image processing method and camera
EP1395041B1 (en) Colour correction of images
JP5045421B2 (en) Imaging apparatus, color noise reduction method, and color noise reduction program
JP3985679B2 (en) Image processing method, image processing program, and image processing apparatus
US6724932B1 (en) Image processing method, image processor, and storage medium
JP5574615B2 (en) Image processing apparatus, control method thereof, and program
US7755670B2 (en) Tone-conversion device for image, program, electronic camera, and tone-conversion method
US7072509B2 (en) Electronic image color plane reconstruction
US8320714B2 (en) Image processing apparatus, computer-readable recording medium for recording image processing program, and image processing method
JP4321064B2 (en) Image processing apparatus and image processing program
JPWO2006006373A1 (en) Image processing apparatus and computer program product
EP0739571A1 (en) Color wide dynamic range camera using a charge coupled device with mosaic filter
JP4196055B2 (en) Image processing method, image processing program, and image processing apparatus
JP4239483B2 (en) Image processing method, image processing program, and image processing apparatus
JP4239480B2 (en) Image processing method, image processing program, and image processing apparatus
WO2003101119A1 (en) Image processing method, image processing program, image processor
JP4239484B2 (en) Image processing method, image processing program, and image processing apparatus
JP4122082B2 (en) Signal processing apparatus and processing method thereof
JP2012100215A (en) Image processing device, imaging device, and image processing program
JP2004064227A (en) Video signal processing apparatus
JP2001086523A (en) Signal generating method and device and recording medium
JP2000050292A (en) Signal processing unit and its signal processing method

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): CN US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase