US20120057798A1 - Image processing apparatus - Google Patents
- Publication number: US20120057798A1 (application US 13/051,731)
- Authority: US (United States)
- Prior art keywords: image, texture, pixel, generation unit, sample
- Prior art date: Sep. 3, 2010
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T 11/001 — Texturing; colouring; generation of texture or colour (under G06T 11/00 — 2D [two-dimensional] image generation)
- G06T 5/77 — Retouching; inpainting; scratch removal (under G06T 5/00 — image enhancement or restoration)
- G06T 2207/20012 — Locally adaptive (under G06T 2207/20004 — adaptive image processing; G06T 2207/20 — special algorithmic details; G06T 2207/00 — indexing scheme for image analysis or image enhancement)
Definitions
- Embodiments described herein relate generally to an image processing apparatus.
- Digital TV broadcasting makes it possible to play back higher-resolution, higher-quality images than conventional analog TV broadcasting, and various techniques have been devised to further improve the image quality of such images.
- Texture expression is a factor that influences image quality. If, for example, texture deterioration or loss accompanies enlargement transformation of an image, the image quality deteriorates. For this reason, processing that generates a desired texture and adds it to an image is effective in improving image quality.
- FIG. 1 is a block diagram exemplifying an image processing apparatus according to a first embodiment;
- FIG. 2 is a flowchart exemplifying the operation of the image processing apparatus in FIG. 1;
- FIG. 3A is a graph exemplifying the luminance value distribution of an image before shading component/texture component separation;
- FIG. 3B is a graph exemplifying the luminance value distribution of a shading component separated from the image in FIG. 3A by the Center/Surround Retinex method;
- FIG. 3C is a graph exemplifying the luminance value distribution of a texture component separated from the image in FIG. 3A by the Center/Surround Retinex method;
- FIG. 3D is a graph exemplifying the luminance value distribution of a shading component separated from the image in FIG. 3A by the skeleton/texture separation method;
- FIG. 3E is a graph exemplifying the luminance value distribution of a texture component separated from the image in FIG. 3A by the skeleton/texture separation method;
- FIG. 4 is a view for explaining a search range;
- FIG. 5 is a view for explaining a template area;
- FIG. 6 is a view for explaining a technique of generating a texture image in the first embodiment;
- FIG. 7 is a block diagram exemplifying an image processing apparatus according to a second embodiment;
- FIG. 8 is a flowchart exemplifying the operation of the image processing apparatus in FIG. 7;
- FIG. 9 is a block diagram exemplifying an image processing apparatus according to a third embodiment;
- FIG. 10 is a flowchart exemplifying the operation of the image processing apparatus in FIG. 9;
- FIG. 11 is a block diagram exemplifying an image processing apparatus according to a fourth embodiment;
- FIG. 12 is a flowchart exemplifying the operation of the image processing apparatus in FIG. 11;
- FIG. 13 is a block diagram exemplifying a sample texture image generation unit in an image processing apparatus according to a fifth embodiment;
- FIG. 14 is a flowchart exemplifying the operation of the image processing apparatus according to the fifth embodiment;
- FIG. 15 is a block diagram exemplifying an image processing apparatus according to a sixth embodiment;
- FIG. 16 is a flowchart exemplifying the operation of the image processing apparatus in FIG. 15;
- FIG. 17 is a view for explaining a technique of generating a texture image in the sixth embodiment;
- FIG. 18 is a block diagram exemplifying an image processing apparatus according to a seventh embodiment;
- FIG. 19 is a flowchart exemplifying the operation of the image processing apparatus in FIG. 18;
- FIG. 20 is a block diagram exemplifying an image processing apparatus according to an eighth embodiment;
- FIG. 21 is a flowchart exemplifying the operation of the image processing apparatus in FIG. 20;
- FIG. 22 is a block diagram exemplifying an image processing apparatus according to a ninth embodiment;
- FIG. 23 is a flowchart exemplifying the operation of the image processing apparatus in FIG. 22; and
- FIG. 24 is a view for explaining a technique of generating a texture image in the ninth embodiment.
- The embodiments will be described below with reference to the views of the accompanying drawing.
- In general, according to one embodiment, an image processing apparatus includes a first generation unit, a second generation unit, and a combination unit. The first generation unit generates a sample texture image holding a texture component of a transform target image. The second generation unit generates a texture image larger than the sample texture image: for each processing target pixel, it searches a neighboring area at the corresponding position in the sample texture image for a pixel area similar to the processed pixel area near the processing target pixel, and assigns the processing target pixel a pixel value near the similar pixel area in accordance with the positional relationship between the processed pixel area and the processing target pixel. Further, the second generation unit searches for the similar pixel area based on a first similarity between pixel values in a pixel area in the neighboring area and pixel values in the processed pixel area, and on a determination result indicating whether each pixel in the neighboring area expresses the same object as that expressed by the processing target pixel. The combination unit generates a combined image by combining the texture image with a base image which holds a non-texture component of the transform target image and has the same size as the texture image.
- In the following embodiments, an "image" denotes a digital image having one or more components per pixel. Each embodiment exemplifies processing on luminance values; this processing can be replaced with processing on other kinds of components (for example, a color-difference component or each component in the RGB format), as needed.
- An image processing apparatus 1 according to the first embodiment includes a sample texture image generation unit 101, a texture image generation unit 102, and an image combination unit 104. A base image 105 and a transform target image 106 are input to the image processing apparatus 1; the image processing apparatus 1 processes the transform target image 106 and outputs a combined image 110. The base image 105 has a size larger than that of the transform target image 106 and the same size as that of the combined image 110.
- The base image 105 is an image holding shading in the transform target image 106. Here, shading indicates a gradual change pattern of luminance values. More specifically, the base image 105 may be an enlarged image of the transform target image 106 or an image (non-texture image) holding a rough change pattern of luminance values in the enlarged image. In this embodiment, the base image 105 is prepared in advance and input to the image processing apparatus 1.
- Alternatively, the base image 105 may be generated by eliminating a texture component from the enlarged image of the transform target image 106. As techniques of separating an image into a texture component and a shading component (non-texture component), the skeleton/texture separation method, the Center/Surround Retinex method, the ε-filter, and the like are known.
- FIGS. 3B and 3C respectively show the shading component and the texture component obtained by applying the Center/Surround Retinex method to the image shown in FIG. 3A. FIGS. 3D and 3E respectively show the shading component and the texture component obtained by applying the skeleton/texture separation method to the image shown in FIG. 3A. FIGS. 3A to 3E show one-dimensional luminance value distributions for the sake of simplicity; the abscissa represents the coordinate and the ordinate represents the luminance value. According to the skeleton/texture separation method, an edge with a high intensity is retained as a shading component. On the other hand, according to the Center/Surround Retinex method, an edge with a large luminance difference is retained as a texture component.
- The skeleton/texture separation method is a technique of separating an input image I into a skeleton component (corresponding to a non-texture component U) representing the rough structure of an object and a texture component (corresponding to a texture component V) representing minute oscillation on the surface of the object. Skeleton/texture separation methods are roughly classified into an addition separation type expressed by expression (1) and a multiplication separation type expressed by expression (2):

I(x, y) = U(x, y) + V(x, y)  (1)

I(x, y) = U(x, y) × V(x, y)  (2)

where (x, y) represents the position (coordinates) of a target pixel.
- For the multiplication separation type, the separation is performed in the logarithmic domain: f represents the logarithmic input image function obtained by logarithmically transforming the input image I, and u and v respectively represent the logarithmic skeleton function and the logarithmic texture function obtained by logarithmically transforming the skeleton component U and the texture component V. The separation is formulated as the minimization problem of expression (3):

(u, v) = arg min { J(u) + (1/(2γ)) ‖f − u − v‖²_X },  (u, v) ∈ X × G_δ  (3)

where δ represents a parameter for adjusting the upper limit of the G norm of the logarithmic texture component v; γ represents a parameter for adjusting the allowable range of the residual f − u − v; X represents an image function space; G represents an oscillation function space near a dual space of a bounded variation function space; and J(u) represents the Total Variation energy defined by expression (4):

J(u) = ∫ |∇u(x, y)| dx dy  (4)

P_{G_δ}(h) represents the orthogonal projection of a function h onto the partial space G_δ; the minimization is solved by iterating such projections. Finally, the texture component V is recovered from the logarithmic domain by exponentiation, as in expression (9) (the skeleton component U is recovered from u in the same manner):

V(x, y) = exp(v(x, y))  (9)
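- The patent's own minimization is solved in function spaces and is beyond a short example, but the structure of the two separation types can be illustrated compactly. The sketch below is a rough approximation, assuming scikit-image is available and using total-variation denoising (denoise_tv_chambolle) as a stand-in for the minimization of expressions (3) and (4); its weight parameter loosely plays the role of the bounds set by δ and γ.

```python
# Approximate skeleton/texture separation. TV denoising stands in for
# the projection-based solver; this illustrates the addition and
# multiplication types, not the patent's exact algorithm.
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def separate_addition_type(image, weight=0.1):
    """Split a grayscale image (float in [0, 1]) into (skeleton U,
    texture V) with I = U + V, as in expression (1)."""
    u = denoise_tv_chambolle(image, weight=weight)  # smooth, edge-preserving part
    v = image - u                                   # residual oscillation = texture
    return u, v

def separate_multiplication_type(image, weight=0.1):
    """Multiplication type: separate in the logarithmic domain, then
    exponentiate as in expression (9), so that I = U * V."""
    f = np.log(np.clip(image, 1e-6, None))  # logarithmic input image function f
    u = denoise_tv_chambolle(f, weight=weight)
    v = f - u
    return np.exp(u), np.exp(v)
```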
- The Center/Surround Retinex method is a technique of separating the input image I into an illumination component (corresponding to the non-texture component U) and a reflection component (corresponding to the texture component V), as indicated by expression (10):

I(x, y) = U(x, y) × V(x, y)  (10)

The illumination component U is estimated by expression (11):

U(x, y) = G_M(x, y) * I(x, y)  (11)

where G_M(x, y) represents a Gaussian filter having a filter size of M × M, and * represents a convolution integral. The Gaussian filter G_M(x, y) is represented by expression (12), with the normalization coefficient K chosen so that the filter coefficients sum to 1:

G_M(x, y) = K exp(−(x² + y²)/σ²)  (12)

The reflection component V is then estimated by expression (14), using the input image I and the illumination component U estimated by expression (11):

V_i(x, y) = I_i(x, y) / U_i(x, y),  i ∈ {R, G, B}  (14)

The Center/Surround Retinex method is implemented in the above manner.
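- The Center/Surround Retinex separation above reduces to a Gaussian blur and a pixel-wise division. The following is a minimal sketch assuming a positive-valued grayscale image; scipy's gaussian_filter stands in for the explicit M × M kernel G_M, and sigma is an illustrative parameter, not a value from the text.

```python
# Minimal Center/Surround Retinex separation per expressions (10)-(14).
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround_retinex(image, sigma=15.0):
    """Split a positive-valued image into (illumination U, reflection V)."""
    u = gaussian_filter(image, sigma=sigma)  # U = G_M * I (expression (11))
    v = image / np.maximum(u, 1e-6)          # V = I / U (expression (14)), guarded
    return u, v
```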
- The sample texture image generation unit 101 generates a sample texture image 107 based on the transform target image 106. More specifically, the sample texture image generation unit 101 eliminates a shading component from the transform target image 106 to obtain a texture component of the transform target image 106; this texture component corresponds to the sample texture image 107. It is possible to use, for example, the skeleton/texture separation method or the Center/Surround Retinex method to separate a texture component from the transform target image 106. Note that the sample texture image 107 has the same size as that of the transform target image 106; that is, the sample texture image 107 has a size smaller than that of the base image 105 and combined image 110.
- The texture image generation unit 102 generates a texture image 108 based on the sample texture image 107 from the sample texture image generation unit 101. The texture image 108 has the same size as that of the base image 105 and combined image 110; that is, the texture image 108 has a size larger than that of the sample texture image 107. A practical example of the operation of the texture image generation unit 102 will be described later.
- The image combination unit 104 generates the combined image 110 by combining the base image 105 with the texture image 108 from the texture image generation unit 102. A practical example of the operation of the image combination unit 104 will be described later.
- The image processing apparatus 1 reads the base image 105 (step S201) and the transform target image 106 (step S202). Note that in the operation example in FIG. 2, since the base image 105 is not used until step S206, step S201 can be executed at an arbitrary timing before step S206. The image processing apparatus 1 inputs the transform target image 106 read in step S202 to the sample texture image generation unit 101.
- The sample texture image generation unit 101 generates the sample texture image 107 by eliminating a shading component from the transform target image 106 read in step S202 (step S203). More specifically, the sample texture image generation unit 101 can generate the sample texture image 107 by separating the texture component V through application of the above skeleton/texture separation method, Center/Surround Retinex method, or the like to the transform target image 106. Note that if the normal direction and the illumination conditions in a local area in an image remain constant, an invariant relative to the diffuse reflection component can be obtained.
- The texture image generation unit 102 generates each pixel constituting the texture image 108 based on the sample texture image 107 generated in step S203 (the loop of steps S204 and S205). Specifically, the texture image generation unit 102 selects a pixel, of the pixels constituting the texture image 108, which has not yet been processed (i.e., a pixel to which no pixel value is assigned; referred to as a processing target pixel hereinafter). The texture image generation unit 102 determines, as a search range, an area near the position corresponding to the processing target pixel in the sample texture image 107. The texture image generation unit 102 then searches the search range for a pixel area (referred to as a similar area hereinafter) that is similar, in the change pattern of pixel values, to a pixel area which has already been processed (i.e., a pixel area to which pixel values are assigned; referred to as a template area hereinafter) and is located near the processing target pixel. The texture image generation unit 102 finds the corresponding pixel near the similar area in accordance with the positional relationship between the template area and the processing target pixel, and assigns the pixel value of this corresponding pixel to the processing target pixel.
- Suppose that a position (I_k, J_k) of a processing target pixel in the texture image 108 corresponds to a position (i_k, j_k) in the sample texture image 107; an area near the position (i_k, j_k) is determined as the search range. Letting the sample texture image 107 have a size of w × h [pixel] and the texture image 108 have a size of W × H [pixel], the position (i_k, j_k) can be derived by expression (15):

(i_k, j_k) = (I_k × w/W, J_k × h/H)  (15)

A search range is typically a pixel area centered on the position (i_k, j_k), for example a rectangular area having a size of 20 × 20 [pixel] or 40 × 40 [pixel]. Note that the shape and size of the search range are not specifically limited.
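- A small helper can make the mapping of expression (15) and the clamped search range concrete. In this sketch, the integer flooring and the clamping at the image border are assumptions (the text does not specify them), and win corresponds to the example sizes above.

```python
def search_range(Xk, Yk, W, H, w, h, win=20):
    """Map a target pixel (Xk, Yk) in the W x H texture image to the
    corresponding position in the w x h sample texture image, and return
    a win x win window around it, clamped to the sample's bounds."""
    xk = Xk * w // W           # expression (15), with assumed flooring
    yk = Yk * h // H
    half = win // 2
    x0, x1 = max(0, xk - half), min(w, xk + half)
    y0, y1 = max(0, yk - half), min(h, yk + half)
    return (x0, x1), (y0, y1)
```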
- The texture image generation unit 102 sequentially generates the pixels of the texture image 108 in, for example, raster scan order. In raster scan order, with regard to an arbitrary processing target pixel, the pixels above the processing target pixel, and the pixels on the same horizontal line to its left, have already been processed. The template area is constituted by the processed pixels included in a rectangular area centered on the processing target pixel and having a size of N × N [pixel].
- The texture image generation unit 102 searches the search range for a pixel area similar in the change pattern of pixel values to the template area. The similarity between a given pixel area and the template area can be evaluated by using, for example, the sum of squared differences (SSD); that is, the texture image generation unit 102 searches the search range for the pixel area exhibiting the minimum SSD relative to the template area. The texture image generation unit 102 then finds the corresponding pixel (the hatched pixel in the case in FIG. 6) near the similar area in accordance with the positional relationship between the template area and the processing target pixel, and assigns the pixel value (luminance value) of this corresponding pixel to the processing target pixel.
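- Putting the pieces together, the per-pixel synthesis can be sketched as below. This is a compact, unoptimized illustration; the seeding of the very first pixel, the border handling, and the parameter values are assumptions not specified in the text.

```python
import numpy as np

def generate_texture_image(sample, W, H, n=5, win=20):
    """Synthesize a W x H texture image from a smaller sample in raster
    scan order: match the causal n x n template around each target pixel
    (by SSD) against candidate patches inside a win x win search range
    centered on the corresponding sample position, then copy the center
    pixel value of the best match."""
    h, w = sample.shape
    out = np.zeros((H, W), dtype=float)
    half = n // 2
    for Y in range(H):
        for X in range(W):
            yk, xk = Y * h // H, X * w // W  # corresponding sample position
            if Y == 0 and X == 0:
                out[Y, X] = sample[yk, xk]   # nothing has been processed yet
                continue
            best_ssd, best_val = float("inf"), sample[yk, xk]
            for sy in range(max(half, yk - win // 2), min(h - half, yk + win // 2)):
                for sx in range(max(half, xk - win // 2), min(w - half, xk + win // 2)):
                    ssd = 0.0
                    for dy in range(-half, half + 1):
                        for dx in range(-half, half + 1):
                            if dy > 0 or (dy == 0 and dx >= 0):
                                continue      # causal template: processed pixels only
                            ty, tx = Y + dy, X + dx
                            if 0 <= ty < H and 0 <= tx < W:
                                d = out[ty, tx] - sample[sy + dy, sx + dx]
                                ssd += d * d
                    if ssd < best_ssd:
                        best_ssd, best_val = ssd, sample[sy, sx]
            out[Y, X] = best_val
    return out
```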
- As a technique of generating a large image from a small image, enlargement transformation such as the bilinear interpolation method or the cubic interpolation method is known. However, generating the texture image 108 by enlargement transformation may degrade, or lead to loss of, the high-frequency texture component of the sample texture image 107. That is, it is difficult to obtain a high-definition combined image 110 even by combining a texture image 108 generated by enlargement transformation with the base image 105. Therefore, this embodiment uses the above technique to generate the texture image 108 while holding the high-frequency texture component of the sample texture image 107.
- The image combination unit 104 generates the combined image 110 by combining the base image 105 read in step S201 with the texture image 108 generated in the loop of steps S204 and S205 (step S206). More specifically, the image combination unit 104 generates the combined image 110 according to expression (16) or (17):

O(x, y) = B(x, y) + T(x, y)  (16)

O(x, y) = B(x, y) × T(x, y)  (17)

where B(x, y), T(x, y), and O(x, y) respectively represent the pixel values at the position (x, y) in the base image 105, the texture image 108, and the combined image 110. When the sample texture image generation unit 101 generates the sample texture image 107 by using an addition separation type technique (for example, the addition type skeleton/texture separation method or the ε-filter), the image combination unit 104 generates the combined image 110 according to expression (16). In contrast, when the sample texture image generation unit 101 generates the sample texture image 107 by using a multiplication separation type technique (for example, the multiplication type skeleton/texture separation method or the Center/Surround Retinex method), the image combination unit 104 generates the combined image 110 according to expression (17).
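- In code, the combination step is a single pixel-wise operation. The sketch below assumes float images in [0, 1]; the final clipping is an assumption for display purposes, not something the text prescribes.

```python
import numpy as np

def combine(base, texture, multiplicative=False):
    """Combine per expressions (16)/(17): addition-type separation pairs
    with additive combination, multiplication-type with multiplication."""
    out = base * texture if multiplicative else base + texture
    return np.clip(out, 0.0, 1.0)  # keep values in a displayable range
```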
- The image processing apparatus 1 outputs the combined image 110 generated in step S206 to a display device (not shown) or the like (step S207), and terminates the processing.
- As described above, the image processing apparatus according to the first embodiment uses a base image which holds the shading (non-texture component) of a transform target image, and hence the combined image maintains the shading of the transform target image. In addition, since the image processing apparatus uses a texture image generated based on a sample texture image holding the direction information of the local texture of the transform target image, the directivity of the texture of the transform target image is maintained in the combined image. That is, the image processing apparatus according to the embodiment can obtain a combined image having a high-definition texture.
- An image processing apparatus 7 according to the second embodiment includes a base image generation unit 701, an image reduction unit 702, a sample texture image generation unit 101, a texture image generation unit 102, and an image combination unit 104. A transform target image 106 is input to the image processing apparatus 7; the image processing apparatus 7 processes the transform target image 106 and outputs a combined image 110. In this embodiment, the transform target image 106 has the same size as that of the base image 105, texture image 108, and combined image 110.
- The base image generation unit 701 generates the base image 105 based on the transform target image 106. Specifically, the base image generation unit 701 generates the base image 105 by separating a non-texture component from the transform target image 106 (that is, by eliminating a texture component); each of the techniques described above can be used to separate the non-texture component. Alternatively, the transform target image 106 may be used directly as the base image 105, in which case the base image generation unit 701 is unnecessary.
- The image reduction unit 702 generates a reduced image 109 by reduction transformation of the transform target image 106. If the size of the transform target image 106 is reduced to K times its original size (where K is an arbitrary real number satisfying 0 < K < 1.0), the frequency of the texture pattern held by the transform target image 106 increases to 1/K times. Obviously, the reduced image 109 has a size smaller than that of the transform target image 106. The sample texture image generation unit 101 generates a sample texture image 107 based on the reduced image 109; the sample texture image 107 has the same size as that of the reduced image 109.
- The image processing apparatus 7 reads the transform target image 106 (step S801) and inputs it to the base image generation unit 701 and the image reduction unit 702. The base image generation unit 701 generates the base image 105 based on the transform target image 106 read in step S801 (step S802). Note that in the operation example in FIG. 8, since the base image 105 is not used until step S206, step S802 can be executed at an arbitrary timing before step S206.
- The image reduction unit 702 generates the reduced image 109 by performing reduction transformation of the transform target image 106 read in step S801 (step S803). The sample texture image generation unit 101 generates the sample texture image 107 by eliminating a shading component from the reduced image 109 generated in step S803 (step S203). The image combination unit 104 generates the combined image 110 by combining the base image 105 generated in step S802 with the texture image 108 generated in the loop of steps S204 and S205 (step S206).
- As described above, the image processing apparatus according to the second embodiment generates a texture image based on a reduced image of the transform target image. Therefore, the image processing apparatus according to this embodiment can obtain a combined image with a texture pattern that has a higher frequency than, and is similar to, the texture pattern held by the transform target image. That is, the image processing apparatus according to the embodiment can obtain a combined image having a high-definition texture.
- An image processing apparatus 9 according to the third embodiment includes an image enlargement unit 901, a base image generation unit 701, a sample texture image generation unit 101, a texture image generation unit 102, and an image combination unit 104. A transform target image 106 is input to the image processing apparatus 9; the image processing apparatus 9 processes the transform target image 106 and outputs a combined image 110. In this embodiment, the transform target image 106 has the same size as that of the sample texture image 107.
- The image enlargement unit 901 generates an enlarged image 111 by performing enlargement transformation of the transform target image 106. The enlarged image 111 has the same size as that of the base image 105, texture image 108, and combined image 110. The base image generation unit 701 generates the base image 105 based on the enlarged image 111. Enlargement transformation is implemented by, for example, the nearest neighbor interpolation method, the linear interpolation method, or the cubic convolution method (bicubic interpolation method).
- The image enlargement unit 901 generates the enlarged image 111 by performing enlargement transformation of the transform target image 106 read in step S202 (step S1001). The base image generation unit 701 generates the base image 105 based on the enlarged image 111 generated in step S1001 (step S802). Note that since the base image 105 is not used until step S206, steps S1001 and S802 can be executed at arbitrary timings before step S206. The image combination unit 104 generates the combined image 110 by combining the base image 105 generated in step S802 with the texture image 108 generated in the loop of steps S204 and S205 (step S206).
- As described above, the image processing apparatus according to the third embodiment generates a base image based on an enlarged image of the transform target image. The transform target image holds a texture pattern having a higher frequency than that of the enlarged image. Therefore, the image processing apparatus according to this embodiment can obtain a combined image with a texture pattern that has a higher frequency than, and is similar to, the texture pattern of the enlarged image. That is, the image processing apparatus according to the embodiment can obtain a combined image having a high-definition texture.
- An image processing apparatus 11 according to the fourth embodiment includes an image sharpening unit 1101, a sample texture image generation unit 101, a texture image generation unit 102, and an image combination unit 104. A base image 105 and a transform target image 106 are input to the image processing apparatus 11; the image processing apparatus 11 processes the transform target image 106 and outputs a combined image 110. In this embodiment, the transform target image 106 has the same size as that of the sample texture image 107.
- The image sharpening unit 1101 generates a sharpened base image 1102 by sharpening the base image 105. The sharpened base image 1102 has the same size as that of the base image 105, texture image 108, and combined image 110. An arbitrary sharpening technique, such as the unsharp masking method, can be used.
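- As one concrete (and, per the text, arbitrary) choice, unsharp masking adds back a scaled high-frequency residual. The sketch below assumes a float grayscale image; sigma and amount are illustrative parameters, not values from the text.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, sigma=2.0, amount=1.0):
    """Sharpen by adding the difference between the image and its blur."""
    blurred = gaussian_filter(image, sigma=sigma)
    return image + amount * (image - blurred)
```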
- The image combination unit 104 generates the combined image 110 by combining the sharpened base image 1102 from the image sharpening unit 1101 with the texture image 108 from the texture image generation unit 102.
- The image sharpening unit 1101 generates the sharpened base image 1102 by sharpening the base image 105 read in step S201 (step S1201). Note that in the operation example in FIG. 12, since the sharpened base image 1102 is not used until step S206, steps S201 and S1201 can be executed at arbitrary timings before step S206. The image combination unit 104 generates the combined image 110 by combining the sharpened base image 1102 generated in step S1201 with the texture image 108 generated in the loop of steps S204 and S205 (step S206).
- As has been described above, the image processing apparatus according to the fourth embodiment generates a combined image by combining a texture image with a sharpened base image. Therefore, the image processing apparatus according to this embodiment can improve the sharpness of the texture and edges in the combined image.
- An image processing apparatus according to the fifth embodiment corresponds to an arrangement obtained by replacing the sample texture image generation unit 101 in the image processing apparatus according to each embodiment described above with a sample texture image generation unit 1301. The sample texture image generation unit 1301 includes a shading elimination unit 1302 and an amplitude adjustment unit 1303.
- The shading elimination unit 1302, like the sample texture image generation unit 101, eliminates a shading component from a transform target image 106 to obtain a texture component (referred to as a shading-eliminated image hereinafter) of the transform target image 106, and inputs the shading-eliminated image to the amplitude adjustment unit 1303. The amplitude adjustment unit 1303 generates a sample texture image 107 by adjusting the amplitude of the shading-eliminated image from the shading elimination unit 1302.
- The amplitude adjustment unit 1303 attenuates the luminance amplitude of the shading-eliminated image. More specifically, the amplitude adjustment unit 1303 adjusts the amplitude of the shading-eliminated image according to expression (18) or (19):

V′(x, y) = V(x, y)^a  (18)

V′(x, y) = a × V(x, y)  (19)

where V(x, y) represents the pixel value at the position (coordinates) (x, y) in the shading-eliminated image, and V′(x, y) represents the pixel value at the position (x, y) in the sample texture image 107. When the shading elimination unit 1302 generates the shading-eliminated image by using an addition separation type technique (for example, the addition type skeleton/texture separation method or the ε-filter), the amplitude adjustment unit 1303 generates the sample texture image 107 according to expression (19). In contrast, when the shading elimination unit 1302 generates the shading-eliminated image by using a multiplication separation type technique (for example, the multiplication type skeleton/texture separation method or the Center/Surround Retinex method), the amplitude adjustment unit 1303 generates the sample texture image 107 according to expression (18). Here, W_V and H_V represent the horizontal and vertical sizes [pixel] of the transform target image 106, and W_U and H_U the corresponding sizes [pixel] of the base image 105; the adjustment parameter a may be set in accordance with these sizes (that is, with the enlargement factor).
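- The amplitude adjustment itself is a one-line operation per separation type. A minimal sketch, assuming the texture component is a ratio around 1 in the multiplicative case and an oscillation around 0 in the additive case:

```python
import numpy as np

def adjust_amplitude(v, a, multiplicative):
    """Expression (18) (V' = V ** a) for multiplication-type components,
    expression (19) (V' = a * V) for addition-type components; with
    0 < a < 1 both attenuate the texture amplitude."""
    return np.power(v, a) if multiplicative else a * v
```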
- The shading elimination unit 1302 generates the shading-eliminated image by eliminating a shading component from the transform target image 106 read in step S202 (step S203). The amplitude adjustment unit 1303 generates the sample texture image 107 by adjusting the amplitude of the shading-eliminated image generated in step S203 (step S1401). The texture image generation unit 102 generates each pixel constituting the texture image 108 based on the sample texture image 107 generated in step S1401 (the loop of steps S204 and S205).
- As described above, the image processing apparatus according to the fifth embodiment generates a sample texture image by adjusting the amplitude of the texture component of the transform target image. Therefore, the image processing apparatus according to this embodiment can obtain a combined image which has the same frequency as that of the texture of the transform target image but gives a different impression.
- An image processing apparatus 15 according to the sixth embodiment includes a skeleton image generation unit 1501, a sample texture image generation unit 1503, a texture image generation unit 1505, and an image combination unit 104. A base image 105 and a transform target image 106 are input to the image processing apparatus 15; the image processing apparatus 15 processes the transform target image 106 and outputs a combined image 110.
- The skeleton image generation unit 1501 generates a base skeleton image 1502 by eliminating a texture component from the base image 105, and inputs the base skeleton image 1502 to the texture image generation unit 1505. More specifically, the skeleton image generation unit 1501 can eliminate a texture component from the base image 105 by using one of the techniques described above. For example, the skeleton image generation unit 1501 may generate, as the base skeleton image 1502, the non-texture component U obtained by using the skeleton/texture separation method, or the illumination component U obtained by using the Center/Surround Retinex method. Alternatively, the skeleton image generation unit 1501 may use the base image 105 directly as the base skeleton image 1502, in which case the skeleton image generation unit 1501 is unnecessary; that is, the base skeleton image is formed from only the non-texture image.
- The base skeleton image 1502 is used to determine the identity of an object. The base skeleton image 1502 therefore preferably does not include a fine density pattern expressing the texture on the surface of an object in the base image 105, while an edge expressing an object boundary is preferably sharp. For this purpose, the skeleton image generation unit 1501 may generate the base skeleton image 1502 based on the sharpened base image 1102 described above in place of the base image 105.
- The sample texture image generation unit 1503 generates a sample texture image 107 based on the transform target image 106, like the sample texture image generation unit 101 or the sample texture image generation unit 1301. The sample texture image generation unit 1503 also generates a sample skeleton image 1504, corresponding to a non-texture component of the transform target image 106, based on the transform target image 106. The sample texture image generation unit 1503 inputs the sample texture image 107 and the sample skeleton image 1504 to the texture image generation unit 1505.
- The sample skeleton image 1504 is likewise used to determine the identity of an object. The sample skeleton image 1504 therefore preferably does not include a fine density pattern expressing the texture on the surface of an object in the transform target image 106, while an edge expressing an object boundary is preferably sharp. For this purpose, the sample texture image generation unit 1503 may generate the sample skeleton image 1504 based on an image obtained by sharpening the transform target image 106, in place of the transform target image 106 itself.
- The base skeleton image 1502 and the sample skeleton image 1504 may be grayscale images or images having a plurality of color components in a color system such as RGB, YUV, or L*a*b*. For a skeleton image having a plurality of color components, the similarity between pixels can be expressed by using the squared distance or the absolute difference between the two points in the color space.
- The texture image generation unit 1505 generates a texture image 108 based on the sample texture image 107 from the sample texture image generation unit 1503. As will be described later, the texture image generation unit 1505 determines the identity of an object based on the base skeleton image 1502 from the skeleton image generation unit 1501 and the sample skeleton image 1504 from the sample texture image generation unit 1503, and generates the texture image 108 by using the determination result. The image combination unit 104 generates a combined image 110 by combining the base image 105 with the texture image 108 from the texture image generation unit 1505.
- The skeleton image generation unit 1501 generates the base skeleton image 1502 by eliminating a texture component from the base image 105 read in step S201 (step S1601). The sample texture image generation unit 1503 generates the sample skeleton image 1504 and the sample texture image 107 by separating a shading component and a texture component from the transform target image 106 read in step S202 (step S1602). Note that the series of processing in steps S201 and S1601 and the series of processing in steps S202 and S1602 can be executed in reverse order or in parallel.
- The texture image generation unit 1505 generates each pixel constituting the texture image 108 based on the sample texture image 107 generated in step S1602 (the loop of steps S1603 and S205). The texture image generation unit 1505 differs from the texture image generation unit 102 in that it searches for a similar area by using the identity determination result on an object.
- Specifically, the texture image generation unit 1505 determines the identity of an object with reference to the pixel (referred to as a reference pixel hereinafter) at the position corresponding to the processing target pixel in the base skeleton image 1502. The texture image generation unit 1505 determines whether the difference between each pixel included in a neighboring area at the position corresponding to the processing target pixel in the sample skeleton image 1504 and the reference pixel (for example, the squared difference or the absolute difference of the pixel values) is equal to or less than a threshold Th. If the difference between a given pixel and the reference pixel is equal to or less than the threshold Th, the texture image generation unit 1505 determines that the pixel expresses the same object as that expressed by the processing target pixel in the texture image 108; if the difference exceeds the threshold Th, it determines that the pixel does not express the same object.
- Since the skeleton images (the base skeleton image 1502 and the sample skeleton image 1504) hold rough changes in luminance value, they tend to exhibit a large difference between pixels expressing different objects. The texture image generation unit 1505 narrows down the search range to pixels determined to express the same object as that expressed by the processing target pixel in the texture image 108; in other words, it excludes from the search range pixels determined not to express the same object.
- Note that the identity determination result on an object (i.e., the similarity between the object expressed by the processing target pixel and the object expressed by each pixel included in the neighboring area) may be expressed not as a binary value but as a multilevel value (a real number between 0 and 1, inclusive). In this case, the texture image generation unit 1505 may search the search range for the pixel area that minimizes the weighted sum of the SSD relative to the template area and the total sum of the identity determination results. Alternatively, the texture image generation unit 1505 may assign each pixel in the search range a weight corresponding to its determination result; that is, it may search the search range for the pixel area that minimizes the weighted sum of squared differences relative to the template area.
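- The binary identity determination can be sketched as a mask over the search window; candidates where the mask is False are simply skipped during the SSD search (or, for multilevel results, folded into the weighted sum described above). The threshold value and the use of absolute differences here are illustrative assumptions.

```python
import numpy as np

def same_object_mask(sample_skeleton, base_skeleton, Y, X, yk, xk,
                     win=20, th=10.0):
    """Compare each pixel in the search window of the sample skeleton
    image against the reference pixel at (Y, X) in the base skeleton
    image; True marks pixels judged to express the same object."""
    h, w = sample_skeleton.shape
    ref = base_skeleton[Y, X]                       # reference pixel
    y0, y1 = max(0, yk - win // 2), min(h, yk + win // 2)
    x0, x1 = max(0, xk - win // 2), min(w, xk + win // 2)
    window = sample_skeleton[y0:y1, x0:x1]
    return np.abs(window - ref) <= th               # binary determination
```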
- The image combination unit 104 generates the combined image 110 by combining the base image 105 read in step S201 with the texture image 108 generated in the loop of steps S1603 and S205 (step S206).
- As described above, the image processing apparatus according to the sixth embodiment searches for a similar area by using the identity determination result on an object. Therefore, the image processing apparatus is robust against errors in which the texture of a different object is assigned to a processing target pixel near an object boundary, and easily avoids texture failure of the combined image due to the propagation of such errors. That is, the image processing apparatus according to the embodiment can obtain a high-definition combined image which accurately retains the texture of a transform target image including a plurality of objects.
- An image processing apparatus 18 according to the seventh embodiment includes a skeleton image generation unit 1501, an image reduction unit 1801, a sample texture image generation unit 101, a texture image generation unit 1505, and an image combination unit 104. A base image 105 and a transform target image 106 are input to the image processing apparatus 18; the image processing apparatus 18 processes the transform target image 106 and outputs a combined image 110.
- The skeleton image generation unit 1501 generates a base skeleton image 1502 by eliminating a texture component from the base image 105, and inputs the base skeleton image 1502 to the texture image generation unit 1505 and the image reduction unit 1801. The image reduction unit 1801 generates a sample skeleton image 1504 by performing reduction transformation of the base skeleton image 1502 from the skeleton image generation unit 1501, and inputs the sample skeleton image 1504 to the texture image generation unit 1505. Reduction transformation is implemented by, for example, the nearest neighbor interpolation method, the linear interpolation method, or the cubic convolution method (bicubic interpolation method). It is preferable to use reduction transformation that can generate the sample skeleton image 1504 with little blur.
- The texture image generation unit 1505 determines the identity of the object described above based on the base skeleton image 1502 from the skeleton image generation unit 1501 and the sample skeleton image 1504 from the image reduction unit 1801.
- The image reduction unit 1801 generates the sample skeleton image 1504 by performing reduction transformation of the base skeleton image 1502 generated in step S1601 (step S1901). Note that the series of processing in steps S201, S1601, and S1901 and the series of processing in steps S202 and S203 can be executed in reverse order or in parallel.
- As described above, the image processing apparatus according to the seventh embodiment generates a sample skeleton image by performing reduction transformation of a base skeleton image. Therefore, the image processing apparatus according to this embodiment need not generate a sample skeleton image based on the transform target image, and hence can select a texture component separation technique for the transform target image with priority given to improving the accuracy of the sample texture image.
- An image processing apparatus 20 according to the eighth embodiment includes a sample texture image generation unit 1503, an image enlargement unit 2001, a texture image generation unit 1505, and an image combination unit 104. A base image 105 and a transform target image 106 are input to the image processing apparatus 20; the image processing apparatus 20 processes the transform target image 106 and outputs a combined image 110.
- The sample texture image generation unit 1503 generates a sample texture image 107 and a sample skeleton image 1504 based on the transform target image 106. The sample texture image generation unit 1503 inputs the sample texture image 107 to the texture image generation unit 1505, and inputs the sample skeleton image 1504 to the image enlargement unit 2001 and the texture image generation unit 1505.
- The image enlargement unit 2001 generates a base skeleton image 1502 by performing enlargement transformation of the sample skeleton image 1504 from the sample texture image generation unit 1503. The base skeleton image 1502 has the same size as that of the base image 105 and combined image 110. The image enlargement unit 2001 inputs the base skeleton image 1502 to the texture image generation unit 1505; arbitrary enlargement transformation may be used.
- The texture image generation unit 1505 determines the identity of the object described above based on the base skeleton image 1502 from the image enlargement unit 2001 and the sample skeleton image 1504 from the sample texture image generation unit 1503. The image enlargement unit 2001 generates the base skeleton image 1502 by performing enlargement transformation of the sample skeleton image 1504 generated in step S1602 (step S2101).
- As described above, the image processing apparatus according to the eighth embodiment generates a base skeleton image by performing enlargement transformation of a sample skeleton image. Therefore, the image processing apparatus according to this embodiment applies texture component/non-texture component separation processing, which requires a relatively large amount of calculation, to the transform target image only once. That is, the image processing apparatus according to the embodiment can obtain a combined image in a short period of time (with a short delay).
- An image processing apparatus 22 according to the ninth embodiment includes a base feature amount extraction unit 2201, a sample feature amount extraction unit 2203, a sample texture image generation unit 101, a texture image generation unit 2205, and an image combination unit 104. A base image 105 and a transform target image 106 are input to the image processing apparatus 22; the image processing apparatus 22 processes the transform target image 106 and outputs a combined image 110.
- The base feature amount extraction unit 2201 obtains a base feature amount 2202 by extracting an image feature amount for each local area of the base image 105, and inputs the base feature amount 2202 to the texture image generation unit 2205. The base feature amount 2202 reflects the pixel value feature of each local area of the base image 105; it may be, for example, the average or variance of the histogram of pixel values in each local area of the base image 105, a co-occurrence matrix feature characterizing the texture pattern in each local area, or the like.
- The sample feature amount extraction unit 2203 obtains a sample feature amount 2204 by extracting an image feature amount for each local area of the transform target image 106, and inputs the sample feature amount 2204 to the texture image generation unit 2205. The sample feature amount 2204 reflects the pixel value feature of each local area of the transform target image 106. Note that the sample feature amount 2204 and the base feature amount 2202 are the same type of image feature amount.
- The texture image generation unit 2205 generates a texture image 108 based on the sample texture image 107 from the sample texture image generation unit 101. As will be described later, the texture image generation unit 2205 determines the identity of an object based on the base feature amount 2202 from the base feature amount extraction unit 2201 and the sample feature amount 2204 from the sample feature amount extraction unit 2203, and generates the texture image 108 by using the determination result. The image combination unit 104 generates the combined image 110 by combining the base image 105 with the texture image 108 from the texture image generation unit 2205.
- The base feature amount extraction unit 2201 obtains the base feature amount 2202 by extracting an image feature amount from the base image 105 read in step S201 (step S2301). The sample feature amount extraction unit 2203 obtains the sample feature amount 2204 by extracting an image feature amount from the sample texture image 107 generated in step S203 (step S2302). Note that the series of processing in steps S201 and S2301 and the series of processing in steps S202, S203, and S2302 can be executed in reverse order or in parallel.
- The texture image generation unit 2205 generates each pixel constituting the texture image 108 based on the sample texture image 107 generated in step S203 (the loop of steps S2303 and S205). The texture image generation unit 2205 differs from the texture image generation unit 1505 in that it determines the identity of an object based on the base feature amount 2202 and the sample feature amount 2204.
- Specifically, the texture image generation unit 2205 determines the identity of an object with reference to the feature amount (referred to as a reference feature amount hereinafter) at the position corresponding to the processing target pixel in the base feature amount 2202. The texture image generation unit 2205 determines whether the difference between each feature amount included in a neighboring area at the position corresponding to the processing target pixel in the sample feature amount 2204 and the reference feature amount (e.g., the squared difference or the absolute difference between the feature amounts) is equal to or less than a threshold Th′. If the difference between a given feature amount and the reference feature amount is equal to or less than the threshold Th′, the texture image generation unit 2205 determines that the corresponding pixel expresses the same object as that expressed by the processing target pixel in the texture image 108; if the difference exceeds the threshold Th′, it determines that the pixel does not express the same object.
- Since image feature amounts each reflect the pixel value feature of a local area in the image, a large difference tends to occur between local areas expressing different objects. The texture image generation unit 2205 narrows down the search range to pixels determined to express the same object as that expressed by the processing target pixel in the texture image 108; in other words, it excludes from the search range pixels determined not to express the same object.
- Note that the identity determination result on an object (i.e., the similarity between the object expressed by the processing target pixel and the object expressed by each pixel included in the neighboring area) may be expressed not as a binary value but as a multilevel value (a real number between 0 and 1, inclusive). In this case, the texture image generation unit 2205 may search the search range for the pixel area that minimizes the weighted sum of the SSD relative to the template area and the total sum of the identity determination results. Alternatively, the texture image generation unit 2205 may assign each pixel in the search range a weight corresponding to its determination result; that is, it may search the search range for the pixel area that minimizes the weighted sum of squared differences relative to the template area.
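- The ninth embodiment only changes what is compared: feature vectors instead of skeleton pixel values. As a sketch, a local mean/variance pair is one simple stand-in for the histogram or co-occurrence features mentioned above; the feature choice, window size, and threshold are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_features(image, size=9):
    """Per-pixel local mean and variance over a size x size area."""
    mean = uniform_filter(image, size=size)
    var = np.maximum(uniform_filter(image * image, size=size) - mean * mean, 0.0)
    return np.stack([mean, var], axis=-1)

def same_object_by_features(sample_feat, ref_feat, th=1.0):
    """Threshold the squared distance between each feature vector and
    the reference feature vector (Th' in the text)."""
    d2 = np.sum((sample_feat - ref_feat) ** 2, axis=-1)
    return d2 <= th * th
```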
- The image combination unit 104 generates the combined image 110 by combining the base image 105 read in step S201 with the texture image 108 generated in the loop of steps S2303 and S205 (step S206).
- As described above, the image processing apparatus according to the ninth embodiment searches for a similar area by using the identity determination result on an object. Therefore, the image processing apparatus is robust against errors in which the texture of a different object is assigned to a processing target pixel near an object boundary, and avoids texture failure of the combined image due to the propagation of such errors. That is, the image processing apparatus according to the embodiment can obtain a high-definition combined image which accurately retains the texture of a transform target image including a plurality of objects.
- Each embodiment described above can be implemented by using a general-purpose computer as basic hardware. Programs implementing the processing in each embodiment described above may be provided by being stored in a computer-readable storage medium. The programs are stored in the storage medium as files in an installable or executable format. The storage medium can take any form, for example, a magnetic disk, an optical disc (e.g., a CD-ROM, CD-R, or DVD), a magneto-optical disc (e.g., an MO), or a semiconductor memory, as long as it is a computer-readable storage medium.
Description
- This application is based upon and claims the benefit of priority from Japanese Patent Application No. 2010-198185, filed Sep. 3, 2010; the entire contents of which are incorporated herein by reference.
- Embodiments described herein relate generally to an image processing apparatus.
- Digital TV broadcasting allows to play back high-resolution, high-quality images as compared with conventional analog TV broadcasting. In addition, various techniques have been devised to further improve the image quality of such images.
- A texture expression is a factor that influences image quality. If, for example, texture deterioration or loss occurs accompanying enlargement transformation of an image, the image quality deteriorates. For this reason, the processing of generating a desired texture and adding it to an image is effective in improving the image quality.
- FIG. 1 is a block diagram exemplifying an image processing apparatus according to a first embodiment;
- FIG. 2 is a flowchart exemplifying the operation of the image processing apparatus in FIG. 1;
- FIG. 3A is a graph exemplifying the luminance value distribution of an image before shading component-texture component separation;
- FIG. 3B is a graph exemplifying the luminance value distribution of a shading component separated from the image in FIG. 3A by the Center/Surround Retinex method;
- FIG. 3C is a graph exemplifying the luminance value distribution of a texture component separated from the image in FIG. 3A by the Center/Surround Retinex method;
- FIG. 3D is a graph exemplifying the luminance value distribution of a shading component separated from the image in FIG. 3A by the skeleton/texture separation method;
- FIG. 3E is a graph exemplifying the luminance value distribution of a texture component separated from the image in FIG. 3A by the skeleton/texture separation method;
- FIG. 4 is a view for explaining a search range;
- FIG. 5 is a view for explaining a template area;
- FIG. 6 is a view for explaining a technique of generating a texture image in the first embodiment;
- FIG. 7 is a block diagram exemplifying an image processing apparatus according to a second embodiment;
- FIG. 8 is a flowchart exemplifying the operation of the image processing apparatus in FIG. 7;
- FIG. 9 is a block diagram exemplifying an image processing apparatus according to a third embodiment;
- FIG. 10 is a flowchart exemplifying the operation of the image processing apparatus in FIG. 9;
- FIG. 11 is a block diagram exemplifying an image processing apparatus according to a fourth embodiment;
- FIG. 12 is a flowchart exemplifying the operation of the image processing apparatus in FIG. 11;
- FIG. 13 is a block diagram exemplifying a sample texture image generation unit in an image processing apparatus according to a fifth embodiment;
- FIG. 14 is a flowchart exemplifying the operation of the image processing apparatus according to the fifth embodiment;
- FIG. 15 is a block diagram exemplifying an image processing apparatus according to a sixth embodiment;
- FIG. 16 is a flowchart exemplifying the operation of the image processing apparatus in FIG. 15;
- FIG. 17 is a view for explaining a technique of generating a texture image in the sixth embodiment;
- FIG. 18 is a block diagram exemplifying an image processing apparatus according to a seventh embodiment;
- FIG. 19 is a flowchart exemplifying the operation of the image processing apparatus in FIG. 18;
- FIG. 20 is a block diagram exemplifying an image processing apparatus according to an eighth embodiment;
- FIG. 21 is a flowchart exemplifying the operation of the image processing apparatus in FIG. 20;
- FIG. 22 is a block diagram exemplifying an image processing apparatus according to a ninth embodiment;
- FIG. 23 is a flowchart exemplifying the operation of the image processing apparatus in FIG. 22; and
- FIG. 24 is a view for explaining a technique of generating a texture image in the ninth embodiment.
- The embodiments will be described below with reference to the views of the accompanying drawings.
- In general, according to one embodiment, an image processing apparatus includes a first generation unit, a second generation unit, and a combination unit. The first generation unit generates a sample texture image holding a texture component of a transform target image. The second generation unit generates a texture image larger than the sample texture image. For each processing target pixel, it searches a neighboring area, at the position in the sample texture image corresponding to the processing target pixel, for a pixel area similar to a processed pixel area near the processing target pixel, and assigns the processing target pixel a pixel value near the similar pixel area in accordance with the positional relationship between the processed pixel area and the processing target pixel. Further, the second generation unit searches for the similar pixel area based on a first similarity between pixel values in a candidate pixel area in the neighboring area and pixel values in the processed pixel area, and on a determination result indicating whether each pixel in the neighboring area expresses the same object as that expressed by the processing target pixel. The combination unit generates a combined image by combining the texture image with a base image which holds a non-texture component of the transform target image and has the same size as that of the texture image.
- In the following description, the same reference numerals denote the same parts in each embodiment, and a description of them will be omitted. In each embodiment, an “image” indicates a digital image having one or more components per pixel. For the sake of simplicity, each embodiment will exemplify the processing for luminance values. However, it is possible to replace this processing with processing for other kinds of components (for example, a color difference component and each component in the RGB format), as needed.
- As shown in FIG. 1, an image processing apparatus 1 according to the first embodiment includes a sample texture image generation unit 101, a texture image generation unit 102, and an image combination unit 104. A base image 105 and a transform target image 106 are input to the image processing apparatus 1. The image processing apparatus 1 processes the transform target image 106 and outputs a combined image 110.
- The base image 105 has a size larger than that of the transform target image 106 and the same size as that of the combined image 110. The base image 105 is an image holding the shading of the transform target image 106. In this case, shading indicates a gradual change pattern of luminance values. More specifically, the base image 105 may be an enlarged image of the transform target image 106 or an image (non-texture image) holding a rough change pattern of the luminance values in the enlarged image. In this embodiment, the base image 105 is prepared in advance and input to the image processing apparatus 1.
- For example, it is possible to generate the base image 105 by eliminating a texture component from the enlarged image of the transform target image 106. As techniques of separating a texture component and a shading component (non-texture component) from an image, the skeleton/texture separation method, the Center/Surround Retinex method, the ε-filter, and the like are known.
- FIGS. 3B and 3C respectively show the shading component and the texture component obtained by applying the Center/Surround Retinex method to the image shown in FIG. 3A. FIGS. 3D and 3E respectively show the shading component and the texture component obtained by applying the skeleton/texture separation method to the image shown in FIG. 3A. Note that FIGS. 3A to 3E show one-dimensional luminance value distributions for the sake of simplicity. In FIGS. 3A to 3E, the abscissa represents the coordinates and the ordinate the luminance values. According to the skeleton/texture separation method, an edge with a high intensity is retained as a shading component. On the other hand, according to the Center/Surround Retinex method, an edge with a large luminance difference is retained as a texture component.
- The skeleton/texture separation method will be briefly described below by using expressions.
- The skeleton/texture separation method is a technique of separating an input image I into a skeleton component (corresponding to a non-texture component U) representing a rough structure of an object and a texture component (corresponding to a texture component V) representing minute oscillation on the surface of the object. Skeleton/texture separation methods are roughly classified into an addition separation type expressed by expression (1) given below and a multiplication separation type expressed by expression (2) given below.
I(x, y) ≈ U(x, y) + V(x, y) (1)

I(x, y) ≈ U(x, y) × V(x, y) (2)

- In expressions (1) and (2), (x, y) represents the position (coordinates) of a target pixel. For the sake of simplicity, the skeleton/texture separation method of the multiplication separation type will be described below, and a description of the skeleton/texture separation method of the addition separation type will be omitted. The skeleton/texture separation method of the multiplication separation type can be handled as the minimization problem expressed by expression (3) given below:

inf_{(u, v) ∈ X × G_μ} { J(u) + (1/(2γ))‖f − u − v‖²_X }, G_μ = { v ∈ G : ‖v‖_G ≤ μ } (3)
-
J(u) = ∫‖∇u‖ dxdy (4)

- It is possible to solve the energy minimization problem represented by expression (3) by using the following projection algorithm, steps [1] to [3] (expressions (5) to (7)):
- [1] Initialization:

u^(0) = v^(0) = 0 (5)

- [2] Iterations:

v^(n+1) = P_{G_μ}(f − u^(n)), u^(n+1) = f − v^(n+1) − P_{G_γ}(f − v^(n+1)) (6)

- In step [2] of the projection algorithm (expression (6)), P_{G_λ}(h) represents the orthogonal projection of a function h onto the subspace G_λ.
- [3] Convergence test:

max(|u^(n+1) − u^(n)|, |v^(n+1) − v^(n)|) ≤ ε (7)

- If expression (7) is satisfied, stop the iteration.
-
U(x,y)=exp(u(x,y)) (8) -
V(x,y)=exp(v(x, y)) (9) - The skeleton/texture separation method of the multiplication separation type is implemented in the above manner. The Center/Surround Retinex method will be briefly described next by using expressions. The Center/Surround Retinex method is a technique of separating the input image I into an illumination component (corresponding to the non-texture component U) and a reflection component (corresponding to the texture component V), as indicated by expression (10):
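- For illustration only, the iteration of expressions (5) to (9) can be written as a short loop. The following Python sketch is not the implementation of the embodiment: the orthogonal projections P_{G_μ} and P_{G_γ} are represented by an assumed helper project_G, which in practice requires an inner iterative solver (for example, Chambolle's projection algorithm) that is omitted here.

```python
import numpy as np

def project_G(h, radius):
    """Assumed helper: orthogonal projection of h onto the G-norm ball of
    the given radius. A real implementation needs an inner iterative solver
    (e.g., Chambolle's projection algorithm); this is a placeholder only."""
    raise NotImplementedError

def skeleton_texture_separation(I, mu, gamma, eps=1e-3, max_iter=100):
    """Multiplication-type skeleton/texture separation per expressions (2)-(9).

    I is a float array with strictly positive pixel values.
    Returns (U, V) with I approximately equal to U * V.
    """
    f = np.log(I)                          # logarithmic input image
    u = np.zeros_like(f)                   # expression (5)
    v = np.zeros_like(f)                   # expression (5)
    for _ in range(max_iter):
        v_new = project_G(f - u, mu)                        # expression (6)
        u_new = f - v_new - project_G(f - v_new, gamma)     # expression (6)
        converged = max(np.abs(u_new - u).max(),
                        np.abs(v_new - v).max()) <= eps     # expression (7)
        u, v = u_new, v_new
        if converged:
            break
    return np.exp(u), np.exp(v)            # expressions (8) and (9)
```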
- The Center/Surround Retinex method will be briefly described next by using expressions. The Center/Surround Retinex method is a technique of separating the input image I into an illumination component (corresponding to the non-texture component U) and a reflection component (corresponding to the texture component V), as indicated by expression (10):
input image I = illumination component U × reflection component V (10)

- The illumination component U is estimated by

U_i(x, y) = G_M(x, y) * I_i(x, y), i ∈ {R, G, B} (11)

- In expression (11), G_M(x, y) represents a Gaussian filter having a filter size of M×M, and * represents a convolution integral. The Gaussian filter G_M(x, y) is represented by

G_M(x, y) = K exp(−(x² + y²)/(2σ²)) (12)
- In expression (12), σ represents a standard deviation, and K is a value satisfying expression (13):

∫∫ G(x, y) dxdy = 1 (13)

- The reflection component V is estimated by expression (14) by using the input image I and the illumination component U estimated by expression (11).

V_i(x, y) = I_i(x, y) / U_i(x, y), i ∈ {R, G, B} (14)
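- As a hedged illustration, expressions (10) to (14) reduce to a Gaussian low-pass estimate of the illumination followed by a per-pixel division. The sketch below uses scipy's Gaussian filter, whose kernel is normalized as in expression (13); the standard deviation value is an arbitrary example, not one prescribed by the embodiment.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround_retinex(I, sigma=15.0):
    """Separate one channel I into illumination U and reflection V (I = U * V).
    For color images, apply per channel (i in {R, G, B}) as in expression (11)."""
    I = np.asarray(I, dtype=np.float64)
    U = gaussian_filter(I, sigma)       # expression (11): convolution with G
    V = I / np.maximum(U, 1e-6)         # expression (14), guarded against /0
    return U, V
```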
- The sample texture
image generation unit 101 generates asample texture image 107 based on thetransform target image 106. More specifically, the sample textureimage generation unit 101 eliminates a shading component from thetransform target image 106 to obtain a texture component of thetransform target image 106. This texture component corresponds to thesample texture image 107. It is possible to use, for example, the skeleton/texture separation method or the Center/Surround Retinex method to separate a texture component from thetransform target image 106. Note that thesample texture image 107 has the same size as that of thetransform target image 106. That is, thesample texture image 107 has a size smaller than that of thebase image 105 and combinedimage 110. - The texture
image generation unit 102 generates atexture image 108 based on thesample texture image 107 from the sample textureimage generation unit 101. Thetexture image 108 has the same size as that of thebase image 105 and combinedimage 110. That is, thetexture image 108 has a size larger than that of thesample texture image 107. Note that a practical example of the operation of the textureimage generation unit 102 will be described later. - The
image combination unit 104 generates the combinedimage 110 by combining thebase image 105 with thetexture image 108 from the textureimage generation unit 102. Note that a practical example of the operation of theimage combination unit 104 will be described later. - An example of the operation of the
image processing apparatus 1 will be described below with reference toFIG. 2 . - The
- The image processing apparatus 1 reads the base image 105 (step S201) and the transform target image 106 (step S202). Note that in the operation example in FIG. 2, since the base image 105 is not used until step S206, it is possible to execute step S201 at an arbitrary timing before step S206. The image processing apparatus 1 inputs the transform target image 106 read in step S202 to the sample texture image generation unit 101.
- The sample texture image generation unit 101 generates the sample texture image 107 by eliminating a shading component from the transform target image 106 read in step S202 (step S203). More specifically, the sample texture image generation unit 101 can generate the sample texture image 107 by separating the texture component V by applying the above skeleton/texture separation method, Center/Surround Retinex method, or the like to the transform target image 106. Note that if the normal direction and illumination conditions in a local area in an image remain constant, an invariant can be obtained relative to a diffuse reflection component.
- The texture image generation unit 102 generates each pixel constituting the texture image 108 based on the sample texture image 107 generated in step S203 (the loop of steps S204 and S205).
- More specifically, (a) the texture image generation unit 102 selects a pixel, of the pixels constituting the texture image 108, which has not been processed (i.e., a pixel to which no pixel value is assigned) (to be referred to as a processing target pixel hereinafter). (b) The texture image generation unit 102 determines, as a search range, an area near the position corresponding to the processing target pixel in the sample texture image 107. (c) The texture image generation unit 102 searches the search range for a pixel area (to be referred to as a similar area hereinafter) that is similar in the change pattern of pixel values to a pixel area which has been processed (i.e., a pixel area to which pixel values are assigned) (to be referred to as a template area hereinafter) and is located near the processing target pixel. (d) The texture image generation unit 102 finds a corresponding pixel near the similar area in accordance with the positional relationship between the template area and the processing target pixel, and assigns the pixel value of the corresponding pixel to the processing target pixel.
- For example, as shown in FIG. 4, if a position (Ik, Jk) of a processing target pixel in the texture image 108 corresponds to a position (ik, jk) in the sample texture image 107, an area near the position (ik, jk) is determined as the search range. In the case in FIG. 4, the sample texture image 107 has a size of w×h [pixel], and the texture image 108 has a size of W×H [pixel]. The position (ik, jk) can be derived by

(ik, jk) = (Ik × w/W, Jk × h/H) (15)

- A search range is typically a pixel area centered on the position (ik, jk). For example, a search range is a rectangular area having a size of 20×20 [pixel] or 40×40 [pixel]. Note that the shape and size of the search range are not specifically limited.
- The texture image generation unit 102 sequentially generates the pixels of the texture image 108 in raster scan order, for example. In the case of raster scan order, with regard to an arbitrary processing target pixel, the pixels above the processing target pixel and the pixels which are located on the same horizontal line as the processing target pixel and occupy its left side have already been processed. As indicated by the hatched pixel area in FIG. 5, the template area is constituted by the processed pixels included in a rectangular area centered on the processing target pixel and having a size of N×N [pixel].
- As shown in FIG. 6, the texture image generation unit 102 searches the search range for a pixel area similar in the change pattern of pixel values to the template area. It is possible to evaluate the similarity between a given pixel area and the template area by using, for example, the sum of squared differences (SSD). That is, the texture image generation unit 102 searches the search range for the pixel area exhibiting the minimum SSD relative to the template area. The texture image generation unit 102 then finds a corresponding pixel (the hatched pixel in the case in FIG. 6) near the similar area in accordance with the positional relationship between the template area and the processing target pixel, and assigns the pixel value (luminance value) of this corresponding pixel to the processing target pixel.
- Note that as a technique of generating the high-resolution texture image 108 from the sample texture image 107, enlargement transformation such as the bilinear interpolation method or the cubic interpolation method is known. However, generating the texture image 108 by enlargement transformation may degrade or lead to loss of the high-frequency texture component of the sample texture image 107. That is, it is difficult to obtain the high-definition combined image 110 even by combining the texture image 108 generated by enlargement transformation with the base image 105. For this reason, this embodiment uses the above technique to generate the texture image 108 holding the high-frequency texture component of the sample texture image 107. A sketch of this generation technique is given below.
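- The per-pixel generation of steps (a) to (d) can be sketched directly. The code below is an unoptimized Python illustration under assumed parameters (the template size N and the search-range size S are examples); a practical implementation would add border handling and acceleration.

```python
import numpy as np

def synthesize_texture(sample, W, H, N=5, S=20):
    """Grow a W x H texture image from a smaller w x h sample texture image.

    For each target pixel in raster scan order, the search range is an
    S x S area of the sample centered on the mapped position of expression
    (15); the best match minimizes the SSD over the already processed part
    of the N x N template area.
    """
    h, w = sample.shape
    out = np.zeros((H, W), dtype=np.float64)
    half = N // 2
    for Jk in range(H):
        for Ik in range(W):
            ik, jk = Ik * w // W, Jk * h // H          # expression (15)
            best_ssd, best = None, (ik, jk)
            for j in range(max(half, jk - S // 2), min(h - half, jk + S // 2)):
                for i in range(max(half, ik - S // 2), min(w - half, ik + S // 2)):
                    ssd = 0.0
                    for dj in range(-half, 1):          # rows above and current row
                        for di in range(-half, half + 1):
                            if dj == 0 and di >= 0:     # unprocessed in raster order
                                break
                            J, I = Jk + dj, Ik + di
                            if 0 <= J < H and 0 <= I < W:
                                d = out[J, I] - sample[j + dj, i + di]
                                ssd += d * d
                    if best_ssd is None or ssd < best_ssd:
                        best_ssd, best = ssd, (i, j)
            out[Jk, Ik] = sample[best[1], best[0]]      # assign matched pixel value
    return out
```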
- The image combination unit 104 generates the combined image 110 by combining the base image 105 read in step S201 with the texture image 108 generated in the loop of steps S204 and S205 (step S206). More specifically, the image combination unit 104 generates the combined image 110 according to expression (16) or (17):

combined image 110 = base image 105 + texture image 108 (16)

combined image 110 = base image 105 × texture image 108 (17)

- When the sample texture image generation unit 101 generates the sample texture image 107 by using an addition separation type technique (for example, the addition type skeleton/texture separation method or the ε-filter), the image combination unit 104 generates the combined image 110 according to expression (16). In contrast, when the sample texture image generation unit 101 generates the sample texture image 107 by using a multiplication separation type technique (for example, the multiplication type skeleton/texture separation method or the Center/Surround Retinex method), the image combination unit 104 generates the combined image 110 according to expression (17).
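- In code form, the choice between expressions (16) and (17) is a one-line branch (a sketch; the multiplicative flag mirrors the separation type used to build the sample texture image):

```python
import numpy as np

def combine(base, texture, multiplicative=False):
    """Combined image per expression (16) (additive) or (17) (multiplicative)."""
    base, texture = np.asarray(base, float), np.asarray(texture, float)
    return base * texture if multiplicative else base + texture
```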
- The image processing apparatus 1 outputs the combined image 110 generated in step S206 to a display device (not shown) or the like (step S207), and terminates the processing.
- As described above, the image processing apparatus according to the first embodiment uses a base image which holds the shading (non-texture component) of the transform target image, and hence the combined image maintains the shading of the transform target image. In addition, since the image processing apparatus according to this embodiment uses a texture image generated from a sample texture image holding the direction information of the local texture of the transform target image, the directivity of the texture of the transform target image is maintained in the combined image. That is, the image processing apparatus according to the embodiment can obtain a combined image having a high-definition texture.
- As shown in FIG. 7, an image processing apparatus 7 according to the second embodiment includes a base image generation unit 701, an image reduction unit 702, a sample texture image generation unit 101, a texture image generation unit 102, and an image combination unit 104. A transform target image 106 is input to the image processing apparatus 7. The image processing apparatus 7 processes the transform target image 106 and outputs a combined image 110. Note that in this embodiment, the transform target image 106 has the same size as that of the base image 105, the texture image 108, and the combined image 110.
- The base image generation unit 701 generates the base image 105 based on the transform target image 106. For example, the base image generation unit 701 generates the base image 105 by separating a non-texture component from the transform target image 106 (that is, by eliminating a texture component). It is possible to use each of the above techniques to separate a non-texture component. Alternatively, the base image generation unit 701 may use the transform target image 106 directly as the base image 105. In this case, it is not necessary to use the base image generation unit 701.
- The image reduction unit 702 generates a reduced image 109 by reduction transformation of the transform target image 106. If the size of the transform target image 106 is reduced to K times the original (where K is an arbitrary real number satisfying 0<K<1.0), the frequency of the texture pattern held by the transform target image 106 increases to 1/K times. Obviously, the reduced image 109 has a size smaller than that of the transform target image 106.
- The sample texture image generation unit 101 generates a sample texture image 107 based on the reduced image 109. In this embodiment, the sample texture image 107 has the same size as that of the reduced image 109.
- An example of the operation of the image processing apparatus 7 will be described below with reference to FIG. 8.
- The image processing apparatus 7 reads the transform target image 106 (step S801). The image processing apparatus 7 inputs the transform target image 106 read in step S801 to the base image generation unit 701 and the image reduction unit 702.
- The base image generation unit 701 generates the base image 105 based on the transform target image 106 read in step S801 (step S802). Note that in the operation example in FIG. 8, since the base image 105 is not used until step S206, it is possible to execute step S802 at an arbitrary timing before step S206.
- The image reduction unit 702 generates the reduced image 109 by performing reduction transformation of the transform target image 106 read in step S801 (step S803). The sample texture image generation unit 101 generates the sample texture image 107 by eliminating a shading component from the reduced image 109 generated in step S803 (step S203).
- The image combination unit 104 generates the combined image 110 by combining the base image 105 generated in step S802 with the texture image 108 generated in the loop of steps S204 and S205 (step S206).
- As described above, the image processing apparatus according to the second embodiment generates a texture image based on a reduced image of the transform target image. Therefore, the image processing apparatus according to this embodiment can obtain a combined image with a texture pattern that has a higher frequency than, and is similar to, the texture pattern held by the transform target image. That is, the image processing apparatus according to the embodiment can obtain a combined image having a high-definition texture.
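- Putting the pieces together, a hypothetical sketch of the FIG. 8 flow using the helper functions sketched earlier (center_surround_retinex and synthesize_texture); the reduction factor K and the reduced-scale sigma are illustrative assumptions, not values from the embodiment:

```python
from scipy.ndimage import zoom

def second_embodiment(target, K=0.5, sigma=15.0):
    """Base image from the target itself; sample texture from a reduced copy,
    so the synthesized texture pattern has 1/K times the original frequency."""
    base, _ = center_surround_retinex(target, sigma)    # non-texture component
    reduced = zoom(target, K)                           # reduction transformation
    _, sample = center_surround_retinex(reduced, sigma * K)
    H, W = target.shape
    texture = synthesize_texture(sample, W, H)
    return base * texture                               # expression (17)
```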
- As shown in FIG. 9, an image processing apparatus 9 according to the third embodiment includes an image enlargement unit 901, a base image generation unit 701, a sample texture image generation unit 101, a texture image generation unit 102, and an image combination unit 104. A transform target image 106 is input to the image processing apparatus 9. The image processing apparatus 9 processes the transform target image 106 and outputs a combined image 110. Note that in this embodiment, the transform target image 106 has the same size as that of the sample texture image 107.
- The image enlargement unit 901 generates an enlarged image 111 by performing enlargement transformation of the transform target image 106. The enlarged image 111 has the same size as that of the base image 105, the texture image 108, and the combined image 110. The base image generation unit 701 generates the base image 105 based on the enlarged image 111.
- Enlargement transformation is implemented by, for example, the nearest neighbor interpolation method, the linear interpolation method, the cubic convolution method, or the bicubic interpolation method. In order to suppress deterioration in the image quality of the combined image 110, it is preferable to use enlargement transformation that can generate the enlarged image 111 with little blur.
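- For instance, a bicubic enlargement with Pillow can serve as the enlargement transformation (a sketch; any interpolator that keeps the enlarged image sharp is equally valid):

```python
from PIL import Image

def enlarge(img, scale):
    """Enlargement transformation by bicubic interpolation."""
    w, h = img.size
    return img.resize((int(w * scale), int(h * scale)), Image.BICUBIC)
```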
- An example of the operation of the image processing apparatus 9 will be described below with reference to FIG. 10.
- The image enlargement unit 901 generates the enlarged image 111 by performing enlargement transformation of the transform target image 106 read in step S202 (step S1001). The base image generation unit 701 generates the base image 105 based on the enlarged image 111 generated in step S1001 (step S802).
- Note that in the operation example in FIG. 10, since the base image 105 is not used until step S206, it is possible to execute steps S1001 and S802 at arbitrary timings before step S206.
- The image combination unit 104 generates the combined image 110 by combining the base image 105 generated in step S802 with the texture image 108 generated in the loop of steps S204 and S205 (step S206).
- As described above, the image processing apparatus according to the third embodiment generates a base image based on an enlarged image of the transform target image. The transform target image holds a texture pattern having a higher frequency than that of the enlarged image. Therefore, the image processing apparatus according to this embodiment can obtain a combined image with a texture pattern that has a higher frequency than, and is similar to, the texture pattern of the enlarged image. That is, the image processing apparatus according to the embodiment can obtain a combined image having a high-definition texture.
- As shown in FIG. 11, an image processing apparatus 11 according to the fourth embodiment includes an image sharpening unit 1101, a sample texture image generation unit 101, a texture image generation unit 102, and an image combination unit 104. A base image 105 and a transform target image 106 are input to the image processing apparatus 11. The image processing apparatus 11 processes the transform target image 106 and outputs a combined image 110. Note that in this embodiment, the transform target image 106 has the same size as that of the sample texture image 107.
- The image sharpening unit 1101 generates a sharpened base image 1102 by sharpening the base image 105. The sharpened base image 1102 has the same size as that of the base image 105, the texture image 108, and the combined image 110. For image sharpening, an arbitrary sharpening technique such as the unsharp masking method can be used. The image combination unit 104 generates the combined image 110 by combining the sharpened base image 1102 from the image sharpening unit 1101 with the texture image 108 from the texture image generation unit 102.
- An example of the operation of the image processing apparatus 11 will be described below with reference to FIG. 12.
- The image sharpening unit 1101 generates the sharpened base image 1102 by sharpening the base image 105 read in step S201 (step S1201). Note that in the operation example in FIG. 12, since the sharpened base image 1102 is not used until step S206, it is possible to execute steps S201 and S1201 at arbitrary timings before step S206.
- The image combination unit 104 generates the combined image 110 by combining the sharpened base image 1102 generated in step S1201 with the texture image 108 generated in the loop of steps S204 and S205 (step S206).
- As described above, the image processing apparatus according to the fourth embodiment generates a combined image by combining a texture image with a sharpened base image. Therefore, the image processing apparatus according to this embodiment can improve the sharpness of the texture and edges in the combined image.
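- As an illustration of the sharpening step, a minimal unsharp-masking sketch follows (radius and amount are example parameters, not values from the embodiment):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(base, radius=3.0, amount=1.0):
    """Sharpened base image 1102: add back the difference between the
    base image and its Gaussian-blurred copy to emphasize edges."""
    base = np.asarray(base, dtype=np.float64)
    return base + amount * (base - gaussian_filter(base, radius))
```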
- An image processing apparatus according to the fifth embodiment corresponds to an arrangement obtained by replacing the sample texture image generation unit 101 in the image processing apparatus according to each embodiment described above with a sample texture image generation unit 1301. For the sake of simplicity, assume that the sample texture image generation unit 101 in the image processing apparatus 1 according to the first embodiment is replaced by the sample texture image generation unit 1301. As shown in FIG. 13, the sample texture image generation unit 1301 includes a shading elimination unit 1302 and an amplitude adjustment unit 1303.
- The shading elimination unit 1302 eliminates a shading component from a transform target image 106, like the sample texture image generation unit 101, to obtain a texture component (to be referred to as a shading-eliminated image hereinafter) of the transform target image 106. The shading elimination unit 1302 inputs the shading-eliminated image to the amplitude adjustment unit 1303.
- The amplitude adjustment unit 1303 generates a sample texture image 107 by adjusting the amplitude of the shading-eliminated image from the shading elimination unit 1302. For example, the amplitude adjustment unit 1303 attenuates the luminance amplitude of the shading-eliminated image. More specifically, the amplitude adjustment unit 1303 adjusts the amplitude of the shading-eliminated image according to expression (18) or (19):
V′(x, y) = V(x, y)^a (18)

V′(x, y) = a · V(x, y) (19)

- When the shading elimination unit 1302 generates the shading-eliminated image by using an addition separation type technique (for example, the addition type skeleton/texture separation method or the ε-filter), the amplitude adjustment unit 1303 generates the sample texture image 107 according to expression (19). When the shading elimination unit 1302 generates the shading-eliminated image by using a multiplication separation type technique (for example, the multiplication type skeleton/texture separation method or the Center/Surround Retinex method), the amplitude adjustment unit 1303 generates the sample texture image 107 according to expression (18).
- In expressions (18) and (19), V(x, y) represents the pixel value at the position (coordinates) (x, y) in the shading-eliminated image, V′(x, y) represents the pixel value at the position (x, y) in the sample texture image 107, and a represents an amplitude adjustment coefficient. For example, when the amplitude of a texture component is to be attenuated to ½, the amplitude adjustment coefficient a = 0.5.
- It is possible to manually set the amplitude adjustment coefficient a to an arbitrary value or automatically set it to a specific value. When automatically setting the amplitude adjustment coefficient a in accordance with the size ratio between the transform target image 106 and the base image 105, it is possible to use expression (20) given below:
- In expression (20), W_V represents the horizontal size [pixel] of the transform target image 106, W_V′ represents the horizontal size [pixel] of the base image 105, H_V represents the vertical size [pixel] of the transform target image 106, and H_V′ represents the vertical size [pixel] of the base image 105.
- An example of the operation of the image processing apparatus according to this embodiment will be described below with reference to FIG. 14.
- The shading elimination unit 1302 generates a shading-eliminated image by eliminating a shading component from the transform target image 106 read in step S202 (step S203). The amplitude adjustment unit 1303 generates the sample texture image 107 by adjusting the amplitude of the shading-eliminated image generated in step S203 (step S1401).
- The texture image generation unit 102 generates each pixel constituting a texture image 108 based on the sample texture image 107 generated in step S1401 (the loop of steps S204 and S205).
- As described above, the image processing apparatus according to the fifth embodiment generates a sample texture image by adjusting the amplitude of the texture component of the transform target image. The image processing apparatus according to this embodiment can obtain a combined image which has the same frequency as that of the texture of the transform target image but gives a different impression.
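- In code form, expressions (18) and (19) are a two-line adjustment (a sketch; the additive flag selects the expression matching the separation type):

```python
import numpy as np

def adjust_amplitude(V, a, additive=False):
    """Amplitude adjustment of the shading-eliminated image.
    additive=True  -> expression (19): V' = a * V
    additive=False -> expression (18): V' = V ** a
    """
    V = np.asarray(V, dtype=np.float64)
    return a * V if additive else np.power(V, a)
```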
- As shown in FIG. 15, an image processing apparatus 15 according to the sixth embodiment includes a skeleton image generation unit 1501, a sample texture image generation unit 1503, a texture image generation unit 1505, and an image combination unit 104. A base image 105 and a transform target image 106 are input to the image processing apparatus 15. The image processing apparatus 15 processes the transform target image 106 and outputs a combined image 110.
- The skeleton image generation unit 1501 generates a base skeleton image 1502 by eliminating a texture component from the base image 105, and inputs the base skeleton image 1502 to the texture image generation unit 1505. More specifically, the skeleton image generation unit 1501 can eliminate a texture component from the base image 105 by using one of the techniques described above. For example, the skeleton image generation unit 1501 may generate the non-texture component U obtained by using the skeleton/texture separation method as the base skeleton image 1502, or may generate the illumination component U obtained by using the Center/Surround Retinex method as the base skeleton image 1502. If the base image 105 is a non-texture image, the skeleton image generation unit 1501 may use the base image 105 directly as the base skeleton image 1502. In this case, it is not necessary to use the skeleton image generation unit 1501; that is, the base skeleton image is formed from the non-texture image alone.
- As will be described later, the base skeleton image 1502 is used to determine the identity of an object. The base skeleton image 1502 therefore preferably does not include a fine density pattern expressing the texture on the surface of an object in the base image 105. In addition, in the base skeleton image 1502, an edge expressing an object boundary is preferably sharp. For this reason, the skeleton image generation unit 1501 may generate the base skeleton image 1502 based on the sharpened base image 1102 described above in place of the base image 105.
- The sample texture image generation unit 1503 generates a sample texture image 107 based on the transform target image 106, like the sample texture image generation unit 101 or the sample texture image generation unit 1301. The sample texture image generation unit 1503 also generates a sample skeleton image 1504 based on the transform target image 106. The sample skeleton image 1504 corresponds to a non-texture component of the transform target image 106. The sample texture image generation unit 1503 inputs the sample texture image 107 and the sample skeleton image 1504 to the texture image generation unit 1505.
- Like the base skeleton image 1502, the sample skeleton image 1504 is used to determine the identity of an object. The sample skeleton image 1504 therefore preferably does not include a fine density pattern expressing the texture on the surface of an object in the transform target image 106. In addition, in the sample skeleton image 1504, an edge expressing an object boundary is preferably sharp. For this reason, the sample texture image generation unit 1503 may generate the sample skeleton image 1504 based on an image obtained by sharpening the transform target image 106 in place of the transform target image 106 itself.
- The base skeleton image 1502 and the sample skeleton image 1504 may be grayscale images or images having a plurality of color components. With regard to color systems such as RGB, YUV, and L*a*b*, it is possible to generate a skeleton image by using one or two color components, or by using all three color components. When generating a skeleton image by using a plurality of color components, it is possible to apply the Center/Surround Retinex method or the skeleton/texture separation method to each color component. In addition, in a skeleton image having a plurality of color components, it is possible to express the similarity between pixels by using the squared distance between the two points in a color space or the absolute value of their difference.
- The texture image generation unit 1505 generates a texture image 108 based on the sample texture image 107 from the sample texture image generation unit 1503. As will be described later, the texture image generation unit 1505 determines the identity of an object based on the base skeleton image 1502 from the skeleton image generation unit 1501 and the sample skeleton image 1504 from the sample texture image generation unit 1503. The texture image generation unit 1505 generates the texture image 108 by using the determination result.
- The image combination unit 104 generates a combined image 110 by combining the base image 105 with the texture image 108 from the texture image generation unit 1505.
- An example of the operation of the image processing apparatus 15 will be described below with reference to FIG. 16.
- The skeleton image generation unit 1501 generates the base skeleton image 1502 by eliminating a texture component from the base image 105 read in step S201 (step S1601). The sample texture image generation unit 1503 generates the sample skeleton image 1504 and the sample texture image 107 by separating a shading component and a texture component from the transform target image 106 read in step S202 (step S1602). Note that it is possible to execute the series of processing in steps S201 and S1601 and the series of processing in steps S202 and S1602 in reverse order or in parallel.
- The texture image generation unit 1505 generates each pixel constituting the texture image 108 based on the sample texture image 107 generated in step S1602 (the loop of steps S1603 and S205).
- The texture image generation unit 1505 differs from the texture image generation unit 102 in that it searches for a similar area by using the identity determination result on an object.
- As shown in FIG. 17, the texture image generation unit 1505 determines the identity of an object with reference to the pixel (to be referred to as a reference pixel hereinafter) at the position corresponding to the processing target pixel in the base skeleton image 1502.
- For example, the texture image generation unit 1505 determines whether the similarity (for example, the square difference of the pixel values or the absolute value of the difference between the pixel values) between each pixel included in a neighboring area at the position corresponding to the processing target pixel in the sample skeleton image 1504 and the reference pixel is equal to or less than a threshold Th. If the similarity between a given pixel and the reference pixel is equal to or less than the threshold Th, the texture image generation unit 1505 determines that the corresponding pixel in the texture image 108 expresses the same object as that expressed by the processing target pixel. If the similarity between a given pixel and the reference pixel exceeds the threshold Th, the texture image generation unit 1505 determines that the corresponding pixel in the texture image 108 does not express the same object as that expressed by the processing target pixel.
- Since the skeleton images (the base skeleton image 1502 and the sample skeleton image 1504) hold only rough changes in luminance value, a large difference tends to occur between pixels expressing different objects.
- The texture image generation unit 1505 narrows down the search range to pixels determined to express the same object as that expressed by the processing target pixel in the texture image 108. In other words, the texture image generation unit 1505 excludes pixels determined not to express the same object as that expressed by the processing target pixel in the texture image 108.
- In addition, the identity determination result on an object (i.e., the similarity between the object expressed by the processing target pixel and the object expressed by each pixel included in the neighboring area) can be expressed by a multilevel value (a real number between or equal to 0 and 1) instead of a binary value (0 or 1). When expressing the identity determination result on an object with a multilevel value, the texture image generation unit 1505 may search the search range for the pixel area that minimizes the weighted sum of the SSD relative to the template area and the total sum of the identity determination results.
- Alternatively, when expressing the identity determination result on an object with a multilevel value, the texture image generation unit 1505 may assign each pixel in the search range a weight corresponding to its determination result. That is, the texture image generation unit 1505 may search the search range for the pixel area that minimizes the weighted sum of square differences relative to the template area. A sketch of this identity-aware search is given below.
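- A minimal sketch of the identity determination, assuming the binary variant with threshold Th (the function name and window handling are illustrative, not the patent's code):

```python
import numpy as np

def identity_mask(ref_value, sample_skeleton, j0, j1, i0, i1, th):
    """Binary object-identity determination over a search window.

    ref_value is the reference pixel value taken from the base skeleton
    image; the returned boolean array is True where a candidate pixel in
    the sample skeleton image is judged to express the same object.
    """
    window = sample_skeleton[j0:j1, i0:i1]
    return np.abs(window - ref_value) <= th
```

- During the SSD search, candidate positions where the mask is False are simply skipped; with a multilevel determination result, the absolute difference can instead be added to the SSD as a weighted penalty.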
- The image combination unit 104 generates the combined image 110 by combining the base image 105 read in step S201 with the texture image 108 generated in the loop of steps S1603 and S205 (step S206).
- As described above, the image processing apparatus according to the sixth embodiment searches for a similar area by using the identity determination result on an object. The image processing apparatus according to this embodiment is robust against an error caused by the texture of a false object being assigned to a processing target pixel near an object boundary, and easily avoids texture failure of the combined image due to the propagation of such an error. That is, the image processing apparatus according to the embodiment can obtain a high-definition combined image which accurately retains the texture of a transform target image including a plurality of objects.
- As shown in FIG. 18, an image processing apparatus 18 according to the seventh embodiment includes a skeleton image generation unit 1501, an image reduction unit 1801, a sample texture image generation unit 101, a texture image generation unit 1505, and an image combination unit 104. A base image 105 and a transform target image 106 are input to the image processing apparatus 18. The image processing apparatus 18 processes the transform target image 106 and outputs a combined image 110.
- The skeleton image generation unit 1501 generates a base skeleton image 1502 by eliminating a texture component from the base image 105. The skeleton image generation unit 1501 inputs the base skeleton image 1502 to the texture image generation unit 1505 and the image reduction unit 1801.
- The image reduction unit 1801 generates a sample skeleton image 1504 by performing reduction transformation of the base skeleton image 1502 from the skeleton image generation unit 1501. The image reduction unit 1801 inputs the sample skeleton image 1504 to the texture image generation unit 1505.
- Reduction transformation is implemented by, for example, the nearest neighbor interpolation method, the linear interpolation method, or the cubic convolution method (bicubic interpolation method). In order to prevent deterioration in the accuracy of the identity determination on an object, it is preferable to use reduction transformation that can generate the sample skeleton image 1504 with little blur.
- The texture image generation unit 1505 determines the identity of the object described above based on the base skeleton image 1502 from the skeleton image generation unit 1501 and the sample skeleton image 1504 from the image reduction unit 1801.
- An example of the operation of the image processing apparatus 18 will be described below with reference to FIG. 19.
- The image reduction unit 1801 generates the sample skeleton image 1504 by performing reduction transformation of the base skeleton image 1502 generated in step S1601 (step S1901). Note that it is possible to execute the series of processing in steps S201, S1601, and S1901 and the series of processing in steps S202 and S203 in reverse order or in parallel.
- As described above, the image processing apparatus according to the seventh embodiment generates a sample skeleton image by performing reduction transformation of a base skeleton image. Therefore, the image processing apparatus according to this embodiment need not generate a sample skeleton image from the transform target image, and hence it can select the texture component separation technique for the transform target image with priority given to improving the accuracy of the sample texture image.
- As shown in FIG. 20, an image processing apparatus 20 according to the eighth embodiment includes a sample texture image generation unit 1503, an image enlargement unit 2001, a texture image generation unit 1505, and an image combination unit 104. A base image 105 and a transform target image 106 are input to the image processing apparatus 20. The image processing apparatus 20 processes the transform target image 106 and outputs a combined image 110.
- The sample texture image generation unit 1503 generates a sample texture image 107 and a sample skeleton image 1504 based on the transform target image 106. The sample texture image generation unit 1503 inputs the sample texture image 107 to the texture image generation unit 1505, and inputs the sample skeleton image 1504 to the image enlargement unit 2001 and the texture image generation unit 1505.
- The image enlargement unit 2001 generates a base skeleton image 1502 by performing enlargement transformation of the sample skeleton image 1504 from the sample texture image generation unit 1503. The base skeleton image 1502 has the same size as that of the base image 105 and the combined image 110. The image enlargement unit 2001 inputs the base skeleton image 1502 to the texture image generation unit 1505. Note that the image enlargement unit 2001 may perform arbitrary enlargement transformation.
- The texture image generation unit 1505 determines the identity of the object described above based on the base skeleton image 1502 from the image enlargement unit 2001 and the sample skeleton image 1504 from the sample texture image generation unit 1503.
- An example of the operation of the image processing apparatus 20 will be described below with reference to FIG. 21.
- The image enlargement unit 2001 generates the base skeleton image 1502 by performing enlargement transformation of the sample skeleton image 1504 generated in step S1602 (step S2101).
- As described above, the image processing apparatus according to the eighth embodiment generates a base skeleton image by performing enlargement transformation of a sample skeleton image. Therefore, the image processing apparatus according to this embodiment applies the texture component/non-texture component separation processing, which requires a relatively large amount of calculation, to the transform target image only once. That is, the image processing apparatus according to the embodiment can obtain a combined image in a short period of time (with a short delay).
- As shown in FIG. 22, an image processing apparatus 22 according to the ninth embodiment includes a base feature amount extraction unit 2201, a sample feature amount extraction unit 2203, a sample texture image generation unit 101, a texture image generation unit 2205, and an image combination unit 104. A base image 105 and a transform target image 106 are input to the image processing apparatus 22. The image processing apparatus 22 processes the transform target image 106 and outputs a combined image 110.
- The base feature amount extraction unit 2201 obtains a base feature amount 2202 by extracting an image feature amount for each local area of the base image 105. The base feature amount 2202 reflects the pixel value feature of each local area of the base image 105. For example, the base feature amount 2202 may be the average or variance of the histogram of pixel values in each local area of the base image 105, a co-occurrence matrix feature characterizing a texture pattern in each local area, or the like. The base feature amount extraction unit 2201 inputs the base feature amount 2202 to the texture image generation unit 2205.
- The sample feature amount extraction unit 2203 obtains a sample feature amount 2204 by extracting an image feature amount for each local area of the transform target image 106. The sample feature amount 2204 reflects the pixel value feature of each local area of the transform target image 106. Note that the sample feature amount 2204 and the base feature amount 2202 are the same type of image feature amount. The sample feature amount extraction unit 2203 inputs the sample feature amount 2204 to the texture image generation unit 2205.
- The texture image generation unit 2205 generates a texture image 108 based on a sample texture image 107 from the sample texture image generation unit 101. As will be described later, the texture image generation unit 2205 determines the identity of an object based on the base feature amount 2202 from the base feature amount extraction unit 2201 and the sample feature amount 2204 from the sample feature amount extraction unit 2203. The texture image generation unit 2205 generates the texture image 108 by using the determination result.
- The image combination unit 104 generates the combined image 110 by combining the base image 105 with the texture image 108 from the texture image generation unit 2205.
- An example of the operation of the image processing apparatus 22 will be described below with reference to FIG. 23.
- The base feature amount extraction unit 2201 obtains the base feature amount 2202 by extracting an image feature amount from the base image 105 read in step S201 (step S2301). The sample feature amount extraction unit 2203 obtains the sample feature amount 2204 by extracting an image feature amount from the sample texture image 107 generated in step S203 (step S2302). Note that it is possible to execute the series of processing in steps S201 and S2301 and the series of processing in steps S202, S203, and S2302 in reverse order or in parallel.
- The texture image generation unit 2205 generates each pixel constituting the texture image 108 based on the sample texture image 107 generated in step S203 (the loop of steps S2303 and S205).
- The texture image generation unit 2205 differs from the texture image generation unit 1505 in that it determines the identity of an object based on the base feature amount 2202 and the sample feature amount 2204.
- As shown in FIG. 24, the texture image generation unit 2205 determines the identity of an object with reference to the feature amount (to be referred to as a reference feature amount hereinafter) at the position corresponding to the processing target pixel in the base feature amount 2202.
- For example, the texture image generation unit 2205 determines whether the similarity (e.g., the square difference between feature amounts or the absolute value of the difference between feature amounts) between each feature amount included in a neighboring area at the position corresponding to the processing target pixel in the sample feature amount 2204 and the reference feature amount is equal to or less than a threshold Th′. If the similarity between a given feature amount and the reference feature amount is equal to or less than the threshold Th′, the texture image generation unit 2205 determines that the corresponding pixel in the texture image 108 expresses the same object as that expressed by the processing target pixel. If the similarity between a given feature amount and the reference feature amount exceeds the threshold Th′, the texture image generation unit 2205 determines that the corresponding pixel in the texture image 108 does not express the same object as that expressed by the processing target pixel.
- Since the image feature amounts (the base feature amount 2202 and the sample feature amount 2204) each reflect the pixel value feature of a local area in the image, a large difference tends to occur between local areas expressing different objects.
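- Local mean and variance are the simplest of the feature amounts mentioned above. A sketch of computing them as per-pixel feature maps (the window size is an arbitrary example):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_mean_variance(img, size=9):
    """Per-pixel local mean and variance over a size x size window, usable
    as the base feature amount 2202 or the sample feature amount 2204."""
    img = np.asarray(img, dtype=np.float64)
    mean = uniform_filter(img, size)
    mean_of_sq = uniform_filter(img * img, size)
    return mean, np.maximum(mean_of_sq - mean * mean, 0.0)
```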
- The texture image generation unit 2205 narrows down the search range to pixels determined to express the same object as that expressed by the processing target pixel in the texture image 108. In other words, the texture image generation unit 2205 excludes pixels determined not to express the same object as that expressed by the processing target pixel in the texture image 108.
- In addition, the identity determination result on an object (i.e., the similarity between the object expressed by the processing target pixel and the object expressed by each pixel included in the neighboring area) can be expressed by a multilevel value (a real number between or equal to 0 and 1) instead of a binary value (0 or 1). When expressing the identity determination result on an object with a multilevel value, the texture image generation unit 2205 may search the search range for the pixel area that minimizes the weighted sum of the SSD relative to the template area and the total sum of the identity determination results.
- Alternatively, when expressing the identity determination result on an object with a multilevel value, the texture image generation unit 2205 may assign each pixel in the search range a weight corresponding to its determination result. That is, the texture image generation unit 2205 may search the search range for the pixel area that minimizes the weighted sum of square differences relative to the template area.
- The image combination unit 104 generates the combined image 110 by combining the base image 105 read in step S201 with the texture image 108 generated in the loop of steps S2303 and S205 (step S206).
- As described above, the image processing apparatus according to the ninth embodiment searches for a similar area by using the identity determination result on an object. The image processing apparatus according to this embodiment is robust against an error caused by the texture of a false object being assigned to a processing target pixel near an object boundary, and avoids texture failure of the combined image due to the propagation of such an error. That is, the image processing apparatus according to the embodiment can obtain a high-definition combined image which accurately retains the texture of a transform target image including a plurality of objects.
- Each embodiment described above can be implemented by using a general-purpose computer as basic hardware. The programs implementing the processing in each embodiment described above may be provided by being stored in a computer-readable storage medium. The programs are stored in the storage medium as files in an installable or executable form. The storage medium can take any form, for example, a magnetic disk, an optical disc (e.g., a CD-ROM, CD-R, or DVD), a magneto-optical disc (e.g., an MO), or a semiconductor memory, as long as it is a computer-readable storage medium. In addition, the programs implementing the processing in each embodiment described above may be stored on a computer (server) connected to a network such as the Internet and downloaded by a computer (client) via the network.
- While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.
Claims (10)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2010-198185 | 2010-09-03 | ||
JP2010198185A JP5159844B2 (en) | 2010-09-03 | 2010-09-03 | Image processing device |
Publications (2)
Publication Number | Publication Date |
---|---|
US20120057798A1 true US20120057798A1 (en) | 2012-03-08 |
US8144996B1 US8144996B1 (en) | 2012-03-27 |
Family
ID=45770774
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/051,731 Expired - Fee Related US8144996B1 (en) | 2010-09-03 | 2011-03-18 | Image processing apparatus |
Country Status (2)
Country | Link |
---|---|
US (1) | US8144996B1 (en) |
JP (1) | JP5159844B2 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108876873B (en) * | 2018-06-22 | 2022-07-19 | 上海闻泰电子科技有限公司 | Image generation method, device, equipment and storage medium |
WO2022180782A1 (en) * | 2021-02-26 | 2022-09-01 | 日本電信電話株式会社 | Information processing device, information processing method, and information processing program |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6005582A (en) * | 1995-08-04 | 1999-12-21 | Microsoft Corporation | Method and system for texture mapping images with anisotropic filtering |
US7995854B2 (en) * | 2008-03-28 | 2011-08-09 | Tandent Vision Science, Inc. | System and method for identifying complex tokens in an image |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3578921B2 (en) * | 1998-09-18 | 2004-10-20 | シャープ株式会社 | Image magnifier |
JP3625144B2 (en) * | 1999-01-18 | 2005-03-02 | 大日本スクリーン製造株式会社 | Image processing method |
JP4776705B2 (en) | 2009-03-06 | 2011-09-21 | 株式会社東芝 | Image processing apparatus and method |
- 2010-09-03: JP application JP2010198185A filed; granted as JP5159844B2 (status: not active, Expired - Fee Related)
- 2011-03-18: US application US13/051,731 filed; granted as US8144996B1 (status: not active, Expired - Fee Related)
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150086126A1 (en) * | 2012-04-27 | 2015-03-26 | Nec Corporation | Image processing method, image processing system, image processing device, and image processing program |
US9430816B2 (en) * | 2012-04-27 | 2016-08-30 | Nec Corporation | Image processing method, image processing system, image processing device, and image processing program capable of removing noises over different frequency bands, respectively |
US20140052454A1 (en) * | 2012-08-14 | 2014-02-20 | Mstar Semiconductor, Inc. | Method for determining format of linear pulse-code modulation data |
US20170041620A1 (en) * | 2014-04-18 | 2017-02-09 | Beijing Zhigu Rui Tuo Tech Co., Ltd. | Image Processing Methods and Image Processing Apparatuses |
US10123024B2 (en) * | 2014-04-18 | 2018-11-06 | Beijing Zhigu Rui Tuo Tech Co., Ltd | Image processing methods and image processing apparatuses |
US11257186B2 (en) * | 2016-10-26 | 2022-02-22 | Samsung Electronics Co., Ltd. | Image processing apparatus, image processing method, and computer-readable recording medium |
KR20180045645A (en) * | 2016-10-26 | 2018-05-04 | 삼성전자주식회사 | Image processing apparatus, method for processing image and computer-readable recording medium |
CN109891459A (en) * | 2016-10-26 | 2019-06-14 | 三星电子株式会社 | Image processing apparatus, image processing method and computer readable recording medium |
KR102384234B1 (en) * | 2016-10-26 | 2022-04-07 | 삼성전자주식회사 | Image processing apparatus, method for processing image and computer-readable recording medium |
CN109272538A (en) * | 2017-07-17 | 2019-01-25 | 腾讯科技(深圳)有限公司 | Picture transmission method and device |
CN110334657B (en) * | 2019-07-08 | 2020-08-25 | 创新奇智(北京)科技有限公司 | Training sample generation method and system for fisheye distortion image and electronic equipment |
CN110334657A (en) * | 2019-07-08 | 2019-10-15 | 创新奇智(北京)科技有限公司 | Training sample generation method and system for fisheye distortion image, and electronic equipment |
CN111696117A (en) * | 2020-05-20 | 2020-09-22 | 北京科技大学 | Loss function weighting method and device based on skeleton perception |
Also Published As
Publication number | Publication date |
---|---|
JP5159844B2 (en) | 2013-03-13 |
US8144996B1 (en) | 2012-03-27 |
JP2012058773A (en) | 2012-03-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8144996B1 (en) | Image processing apparatus | |
US10339643B2 (en) | Algorithm and device for image processing | |
Shin et al. | Radiance–reflectance combined optimization and structure-guided ℓ0-norm for single image dehazing | |
Lee et al. | Contrast enhancement based on layered difference representation of 2D histograms | |
KR102146560B1 (en) | Method and apparatus for adjusting image | |
JP4460839B2 (en) | Digital image sharpening device | |
CN110175964A (en) | Retinex image enhancement method based on Laplacian pyramid | |
US20180122051A1 (en) | Method and device for image haze removal | |
US8781247B2 (en) | Adding new texture to an image by assigning pixel values of similar pixels and obtaining a synthetic image | |
CN105740876B (en) | Image preprocessing method and device | |
Avanaki | Exact global histogram specification optimized for structural similarity | |
US8594446B2 (en) | Method for enhancing a digitized document | |
Guo et al. | Objective measurement for image defogging algorithms | |
US9077926B2 (en) | Image processing method and image processing apparatus | |
Liu et al. | Underwater image colour constancy based on DSNMF | |
Cao et al. | A brightness-preserving two-dimensional histogram equalization method based on two-level segmentation | |
EP2966613A1 (en) | Method and apparatus for generating a super-resolved image from an input image | |
US8977058B2 (en) | Image processing apparatus and method | |
CN109816006B (en) | Sea-sky-line detection method, device, and computer-readable storage medium | |
Storozhilova et al. | 2.5D extension of neighborhood filters for noise reduction in 3D medical CT images | |
Wang et al. | An airlight estimation method for image dehazing based on gray projection | |
Parihar | Histogram modification and DCT based contrast enhancement | |
US11200708B1 (en) | Real-time color vector preview generation | |
Saxena et al. | An efficient single image haze removal algorithm for computer vision applications | |
US11055852B2 (en) | Fast automatic trimap generation and optimization for segmentation refinement |
Legal Events
Date | Code | Title | Description
---|---|---|---
| AS | Assignment | Owner name: KABUSHIKI KAISHA TOSHIBA, JAPAN. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: SAITO, KANAKO; KANEKO, TOSHIMITSU; REEL/FRAME: 026310/0436. Effective date: 20110317
| STCF | Information on status: patent grant | Free format text: PATENTED CASE
| CC | Certificate of correction |
| FEPP | Fee payment procedure | Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
| FPAY | Fee payment | Year of fee payment: 4
| AS | Assignment | Owner name: TOSHIBA VISUAL SOLUTIONS CORPORATION, JAPAN. Free format text: CORPORATE SPLIT; ASSIGNOR: TOSHIBA LIFESTYLE PRODUCTS & SERVICES CORPORATION; REEL/FRAME: 040458/0859. Effective date: 20160713. Owner name: TOSHIBA LIFESTYLE PRODUCTS & SERVICES CORPORATION, Free format text: ASSIGNMENT OF PARTIAL RIGHTS; ASSIGNOR: KABUSHIKI KAISHA TOSHIBA; REEL/FRAME: 040458/0840. Effective date: 20160630
| AS | Assignment | Owner name: TOSHIBA VISUAL SOLUTIONS CORPORATION, JAPAN. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: KABUSHIKI KAISHA TOSHIBA; REEL/FRAME: 046881/0120. Effective date: 20180420
| FEPP | Fee payment procedure | Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
| LAPS | Lapse for failure to pay maintenance fees | Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
| STCH | Information on status: patent discontinuation | Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362
| FP | Lapsed due to failure to pay maintenance fee | Effective date: 20200327