
US20020005962A1 - Image process apparatus, image process method and storage medium - Google Patents


Info

Publication number
US20020005962A1
Authority
US
United States
Prior art keywords
color
under color
region
under
image data
Legal status
Granted
Application number
US08/883,572
Other versions
US6381034B2 (en
Inventor
Osamu Iwasaki
Naoji Otsuka
Kiichiro Takahashi
Hitoshi Nishikori
Current Assignee
Canon Inc
Original Assignee
Individual
Application filed by Individual
Assigned to CANON KABUSHIKI KAISHA. Assignment of assignors interest (see document for details). Assignors: IWASAKI, OSAMU; NISHIKORI, HITOSHI; OTSUKA, NAOJI; TAKAHASHI, KIICHIRO
Publication of US20020005962A1
Application granted
Publication of US6381034B2
Anticipated expiration
Current status: Expired - Fee Related

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 1/00: Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N 1/46: Colour picture communication systems
    • H04N 1/56: Processing of colour picture signals
    • H04N 1/60: Colour correction or control
    • H04N 1/6016: Conversion to subtractive colour signals
    • H04N 1/6022: Generating a fourth subtractive colour signal, e.g. under colour removal, black masking

Abstract

There is provided an image process method comprising an input step of inputting image data, and an under color process step of performing an under color process according to a color region to which the image data belongs, to generate a plurality of component signals including a black component signal, wherein the color region is defined by hue. Therefore, even if the inputted image data includes color components other than an under color component, tonality (i.e., linearity between an inputted value and an output result) can be compensated.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of the Invention [0001]
  • The present invention relates to image process apparatus and method which perform an under color process for inputted image data. [0002]
  • 2. Related Background Art [0003]
  • In recent years, color printers applying various recording systems have been developed as output apparatuses of a color image. Among these printers, the ink-jet record apparatus has been widely used because such an apparatus has many advantages: it can be manufactured at low cost, can print a high-quality image on various kinds of recording media, can easily be made compact in size, and the like. [0004]
  • Image data to be outputted by such a color printer frequently correspond to an output apparatus which utilizes a light emission element, such as a CRT (cathode-ray tube) monitor or the like. Therefore, such image data are composed of R (red), G (green) and B (blue) signals. [0005]
  • The color printer converts such RGB signals into C (cyan), M (magenta) and Y (yellow) signals or C (cyan), M (magenta), Y (yellow) and K (black) signals, by using an image process means. An image process method which is performed by such an image process means has been proposed in U.S. patent application Ser. No. 08/711,953 filed on Sep. 6, 1996, by the same applicant as that of the present application. [0006]
  • FIG. 8 is a block diagram for explaining the concept of the image process method proposed by the same applicant as that of the present application. [0007]
  • It is assumed that the image data consists of eight bits for each of RGB colors, and “eight bits” in the present application represents integers from 0 to 255. [0008]
  • The image data is inputted into an image input means 20001 and then R, G and B data each consisting of eight bits are transferred to a luminance and density conversion means 20002. The luminance and density conversion means 20002 performs a luminance and density converting process on the R, G and B data to convert these data into C, M and Y data each consisting of eight bits. [0009]
  • Subsequently, a black component generation means 20003 generates a black component K on the basis of minimum values of the C, M and Y data. If it is assumed that a function to be used for calculating the minimum value is min( ), C1, M1, Y1 and K1 data each consisting of eight bits and outputted from the black component generation means 20003 are obtained by following equations. [0010]
  • C1=C
  • M1=M
  • Y1=Y
  • K1=min(C, M, Y)
  • Subsequently, a masking means 20004 performs a masking process on the C1, M1, Y1 and K1 data to output C2, M2 and Y2 data. [0011]
  • Subsequently, an under color component separation means 20005 performs a process on the basis of following equations, to output C3, M3, Y3 and U data. [0012]
  • U=min(C2, M2, Y2)
  • C3=C2−U
  • M3=M2−U
  • Y3=Y2−U
  • Subsequently, an under color process means 20100 generates C4, M4, Y4 and K4 data each consisting of eight bits, on the basis of the under color component data U. The under color process means 20100 is composed of a black component generation means 20006, a cyan component generation means 20007, a magenta component generation means 20008 and a yellow component generation means 20009, and thus generates the C4, M4, Y4 and K4 data each consisting of eight bits by using functions KGR( ), CGR( ), MGR( ) and YGR( ) shown in FIG. 9. That is, the following relations are satisfied. [0013]
  • C4=CGR(U)
  • M4=MGR(U)
  • Y4=YGR(U)
  • K4=KGR(U)
  • Subsequently, the C3, M3 and Y3 data outputted from the under color component separation means 20005 and the C4, M4 and Y4 data outputted from the under color process means 20100 are synthesized respectively by a cyan component output means 20011, a magenta component output means 20012 and a yellow component output means 20013, to respectively generate C6, M6 and Y6 data. Such processes are performed on the basis of following equations. [0014]
  • C6=C3+C4
  • M6=M3+M4
  • Y6=Y3+Y4
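  • As an illustrative sketch only (not the patent's actual implementation), the conventional flow of the separation means 20005, the under color process means 20100 and the output means 20011 to 20013 can be written in Python as follows; the functions kgr, cgr, mgr and ygr stand in for the tables KGR( ), CGR( ), MGR( ) and YGR( ) of FIG. 9 and are assumptions:

        def conventional_under_color(c2, m2, y2, kgr, cgr, mgr, ygr):
            """Sketch of means 20005, 20100 and 20011-20013 from FIG. 8."""
            clip = lambda v: max(0, min(255, v))
            # Under color component separation (means 20005)
            u = min(c2, m2, y2)
            c3, m3, y3 = c2 - u, m2 - u, y2 - u
            # Under color process (means 20100): C4, M4, Y4, K4 from the U component
            c4, m4, y4, k4 = cgr(u), mgr(u), ygr(u), kgr(u)
            # Synthesis (means 20011-20013), clipped to the 8-bit range
            return clip(c3 + c4), clip(m3 + m4), clip(y3 + y4), k4
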
  • In this case, if values of the C6, M6 and Y6 data are equal to or smaller than “0”, such values are determined as “0”. On the other hand, if these values are equal to or larger than “256”, such values are determined as “255”. On the basis of the C6, M6, Y6 and K4 data outputted through such processes, an output gamma correction means 20101 respectively outputs C7, M7, Y7 and K7 data each consisting of eight bits. The output gamma correction means 20101 is composed of a black output gamma correction means 20014, a cyan output gamma correction means 20015, a magenta output gamma correction means 20016 and a yellow output gamma correction means 20017, and calculates following equations by using functions KGAM( ), CGAM( ), MGAM( ) and YGAM( ). [0015]
  • C7=CGAM(C6)
  • M7=MGAM(M6)
  • Y7=YGAM(Y6)
  • K7=KGAM(K4)
  • The output gamma correction means perform this conversion to linearize the relation between the inputted values (i.e., C6, M6, Y6 and K4 data) and the optical reflection densities of the printed or outputted results. Ordinarily, each of the functions KGAM( ), CGAM( ), MGAM( ) and YGAM( ) shown in FIG. 10 consists of a 256-entry reference table. [0016]
  • The functions CGR( ), MGR( ), YGR( ) and KGR( ) are set such that, in a case where the inputted image data satisfy the equations R=G=B, the printed results are obtained in an achromatic color. That is, such functions are structured to guarantee that, in a case where the image data represents a gray scale, the printed result is also represented by the gray scale. [0017]
  • However, in such an image process method, in a case where the inputted image data includes a color component other than the under color, there has been a problem that tonality or gradient (i.e., linearity between the inputted value and the outputted result) cannot be compensated. [0018]
  • FIG. 11 is a view showing relation between a blue signal (B) and an optical reflection density in the conventional image process apparatus. [0019]
  • In this case, the blue signal (B) has a value which represents blue components in the inputted C2, M2 and Y2 data (satisfying C2=M2 and min(C2, M2, Y2)=Y2), and can be obtained by a following equation. [0020]
  • B=C2−Y2
  • In FIG. 11, it can be understood that, in respect of cyan (C), the optical reflection density at a blue signal of 127 is lower than that at a blue signal of 255. That is, the tonality in blue is not compensated. Especially, as the color component other than the achromatic color becomes large, i.e., as the inputted image becomes vivid, the tonality deteriorates. [0021]
  • Such a tendency is remarkable in an ink-jet record system. That is, when recording dot groups which represent the paper surface and an achromatic component (i.e., the background color) by the gray scale, in a case where the dot group representing the achromatic color component is recorded so that it comes into contact with or overlaps the dot group representing the color component other than the achromatic color component on condition that the gray scale is optically compensated, the tendency is remarkable because the ink-jet record system changes the structure of a dye or a pigment on the surface of the recording medium. [0022]
  • SUMMARY OF THE INVENTION
  • An object of the present invention is to provide image process apparatus and method which compensate tonality or gradient even in a case where a color component other than an under color component is included in input image data. [0023]
  • In consideration of the fact that the above-described tendency is remarkably seen in a blue region, another object of the present invention is to compensate the tonality especially in the blue region. [0024]
  • In order to achieve the above objects, there is provided an image process method comprising: [0025]
  • an input step of inputting image data; and [0026]
  • an under color process step of performing an under color process according to a color region to which the image data belongs, to generate a plurality of component signals including a black component signal, [0027]
  • wherein the color region is defined by hue. [0028]
  • Further, there is provided an image process method comprising: [0029]
  • an input step of inputting image data; [0030]
  • a judgment step of judging a color region of the image data; and [0031]
  • an under color process step of performing an under color process according to the color region, [0032]
  • wherein, in the under color process step, a blue region is subjected to the under color process which is different from the under color process for other color regions. [0033]
  • Furthermore, there is provided an image process method comprising: [0034]
  • an input step of inputting image data; [0035]
  • a judgment step of judging a color region of the image data; and [0036]
  • an under color process step of performing an under color process according to the color region, [0037]
  • wherein the under color process step selectively performs a first under color process or a second under color process in accordance with the color region judged in the judgment step, in the first under color process a black component is not added to a vivid portion, and in the second under color process the black component is added to the vivid portion. [0038]
  • The above and other objects of the present invention will become apparent from the following detailed description when read in conjunction with the accompanying drawings.[0039]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram showing an example of structure of an image process apparatus according to the present invention; [0040]
  • FIG. 2 is a block diagram showing an example of structure of an under color process means according to a first embodiment of the present invention; [0041]
  • FIG. 3 is a flow chart showing an example of a processing flow in an under color process according to the first embodiment; [0042]
  • FIG. 4 is a view showing an example of a function SFT in the under color process according to the first embodiment; [0043]
  • FIG. 5 is a view showing an example of a function KGR2 in the under color process according to the first embodiment; [0044]
  • FIG. 6 is a view showing results of the under color process according to the first embodiment; [0045]
  • FIG. 7 is a block diagram showing an example of structure of an under color process means according to a second embodiment of the present invention; [0046]
  • FIG. 8 is a block diagram showing structure of a conventional image process apparatus; [0047]
  • FIG. 9 is a view showing an example of functions in the under color process; [0048]
  • FIG. 10 is a view showing an example of functions in a gamma process; and [0049]
  • FIG. 11 is a view showing results of a conventional under color process. [0050]
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • FIG. 1 is a block diagram showing an example of structure of an image process apparatus according to the present invention. [0051]
  • In the present invention, it is assumed that image data consists of eight bits for each of RGB colors, and “eight bits” represents integers from 0 to 255. [0052]
  • The image data is inputted into an image input means 10, and then R, G and B data each consisting of eight bits are transferred to a luminance and density conversion means 20. The luminance and density conversion means 20 converts the R, G and B data into C, M and Y data each consisting of eight bits, by using a function BTD( ) in following equations. [0053]
  • C=BTD(R)
  • M=BTD(G)
  • Y=BTD(B)
  • Subsequently, a black component generation means 30 generates a black component K on the basis of minimum values of the C, M and Y data. If it is assumed that a function to be used for calculating the minimum value is min( ), C1, M1, Y1 and K1 data each consisting of eight bits and outputted from the black component generation means 30 are obtained by following equations. [0054]
  • C1=C
  • M1=M
  • Y1=Y
  • K1=min(C, M, Y)
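  • As a minimal sketch (assuming a simple inverting table for BTD( ), which the description does not specify), the conversion and black generation of the means 20 and 30 can be expressed as follows:

        # Hypothetical 256-entry luminance-to-density table standing in for BTD();
        # a plain inversion is assumed here only for illustration.
        BTD = [255 - v for v in range(256)]

        def to_cmyk_components(r, g, b):
            """Means 20 and 30: density conversion followed by black generation."""
            c, m, y = BTD[r], BTD[g], BTD[b]   # C = BTD(R), M = BTD(G), Y = BTD(B)
            k1 = min(c, m, y)                  # K1 = min(C, M, Y)
            return c, m, y, k1                 # C1 = C, M1 = M, Y1 = Y
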
  • Then, a masking means 40 adjusts a tint by performing a matrix calculation based on following equations. [0055]
  • C2=a11×C1+a12×M1+a13×Y1+a14×K1
  • M2=a21×C1+a22×M1+a23×Y1+a24×K1
  • Y2=a31×C1+a32×M1+a33×Y1+a34×K1
  • In this case, numerals between “0.9” and “1.3” are generally used as the values a11, a22 and a33, and numerals between “0” and “−0.6” are generally used as the values a12, a13, a21, a23, a31 and a32. In this processing system, it is desirable that the values a14, a24 and a34 respectively satisfy following equations. [0056]
  • a14=1−(a11+a12+a13)
  • a24=1−(a21+a22+a23)
  • a34=1−(a31+a32+a33)
  • As a result of the above matrix calculation, if values of the C2, M2 and Y2 data are equal to or smaller than “0”, such values are determined as “0”. On the other hand, if these values are equal to or larger than “256”, such values are determined as “255”. [0057]
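  • A sketch of this masking step, with coefficient values chosen only as an example within the ranges given above, might look as follows in Python:

        def masking(c1, m1, y1, k1, a):
            """Means 40: 3x4 masking matrix; a is a list of three rows (a11..a33)."""
            clip = lambda v: max(0, min(255, int(round(v))))
            out = []
            for row in a:
                a_4 = 1 - (row[0] + row[1] + row[2])   # a14 = 1 - (a11 + a12 + a13), etc.
                out.append(clip(row[0] * c1 + row[1] * m1 + row[2] * y1 + a_4 * k1))
            return tuple(out)                           # (C2, M2, Y2)

        # Example usage with assumed coefficients (diagonal 1.1, off-diagonal -0.1)
        c2, m2, y2 = masking(200, 180, 60, 60,
                             [[1.1, -0.1, -0.1], [-0.1, 1.1, -0.1], [-0.1, -0.1, 1.1]])
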
  • Subsequently, an under color process means 50 performs under color processes described in the following embodiments. [0058]
  • A gamma process means 60 performs gamma correction on the inputted values by using such functions KGAM( ), CGAM( ), MGAM( ) and YGAM( ) as shown in FIG. 10, such that the relation between an inputted value and an optical reflection density of a printed result is linearized. [0059]
  • In this case, each of the functions KGAM( ), CGAM( ), MGAM( ) and YGAM( ) consists of a 256-entry reference table. [0060]
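  • In practice such a correction can be implemented as a straightforward table lookup; the curve below is only an assumed placeholder for a measured one:

        # Assumed placeholder gamma curve; a real KGAM()/CGAM()/MGAM()/YGAM() table
        # would be derived from measured optical reflection densities.
        KGAM = [min(255, int(round(255 * (v / 255.0) ** 0.8))) for v in range(256)]

        def apply_gamma(value, table=KGAM):
            """Means 60: one 256-entry reference table per color component."""
            return table[value]        # e.g. K7 = KGAM(K4)
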
  • Subsequently, a record encode means 70 performs a binarizing process such as a dither process, an error diffusing process or the like. [0061]
  • On the basis of the C, M, Y and K binary data, an ink-jet record means 80 forms an image on a recording medium in an area gradation manner, by using C, M, Y and K coloring agents. [0062]
  • These means described above are controlled by a CPU (central processing unit) 90. The CPU 90, which uses a RAM (random access memory) 92 as a working memory, sets a masking coefficient in the masking means 40, the table in the gamma process means 60 and the like, on the basis of programs stored in a ROM (read-only memory) 91. [0063]
  • (First Embodiment) [0064]
  • In the first embodiment, the tonality (or gradient) in the gradation from blue to black, in which the above-described conventional problem remarkably occurs, is improved. The problem remarkably occurs in the gradation from blue to black because, in a hue circle, the lightness in the blue region is lower than that in other color regions and also the lightness difference in the gradation of the blue region is small. [0065]
  • FIG. 2 is a block diagram showing an under color process means according to the first embodiment, and FIG. 3 is a flow chart showing a flow of the under color process. [0066]
  • Initially, an under color component separation means 30005 performs a process based on following equations, to output C3, M3, Y3 and U data (step S10). [0067]
  • U=min(C2, M2, Y2)
  • C3=C2−U
  • M3=M2−U
  • Y3=Y2−U
  • A secondary color component generation means 30006 generates a blue component (i.e., B1 data) from the Y3, M3 and C3 data outputted from the under color component separation means 30005. In this case, it is assumed that the under color component of the M3 and C3 data is outputted as the B1 data. Therefore, if a processing function to be used for obtaining the minimum value is min( ), an equation B1=min(M3, C3) is satisfied. For this reason, a blue component in inputted image data is represented by the B1 data consisting of eight bits (step S20). [0068]
  • On the basis of whether or not the blue component (i.e., B1 data) is “0”, an under color process selection means 30010 judges whether or not the inputted image data belongs to the blue region in the hue circle, and then selects the under color process means according to a judged result (step S30). [0069]
  • If the B1 data is “0”, since the inputted image data does not belong to the blue region, a first under color process means 30100 is selected to be used but a second under color process means 30200 is not used. [0070]
  • The first under color process means 30100 performs the same process as in the conventional under color process means 20100 by using an under color component (i.e., U1 data), to output C4, M4, Y4 and K4 data. In this case, the U1 data satisfies an equation U1=U. [0071]
  • The first under color process means 30100 is composed of a first means for generating a black component, a first means for generating a cyan component, a first means for generating a magenta component and a first means for generating a yellow component. By using the functions KGR( ), CGR( ), MGR( ) and YGR( ) shown in FIG. 9, the first under color process means 30100 generates the C4, M4, Y4 and K4 data each consisting of eight bits on the basis of following equations (step S40). [0072]
  • C4=CGR(U)
  • M4=MGR(U)
  • Y4=YGR(U)
  • K4=KGR(U)
  • Further, in the case where the B1 data is “0”, since the second under color process means 30200 is not used, C5, M5, Y5 and K5 data (or signals) are not outputted. [0073]
  • As described above, in the case where the inputted image data does not belong to the blue region, if the under color process is performed by using the functions CGR( ), MGR( ), YGR( ) and KGR( ), which have been set such that the printed result becomes an achromatic color when the inputted image data satisfies the relation R=G=B, it is guaranteed that the printed result has a gray scale when the inputted image data has a gray scale. Further, since the functions are set such that the black component (i.e., K4 data) is “0” in a portion where color components other than the under color component are large and vivid, a tint can be reproduced in high quality in the vivid portion of a region which is other than the blue region and whose lightness in the vivid portion is relatively high. [0074]
  • If the B1 data is larger than “0”, since the inputted image data belongs to the blue region, the under color selection means 30010 outputs the under color component U, as U2 data, to the second under color process means 30200. That is, an equation U2=U is satisfied in this case. [0075]
  • Further, in the case where the B1 data is larger than “0”, since the first under color process means 30100 is not used, the C4, M4, Y4 and K4 data or signals are not outputted. [0076]
  • The second under color process means 30200 is composed of a second means for generating the black component, a second means for generating the cyan component, a second means for generating the magenta component and a second means for generating the yellow component. By using the functions KGR( ), CGR( ), MGR( ) and YGR( ) used in the first under color process means 30100, functions KGR2( ), CGR2( ), MGR2( ) and YGR2( ) shown in FIG. 5, and a function SFT( ) shown in FIG. 4, the second under color process means 30200 performs following under color processes (step S50). [0077]
  • Each of the functions KGR2( ), CGR2( ), MGR2( ) and YGR2( ) is a function whose input and output each consist of eight bits. Further, the function SFT( ) is a function whose input consists of eight bits but whose output takes a value within the range “0” to “1”. [0078]
  • By using the above functions, the second means for generating the black component calculates [0079]
  • K5=KGR(U2)×(1−SFT(U2+B1))+KGR2(U2)×SFT(U2+B1).
  • The second means for generating the cyan component calculates [0080]
  • C5=CGR(U2)×(1−SFT(U2+B1))+CGR2(U2)×SFT(U2+B1).
  • The second means for generating the magenta component calculates [0081]
  • M5=MGR(U2)×(1−SFT(U2+B1))+MGR2(U2)×SFT(U2+B1).
  • The second means for generating the yellow component calculates [0082]
  • Y5=YGR(U2)×(1−SFT(U2+B1))+YGR2(U2)×SFT(U2+B1).
  • As described above, in the case where the inputted image data belongs to the blue region, by using the functions KGR2( ), CGR2( ), MGR2( ) and YGR2( ) shown in FIG. 5, the black component (i.e., K data) is generated even when the value of the under color component is low. Therefore, the tonality can be compensated in the portion in which the components other than the under color component are large and of which tonality could not be compensated because of its low lightness. That is, the tonality can be compensated even in the vivid portion. [0083]
  • FIG. 6 is a view showing the relation between a blue signal (B) and an optical reflection density in the present embodiment. As can be seen from FIG. 6, the conventional problem that the optical reflection density of the cyan component is lowered as the blue signal is lowered has been solved. [0084]
  • Further, as shown in FIG. 4, since a continuous function is used as the SFT function, continuity between adjacent hues can be maintained. [0085]
  • Subsequently, the cyan (C), magenta (M) and yellow (Y) components corresponding to the black (K) component posterior to the under color process are generated on the basis of the C4, M4 and Y4 data generated by the first under color process means 30100 or the C5, M5 and Y5 data generated by the second under color process means 30200 and the components (i.e., chromatic color components) other than the under color component of the inputted image data (step S60). [0086]
  • Then, a cyan component synthesis means 30011 outputs C6 data by calculating C6=C3+(C4+C5). [0087]
  • A magenta component synthesis means 30012 outputs M6 data by calculating M6=M3+(M4+M5). [0088]
  • A yellow component synthesis means 30013 outputs Y6 data by calculating Y6=Y3+(Y4+Y5). [0089]
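  • The overall flow of steps S10 to S60 can be summarized in the following Python sketch; the table functions kgr..ygr, kgr2..ygr2 and sft stand in for FIGS. 9, 5 and 4 and are assumptions, so only the control flow follows the description above:

        def first_embodiment_under_color(c2, m2, y2,
                                         kgr, cgr, mgr, ygr,
                                         kgr2, cgr2, mgr2, ygr2, sft):
            """Sketch of the first embodiment (means 30005-30013, steps S10-S60)."""
            clip = lambda v: max(0, min(255, int(round(v))))
            # Step S10: under color component separation (means 30005)
            u = min(c2, m2, y2)
            c3, m3, y3 = c2 - u, m2 - u, y2 - u
            # Step S20: blue component generation (means 30006)
            b1 = min(m3, c3)
            if b1 == 0:
                # Steps S30/S40: not in the blue region -> first under color process
                c_uc, m_uc, y_uc, k = cgr(u), mgr(u), ygr(u), kgr(u)
            else:
                # Steps S30/S50: blue region -> second under color process, blended by SFT
                w = sft(u + b1)                          # weight in the range 0..1
                k = kgr(u) * (1 - w) + kgr2(u) * w
                c_uc = cgr(u) * (1 - w) + cgr2(u) * w
                m_uc = mgr(u) * (1 - w) + mgr2(u) * w
                y_uc = ygr(u) * (1 - w) + ygr2(u) * w
            # Step S60: synthesis (means 30011-30013), clipped to the 8-bit range
            return clip(c3 + c_uc), clip(m3 + m_uc), clip(y3 + y_uc), clip(k)
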
  • As described above, according to the present embodiment, the tonality in the gradation from blue to black in the blue region (in which the lightness of the vivid portion is low) can be improved without affecting other color regions and while the continuity between adjacent hues is maintained. [0090]
  • Further, even in the case where the color components other than the under color component are included in the inputted image data, the tonality or gradient can be compensated. [0091]
  • In the present embodiment, the functions to be used in the first and second under color process means are not limited to the above-described functions. That is, it is obviously understood that other functions may be used. [0092]
  • (Second Embodiment) [0093]
  • In the second embodiment, a color space is divided into seven regions; i.e., an achromatic color region; a cyan (C) region, a magenta (M) region and a yellow (Y) region which construct a primary color region; and a red (R) region, a green (G) region and a blue (B) region which construct a secondary color region. Then, an under color process suitable for each color region is performed. [0094]
  • FIG. 7 is a block diagram showing an example of an under color process means according to the second embodiment. In FIG. 7, the same parts as those shown in FIG. 1 according to the first embodiment are denoted by the same reference numerals, and thus explanations thereof are omitted. [0095]
  • From C3, M3 and Y3 data representing chromatic color components of inputted image data from which an under color component has been eliminated by an under color component separation means 30005, a secondary color component separation means 40006 extracts primary color components (i.e., C8, M8 and Y8 data) and secondary color components (i.e., R2, G2 and B2 data) on the basis of following equations. [0096]
  • R2=min(M3, Y3)
  • G2=min(Y3, C3)
  • B2=min(C3, M3)
  • C8=C3−(B2+G2)
  • M8=M3−(R2+B2)
  • Y8=Y3−(R2+G2)
  • Subsequently, an under color process selection means 40010 judges a color region to which the inputted image data belongs, on the basis of the under color component (i.e., U data) and the primary and secondary color components (i.e., C8, M8, Y8, R2, B2 and G2 data), so as to select the under color process means corresponding to such a judged result. That is, in a case where all of the primary and secondary color components are “0”, the under color process selection means judges that the inputted image data belongs to the achromatic color region. On the other hand, in a case where not all of the primary and secondary color components are “0”, the under color process selection means judges that the inputted image data belongs to the region whose color component has the largest value. Then, on the basis of such a judged result, the under color process selection means 40010 selects one of a first means 40100 corresponding to the achromatic color region, a fifth means 40140 corresponding to the C region, a sixth means 40150 corresponding to the M region, a seventh means 40160 corresponding to the Y region, a second means 40110 corresponding to the R region, a third means 40120 corresponding to the G region, and a fourth means 40130 corresponding to the B region. [0097]
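  • A compact sketch of the separation means 40006 and the judgment rule of the selection means 40010 (an illustration only, not the patent's implementation) could be:

        def judge_color_region(c3, m3, y3, u):
            """Means 40006/40010: extract components and pick the color region."""
            r2, g2, b2 = min(m3, y3), min(y3, c3), min(c3, m3)
            c8, m8, y8 = c3 - (b2 + g2), m3 - (r2 + b2), y3 - (r2 + g2)
            components = {'R': r2, 'G': g2, 'B': b2, 'C': c8, 'M': m8, 'Y': y8}
            if all(v == 0 for v in components.values()):
                w2 = 255 - u            # W2 = 255 - (U + R2 + G2 + B2 + C8 + M8 + Y8)
                return 'achromatic', w2
            region = max(components, key=components.get)   # largest component wins
            return region, components[region]
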
  • Hereinafter, the process in the means for generating CMYK color components of each under color will be explained. [0098]
  • <First Means 40100 for Generating CMYK Color Components of Under Color> [0099]
  • The first means 40100 performs the under color process suitable for the achromatic color region on the basis of following equations. [0100]
  • W2=255−(U+R2+G2+B2+C8+M8+Y8)
  • K10=(W2/(255−U))×KGR1(U)
  • C10=(W2/(255−U))×CGR1(U)
  • M10=(W2/(255−U))×MGR1(U)
  • Y10=(W2/(255−U))×YGR1(U)
  • <Second Means 40110 for Generating CMYK Color Components of Under Color> [0101]
  • The second means 40110 performs the under color process suitable for the R region on the basis of following equations. [0102]
  • K11=(R2/(255−U))×KGR2(U)
  • C11=(R2/(255−U))×CGR2(U)
  • M11=(R2/(255−U))×MGR2(U)
  • Y11=(R2/(255−U))×YGR2(U)
  • <Third Means 40120 for Generating CMYK Color Components of Under Color> [0103]
  • The third means 40120 performs the under color process suitable for the G region on the basis of following equations. [0104]
  • K12=(G2/(255−U))×KGR3(U)
  • C12=(G2/(255−U))×CGR3(U)
  • M12=(G2/(255−U))×MGR3(U)
  • Y12=(G2/(255−U))×YGR3(U)
  • <Fourth Means 40130 for Generating CMYK Color Components of Under Color> [0105]
  • The fourth means 40130 performs the under color process suitable for the B region on the basis of following equations. [0106]
  • K13=(B2/(255−U))×KGR4(U)
  • C13=(B2/(255−U))×CGR4(U)
  • M13=(B2/(255−U))×MGR4(U)
  • Y13=(B2/(255−U))×YGR4(U)
  • <Fifth Means 40140 for Generating CMYK Color Components of Under Color> [0107]
  • The fifth means 40140 performs the under color process suitable for the C region on the basis of following equations. [0108]
  • K14=(C8/(255−U))×KGR5(U)
  • C14=(C8/(255−U))×CGR5(U)
  • M14=(C8/(255−U))×MGR5(U)
  • Y14=(C8/(255−U))×YGR5(U)
  • <Sixth Means 40150 for Generating CMYK Color Components of Under Color> [0109]
  • The sixth means 40150 performs the under color process suitable for the M region on the basis of following equations. [0110]
  • K15=(M8/(255−U))×KGR6(U)
  • C15=(M8/(255−U))×CGR6(U)
  • M15=(M8/(255−U))×MGR6(U)
  • Y15=(M8/(255−U))×YGR6(U)
  • <Seventh Means 40160 for Generating CMYK Color Components of Under Color> [0111]
  • The seventh means 40160 performs the under color process suitable for the Y region on the basis of following equations. [0112]
  • K16=(Y8/(255−U2))×KGR7(U)
  • C16=(Y8/(255−U2))×CGR7(U)
  • M16=(Y8/(255−U2))×MGR7(U)
  • Y16=(Y8/(255−U2))×YGR7(U)
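  • Because the seven means share the same form, they can be sketched with a single helper; the per-region tables passed in are assumed placeholders, and the guard against U=255 is an addition not stated in the description:

        def region_under_color(main, u, kgr_n, cgr_n, mgr_n, ygr_n):
            """Means 40100-40160: Kn = (main/(255-U)) x KGRn(U), likewise for C, M, Y.

            main is W2 for the achromatic region or the largest chromatic component,
            and kgr_n..ygr_n are the tables prepared for the judged color region.
            """
            scale = main / (255 - u) if u < 255 else 1.0   # ratio of the main component
            return (scale * cgr_n(u), scale * mgr_n(u),
                    scale * ygr_n(u), scale * kgr_n(u))
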
  • As described above, in the present embodiment, since the functions KGR( ), CGR( ), MGR( ) and YGR( ) are provided for each color region, the under color process suitable for each color region can be performed. [0113]
  • Further, since the under color process is performed according to the ratio of the main color component to the range other than the under color component, continuity between adjacent hues can be maintained. [0114]
  • In the above-described second embodiment, it is judged that the inputted image data belongs to the achromatic color region in the case where the primary and secondary color components are “0”. However, the achromatic color region may be widened. That is, for example, in a case where the total value of the primary and secondary color components is equal to or smaller than a predetermined value, it may be judged that the inputted image data belongs to the achromatic color region. In the above-described under color process for the achromatic color region, the under color process is performed according to the ratio of a component (i.e., W2 data) to the range other than the under color component. Therefore, for example, even if the achromatic color region is widened, the high-quality under color process can be performed. [0115]
  • (Third Embodiment) [0116]
  • The third embodiment is a modification of the above-described second embodiment. That is, in the third embodiment, the under color process which has been used in the means for generating the CMYK color components of the under color corresponding to each color region in the second embodiment is modified as follows. [0117]
  • <Under Color Process in Achromatic Color Region>[0118]
  • SW=SFT(U)+SFT(R2)+SFT(G2)+SFT(B2)+SFT(C8)+SFT(M8)+SFT(Y8)
  • K10=(1−SW)×KGR1(U)
  • C10=(1−SW)×CGR1(U)
  • M10=(1−SW)×MGR1(U)
  • Y10=(1−SW)×YGR1(U)
  • <Under Color Process in each of Chromatic Color Regions>[0119]
  • Kα=SFT(A)×KGRβ(U2)
  • Cα=SFT(A)×CGRβ(U2)
  • Mα=SFT(A)×MGRβ(U2)
  • Yα=SFT(A)×YGRβ(U2)
  • In the third embodiment, the reference symbol A denotes a value of a main color component. Further, like the second embodiment, the functions KGRβ( ), CGRβ( ), MGRβ( ) and YGRβ( ) are prepared for each color region. [0120]
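  • Assuming the reconstruction of the equations above (with SFT( ) applied to each component), the third-embodiment process can be sketched as follows; the per-region tables and the clamping of the achromatic weight to non-negative values are assumptions:

        def third_embodiment_achromatic(u, r2, g2, b2, c8, m8, y8,
                                        sft, kgr1, cgr1, mgr1, ygr1):
            """Achromatic-region process: weight (1 - SW) applied to the GR1 tables."""
            sw = sft(u) + sft(r2) + sft(g2) + sft(b2) + sft(c8) + sft(m8) + sft(y8)
            w = max(0.0, 1 - sw)       # clamp added here as an assumption
            return w * cgr1(u), w * mgr1(u), w * ygr1(u), w * kgr1(u)

        def third_embodiment_chromatic(a, u2, sft, kgr_b, cgr_b, mgr_b, ygr_b):
            """Chromatic-region process: a is the main color component value A."""
            w = sft(a)                 # weight in the range 0..1
            return w * cgr_b(u2), w * mgr_b(u2), w * ygr_b(u2), w * kgr_b(u2)
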
  • According to the third embodiment, since the under color process is performed by using the function SFT( ), the relation of the under color process between adjacent hues can be arbitrarily set by setting the function SFT( ). [0121]
  • On the other hand, in the third embodiment, the function SFT( ) may be set for each color region. [0122]
  • (Other Modifications) [0123]
  • In the above-described embodiments, the under color process means has performed the calculation for each pixel. However, the present invention is not limited to such embodiments. That is, by previously storing the relation between input and output of such a calculation in the form of a table, the under color process may be performed by using the table. [0124]
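  • For example (a sketch only), any of the per-pixel routines above can be turned into a lookup table over its 8-bit input once, and pixels can then be processed by indexing alone:

        def build_under_color_table(per_pixel):
            """Precompute the under color process for every possible 8-bit input."""
            return [per_pixel(u) for u in range(256)]   # 256-entry table

        # table = build_under_color_table(lambda u: (cgr(u), mgr(u), ygr(u), kgr(u)))
        # c4, m4, y4, k4 = table[u]                     # lookup instead of calculation
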
  • Further, in the above-described embodiments, the C, M, Y and K coloring agents have been used as the coloring agents used in the ink-jet record means. However, the present invention is not limited to such embodiments. That is, e.g., coloring agents corresponding to specific colors such as B (blue), V (violet) and the like may be used. In this case, the color region may be divided based on the used coloring agents, and the function used in the under color process corresponding to such a color region may be set based on the characteristic of the used coloring agent. [0125]
  • The present invention can be applied to a system constructed by a plurality of equipments (e.g., host computer, interface equipment, reader, printer and the like) or can be also applied to an apparatus comprising a single equipment (e.g., copy machine, facsimile machine). [0126]
  • The scope of the present invention also includes a method whereby program codes of software for realizing the functions of the above-described embodiments are supplied to a computer in an apparatus or a system connected to various devices so as to make the devices operative in order to realize the functions of the above-described embodiments, and the various devices are thus operated in accordance with the programs stored in the computer (CPU or MPU) of the system or apparatus. [0127]
  • In such a case, the program codes themselves of the software realize the functions of the above-described embodiments, and the program codes themselves and means for supplying the program codes to the computer, e.g., a memory medium in which the program codes have been stored, construct the present invention. [0128]
  • As the memory medium for storing the program codes, e.g., a floppy disk, a hard disk, an optical disk, a magneto-optical disk, a CD-ROM, a magnetic tape, a nonvolatile memory card, a ROM or the like can be used. [0129]
• It will be obviously understood that the program codes are included in the embodiments of the present invention not only in a case where the functions of the above-described embodiments are realized by the computer executing the supplied program codes, but also in a case where those functions are realized in cooperation with the OS (operating system) under which the program codes operate in the computer, with another application software, or the like. [0130]
  • Further, it will be also obviously understood that the present invention also incorporates a case where the supplied program codes are stored into a memory provided for a function expansion board of a computer or a function expansion unit connected to a computer and, after that, a CPU or the like provided for the function expansion board or the function expansion unit executes a part or all of the actual processes on the basis of instructions of the program codes, and the functions of the above-described embodiments are realized by the processes. [0131]
• Although the present invention has been described above with respect to the preferred embodiments, the present invention is not limited to the above-described embodiments, and many modifications and variations are possible within the spirit and scope of the appended claims. [0132]

Claims (21)

What is claimed is:
1. An image process method comprising:
an input step of inputting image data; and
an under color process step of performing an under color process according to a color region to which the image data belongs, to generate a plurality of component signals including a black component signal,
wherein the color region is defined by hue.
2. A method according to claim 1, wherein the color region is defined by combination of color components other than an under color component of the image data.
3. A method according to claim 2, wherein the image data consists of the plurality of color components, and
the color region to which the image data belongs is judged by extracting the under color component, a primary color component and a secondary color component on the basis of the plurality of color components of the image data.
4. A method according to claim 1, wherein image forming is performed by using an ink-jet record unit on the basis of the plurality of component signals.
5. A method according to claim 1, wherein the color region includes an achromatic color region, a red region, a green region, a blue region, a cyan region, a magenta region and a yellow region.
6. A method according to claim 1, wherein the under color process is performed in accordance with the color region according to a kind of a coloring agent.
7. A method according to claim 1, wherein the under color process is performed by using a table.
8. An image process method comprising:
an input step of inputting image data;
a judgment step of judging a color region of the image data; and
an under color process step of performing an under color process according to the color region,
wherein, in said under color process step, a blue region is subjected to the under color process which is different from the under color process for other color regions.
9. A method according to claim 8, wherein, in said under color process step, the under color process is performed on the blue region such that continuity between the blue region and the other color regions is not lost.
10. A method according to claim 8, wherein, in said judgment step, the color region of the image data is judged on the basis of combination of color components other than an under color component of the image data.
11. A method according to claim 8, wherein image forming is performed by using an ink-jet record unit on the basis of a plurality of component signals.
12. A method according to claim 8, wherein the under color process is performed by using a table.
13. An image process method comprising:
an input step of inputting image data;
a judgment step of judging a color region of the image data; and
an under color process step of performing an under color process according to the color region,
wherein said under color process step selectively performs a first under color process or a second under color process in accordance with the color region judged in said judgment step, in the first under color process a black component is not added to a vivid portion, and in the second under color process the black component is added to the vivid portion.
14. A method according to claim 13, wherein, in the first under color process, the black component is not added in a case where an under color component of the image data is equal to or smaller than a predetermined value.
15. A method according to claim 13, wherein, in said under color process step, the under color process is performed by using a table for correlating the image data with a result of said under color process step.
16. An image process apparatus comprising:
input means for inputting image data; and
under color process means for performing an under color process according to a color region to which the image data belongs, to generate a plurality of component signals including a black component signal,
wherein the color region is defined by hue.
17. A recording medium which stores a program for an image process method comprising:
an input step of inputting image data; and
an under color process step of performing an under color process according to a color region to which the image data belongs, to generate a plurality of component signals including a black component signal,
wherein the color region is defined by hue.
18. An image process apparatus comprising:
input means for inputting image data;
judgment means for judging a color region of the image data; and
under color process means for performing an under color process according to the color region,
wherein, by said under color process means, a blue region is subjected to the under color process which is different from the under color process for other color regions.
19. A recording medium which stores a program for an image process method comprising:
an input step of inputting image data;
a judgment step of judging a color region of the image data; and
an under color process step of performing an under color process according to the color region,
wherein, in said under color process step, a blue region is subjected to the under color process which is different from the under color process for other color regions.
20. An image process apparatus comprising:
input means for inputting image data;
judgment means for judging a color region of the image data; and
under color process means for performing an under color process according to the color region,
wherein said under color process means selectively performs a first under color process or a second under color process in accordance with the color region judged by said judgment means, in the first under color process a black component is not added to a vivid portion, and in the second under color process the black component is added to the vivid portion.
21. A recording medium which stores a program for an image process method comprising:
an input step of inputting image data;
a judgment step of judging a color region of the image data; and
an under color process step of performing an under color process according to the color region,
wherein said under color process step selectively performs a first under color process or a second under color process in accordance with the color region judged in said judgment step, in the first under color process a black component is not added to a vivid portion, and in the second under color process the black component is added to the vivid portion.
US08/883,572 1996-06-28 1997-06-26 Image process apparatus, image process method and storage medium Expired - Fee Related US6381034B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP16932896A JP4109726B2 (en) 1996-06-28 1996-06-28 Image processing apparatus and method
JP8-169328 1996-06-28

Publications (2)

Publication Number Publication Date
US20020005962A1 true US20020005962A1 (en) 2002-01-17
US6381034B2 US6381034B2 (en) 2002-04-30

Family

ID=15884521

Family Applications (1)

Application Number Title Priority Date Filing Date
US08/883,572 Expired - Fee Related US6381034B2 (en) 1996-06-28 1997-06-26 Image process apparatus, image process method and storage medium

Country Status (4)

Country Link
US (1) US6381034B2 (en)
EP (1) EP0817471B1 (en)
JP (1) JP4109726B2 (en)
DE (1) DE69719621T2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030038996A1 (en) * 2001-08-21 2003-02-27 Toshiba Tec Kabushiki Kaisha. Image processing apparatus
US20050264587A1 (en) * 2003-07-29 2005-12-01 Seiko Epson Corporation Color filter, color image display device, and electronic apparatus

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7792355B2 (en) * 2006-03-30 2010-09-07 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and image capturing apparatus
JP5610799B2 (en) * 2010-03-15 2014-10-22 キヤノン株式会社 Image forming apparatus
CN103903283B (en) * 2012-12-28 2017-12-19 北京大学 A kind of digital image content enhancing extracting method based on color

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS62293793A (en) 1986-06-13 1987-12-21 ソニー株式会社 Printed board
US4989079A (en) 1987-10-23 1991-01-29 Ricoh Company, Ltd. Color correction device and method having a hue area judgement unit
US5038208A (en) * 1987-11-16 1991-08-06 Canon Kabushiki Kaisha Image forming apparatus with a function for correcting recording density uneveness
US4985759A (en) * 1988-04-05 1991-01-15 Ricoh Company, Ltd. Method and apparatus for extracting black color component
JP2872285B2 (en) * 1989-08-02 1999-03-17 キヤノン株式会社 Image processing apparatus and image processing method
JPH06121161A (en) * 1991-05-14 1994-04-28 Fuji Xerox Co Ltd Character processing system for color image processor
JPH05276368A (en) 1992-03-26 1993-10-22 Canon Inc Color picture processing method and its device
JPH0670147A (en) 1992-08-21 1994-03-11 Minolta Camera Co Ltd Color image generating device
US5847729A (en) * 1993-06-14 1998-12-08 Canon Kabushiki Kaisha Ink-jet printing apparatus and method, and printed matter obtained thereby and processed article obtained from printed matter
US5729360A (en) * 1994-01-14 1998-03-17 Fuji Xerox Co., Ltd. Color image processing method and system
JP2906974B2 (en) 1994-01-14 1999-06-21 富士ゼロックス株式会社 Color image processing method and apparatus
JPH0856292A (en) * 1994-08-12 1996-02-27 Fuji Xerox Co Ltd Image processor
US5841897A (en) * 1995-05-23 1998-11-24 Yamatoya & Co., Ltd. Method for the tonal control or adjustment of reproduced color image and picture producing system making use of said method

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030038996A1 (en) * 2001-08-21 2003-02-27 Toshiba Tec Kabushiki Kaisha. Image processing apparatus
US7023582B2 (en) * 2001-08-21 2006-04-04 Kabushiki Kaisha Toshiba Image processing apparatus
US20050264587A1 (en) * 2003-07-29 2005-12-01 Seiko Epson Corporation Color filter, color image display device, and electronic apparatus
US7545395B2 (en) * 2003-07-29 2009-06-09 Seiko Epson Corporation Color filter, color image display device, and electronic apparatus

Also Published As

Publication number Publication date
US6381034B2 (en) 2002-04-30
DE69719621D1 (en) 2003-04-17
EP0817471B1 (en) 2003-03-12
EP0817471A2 (en) 1998-01-07
EP0817471A3 (en) 1998-12-09
JP4109726B2 (en) 2008-07-02
JPH1023278A (en) 1998-01-23
DE69719621T2 (en) 2003-12-11

Similar Documents

Publication Publication Date Title
US7321448B2 (en) Color proofing method and apparatus, and recorded medium on which color proofing program is recorded
US5107332A (en) Method and system for providing closed loop color control between a scanned color image and the output of a color printer
US5608549A (en) Apparatus and method for processing a color image
US6191874B1 (en) Image processing apparatus and method, and a recording medium
US6084689A (en) Method and apparatus for saturation compensation in total ink limited output
US8355173B2 (en) Color processing apparatus and control method thereof
US7450267B2 (en) Accuracy of color conversion profile
WO1993020648A1 (en) Color correction with a four-dimensional look-up table
US7032989B2 (en) Image processing method and image processing apparatus
US6992783B1 (en) Image processing apparatus and method
US20020005962A1 (en) &#39;&#39;image process apparatus, image process method and storage medium
US5915075A (en) Image processing apparatus for converting input color chart data into color data for an output device
JP2007043424A (en) Color processing method and apparatus adopting the same
US6567186B1 (en) Method for determining gray values in a printer
JP3963444B2 (en) Image processing method and image processing apparatus
US7229146B2 (en) System and method for characterizing a printing device
KR100362379B1 (en) Non-linear gamut compression device and method using multiple-convergent points
US8045220B2 (en) Method of creating color conversion table and image processing apparatus
JP2001232860A (en) Hue determining device for specific color for recorder and method therefor and recording medium
US7427992B2 (en) Color correction table compiling method, controlling program, recording medium, and device
US6554385B2 (en) Color image generation apparatus and color image generation method
US20040160454A1 (en) Link file generating program product, method and apparatus for generating link file used for color matching system
JPH08223433A (en) Color image processing method
JP4154051B2 (en) Color processing apparatus and method
EP1628467A1 (en) Color proofing method and apparatus, and recorded medium on which color proofing program is recorded

Legal Events

Date Code Title Description
AS Assignment

Owner name: CANON KABUSHIKI KAISHA, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:IWASAKI, OSAMU;OTSUKA, NAOJI;TAKAHASHI, KIICHIRO;AND OTHERS;REEL/FRAME:009017/0883

Effective date: 19970825

AS Assignment

Owner name: CANON KABUSHIKI KAISHA, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:IWASAKI, OSAMU;OTSUKA, NAOJI;TAKAHASHI, KIICHIRO;AND OTHERS;REEL/FRAME:009120/0530

Effective date: 19970825

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees
STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20140430

点击 这是indexloc提供的php浏览器服务,不要输入任何密码和下载