WO2006006666A1 - Image processing method and image processing apparatus by contour tracking of digital images - Google Patents
Image processing method and image processing apparatus by contour tracking of digital images
- Publication number
- WO2006006666A1 (PCT/JP2005/013026, JP2005013026W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- contour
- image
- tracking
- point
- contour line
- Prior art date
Links
- 238000003672 processing method Methods 0.000 title claims abstract description 15
- 238000012545 processing Methods 0.000 claims abstract description 94
- 238000000034 method Methods 0.000 claims abstract description 60
- 239000003086 colorant Substances 0.000 claims abstract description 25
- 230000006870 function Effects 0.000 claims description 38
- 230000008569 process Effects 0.000 claims description 28
- 230000015572 biosynthetic process Effects 0.000 claims description 6
- 239000000284 extract Substances 0.000 claims description 5
- 230000002194 synthesizing effect Effects 0.000 claims description 2
- 238000003786 synthesis reaction Methods 0.000 claims 4
- 239000002131 composite material Substances 0.000 abstract 1
- 239000000203 mixture Substances 0.000 abstract 1
- 238000010586 diagram Methods 0.000 description 15
- 235000019557 luminance Nutrition 0.000 description 10
- 238000000605 extraction Methods 0.000 description 6
- 230000009467 reduction Effects 0.000 description 4
- 230000008901 benefit Effects 0.000 description 3
- 238000007796 conventional method Methods 0.000 description 3
- 230000011218 segmentation Effects 0.000 description 3
- 230000005055 memory storage Effects 0.000 description 2
- 230000009466 transformation Effects 0.000 description 2
- 238000006243 chemical reaction Methods 0.000 description 1
- 238000004891 communication Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 238000007429 general method Methods 0.000 description 1
- 230000007246 mechanism Effects 0.000 description 1
- 230000004044 response Effects 0.000 description 1
- 238000000926 separation method Methods 0.000 description 1
- 238000013519 translation Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/20—Contour coding, e.g. using detection of edges
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
Definitions
- the present invention relates to an image processing method and an image processing apparatus, in particular for graphics and images (hereinafter simply "images" where it is unnecessary to distinguish them) that are indispensable to digital image processing performed by a computer.
- more particularly, the present invention relates to an image processing method and an image processing apparatus based on contour tracking of a digital image, which accurately grasps the contour of a region and maintains an appropriate relationship with adjacent graphics and images even when the image is enlarged or reduced.
- graphics and images are handled as digital data on a computer (including a microcomputer and a digital data processing device).
- it is necessary to grasp the outline of the figure or image area for recognition of the figure or color area.
- depending on the method of grasping the contours of adjacent graphics and image areas, there are cases where the contour lines are grasped redundantly, so that an appropriate relationship cannot be maintained when the graphics or image areas are enlarged or reduced.
- Image data is roughly divided into a raster format and a function format.
- a raster image is represented as a collection of colored dots called pixels that are regularly arranged in a grid. This is a format suitable for display on modern CRT displays and LCDs.
- the function form expresses the features of graphics and image areas in the image by functions, so it is suitable for expressing simple graphics and image areas, and the image quality is not degraded by affine transformation.
- Patent Document 1 discloses a region boundary extraction mechanism that obtains the boundary of an image region and sets the coordinates expressing that boundary.
- Patent Document 3 discloses a technique in which a boundary line is shared by dividing an image so that the divided areas overlap by one pixel, that is, one dot, at their vertical and horizontal boundaries.
- Patent Document 4 is a reference technique related to the sharing of boundary lines in the field of digital image processing: it integrates two adjacent closed regions into one closed region. Specifically, in a curved-surface dividing method such as for a product mold, Patent Document 4 discloses a technique that, for closed regions each surrounded by its own boundary line, automatically performs the process of unfolding two adjacent closed regions into one closed region based on the phase relationship of the regions.
- Patent Document 1 Japanese Patent Laid-Open No. 10-145590
- Patent Document 2 JP 2001-52187 A
- Patent Document 3 Japanese Patent Laid-Open No. 6-348837
- Patent Document 4 Japanese Patent Laid-Open No. 7-44733
- FIG. 10 shows an example of grasping the outline of a figure by the function format of the prior art.
- in the prior art, the contour tracking method for binary images is applied to a multicolor image: a region in which a certain color or brightness appears in the figure or image (hereinafter, a color region) is taken as shown in Fig. 10, the inside of the region is regarded as the figure and the outside as the ground, and the contour is tracked.
- this conventional method has two problems, both stemming from the fact that the outline is duplicated where two color regions meet.
- the method described in Patent Document 1 extracts a region using differences in density in a color/gradation image, and compresses and decodes the boundary of the region by function approximation.
- pixels within a predetermined gray-level difference of each other belong to the same region, and places where the difference exceeds the predetermined value form the boundary of the region. In the invention described in Patent Document 1, therefore, the gray-level difference judged to belong to the same region is input, the image is scanned horizontally (raster scan) pixel by pixel, points where the gray-level difference is larger than the predetermined value are taken as boundary points, and boundary points are extracted over the entire image. Thereafter, the region is determined by connecting the boundary points.
- a boundary line parallel to the horizontal scanning line may not be detected. In other words, since only the gray-level difference between horizontally adjacent pixels is examined, a boundary cannot be detected where there is no horizontal gray-level difference. In addition, even when a boundary point is found, its connection direction cannot be determined, so the boundary line must be tracked separately.
- the point where regions meet is defined as a dividing point. Dividing points are extracted by scanning the image with a 2×2-pixel window function (consisting of 4 pixels) and taking as a dividing point any position where three or more colors (shades) appear among the four pixels in the window.
- because this requires a separate scan with the window function, processing takes time.
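The prior-art 2×2 window scan can be sketched as follows. This is a minimal illustration only, assuming pixel colors are comparable values; the function name and coordinate convention are ours, not the patent's:

```python
# Sketch of the prior-art dividing-point scan: slide a 2x2 window over the
# image and mark every grid corner where 3 or more distinct colors meet.
def dividing_points(img):
    """Return grid corners where 3+ distinct colors appear in a 2x2 window."""
    h, w = len(img), len(img[0])
    points = []
    for y in range(h - 1):
        for x in range(w - 1):
            window = {img[y][x], img[y][x + 1],
                      img[y + 1][x], img[y + 1][x + 1]}
            if len(window) >= 3:               # 3+ colors (shades) meet here
                points.append((x + 1, y + 1))  # corner shared by the 4 pixels
    return points

# A 2x2 image with three colors has its center corner as a dividing point.
img = [["green", "orange"],
       ["blue",  "orange"]]
print(dividing_points(img))  # [(1, 1)]
```

Note that this scan visits every window position regardless of where boundaries actually lie, which is why the text observes that it takes time.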
- the present invention improves on these problems of the prior art. In particular, its purpose is to propose a contour tracking method for digital images that processes high-precision graphics and images containing a plurality of colors or luminances at high speed and with a small amount of data.
- Another object of the present invention is to propose a digital image contour tracking method capable of maintaining and reproducing the contour of the original image with high precision.
- the present invention has the following features.
- the image processing device tracks the contour of the digital image, grasps the image area, and generates contour information.
- contour line processing is performed to extract contour line segments by contour tracking, which follows the boundary edges between pixels that keep the same color relationship as the pixels on either side of the starting point's direction of travel (or between pixels whose color difference is at least a predetermined value).
- branching and joining processes are performed on the extracted contour line segments, and the contour determined by combining the segments is formed to generate the contour information of the digital image.
- another feature of the present invention is that the image is scanned horizontally or vertically from an arbitrary point in the region containing the target image until a pixel point is found where the colors to the left and right (or above and below) differ, or where the color difference (gradation difference) is at least a predetermined value; the contour line segment is then traced from that point as a starting point, a meeting point of pixels of three or more colors is searched for, and that meeting point is used as the initial value of the end point of the contour line segment.
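The starting-point scan described above can be sketched as follows (an illustrative sketch under assumed conventions: one row is scanned left to right and plain equality serves as the color test; all names are ours):

```python
# Scan a row from an arbitrary point until two horizontally adjacent pixels
# differ; the vertical pixel edge between them is a contour-tracing start.
def find_start_edge(img, y=0):
    """Scan row y left to right; return (x, y) where img[y][x-1] != img[y][x]."""
    for x in range(1, len(img[y])):
        if img[y][x - 1] != img[y][x]:  # color (shading) difference found
            return (x, y)               # vertical pixel edge at column x
    return None                         # no boundary crosses this row

img = [["white", "white", "red", "red"]]
print(find_start_edge(img))  # (2, 0)
```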
- another feature of the present invention is that contour tracking is stopped at any point where three or more colors or three or more luminance levels meet, and a contour line is taken for each boundary between two colors or two luminances; the purpose is to allow two adjacent areas to share contour line data.
- Effects of the invention
- using a color difference (shading difference) as the region recognition criterion, a given digital image is first traced between pixels to obtain contour lines, two adjacent regions can share the contour line data, and contour information is generated from that data. Because the contour line data is shared, the two regions are no longer approximated by different functions; this solves problems of conventional methods, such as figures separating during enlargement, reduction, or movement, and prevents reductions in data processing efficiency and storage efficiency.
- the contour line is extracted by tracing the boundary side of the pixel, the contour of the original image can be accurately maintained and reproduced. Therefore, even when processing such as image enlargement is performed, the contour line is along the boundary of the pixels, and there is an advantage that a fine image is maintained.
- the input color image data is arranged in a two-dimensional bitmap.
- the contour tracking is stopped once at the point where three or more colors or three or more brightness levels meet, and a contour line is taken for each boundary between two colors or two brightness levels.
- FIG. 3 shows an example of the configuration of an image processing apparatus that performs graphic processing and image processing, which are objects of the present invention.
- the processing device is composed of, for example, a computer having a CPU, a memory, input / output means, and the like, and has a function of performing processing for graphics and image processing by loading and executing a program on the memory.
- Figure 3 shows a block diagram of the functions of this processor.
- reference numeral 100 denotes an input unit for image data.
- image data (color figures, color photographs, pictures, color documents, etc.) is read into the processing device as digital information by an image scanner; alternatively, information already stored digitally on a recording medium or in another processing device is read into the processing device directly or via a communication network.
- 150 indicates the digital image data read into the processor.
- Reference numeral 200 denotes a contour tracking processing unit, which performs pixel side tracking processing, that is, contour tracking of a color area of a graphic or image of the read data 150.
- Reference numeral 250 indicates contour information as a result of the processing.
- Reference numeral 300 denotes a function approximation processing unit that converts the contour, that is, the boundary of a color region, into a function by function approximation based on the contour information 250; 350 denotes the function format and parameters representing the contour of the figure or image region obtained as the result of that processing.
- Reference numeral 400 denotes an image processing unit that generates a figure and an image using the function format and the parameter 350, and generates a processed figure and a processed image 450 as the processing result.
- Reference numeral 500 denotes an output unit such as a display device or a printer.
- the output unit 500 includes means for recording the processed graphic and processed image on a recording medium or transmitting them to another processing apparatus via a network. It is also conceivable to record them on a recording medium at the stage of the processed graphic or processed image 450 and use them from there.
- FIG. 4A shows a specific configuration example of the contour tracking processing unit 200 of FIG.
- An image data input unit 210 includes a memory for storing graphic data and image data.
- the input unit 210 has a function of arranging the read data into a two-dimensional bitmap.
- An outline tracking unit 220 reads the contents of the memory of the data input unit and tracks the outline in units of pixel edges. In other words, for image data arranged in bitmap format, it traces the boundary between adjacent pixels whose color difference (shading difference) exceeds the criterion.
- Reference numeral 230 denotes a contour line segment branch processing unit that performs processing when there is a branch point in the contour segment and processing for concatenating the contour segment.
- Reference numeral 240 denotes an area outline forming unit that forms an outline of an area surrounding a figure or image determined by the combination of outline segments, and generates and outputs outline information 250.
- the functions of the contour tracking processing unit 200 are not limited to the division into units 210 to 240 above; as a whole, they are realized by one or more programs.
- the contour line tracking unit 220 and the contour line segment connecting / branching processing unit 230 may be configured by a single program that performs a series of continuous contour line tracking processes. These programs are executed by reading a program held in a recording medium by a computer or reading a program held in a storage device of a computer into a memory.
- FIG. 4B is a functional block diagram more specifically showing the configuration of FIG. 4A.
- the data input unit 210 includes an AD converter 211, an image data input unit 212, an image data two-dimensional array unit 213, and an image data memory storage function 214.
- Reference numeral 216 denotes an image memory.
- a density difference setting unit 218 gives the pixel color difference or luminance difference that serves as the criterion for image region recognition. For example, with R, G, and B at 8 bits each (256 gradations, approximately 16.77 million representable colors), the color difference (shading difference) that should distinguish regions is set appropriately as the region recognition criterion.
- a luminance gradation difference recognized as a boundary, for example a 50-gradation or 100-gradation difference, is given for each of the R, G, and B color signals of a natural image; it may equally be given for a luminance signal (Y signal).
- the images handled in the present invention include a binary image, a multicolor image, and a natural image.
- the multicolor image includes a gradation image.
- besides RGB signals, combinations of luminance signals (Y signals) and color-difference signals (Cr, Cb signals) exist as image signals.
- differences in the levels of all these signals are used as color differences.
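As an illustration of this region-recognition criterion, a per-channel gradation-difference test might look like the following. This is a sketch under the assumption that pixels are 8-bit RGB tuples; the 50-gradation threshold follows the example in the text, while the function and parameter names are ours:

```python
# Two pixels are treated as belonging to the same region when every channel
# difference stays below the configured gradation threshold (assumed form
# of the criterion set by the density difference setting unit).
def same_region(p1, p2, threshold=50):
    """True if pixels (R, G, B tuples, 0-255) differ by < threshold per channel."""
    return all(abs(a - b) < threshold for a, b in zip(p1, p2))

print(same_region((200, 10, 10), (230, 20, 5)))   # True  - small differences
print(same_region((200, 10, 10), (100, 10, 10)))  # False - 100-gradation gap in R
```

The same test applies unchanged to Y/Cr/Cb tuples, since the text says differences in the levels of any of these signals may serve as the color difference.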
- the contour line tracking unit 220 includes an image data reading unit 221 and a contour line segment extracting unit 222. Each of these functional units is realized by one or more programs.
- a point where three or more pixel colors meet, in other words where three or more of the color differences (shading differences) set by the shading difference setting unit meet, is taken as an end point of a contour line segment.
- from the starting point, tracking follows the boundary edge between pixels that keep the same color relationship as the pixels on either side of the direction of travel, or between pixels whose color difference is at least the predetermined value.
- to obtain the initial value of the end point of a contour line segment, the image is scanned horizontally or vertically from an arbitrary point in the region containing the target image until a pixel point whose left/right (or upper/lower) colors differ is found. The contour is traced from that point as a starting point, a meeting point of pixels of three or more colors is searched for, and that meeting point is used as the initial value of the end point of the segment.
- the contour line tracking unit 220 further has a memory area that registers, for each end point of a contour line segment, its branch directions individually as untracked or tracked. Referring to the untracked directions in this memory area, one of them is taken as the start point of contour tracking; when tracking completes, that direction is registered as tracked, and when a new branch point is found, its directions are registered as untracked.
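The untracked/tracked bookkeeping described above can be sketched as a small registry. This is an illustrative model only; the class and method names are assumptions, not the patent's:

```python
# Registry of branch directions at contour end points: each (point, direction)
# pair starts as untracked and moves to the tracked set once followed.
class BranchRegistry:
    def __init__(self):
        self.untracked = set()   # (point, direction) pairs not yet followed
        self.tracked = set()     # pairs whose contour segment is done

    def register(self, point, direction):
        # Register a newly found branch direction unless already followed.
        if (point, direction) not in self.tracked:
            self.untracked.add((point, direction))

    def next_start(self):
        # Pick any untracked direction as the next tracking start point.
        return next(iter(self.untracked), None)

    def mark_tracked(self, point, direction):
        self.untracked.discard((point, direction))
        self.tracked.add((point, direction))

# A branch point at (3, 1) with two outgoing directions: follow one of them.
reg = BranchRegistry()
reg.register((3, 1), "up")
reg.register((3, 1), "right")
start = reg.next_start()
reg.mark_tracked(*start)
print(len(reg.untracked), len(reg.tracked))  # 1 1
```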
- FIG. 5 shows the case of a multicolor image, and shows the contour tracking algorithm and pixel pattern in the contour line segment extraction unit 222 of FIG. 4B.
- the principle of contour tracking is as follows: store the color or brightness of the left and right pixels at the starting point (S501); while the left and right pixels keep the same colors or brightnesses (pixel pattern 514), the boundary between the pixels is tracked straight ahead (S507, S508). If the left or right color or brightness in the direction of travel differs from that of the starting point, that is, if three or more colors or three or more kinds of brightness meet, the contour tracking is terminated (S502).
- if the color or brightness of the pixel ahead is the same as that of the left pixel of the starting point, the direction of travel is changed to the right (S505, S506); if it is the same as that of the right pixel of the starting point (pixel pattern 510), the direction of travel is changed to the left (S503, S504). Contour tracking thus always proceeds so that the colors or brightnesses of the pixels to the left and right of the direction of travel match those stored at the starting point. The contour line tracked in this way is registered in memory as a line list with its start point, end point, and line number or line name.
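The tracking principle above (store the starting left/right colors, advance along pixel edges, turn so the flanking colors keep matching, and stop where three or more colors meet) can be sketched as a minimal edge-follower. Everything below — the corner/direction representation, the turn priority, the `None` ground color, and all names — is our illustrative reconstruction under those assumptions, not the patent's implementation:

```python
DIRS = {"up": (0, -1), "down": (0, 1), "right": (1, 0), "left": (-1, 0)}
LEFT_TURN = {"up": "left", "left": "down", "down": "right", "right": "up"}
RIGHT_TURN = {"up": "right", "right": "down", "down": "left", "left": "up"}

def pixel(img, x, y):
    h, w = len(img), len(img[0])
    return img[y][x] if 0 <= x < w and 0 <= y < h else None  # None = ground

def flanks(img, corner, d):
    """Colors of the pixels left/right of the edge leaving `corner` along d."""
    x, y = corner
    if d == "up":    return pixel(img, x - 1, y - 1), pixel(img, x, y - 1)
    if d == "down":  return pixel(img, x, y),         pixel(img, x - 1, y)
    if d == "right": return pixel(img, x, y - 1),     pixel(img, x, y)
    if d == "left":  return pixel(img, x - 1, y),     pixel(img, x - 1, y - 1)

def corner_colors(img, corner):
    x, y = corner
    return {pixel(img, x - 1, y - 1), pixel(img, x, y - 1),
            pixel(img, x - 1, y),     pixel(img, x, y)}

def track_segment(img, corner, d):
    """Follow one contour segment until 3+ colors meet (S502) or the loop closes."""
    left_c, right_c = flanks(img, corner, d)       # S501: store both colors
    path, start = [corner], (corner, d)
    while True:
        dx, dy = DIRS[d]
        corner = (corner[0] + dx, corner[1] + dy)  # S507/S508: advance one edge
        path.append(corner)
        if len(corner_colors(img, corner)) >= 3:   # S502: 3+ colors meet, stop
            return path
        for nd in (LEFT_TURN[d], d, RIGHT_TURN[d]):  # S503-S506: choose turn so
            if flanks(img, corner, nd) == (left_c, right_c):  # flanks still match
                d = nd
                break
        else:
            return path                            # no continuation: dead end
        if (corner, d) == start:
            return path                            # closed loop back to start

# A single red pixel on white ground traces a closed 4-edge loop.
loop = track_segment([["red"]], (0, 1), "up")
print(len(loop))  # 5 corners: the start corner is repeated at the close
```

The turn priority (left, straight, right) is one possible resolution of ambiguous corners; the patent's Fig. 5 pixel patterns may order the checks differently.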
- Fig. 6 shows a specific example of performing contour tracking by the algorithm of Fig. 5 and forming the contour line segments of a region: all the contour lines connected to a given point are traced, and points where three or more colors or three or more patterns meet are used as branch points. The example targets three-color image data of rectangular regions in which an orange region (center) is surrounded by a green region at the upper left and blue regions at the right and bottom. The same processing applies to any image data containing three or more color differences (light/dark differences).
- the black circle point (1, 1) in Fig. 6 (a) is the starting point, and the first traveling direction is the upward direction.
- the colors of the pixels ahead on the left and right are stored in memory as the "left color" and "right color", respectively.
- pixels having a difference in density are tracked to recognize boundaries and their connections; the boundary is found accurately regardless of whether it runs horizontally or vertically, and since tracking proceeds while judging the direction, the boundary is found reliably.
- in the contour tracking process, tracking always proceeds while judging the density differences of the four surrounding pixels, so no separate scan with a window function is necessary. Dividing-point extraction and boundary extraction are therefore performed at the same time, and the processing speed is improved.
- the boundary line is extracted by tracking the boundary side of the pixel, the outline of the original image can be accurately maintained and reproduced. Therefore, even if processing such as image enlargement is performed, the contour line is along the boundary of the pixel, and there is an advantage that a fine image is maintained.
- FIG. 7 shows the processing of the contour line segment joining/branching processing unit 230, that is, a specific example of the method of handling branch points in the contour line segments obtained by tracking pixel boundaries and of joining those segments.
- one element of the branch list is extracted and traced (S706). If the end point is already in the branch list (S707), it is deleted from the branch list (S708). If the end point is not in the branch list (S709), …
- in order to describe the processing procedure of FIG. 7 more specifically, the contour tracking of a figure is described below using the example of FIG. 8A, together with FIG. 8B and FIG. 9, which show the procedure of the region contour forming process described later; the process from figure contour tracking to region contour formation is described as one continuous series of processes.
- this example is intended for three-color image data with two-color areas of blue and red and a white background.
- the number of branches is 0 when the contour line of an independent region is tracked without touching the contour of another region.
- untracked branch list {A, B, C}
- tracking end point E is in the untracked branch list, so it is deleted from the list.
- untracked branch list {F}
- tracked branch set {A, D, B, E, C, F}
- tracking end point F is in the untracked branch list, so it is deleted from the list.
- region X {AD}
- tracked branch set {B, C, D, E, F}
- focus on branch A
- region X {AD}
- tracked branch set {B, C, D, F}
- focus on branch E
- from the focused branch E, focus on branch A, the branch closest to the end of the tracked contour in the counterclockwise direction.
- Branch F is the end branch of the contour traced from B, which is the closest counterclockwise.
- the first registered area X represents the outside of the figure and is discarded.
- the contour information 250 generated in this way completely solves the problem of the conventional method, in which gaps open between regions during subsequent processing such as enlargement, reduction, and movement of the figure.
- contour line segments are extracted from a digital image by contour tracking that follows the boundary edges between pixels keeping the same color relationship as the pixels on either side of the starting direction of travel, or between pixels whose color difference is at least a predetermined value. Tracking is stopped at any point where three or more colors meet, and a contour line is taken for each boundary between two colors, so that two adjacent regions share the contour line data. Because the data is shared, the two regions are no longer approximated by different functions, preventing reductions in data processing efficiency and storage efficiency.
- the contour line is extracted by tracking the boundary side of the pixel, the contour of the original image can be accurately maintained and reproduced. Therefore, even when processing such as image enlargement is performed, the contour line is along the boundary of the pixels, and there is an advantage that a fine image is maintained.
- the contour line segment is extracted with the start point, end point, and branch point of the contour line segment as end points, and the connection state of the contour line segment is traced via the branch point to form a closed loop.
- the outermost contour may coincide with the contour of an inner region; in this case, the inner periphery of the contour is adopted. As a result, each region is the portion surrounded by exactly one contour, never by two.
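The loop-stitching step of region formation can be sketched abstractly as follows. All names are assumptions, and the `ccw_next` successor map (which segment follows which around each branch point) is assumed to have been built from the counterclockwise geometry during contour tracking, as the walkthrough above describes:

```python
# Form regions by stitching contour segments: from each unused segment, jump
# to the next segment counterclockwise around the shared branch point until
# the loop closes. The first loop formed is the outside and is discarded.
def form_regions(segments, ccw_next):
    regions, used = [], set()
    for seg in segments:
        if seg in used:
            continue
        loop, cur = [], seg
        while cur not in used:      # stitch segments into one closed loop
            used.add(cur)
            loop.append(cur)
            cur = ccw_next[cur]     # closest counterclockwise segment
        regions.append(loop)
    return regions[1:]              # first region is the outside: discard

# Segments A and D close on each other to form one region; "O" stands in
# for the discarded outer contour.
segments = ["O", "A", "D"]
ccw_next = {"O": "O", "A": "D", "D": "A"}
print(form_regions(segments, ccw_next))  # [['A', 'D']]
```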
- in the present invention, the boundary between pixels having a density difference is tracked directly, rather than recognizing isolated boundary points and then connecting them as described in Patent Document 1.
- because the boundary itself is tracked, it can be found accurately regardless of whether it runs horizontally or vertically, and since tracking proceeds while judging the direction, the boundary is found reliably.
- division point extraction and boundary extraction are performed simultaneously, and the processing speed is improved.
- FIG. 1 is an explanatory diagram of an example of contour tracing of a figure based on the present invention.
- FIG. 2 is an explanatory diagram of how to obtain coordinate points, which is the basis of the present invention.
- FIG. 3 is a diagram showing a configuration example of an image processing apparatus that performs graphic processing and image processing, which are objects of the present invention.
- FIG. 4A is a diagram showing a specific configuration example of the contour tracking processing unit in FIG. 3.
- FIG. 4A is a diagram showing a specific configuration example of the contour tracking processing unit in FIG. 3.
- FIG. 4B is a functional block diagram more specifically showing the configuration of FIG. 4A.
- FIG. 5 is a diagram showing a contour tracking algorithm and a pixel pattern in the contour line segment extraction unit in FIG. 4B.
- FIG. 6 is a diagram showing the contour tracking procedure of FIG. 5.
- FIG. 7 is a diagram illustrating a processing procedure of a contour line segment branching processing unit in an embodiment of the present invention.
- FIG. 8A is a diagram for more specifically explaining the procedure of the contour tracking process of FIG.
- FIG. 8B is a diagram for more specifically explaining the procedure of the contour tracking process of FIG.
- FIG. 9 is a diagram for explaining the procedure of region contour formation processing in one embodiment of the present invention.
- FIG. 10 is an explanatory diagram of figure contour tracking in the prior art.
- FIG. 11 is a diagram showing an example of problems in the prior art.
- 100 ... image data input unit, 150 ... graphic data and image data, 200 ... contour tracking processing unit, 210 ... input unit, 211 ... AD converter, 212 ... image data input unit, 213 ... image data two-dimensional array unit, 214 ... image data memory storage function, 216 ... image memory, 218 ... shading difference setting unit, 220 ... contour line tracking unit, 230 ... contour line segment joining/branching processing unit, 240 ... area outline forming unit, 250 ... contour information, 300 ... function approximation processing unit, 350 ... function format and parameters, 400 ... image processing unit, 450 ... processed figures and images, 500 ... output unit.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
- Facsimile Image Signal Circuits (AREA)
- Color Image Communication Systems (AREA)
Abstract
Description
Claims
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004207156A JP2006031245A (ja) | 2004-07-14 | 2004-07-14 | ディジタル画像の輪郭追跡による画像処理方法並びに画像処理装置 |
JP2004-207156 | 2004-07-14 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2006006666A1 true WO2006006666A1 (ja) | 2006-01-19 |
Family
ID=35784002
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2005/013026 WO2006006666A1 (ja) | 2004-07-14 | 2005-07-14 | ディジタル画像の輪郭追跡による画像処理方法並びに画像処理装置 |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP2006031245A (ja) |
WO (1) | WO2006006666A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2528027A (en) * | 2014-05-13 | 2016-01-13 | String Labs Ltd | Perimeter detection |
CN112419357A (zh) * | 2020-11-18 | 2021-02-26 | 方正株式(武汉)科技开发有限公司 | 一种生成图像实体轮廓一笔画路径的方法及系统 |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007139544A1 (en) | 2006-05-31 | 2007-12-06 | Thomson Licensing | Multi-tracking of video objects |
JP5629483B2 (ja) | 2010-03-30 | 2014-11-19 | キヤノン株式会社 | 画像処理方法、画像処理装置、及びプログラム |
JP5566158B2 (ja) | 2010-03-30 | 2014-08-06 | キヤノン株式会社 | 画像処理方法、画像処理装置、及びプログラム |
JP5600524B2 (ja) | 2010-08-27 | 2014-10-01 | キヤノン株式会社 | 画像処理装置、画像処理方法、プログラム、および記憶媒体 |
JP5653141B2 (ja) | 2010-09-01 | 2015-01-14 | キヤノン株式会社 | 画像処理方法、画像処理装置、及び、プログラム |
JP5597096B2 (ja) | 2010-10-18 | 2014-10-01 | キヤノン株式会社 | 画像処理装置、画像処理方法、およびプログラム |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS61208181A (ja) * | 1985-03-12 | 1986-09-16 | Mitsubishi Electric Corp | 輪郭抽出装置 |
JPH06223183A (ja) * | 1993-01-22 | 1994-08-12 | Nec Corp | 輪郭追跡方法 |
JPH10145590A (ja) * | 1996-11-12 | 1998-05-29 | Tsukuba Soft Kenkyusho:Kk | カラー/濃淡画像入力出力装置と入力出力方法 |
-
2004
- 2004-07-14 JP JP2004207156A patent/JP2006031245A/ja not_active Withdrawn
-
2005
- 2005-07-14 WO PCT/JP2005/013026 patent/WO2006006666A1/ja active Application Filing
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS61208181A (ja) * | 1985-03-12 | 1986-09-16 | Mitsubishi Electric Corp | 輪郭抽出装置 |
JPH06223183A (ja) * | 1993-01-22 | 1994-08-12 | Nec Corp | 輪郭追跡方法 |
JPH10145590A (ja) * | 1996-11-12 | 1998-05-29 | Tsukuba Soft Kenkyusho:Kk | カラー/濃淡画像入力出力装置と入力出力方法 |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2528027A (en) * | 2014-05-13 | 2016-01-13 | String Labs Ltd | Perimeter detection |
GB2528027B (en) * | 2014-05-13 | 2016-12-14 | String Labs Ltd | Perimeter detection |
US10275885B2 (en) | 2014-05-13 | 2019-04-30 | String Limited | Perimeter detection |
CN112419357A (zh) * | 2020-11-18 | 2021-02-26 | 方正株式(武汉)科技开发有限公司 | 一种生成图像实体轮廓一笔画路径的方法及系统 |
CN112419357B (zh) * | 2020-11-18 | 2023-06-30 | 方正株式(武汉)科技开发有限公司 | 一种生成图像实体轮廓一笔画路径的方法及系统 |
Also Published As
Publication number | Publication date |
---|---|
JP2006031245A (ja) | 2006-02-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6522329B1 (en) | Image processing device and method for producing animated image data | |
US7149355B2 (en) | Image processing apparatus, image processing method, image processing program, and computer-readable record medium storing image processing program | |
JP5958023B2 (ja) | 画像処理装置および画像処理プログラム | |
US20050008254A1 (en) | Image generation from plurality of images | |
JP4568459B2 (ja) | 画像処理装置及び記録媒体 | |
KR101000421B1 (ko) | 드롭 아웃 컬러 처리방법 및 그 장치 | |
US7526137B2 (en) | Image processing apparatus, image processing method, image processing program, and storage medium | |
CN102053804A (zh) | 图像处理装置及控制方法 | |
CN112308773B (zh) | 一种无人机航摄影像无损放大和拼接融合方法 | |
WO2006006666A1 (ja) | ディジタル画像の輪郭追跡による画像処理方法並びに画像処理装置 | |
JP2004102819A (ja) | 画像処理方法およびその装置 | |
JP3952188B2 (ja) | 画像補間装置、画像補間方法および画像補間プログラム | |
JP2009272665A (ja) | 画像処理装置、画像処理方法およびプログラム | |
JP5395722B2 (ja) | 線画処理装置、線画処理方法およびプログラム | |
JPS60163164A (ja) | 画像の塗りつぶし装置 | |
JP5023205B2 (ja) | ドロップアウトカラー処理装置およびこれを用いたドロップアウトカラー処理方法 | |
US8553294B2 (en) | Outlining method for properly representing curved line and straight line, and image compression method using the same | |
JP2007088912A (ja) | 画像処理装置およびプログラム | |
JP2005309870A (ja) | 画像処理装置 | |
JP2559359B2 (ja) | 画像の構造記憶方法及び画像登録装置 | |
JP3669008B2 (ja) | 画像処理装置および画像処理方法 | |
JPH03225477A (ja) | 画像処理装置 | |
JP2004207923A (ja) | エッジ生成装置、エッジ生成方法およびエッジ生成プログラム | |
JP5898438B2 (ja) | 画像処理装置及び画像処理方法並びにプログラム | |
CN119359559A (zh) | 一种基于语义感知的可见光与红外图像融合方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWW | Wipo information: withdrawn in national office |
Country of ref document: DE |
|
122 | Ep: pct application non-entry in european phase |