WO2008044365A1 - Medical image processing device and medical image processing method - Google Patents
Medical image processing device and medical image processing method
- Publication number
- WO2008044365A1 (PCT/JP2007/061628)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- dimensional model
- boundary line
- image boundary
- correction
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/507—Depth or shape recovery from shading
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10068—Endoscopic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30028—Colon; Small intestine
- G06T2207/30032—Colon polyp
Definitions
- The present invention relates to a medical image processing apparatus and a medical image processing method, and in particular to a medical image processing apparatus and a medical image processing method for estimating 3D model data of living tissue based on 2D image data of an image of the living tissue.
- Conventionally, in the medical field, observation using imaging devices such as X-ray diagnostic apparatuses, CT, MRI, ultrasonic observation apparatuses, and endoscope apparatuses has been widely performed. Among such imaging devices, an endoscope apparatus has, for example, an insertion portion that can be inserted into a body cavity; an image of the inside of the body cavity, formed by an objective optical system disposed at the distal end of the insertion portion, is captured by imaging means such as a solid-state image sensor and output as an imaging signal, and an image of the inside of the body cavity is displayed on display means such as a monitor based on the imaging signal.
- The user then observes, for example, organs in the body cavity based on the image displayed on the display means such as a monitor.
- In addition, the endoscope apparatus can directly capture an image of the digestive tract mucosa. The user can therefore comprehensively observe, for example, the color of the mucous membrane, the shape of a lesion, and the fine structure of the mucosal surface.
- Furthermore, by using an image processing method capable of detecting a predetermined image in which a lesion having a local raised shape exists, for example the image processing method described in Japanese Patent Application Laid-Open No. 2005-192880 (Patent Document 1), the endoscope apparatus can also detect an image including a lesion site such as a polyp.
- The image processing method described in Patent Document 1 extracts the contour of an input image and can detect a lesion having a local raised shape in the image based on the shape of the contour.
- Conventionally, in colon polyp detection processing, 3D data is estimated from a 2D image and colon polyps are detected using 3D feature values (Shape Index / Curvedness) (IEICE Technical Report MI2003-102, "Examination of a method for automatic detection of colon polyps from 3D abdominal CT images based on shape information," Kimura, Hayashi, Kitasaka, Mori, Suenaga, pp. 29-34, 2004). These 3D feature values are obtained by calculating partial differential coefficients at a reference point from the 3D data and using those coefficients. The colon polyp detection processing then detects polyp candidates by thresholding the 3D feature values.
- However, the “Shape From Shading” method, which has conventionally been used to estimate 3D data, has the problem that its accuracy deteriorates in portions that are imaged as edges in the 2D image.
- Specifically, when a 3D object such as a polyp is imaged with an endoscope, an edge region occurs as a boundary at the border between curved surfaces, or between the visible range and the occluded (hidden or invisible) range in the 3D image. The “Shape From Shading” method, which estimates 3D data from a 2D image, estimates 3D positions based on the pixel values of the 2D image. Because the pixel values of such an edge region are lower than those of the adjacent regions, the “Shape From Shading” method calculates estimated 3D positions as if a “groove (concavity)” existed in the edge region.
- For example, when the coordinates of 3D data points are estimated by the “Shape From Shading” method from a colonoscopy image capturing a polyp 500 as shown in FIG. 26, and a 3D surface is generated from those data points, ideally a 3D image as shown in FIG. 27 should be generated. In practice, however, the cross-section obtained when the 3D surface actually produced by the “Shape From Shading” method is cut at the plane x = x0 (the dotted-line position in FIG. 26) is estimated as if a “groove (concavity)” existed at the edge-region portion of the 2D image data, as shown in FIG. 28. Since no such “groove (concavity)” exists in the intestinal tract, the conventional “Shape From Shading” estimation produces a nonexistent “groove (concavity)” in the 3D data; the accuracy of 3D image estimation therefore deteriorates, and conventional colon polyp detection processing that relies on 3D image estimation suffers from reduced detection accuracy and false detections.
- The present invention has been made in view of the above circumstances, and an object of the present invention is to provide a medical image processing apparatus and a medical image processing method that can accurately construct a 3D image from a 2D image and thereby improve the detection accuracy when detecting a lesion having a local raised shape.
- A medical image processing apparatus of the present invention comprises: three-dimensional model estimation means for estimating a three-dimensional model of living tissue from a two-dimensional image of the living tissue in a body cavity input from a medical imaging apparatus; image boundary line detection means for detecting an image boundary line of an image region constituting the two-dimensional image; line width calculation means for calculating the line width of the image boundary line; and correction means for correcting, based on the line width, the result of estimation of the three-dimensional model at the image boundary line by the three-dimensional model estimation means.
- The medical image processing method of the present invention comprises: a three-dimensional model estimation step of estimating a three-dimensional model of living tissue from a two-dimensional image of the living tissue in a body cavity input from a medical imaging apparatus; an image boundary line detection step of detecting an image boundary line of an image region constituting the two-dimensional image; a line width calculation step of calculating the line width of the image boundary line; and a correction step of correcting, based on the line width, the result of estimation of the three-dimensional model at the image boundary line by the three-dimensional model estimation step.
- FIG. 1 is a diagram showing an example of the overall configuration of an endoscope system in which a medical image processing apparatus according to Embodiment 1 of the present invention is used.
- FIG. 2 is a functional block diagram showing the functional configuration of the CPU in FIG. 1.
- FIG. 3 is a diagram showing the storage information configuration of the hard disk in FIG. 1.
- FIG. 4 is a flowchart showing the processing flow of the CPU of FIG. 2.
- FIG. 5 is a flowchart showing the flow of correction processing of the three-dimensional model data of FIG. 4.
- FIG. 6 is a first diagram illustrating the processing of FIG. 1.
- FIG. 7 is a second diagram illustrating the processing of FIG. 1.
- FIG. 8 is a third diagram illustrating the processing of FIG. 1.
- FIG. 9 is a fourth diagram illustrating the processing of FIG. 1.
- FIG. 10 is a flowchart showing the flow of correction processing of 3D point sequence data between edge end points as the edge region of FIG. 5.
- FIG. 11 is a fifth diagram illustrating the processing of FIG. 1.
- FIG. 12 is a sixth diagram illustrating the processing of FIG. 1.
- FIG. 13 is a flowchart showing the processing flow of the first modified example of FIG. 10.
- FIG. 14 is a flowchart showing the processing flow of the second modified example of FIG. 10.
- FIG. 15 is a first diagram illustrating the processing of FIG. 14.
- FIG. 16 is a second diagram illustrating the processing of FIG. 14.
- FIG. 17 is a diagram showing a storage information configuration of a hard disk according to Embodiment 2 of the present invention.
- FIG. 18 is a first diagram illustrating Embodiment 2.
- FIG. 19 is a second diagram illustrating Embodiment 2.
- FIG. 20 is a third diagram illustrating Embodiment 2.
- FIG. 21 is a flowchart showing the flow of processing of a CPU according to the second embodiment.
- FIG. 22 is a flowchart showing the flow of correction processing of the three-dimensional model data of FIG. 21.
- FIG. 23 is a first diagram illustrating the processing of FIG. 22.
- FIG. 24 is a second diagram illustrating the processing of FIG. 22.
- FIG. 25 is a third diagram illustrating the processing of FIG. 22.
- FIG. 26 is a first diagram illustrating a problem of the conventional example.
- FIG. 27 is a second diagram for explaining the problem of the conventional example.
- FIG. 28 is a third diagram for explaining the problem of the conventional example.
- FIG. 1 is a diagram illustrating an example of an overall configuration of an endoscope system in which a medical image processing apparatus is used.
- FIG. 2 is a functional block diagram showing the functional configuration of the CPU of FIG. 1.
- FIG. 3 is a diagram showing the storage information configuration of the hard disk of FIG. 1.
- FIG. 4 is a flowchart showing the processing flow of the CPU of FIG. 2.
- FIG. 5 is a flowchart showing the flow of correction processing of the three-dimensional model data of FIG. 4.
- FIG. 6 is a first diagram illustrating the processing of FIG. 1.
- FIG. 7 is a second diagram illustrating the processing of FIG. 1.
- FIG. 8 is a third diagram illustrating the processing of FIG. 1.
- FIG. 9 is a fourth diagram illustrating the processing of FIG. 1.
- FIG. 10 is a flowchart showing the flow of correction processing of 3D point sequence data between edge end points as the edge region of FIG. 5.
- FIG. 11 is a fifth diagram illustrating the processing of FIG. 1.
- FIG. 12 is a sixth diagram illustrating the processing of FIG. 1.
- FIG. 13 is a flowchart showing the processing flow of the first modified example of FIG. 10.
- FIG. 14 is a flowchart showing the processing flow of the second modified example of FIG. 10.
- FIG. 15 is a first diagram illustrating the processing of FIG. 14.
- FIG. 16 is a second diagram illustrating the processing of FIG. 14.
- As shown in FIG. 1, the endoscope system 1 of this embodiment mainly comprises a medical observation apparatus 2, a medical image processing apparatus 3, and a monitor 4.
- the medical observation apparatus 2 is an observation apparatus that captures an image of a subject and outputs a two-dimensional image of the image of the subject.
- The medical image processing apparatus 3 is configured by a personal computer or the like; it is an image processing apparatus that performs image processing on the video signal of the two-dimensional image output from the medical observation apparatus 2 and outputs the processed video signal as an image signal.
- the monitor 4 is a display device that displays an image based on the image signal output from the medical image processing device 3.
- The medical observation apparatus 2 mainly comprises an endoscope 6, a light source device 7, a camera control unit (hereinafter abbreviated as CCU) 8, and a monitor 9.
- the endoscope 6 is inserted into a body cavity of a subject, images a subject such as a living tissue existing in the body cavity, and outputs it as an imaging signal.
- the light source device 7 supplies illumination light for illuminating a subject imaged by the endoscope 6.
- the CCU 8 performs various controls on the endoscope 6 and performs signal processing on the imaging signal output from the endoscope 6 to output it as a video signal of a two-dimensional image.
- the monitor 9 displays an image of the subject imaged by the endoscope 6 based on the video signal of the two-dimensional image output from the CCU 8.
- The endoscope 6 comprises an insertion portion 11 to be inserted into the body cavity and an operation portion 12 provided on the proximal end side of the insertion portion 11. A light guide 13 for transmitting the illumination light supplied from the light source device 7 runs through the insertion portion 11 from its proximal end side to the distal end portion 14 on its distal end side.
- the light guide 13 has a distal end side disposed at the distal end portion 14 of the endoscope 6 and a rear end side connected to the light source device 7.
- With this configuration, the illumination light supplied from the light source device 7 is transmitted by the light guide 13 and then emitted from an illumination window (not shown) provided on the distal end surface of the distal end portion 14 of the insertion portion 11, thereby illuminating living tissue or the like as the subject.
- The distal end portion 14 of the endoscope 6 is provided with an imaging unit 17 that comprises an objective optical system 15 attached to an observation window (not shown) adjacent to the illumination window (not shown), and an image sensor 16, constituted by a CCD (charge-coupled device) or the like, arranged at the imaging position of the objective optical system 15. With this configuration, the subject image formed by the objective optical system 15 is captured by the image sensor 16 and output as an imaging signal.
- the image sensor 16 is not limited to a CCD, and may be composed of, for example, a C-MOS sensor.
- the image sensor 16 is connected to the CCU 8 via a signal line.
- The image sensor 16 is driven based on the drive signal output from the CCU 8 and outputs, to the CCU 8, an imaging signal corresponding to the captured subject image.
- the imaging signal input to the CCU 8 is converted and output as a video signal of a two-dimensional image by performing signal processing in a signal processing circuit (not shown) provided in the CCU 8.
- the video signal of the two-dimensional image output from the CCU 8 is output to the monitor 9 and the medical image processing device 3.
- the monitor 9 displays the subject image based on the video signal output from the CCU 8 as a two-dimensional image.
- The medical image processing apparatus 3 comprises: an image input unit 21 that performs A/D conversion on the video signal of the two-dimensional image output from the medical observation apparatus 2 and outputs the converted signal; a CPU 22 serving as a central processing unit that performs image processing on the video signal output from the image input unit 21; a processing program storage unit 23 in which processing programs related to the image processing are written; an image storage unit 24 that stores the video signal and the like output from the image input unit 21; and an analysis information storage unit 25 that stores calculation results and the like obtained in the image processing performed by the CPU 22.
- The medical image processing apparatus 3 further comprises: a storage device interface (I/F) 26; a hard disk 27 serving as a storage device that stores, via the storage device I/F 26, image data resulting from the image processing of the CPU 22 and various data used by the CPU 22 for the image processing; a display processing unit 28 that performs display processing for displaying the image data resulting from the image processing of the CPU 22 on the monitor 4 and outputs the processed image data as an image signal; and an input operation unit 29, composed of a keyboard, a pointing device such as a mouse, or the like, with which the user can input parameters for the image processing performed by the CPU 22 and operation instructions for the medical image processing apparatus 3.
- the monitor 4 displays an image based on the image signal output from the display processing unit 28.
- The image input unit 21, CPU 22, processing program storage unit 23, image storage unit 24, analysis information storage unit 25, storage device interface 26, display processing unit 28, and input operation unit 29 of the medical image processing apparatus 3 are connected to one another via a data bus 30.
- As shown in FIG. 2, the CPU 22 comprises the following functional units: a 3D model estimation unit 22a serving as 3D model estimation means, a detection target region setting unit 22b, a shape feature amount calculation unit 22c, a model correction unit 22d serving as line width calculation means and correction means, a 3D shape detection unit 22e, and a polyp determination unit 22f.
- these functional units are realized by software executed by the CPU 22.
- the hard disk 27 has a plurality of storage areas for storing various data generated by each process performed by the CPU 22. Specifically, the hard disk 27 has an edge image storage area 27a, an edge thinned image storage area 27b, a 3D point sequence data storage area 27c, a correspondence table storage area 27d, and a detected lesion part storage area 27e. Details of the data stored in each of these storage areas will be described later.
- Next, the operation of the medical image processing apparatus 3 configured as described above will be described using the flowcharts of FIGS. 4, 5, and 10, with reference also to FIGS. 6 to 9 and the related figures.
- First, after turning on the power of each part of the endoscope system 1, the user inserts the insertion portion 11 of the endoscope 6 into the body cavity of the subject, for example into the large intestine.
- When the insertion portion 11 is inserted into the large intestine of the subject, a subject such as living tissue existing in the large intestine, for example the polyp 500, is imaged by the imaging unit 17 provided at the distal end portion 14. The subject image captured by the imaging unit 17 is then output to the CCU 8 as an imaging signal.
- The CCU 8 performs signal processing on the imaging signal output from the image sensor 16 of the imaging unit 17 in a signal processing circuit (not shown), thereby converting the imaging signal into a video signal of a two-dimensional image and outputting it. Based on the video signal output from the CCU 8, the monitor 9 displays the subject image captured by the imaging unit 17 as a two-dimensional image. The CCU 8 also outputs the video signal of the two-dimensional image, obtained by the signal processing of the imaging signal output from the image sensor 16 of the imaging unit 17, to the medical image processing apparatus 3.
- The video signal of the two-dimensional image output to the medical image processing apparatus 3 is input to the image input unit 21, where it is A/D converted before being processed by the CPU 22.
- In step S1, the 3D model estimation unit 22a of the CPU 22 applies, for example, the “Shape From Shading” method to the 2D image output from the image input unit 21, and estimates the 3D model corresponding to the 2D image by performing processing such as geometric conversion based on the luminance information of the 2D image.
- The 3D model estimation unit 22a also generates a correspondence table in which the 3D data point sequence representing the intestinal surface of the large intestine is associated with the 2D image data, and stores the correspondence table in the correspondence table storage area 27d of the hard disk 27.
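- The data structure of this correspondence table is not specified in the text; the following is a minimal sketch assuming the estimated model is available as a per-pixel depth map, with the table implemented simply as a dictionary keyed by 2D pixel coordinates.

```python
import numpy as np

def build_correspondence_table(depth_map):
    """Illustrative sketch of the correspondence table of step S1: map each 2D
    pixel coordinate (x, y) to the 3D data point estimated for it. The depth-map
    representation and dictionary layout are assumptions, not the patent's own
    data structure."""
    table = {}
    height, width = depth_map.shape
    for y in range(height):
        for x in range(width):
            # One 3D data point per 2D pixel of the estimated model.
            table[(x, y)] = (float(x), float(y), float(depth_map[y, x]))
    return table
```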
- Next, in step S2, the model correction unit 22d of the CPU 22 performs the model correction processing described later on the estimated 3D model, and stores the coordinates of each data point of the corrected 3D model in the 3D point sequence data storage area 27c of the hard disk 27 via the storage device I/F 26.
- In step S3, based on the color tone of the two-dimensional image output from the image input unit 21 and on the three-dimensional model corrected by the processing of step S2, the detection target region setting unit 22b of the CPU 22 sets a target region, i.e. the region to which the processing for detecting a lesion having a raised shape in the three-dimensional model is applied.
- Specifically, the detection target region setting unit 22b of the CPU 22 separates, for example, the two-dimensional image output from the image input unit 21 into an R (red) plane image, a G (green) plane image, and a B (blue) plane image, detects a raised change based on the data of the three-dimensional model estimated from the R image, and detects a color tone change based on the chromaticity of the R image and the G image. Based on these detection results, the detection target region setting unit 22b of the CPU 22 then sets the region in which both the raised change and the color tone change are detected as the target region.
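- A minimal sketch of this target-region logic follows; the chromaticity measure, its threshold value, and the precomputed raised-change mask are illustrative assumptions, since the text does not specify the exact detectors.

```python
import numpy as np

def set_target_region(rgb_image, raised_change_mask, chroma_threshold=0.35):
    """Illustrative sketch of step S3: combine a raised-change mask (derived from
    the 3D model estimated from the R channel) with a colour-tone change mask
    (derived from R/G chromaticity), keeping pixels where both are detected."""
    r = rgb_image[..., 0].astype(np.float64)
    g = rgb_image[..., 1].astype(np.float64)

    # Colour-tone change from R/G chromaticity; the threshold is a placeholder.
    chromaticity = g / (r + g + 1e-6)
    color_change_mask = chromaticity > chroma_threshold

    # Target region = pixels where both a raised change and a colour change occur.
    return raised_change_mask & color_change_mask
```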
- In step S4, the shape feature amount calculation unit 22c of the CPU 22 calculates local partial differential coefficients for the target region. Specifically, for the calculated 3D shape, the shape feature amount calculation unit 22c of the CPU 22 calculates, in a local region (curved surface) containing the 3D position of interest (x, y, z), the first-order partial derivatives fx, fy, fz and the second-order partial derivatives fxx, fyy, fzz, fxy, fyz, fxz of the R pixel value f.
- In step S5, based on these local partial differential coefficients, which serve as shape feature amounts of the 3D shape, the shape feature amount calculation unit 22c of the CPU 22 performs processing to calculate a Shape Index value and a Curvedness value for each data point existing in the processing target region of the 3D model.
- To do so, the shape feature amount calculation unit 22c of the CPU 22 calculates the Gaussian curvature K and the mean curvature H, from which the Shape Index SI and the Curvedness CV, the feature amounts representing the shape of the curved surface, are obtained. The shape feature amount calculation unit 22c of the CPU 22 calculates the Shape Index SI and the Curvedness CV of each three-dimensional curved surface as the three-dimensional shape information and stores them in the analysis information storage unit 25.
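- The expressions for the Shape Index and Curvedness are not reproduced in this text; the following are the standard definitions in terms of the principal curvatures k1 and k2 obtained from the Gaussian curvature K and the mean curvature H, given here only as a hedged reconstruction of what the omitted formulas presumably express.

```latex
k_{1,2} = H \pm \sqrt{H^{2} - K}, \qquad
SI = \frac{1}{2} - \frac{1}{\pi}\arctan\!\left(\frac{k_{1} + k_{2}}{k_{1} - k_{2}}\right), \qquad
CV = \sqrt{\frac{k_{1}^{2} + k_{2}^{2}}{2}}
```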
- The Shape Index value described above indicates the state of concavity or convexity at each data point of the three-dimensional model, and is expressed as a numerical value in the range 0 to 1. Specifically, at each data point of the 3D model, a Shape Index value close to 0 suggests the presence of a concave shape, and a Shape Index value close to 1 suggests the presence of a convex shape. The Curvedness value described above indicates the curvature at each data point of the three-dimensional model: the smaller the Curvedness value, the more gently curved the surface is suggested to be, and the larger the Curvedness value, the more sharply curved the surface is suggested to be.
- In step S6, the 3D shape detection unit 22e of the CPU 22 compares the Shape Index value and the Curvedness value at each data point existing in the target region of the 3D model with preset thresholds, and detects the data points satisfying the thresholds as a data group having a raised shape.
- Specifically, the CPU 22 detects, as the data group having a raised shape, those data points in the processing target region of the three-dimensional model whose Shape Index value is larger than a threshold T1 and whose Curvedness value is larger than a threshold T2, for example.
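- A minimal sketch of this thresholding in step S6 follows; the threshold names and example values are placeholders, not values taken from the text.

```python
import numpy as np

def detect_raised_points(shape_index, curvedness, t1=0.9, t2=0.2):
    """Illustrative sketch of step S6: keep the indices of data points whose
    Shape Index and Curvedness both exceed preset thresholds (placeholder values)."""
    shape_index = np.asarray(shape_index)
    curvedness = np.asarray(curvedness)
    return np.flatnonzero((shape_index > t1) & (curvedness > t2))
```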
- In step S7, the polyp determination unit 22f of the CPU 22 performs raised-shape discrimination processing to discriminate whether each of the data points detected in the three-dimensional model as the data group having a raised shape corresponds to a raised shape derived from a lesion such as a polyp.
- In step S8, the polyp determination unit 22f of the CPU 22 determines a region having a data group consisting of data points corresponding to a raised shape derived from a lesion to be a polyp region, and thereby detects the polyp as the lesion region.
- The CPU 22 stores the detection result, in association with the endoscopic image being processed, in the detected lesion storage area 27e of the hard disk 27 shown in FIG. 3, and the result is, for example, displayed on the monitor 4 side by side with that endoscopic image via the display processing unit 28.
- the monitor 4 displays an image of a three-dimensional model of the subject so that the user can easily recognize the position where the raised shape derived from a lesion such as a polyp exists.
- Next, the model correction processing performed by the model correction unit 22d in step S2 will be described.
- In step S21, the model correction unit 22d of the CPU 22 performs edge extraction processing on the input 2D image to generate an edge image, and stores it in the edge image storage area 27a of the hard disk 27.
- Next, in step S22, the model correction unit 22d performs thinning processing on the generated edge image to generate an edge-thinned image, and stores it in the edge thinned image storage area 27b of the hard disk 27.
- the edge thinned image is an image obtained by thinning each edge region of the edge image to a line width of 1 pixel by thinning processing.
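- A minimal sketch of steps S21 and S22 follows; the text does not name the edge detector or the thinning algorithm, so a Sobel-magnitude threshold and skeletonization from scikit-image are used as stand-ins.

```python
from skimage import filters, morphology

def make_edge_images(gray_2d_image, edge_threshold=0.1):
    """Illustrative sketch of steps S21-S22: produce an edge image and an edge
    image thinned to 1-pixel-wide lines. The detector and threshold are
    assumptions, not the patent's own choices."""
    edge_image = filters.sobel(gray_2d_image) > edge_threshold    # edges with finite width
    thinned_edge_image = morphology.skeletonize(edge_image)       # 1-pixel-wide edge lines
    return edge_image, thinned_edge_image
```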
- In step S23, the model correction unit 22d of the CPU 22 sets the parameters i and j to their initial values.
- In step S24, the model correction unit 22d of the CPU 22 obtains the i-th edge line Li of the edge-thinned image, and in step S25 obtains the j-th edge point Pi,j (the point of interest) on the edge line Li.
- In step S26, the model correction unit 22d of the CPU 22 generates an edge orthogonal line Hi,j that passes through the j-th edge point Pi,j and is orthogonal to the i-th edge line Li. Subsequently, in step S27, the model correction unit 22d of the CPU 22 obtains from the correspondence table the sequence of 3D data points (xi,j, yi,j, zi,j) corresponding to the points on the 2D image lying on the edge orthogonal line Hi,j. As described above, this correspondence table was stored in the correspondence table storage area 27d of the hard disk 27 by the 3D model estimation unit 22a of the CPU 22 in step S1.
- In step S28, the model correction unit 22d of the CPU 22 determines the edge end points Ai,j and Bi,j (see the enlarged view of FIG. 6) on the edge orthogonal line Hi,j. Specifically, in this embodiment, as shown in FIG. 7, the pixel value at each point on the two-dimensional image is acquired, a point whose pixel value reaches a predetermined threshold is judged to be a point on the edge boundary, and the edge end points Ai,j and Bi,j are obtained by carrying out this judgment outward in each direction from the point on the thinned edge.
- In step S29, the model correction unit 22d of the CPU 22 calculates the distance Di,j between the edge end points Ai,j and Bi,j. Then, in step S30, the model correction unit 22d of the CPU 22 determines whether the distance Di,j between the edge end points Ai,j and Bi,j is smaller than a predetermined value D0 (Di,j < D0).
- If Di,j < D0, the model correction unit 22d of the CPU 22 determines in step S31 that the area between the edge end points Ai,j and Bi,j is a portion expressed as an edge region, and corrects the 3D point sequence data. Details of the correction processing of the 3D point sequence data in step S31 will be described later.
- If, on the other hand, the distance Di,j is equal to or greater than the predetermined value D0, the model correction unit 22d of the CPU 22 determines in step S32 that the area between the edge end points Ai,j and Bi,j is a portion expressed as an occlusion region, deletes the 3D point sequence data between the edge end points Ai,j and Bi,j, and proceeds to step S33. That is, when the distance Di,j between the edge end points Ai,j and Bi,j is equal to or greater than the predetermined value D0 as shown in FIG. 8, the model correction unit 22d deletes the 3D point sequence data between the edge end points Ai,j and Bi,j on the 3D point sequence data line representing the surface and, as shown in FIG. 9, generates a corrected line with that span treated as an occlusion region.
- In step S33, the model correction unit 22d of the CPU 22 determines whether the parameter j is less than the number Nj of all points on the edge line Li. If so, the parameter j is incremented in step S34 and the process returns to step S25, so that steps S25 to S34 are repeated for all points on the edge line Li.
- Similarly, in step S35, the model correction unit 22d of the CPU 22 determines whether the parameter i is less than the number Ni of all edge lines. If so, the parameter i is incremented in step S36 and the process returns to step S24, so that steps S24 to S36 are repeated for all edge lines.
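- As a rough illustration of the per-edge-point decision in steps S26 to S32, the sketch below walks outward from a thinned-edge point along the edge-orthogonal line to find the edge end points, measures the edge width, and classifies the span as an edge region (to be smoothed) or an occlusion region (to be deleted). The scan rule, the threshold values, and the `pixel_values` callable are illustrative assumptions; the actual smoothing or deletion of the corresponding 3D point sequence via the correspondence table is left to the caller.

```python
import numpy as np

def classify_edge_point(p, n, pixel_values, edge_threshold=0.2, d0=10.0, max_steps=50):
    """Illustrative sketch of steps S26-S32 for one point on a thinned edge line.
    `p` is the (x, y) edge point, `n` a unit vector orthogonal to the edge line,
    and `pixel_values(x, y)` returns the 2D pixel value. Returns the edge end
    points A and B, the edge width D, and whether the span should be smoothed
    (edge region, D < D0) or deleted (occlusion region)."""

    def walk(direction):
        # Step outwards along the edge-orthogonal line until the pixel value
        # rises back above the threshold; that point is taken as an edge end point.
        pos = np.asarray(p, dtype=float)
        for _ in range(max_steps):
            pos = pos + direction
            if pixel_values(int(round(pos[0])), int(round(pos[1]))) > edge_threshold:
                break
        return pos

    n = np.asarray(n, dtype=float)
    a, b = walk(-n), walk(+n)
    d = float(np.linalg.norm(a - b))            # edge line width D_{i,j}
    action = "smooth" if d < d0 else "delete"   # edge region vs. occlusion region
    return a, b, d, action
```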
- Next, the correction processing of the 3D point sequence data in step S31 will be described.
- In step S41, within the 3D space containing the 3D point sequence data, the model correction unit 22d of the CPU 22 forms an N × N × N cubic region centered on the 3D point corresponding to the edge point of interest Pi,j, and calculates the average of the coordinates of the 3D point sequence data included in this N × N × N cubic region.
- the model correction unit 22d of the CPU 22 smoothes the coordinate data of the three-dimensional point sequence data on the point sequence data line of the edge portion by the calculated average value.
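- A minimal sketch of this N × N × N cube averaging follows, assuming the 3D point sequence is held as an (M, 3) array; the value of N and the handling of empty neighborhoods are assumptions.

```python
import numpy as np

def smooth_edge_point(points_3d, center, n=3):
    """Illustrative sketch of the step S41 averaging: average the coordinates of
    all 3D data points that fall inside an N x N x N cube centred on the 3D point
    corresponding to the edge point of interest, and use that average as the
    smoothed coordinate."""
    points_3d = np.asarray(points_3d, dtype=float)
    center = np.asarray(center, dtype=float)
    half = n / 2.0
    inside = np.all(np.abs(points_3d - center) <= half, axis=1)
    if not inside.any():
        return center                      # nothing nearby: keep the point as-is
    return points_3d[inside].mean(axis=0)  # smoothed coordinate for the edge point
```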
- The correction processing of the 3D point sequence data is not limited to the processing shown in FIG. 10; the 3D point sequence data may also be corrected by the following first and second modifications.
- In the first modification, the model correction unit 22d of the CPU 22 calculates the average value of the pixel values (tone values) of the coordinate points of the two-dimensional image included in a 5 × 5 square region.
- In the second modification, in step S41b, the model correction unit 22d of the CPU 22 obtains, as shown in FIG. 15, an approximate straight line from the 3D point sequence data on the edge end point Ai,j side and an approximate straight line from the 3D point sequence data on the edge end point Bi,j side, and determines their intersection point Qi,j.
- In step S42b, the model correction unit 22d of the CPU 22 compares, for example, the z coordinate of each point in the edge region with the z coordinate of the intersection point Qi,j, and determines, according to the magnitude relationship, which approximate straight line is used for the correction. Specifically, as shown in FIG. 16, when the y coordinate of a point in the edge region is smaller than the y coordinate of the intersection point Qi,j, the approximate straight line fitted using the coordinate values smaller than the y coordinate of Qi,j (in FIG. 16, the approximate straight line on the edge end point Ai,j side) is used; when the y coordinate of a point in the edge region is larger than the y coordinate of the intersection point Qi,j, the approximate straight line fitted using the coordinate values larger than the y coordinate of Qi,j (in FIG. 16, the approximate straight line on the edge end point Bi,j side) is used.
- Further, by correcting the threshold value using the position (z coordinate) of the point of interest in the three-dimensional data, a threshold that excludes the influence of secondary light can be used in the polyp detection processing, and the detection accuracy of polyp candidates can be improved. This in turn helps the user improve the polyp candidate discovery rate in colonoscopy.
- FIGS. 17 to 25 relate to the second embodiment of the present invention.
- FIG. 17 is a diagram showing a configuration of information stored in the hard disk.
- FIG. 18 is a first diagram illustrating the second embodiment.
- FIG. 19 is a second diagram illustrating the second embodiment.
- FIG. 20 is a third diagram for explaining the second embodiment.
- FIG. 21 is a flowchart illustrating the processing flow of the CPU according to the second embodiment.
- FIG. 22 is a flowchart showing the flow of the correction processing of the 3D model data of FIG. 21.
- FIG. 23 is a first diagram illustrating the processing of FIG. 22.
- FIG. 24 is a second diagram illustrating the processing of FIG. 22.
- FIG. 25 is a third diagram illustrating the processing of FIG. 22.
- In the second embodiment, as shown in FIG. 17, the hard disk 27 has, in addition to the edge image storage area 27a, the edge thinned image storage area 27b, the 3D point sequence data storage area 27c, the correspondence table storage area 27d, and the detected lesion storage area 27e, a morphology parameter map storage area 27f.
- Other configurations are the same as those in the first embodiment.
- This morphological transformation is a transformation that outputs the center of a sphere rolled over the three-dimensional surface: dilation processing rolls the sphere over the intestinal surface, while erosion processing rolls the sphere along the back of the intestinal surface.
- The diameter of the sphere used in the morphological transformation is determined by the size of the noise to be removed. For example, as shown in FIG. 18, when a sphere 300a with a diameter of 5 is rolled over a surface containing noise with a width of 1 pixel, the locus of the sphere's center is a straight line, so the noise can be filled in by rolling a sphere of the same size along the back side of the surface formed by that locus.
- Ordinary noise is small, but the size of the spike-like noise that appears at an edge depends on the thickness of the edge. Therefore, at positions where an edge exists in the 2D image, it is necessary to change the diameter of the sphere used for smoothing, that is, the smoothing parameter.
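- The following sketch approximates the dilation-then-erosion (closing) noise removal on the estimated surface, treated here as a height map; a flat square footprint of side `w` stands in for the rolling sphere of diameter W, and the orientation convention (which direction counts as the groove to be filled) is an assumption.

```python
import numpy as np
from scipy import ndimage

def close_groove(height_map, w):
    """Illustrative sketch of the step S10 noise removal: grayscale dilation
    followed by erosion (a closing) fills narrow groove-like depressions in the
    estimated surface. `w` plays the role of the sphere diameter chosen from the
    edge line width; a flat square footprint replaces the true sphere."""
    w = max(int(w), 1)
    dilated = ndimage.grey_dilation(height_map, size=(w, w))  # roll the "sphere" over the surface
    closed = ndimage.grey_erosion(dilated, size=(w, w))       # roll it back along the underside
    return closed
```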
- The model correction unit 22d of the CPU 22 executes, in step S10, morphological processing using the morphology parameter W after the model correction processing of step S2 described in the first embodiment, and then proceeds to step S3 described in the first embodiment.
- As shown in FIG. 22, in steps S100 and S101 the model correction unit 22d of the CPU 22 divides the three-dimensional space shown in FIG. 23 into cubic small regions at regular intervals and creates a map that can store a morphology parameter for each small region, and a correspondence table between the edge line width Di,j shown in FIG. 24 and the morphological transformation parameter Wi,j shown in FIG. 25 is stored in the morphology parameter map storage area 27f of the hard disk 27.
- Specifically, in step S100, the model correction unit 22d selects one edge of the edge-thinned image as the processing target and calculates its edge line width Di,j. Subsequently, the model correction unit 22d obtains the morphological transformation parameter W using the correspondence table between the edge line width Di,j and the morphological transformation parameter Wi,j shown in FIG. 25.
- In step S101, the model correction unit 22d acquires, from the correspondence table recorded on the hard disk 27, the three-dimensional coordinates corresponding to the point on the edge line processed in step S100, and obtains the coordinate position of the 3D small-region map corresponding to the acquired three-dimensional coordinates. The model correction unit 22d then adds the morphological transformation parameter W to the value stored at that coordinate position of the 3D small-region map and increments the count value at that coordinate position by one.
- Here, the edge line width Di,j indicates the width of the corresponding edge in the edge image when a line orthogonal to the edge line is drawn at a point on the thinned edge line of the edge-thinned image. The morphological transformation parameter Wi,j for that edge indicates the diameter of the sphere (see FIGS. 18 to 20).
- Then, in step S10, the model correction unit 22d determines the morphological transformation parameter W to use at each position based on its three-dimensional coordinates, and executes noise removal processing by performing in succession the dilation processing and erosion processing that constitute the morphological transformation processing.
- The morphological transformation parameter W at each coordinate position of the 3D small-region map is averaged by dividing it by the count value at that coordinate position. Through this processing, the optimum smoothing parameter can be obtained by referring to the 3D small-region map on the basis of the 3D coordinate position.
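- A minimal sketch of steps S100-S101 and the final averaging of the morphology parameter map follows, assuming the 3D space is binned into a regular grid; the grid origin, cell size, and shape are illustrative parameters not specified by the text.

```python
import numpy as np

def build_parameter_map(points_3d, w_values, bounds_min, cell_size, grid_shape):
    """Illustrative sketch: accumulate the morphology parameter W of each
    processed edge point into the cubic small region of a 3D map containing its
    3D coordinate, count the contributions, and average per cell."""
    w_sum = np.zeros(grid_shape)
    count = np.zeros(grid_shape)
    grid_shape = np.asarray(grid_shape)

    for p, w in zip(points_3d, w_values):
        cell = ((np.asarray(p, dtype=float) - bounds_min) // cell_size).astype(int)
        idx = tuple(np.clip(cell, 0, grid_shape - 1))  # clamp points on the boundary
        w_sum[idx] += w      # accumulate morphology parameter W
        count[idx] += 1      # count contributions at this cell

    with np.errstate(invalid="ignore", divide="ignore"):
        w_map = np.where(count > 0, w_sum / count, 0.0)
    return w_map  # averaged smoothing parameter per cubic small region
```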
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Endoscopes (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
An endoscope system is substantially constructed from a medical observation device, a medical image processing device, and a monitor. A CPU (22) of the medical image processing device has function sections that are a three-dimensional model estimation section (22a), a target detection region setting section (22b), a shape characteristic amount calculation section (22c) as shape characteristic amount calculation means, a model correction section (22d), a three-dimensional shape detection section (22e), and a polyp decision section (22f).
Description
明 細 書 Specification
医療用画像処理装置及び医療用画像処理方法 Medical image processing apparatus and medical image processing method
技術分野 Technical field
[0001] 本発明は、医療用画像処理装置及び医療用画像処理方法に関し、特に、生体組 織の像の 2次元画像データに基づき、該生体組織の 3次元モデルデータを推定する 医療用画像処理装置及び医療用画像処理方法に関する。 TECHNICAL FIELD [0001] The present invention relates to a medical image processing apparatus and a medical image processing method, and in particular, medical image processing for estimating 3D model data of a living tissue based on 2D image data of an image of a living tissue. The present invention relates to an apparatus and a medical image processing method.
背景技術 Background art
[0002] 従来、医療分野にぉレ、て、 X線診断装置、 CT、 MRI、超音波観測装置及び内視 鏡装置等の画像撮像機器を用いた観察が広く行われている。このような画像撮像機 器のうち、内視鏡装置は、例えば、体腔内に挿入可能な挿入部を有し、該揷入部の 先端部に配置された対物光学系により結像した体腔内の像を固体撮像素子等の撮 像手段により撮像して撮像信号として出力し、該撮像信号に基づいてモニタ等の表 示手段に体腔内の像の画像を表示するという作用及び構成を有する。そして、ユー ザは、モニタ等の表示手段に表示された体腔内の像の画像に基づき、例えば、体腔 内における臓器等の観察を行う。 Conventionally, in the medical field, observation using an image pickup device such as an X-ray diagnostic apparatus, CT, MRI, ultrasonic observation apparatus and endoscope apparatus has been widely performed. Among such imaging devices, the endoscope device has, for example, an insertion part that can be inserted into a body cavity, and the inside of the body cavity imaged by an objective optical system disposed at the distal end of the insertion part. An image is picked up by an image pickup means such as a solid-state image pickup device and output as an image pickup signal, and an image and an image of a body cavity are displayed on a display means such as a monitor based on the image pickup signal. The user then observes, for example, an organ in the body cavity based on the image of the image in the body cavity displayed on the display means such as a monitor.
[0003] また、内視鏡装置は、消化管粘膜の像を直接的に撮像することが可能である。その ため、ユーザは、例えば、粘膜の色調、病変の形状及び粘膜表面の微細な構造等を 総合的に観察することができる。 [0003] In addition, the endoscope apparatus can directly capture an image of the digestive tract mucosa. Therefore, the user can comprehensively observe, for example, the color of the mucous membrane, the shape of the lesion, and the fine structure of the mucosal surface.
[0004] さらに、内視鏡装置は、局所的な隆起形状を有する病変が存在する所定の画像を 検出可能な画像処理方法として、例えば、 日本国特開 2005— 192880号公報(特 許文献 1)等に記載されている画像処理方法を用いることにより、ポリープ等の病変 部位が含まれる画像を検出することもまた可能である。 [0004] Furthermore, as an image processing method capable of detecting a predetermined image in which a lesion having a local raised shape exists, an endoscope apparatus is disclosed in, for example, Japanese Patent Application Laid-Open No. 2005-192880 (Patent Document 1). It is also possible to detect an image including a lesion site such as a polyp by using the image processing method described in).
[0005] この特許文献 1に記載されている画像処理方法は、入力された画像が有する輪郭 を抽出するとともに、該輪郭の形状に基づき、該画像における局所的な隆起形状を 有する病変を検出することができる。 [0005] The image processing method described in Patent Document 1 extracts a contour of an input image and detects a lesion having a local raised shape in the image based on the shape of the contour. be able to.
[0006] また、従来、大腸ポリープ検出処理においては、 2次元画像から 3次元データを推 定し、 3次元特徴量(Shape Index/Curvedness)を用いて大腸ポリープを検出してい
る(電子情報通信学会、信学技報 (MI2003-102) ,形状情報に基づく 3次元腹部 CT 像からの大腸ポリープ自動検出手法に関する検討 木村、林、北坂、森、末長 pp. 29-34, 2004)。この 3次元特徴量は、参照点における偏微分係数を 3次元データ から算出して、偏微分係数を使用して算出することにより実現する。そして、大腸ポリ ープ検出処理では、 3次元特徴量を閾値処理することにより、ポリープ候補を検出す る。 [0006] Conventionally, in colon polyp detection processing, 3D data is estimated from a 2D image, and colon polyps are detected using 3D feature values (Shape Index / Curvedness). (The Institute of Electronics, Information and Communication Engineers, IEICE Technical Report (MI2003-102), Examination of the method for detecting large intestine polyps from 3D abdominal CT images based on shape information Kimura, Hayashi, Kitasaka, Mori, Suenaga pp. 29-34, 2004). This three-dimensional feature is realized by calculating the partial differential coefficient at the reference point from the three-dimensional data and using the partial differential coefficient. In the colon polyp detection process, a polyp candidate is detected by performing threshold processing on the three-dimensional feature value.
[0007] し力 ながら、従来より 3次元データの推定手法として使用される" Shape From Shading"法は、 2次元画像でエッジとして画像化される部分では、精度が悪化する といった問題がある。 However, the “Shape From Shading” method, which has been conventionally used as an estimation method of 3D data, has a problem that accuracy is deteriorated in a portion imaged as an edge in a 2D image.
[0008] 詳細には、ポリープ等の 3次元像を内視鏡にて撮像する場合、 3次元像上の曲面 の境や可視範囲とォクルージョン(可視隠蔽 or不可視)範囲との境に、境界として、ェ ッジ領域が発生する。 2次元画像から 3次元データを推定する" Shape From Sha ding"法は、 2次元画像の画素値を元にして 3次元位置を推定する手法である。この エッジ領域の画素値は、隣接する領域の画素値と比べ低い値となるため、 "Shape From Shading"法では、このエッジ領域に「溝(凹)」が存在するかのごとぐ 3次元 位置の推定値を算出してしまう。 [0008] Specifically, when a 3D image such as a polyp is captured by an endoscope, the boundary between a curved surface boundary or a visible range and an occlusion (visible concealment or invisible) range on the 3D image is used as a boundary. An edge area occurs. The “Shape From Shading” method, which estimates 3D data from 2D images, is a method of estimating 3D positions based on pixel values of 2D images. Since the pixel value of this edge region is lower than the pixel value of the adjacent region, the “Shape From Shading” method uses a 3D position as if a “groove (concave)” exists in this edge region. The estimated value is calculated.
[0009] 例えば、図 26に示すようなポリープ 500を撮像対象とする大腸内視鏡画像に対し て、 "Shape From Shading"法により 3次元データ点の座標を推定し、 3次元デー タ点から 3次元サーフェイスを生成する場合、理想的には、図 27に示すような 3次元 画像を生成すべきである。 [0009] For example, for a colonoscopy image that captures a polyp 500 as shown in Fig. 26, the coordinates of the three-dimensional data points are estimated by the "Shape From Shading" method, and the three-dimensional data points are When generating a 3D surface, ideally a 3D image as shown in Figure 27 should be generated.
[0010] しかし、平面 x=x0 (図 26における点線位置)にて実際に" Shape From Shadin g"法により 3次元サーフヱイスを切り出す時の断面は、図 28に示すように、 2次元画 像データのエッジ領域部分にて、あたかも「溝(凹)」が存在するかのごとく推定されて しまう。腸管においては、このような「溝(凹)」が存在することがないため、従来の" Sh ape From Shading"法による推定では、存在しなレ、「溝(凹)」を 3次元データとし て推定してしまうため、 3次元画像の推定の精度が悪化するといつた問題があり、 3次 元画像推定を利用した従来の大腸ポリープ検出処理は、検出精度の低下及び誤検 出の発生といった課題を有している。
[0011] 本発明は、上記事情に鑑みてなされたものであり、 2次元画像から 3次元画像を精 度よく構築し、局所的な隆起形状を有する病変を検出する場合の検出精度を向上さ せることのできる医療用画像処理装置及び医療用画像処理方法を提供することを目 的としている。 [0010] However, the cross-section when the 3D surface is actually cut out by the "Shape From Shading" method on the plane x = x0 (dotted line position in Fig. 26) is 2D image data as shown in Fig. 28. It is estimated as if “grooves (concaves)” exist in the edge region portion of. In the intestinal tract, such “grooves (concaves)” do not exist. Therefore, in the estimation by the conventional “Shape From Shading” method, the “grooves (concaves)” that do not exist are defined as three-dimensional data. Therefore, there is a problem when the accuracy of 3D image estimation deteriorates, and conventional colon polyp detection processing using 3D image estimation has a problem in that the detection accuracy is reduced and erroneous detection occurs. Has a problem. [0011] The present invention has been made in view of the above circumstances, and accurately constructs a three-dimensional image from a two-dimensional image to improve detection accuracy when detecting a lesion having a local raised shape. It is an object of the present invention to provide a medical image processing apparatus and a medical image processing method that can be performed.
発明の開示 Disclosure of the invention
課題を解決するための手段 Means for solving the problem
[0012] 本発明の医療用画像処理装置は、医療用撮像装置から入力される体腔内の生体 組織の像の 2次元画像から前記生体組織の 3次元モデルを推定する 3次元モデル推 定手段と、前記 2次元画像画像を構成する画像領域の画像境界線を検出する画像 境界線検出手段と、前記画像境界線の線幅を算出する線幅算出手段と、前記線幅 に基づき、前記 3次元モデル推定手段による前記画像境界線の 3次元モデルの推定 結果を補正する補正手段と、とを備えて構成される。 [0012] A medical image processing apparatus of the present invention includes a three-dimensional model estimation unit that estimates a three-dimensional model of a biological tissue from a two-dimensional image of a biological tissue image in a body cavity that is input from the medical imaging apparatus. An image boundary line detecting means for detecting an image boundary line of an image area constituting the two-dimensional image image, a line width calculating means for calculating a line width of the image boundary line, and the three-dimensional based on the line width And correction means for correcting the estimation result of the three-dimensional model of the image boundary line by the model estimation means.
[0013] また、本発明の医療用画像処理方法は、医療用撮像装置から入力される体腔内の 生体組織の像の 2次元画像から前記生体組織の 3次元モデルを推定する 3次元モデ ル推定ステップと、前記 2次元画像画像を構成する画像領域の画像境界線を検出す る画像境界線検出ステップと、前記画像境界線の線幅を算出する線幅算出ステップ と、前記線幅に基づき、前記 3次元モデル推定ステップによる前記画像境界線の 3次 元モデルの推定結果を補正する補正ステップと、とを備えて構成される。 [0013] In addition, the medical image processing method of the present invention provides a three-dimensional model estimation that estimates a three-dimensional model of a living tissue from a two-dimensional image of the image of the living tissue in a body cavity that is input from a medical imaging device. An image boundary line detecting step for detecting an image boundary line of an image area constituting the two-dimensional image image, a line width calculating step for calculating a line width of the image boundary line, and based on the line width, And a correction step of correcting the estimation result of the three-dimensional model of the image boundary line by the three-dimensional model estimation step.
図面の簡単な説明 Brief Description of Drawings
[0014] [図 1]本発明の実施例 1に係る医療用画像処理装置が用いられる内視鏡システムの 全体構成の一例を示す図。 FIG. 1 is a diagram showing an example of the overall configuration of an endoscope system in which a medical image processing apparatus according to Embodiment 1 of the present invention is used.
[図 2]図 1の CPUの機能構成を示す機能ブロック図。 FIG. 2 is a functional block diagram showing the functional configuration of the CPU in FIG.
[図 3]図 1のハードディスクの格納情報構成を示す図。 FIG. 3 is a diagram showing a storage information configuration of the hard disk in FIG.
[図 4]図 2の CPUの処理の流れを示すフローチャート。 FIG. 4 is a flowchart showing the processing flow of the CPU of FIG.
[図 5]図 4の 3次元モデルデータの補正処理の流れを示すフローチャート。 FIG. 5 is a flowchart showing a flow of correction processing of the three-dimensional model data of FIG.
[図 6]図 1の処理を説明する第 1の図。 FIG. 6 is a first diagram illustrating the process of FIG.
[図 7]図 1の処理を説明する第 2の図。 FIG. 7 is a second diagram for explaining the processing in FIG.
[図 8]図 1の処理を説明する第 3の図。
[図 9]図 1の処理を説明する第 4の図。 FIG. 8 is a third diagram illustrating the process of FIG. FIG. 9 is a fourth diagram illustrating the process of FIG.
[図 10]図 5のエッジ領域としてエッジ端点間の 3次元点列データの補正処理の流れを 示すフローチャート。 FIG. 10 is a flowchart showing a flow of correction processing of 3D point sequence data between edge end points as the edge region of FIG.
[図 11]図 1の処理を説明する第 5の図。 FIG. 11 is a fifth diagram illustrating the process of FIG.
[図 12]図 1の処理を説明する第 6の図。 FIG. 12 is a sixth diagram illustrating the process of FIG.
[図 13]図 10の第 1の変形例の処理の流れを示すフローチャート。 FIG. 13 is a flowchart showing a process flow of the first modified example of FIG.
[図 14]図 10の第 2の変形例の処理の流れを示すフローチャート。 FIG. 14 is a flowchart showing a process flow of the second modified example of FIG.
[図 15]図 15の処理を説明する第 1の図。 FIG. 15 is a first diagram illustrating the process of FIG.
[図 16]図 15の処理を説明する第 2の図。 FIG. 16 is a second diagram illustrating the process of FIG.
[図 17]本発明の実施例 2に係るハードディスクの格納情報構成を示す図。 FIG. 17 is a diagram showing a storage information configuration of a hard disk according to Embodiment 2 of the present invention.
[図 18]実施例 2を説明する第 1の図。 FIG. 18 is a first diagram illustrating Example 2.
[図 19]実施例 2を説明する第 2の図。 FIG. 19 is a second diagram illustrating Example 2.
[図 20]実施例 2を説明する第 3の図。 FIG. 20 is a third diagram for explaining the second embodiment.
[図 21]実施例 2に係る CPUの処理の流れを示すフローチャート。 FIG. 21 is a flowchart showing the flow of processing of a CPU according to the second embodiment.
[図 22]図 21の 3次元モデルデータの補正処理の流れを示すフローチャート。 FIG. 22 is a flowchart showing a flow of correction processing of the three-dimensional model data of FIG.
[図 23]図 22の処理を説明する第 1の図。 FIG. 23 is a first diagram illustrating the processing of FIG.
[図 24]図 22の処理を説明する第 2の図。 FIG. 24 is a second diagram illustrating the process of FIG.
[図 25]図 22の処理を説明する第 3の図。 FIG. 25 is a third diagram illustrating the process of FIG.
[図 26]従来例の課題を説明する第 1の図。 FIG. 26 is a first diagram illustrating a problem of the conventional example.
[図 27]従来例の課題を説明する第 2の図。 FIG. 27 is a second diagram for explaining the problem of the conventional example.
[図 28]従来例の課題を説明する第 3の図。 FIG. 28 is a third diagram for explaining the problem of the conventional example.
発明を実施するための最良の形態 BEST MODE FOR CARRYING OUT THE INVENTION
[0015] 以下、図面を参照しながら本発明の実施例について述べる。 Hereinafter, embodiments of the present invention will be described with reference to the drawings.
実施例 1 Example 1
[0016] 図 1ないし図 16は、本発明の実施例 1に係るものである。図 1は、医療用画像処理 装置が用いられる内視鏡システムの全体構成の一例を示す図である。図 2は、図 1の CPUの機能構成を示す機能ブロック図である。図 3は、図 1のハードディスクの格納 情報構成を示す図、図 4は、図 2の CPUの処理の流れを示すフローチャートである。
図 5は、図 4の 3次元モデルデータの補正処理の流れを示すフローチャートである。 図 6は、図 1の処理を説明する第 1の図である。図 7は、図 1の処理を説明する第 2の 図である。図 8は、図 1の処理を説明する第 3の図である。図 9は、図 1の処理を説明 する第 4の図である。図 10は、図 5のエッジ領域としてエッジ端点間の 3次元点列デ ータの補正処理の流れを示すフローチャートである。 1 to 16 relate to the first embodiment of the present invention. FIG. 1 is a diagram illustrating an example of an overall configuration of an endoscope system in which a medical image processing apparatus is used. FIG. 2 is a functional block diagram showing the functional configuration of the CPU of FIG. FIG. 3 is a diagram showing a storage information configuration of the hard disk of FIG. 1, and FIG. 4 is a flowchart showing a processing flow of the CPU of FIG. FIG. 5 is a flowchart showing a flow of correction processing of the three-dimensional model data of FIG. FIG. 6 is a first diagram illustrating the processing of FIG. FIG. 7 is a second diagram for explaining the processing of FIG. FIG. 8 is a third diagram for explaining the processing of FIG. FIG. 9 is a fourth diagram for explaining the processing of FIG. FIG. 10 is a flowchart showing a flow of correction processing of 3D point sequence data between edge end points as the edge region of FIG.
[0017] また、図 11は、図 1の処理を説明する第 5の図である。図 12は、図 1の処理を説明 する第 6の図である。図 13は、図 10の第 1の変形例の処理の流れを示すフローチヤ ートである。図 14は、図 10の第 2の変形例の処理の流れを示すフローチャートである 。図 15は、図 14の処理を説明する第 1の図である。図 16は、図 14の処理を説明する 第 2の図である。 FIG. 11 is a fifth diagram for explaining the processing of FIG. FIG. 12 is a sixth diagram for explaining the processing of FIG. FIG. 13 is a flowchart showing the flow of processing of the first modification of FIG. FIG. 14 is a flowchart showing the flow of processing of the second modified example of FIG. FIG. 15 is a first diagram illustrating the process of FIG. FIG. 16 is a second diagram for explaining the processing of FIG.
[0018] 図 1に示すように、本実施例の内視鏡システム 1は、医療用観察装置 2と、医療用画 像処理装置 3と、モニタ 4とを有して要部が構成されてレ、る。 As shown in FIG. 1, an endoscope system 1 according to the present embodiment includes a medical observation device 2, a medical image processing device 3, and a monitor 4, and the main part is configured. Les.
[0019] 前記医療用観察装置 2は、被写体を撮像するとともに、該被写体の像の 2次元画像 を出力する観察装置である。また、医療用画像処理装置 3は、パーソナルコンビユー タ等により構成され、医療用観察装置 2から出力される 2次元画像の映像信号に対し て画像処理を行うと共に、該画像処理を行った後の映像信号を画像信号として出力 する画像処理装置である。さらにモニタ 4は、医療用画像処理装置 3から出力される 画像信号に基づく画像を表示する表示装置である。 The medical observation apparatus 2 is an observation apparatus that captures an image of a subject and outputs a two-dimensional image of the image of the subject. Further, the medical image processing apparatus 3 is configured by a personal computer or the like, and performs image processing on the video signal of the two-dimensional image output from the medical observation apparatus 2 and after performing the image processing. It is an image processing device that outputs the video signal as an image signal. Furthermore, the monitor 4 is a display device that displays an image based on the image signal output from the medical image processing device 3.
[0020] 前記医療用観察装置 2は、内視鏡 6と、光源装置 7と、カメラコントロールユニット(以 降、 CCUと略記する) 8と、モニタ 9とを有して要部が構成されている。 The medical observation apparatus 2 includes an endoscope 6, a light source device 7, a camera control unit (hereinafter abbreviated as CCU) 8, and a monitor 9. Yes.
[0021] 前記内視鏡 6は、被検体の体腔内に挿入されると共に、該体腔内に存在する生体 組織等の被写体を撮像して撮像信号として出力するものである。前記光源装置 7は、 内視鏡 6により撮像される被写体を照明するための照明光を供給するものである。前 記 CCU8は、内視鏡 6に対する各種制御を行うと共に、内視鏡 6から出力される撮像 信号に対して信号処理を行い、 2次元画像の映像信号として出力するものである。前 記モニタ 9は、 CCU8から出力される 2次元画像の映像信号に基づき、内視鏡 6によ り撮像された被写体の像を画像表示するものである。 The endoscope 6 is inserted into a body cavity of a subject, images a subject such as a living tissue existing in the body cavity, and outputs it as an imaging signal. The light source device 7 supplies illumination light for illuminating a subject imaged by the endoscope 6. The CCU 8 performs various controls on the endoscope 6 and performs signal processing on the imaging signal output from the endoscope 6 to output it as a video signal of a two-dimensional image. The monitor 9 displays an image of the subject imaged by the endoscope 6 based on the video signal of the two-dimensional image output from the CCU 8.
[0022] 前記内視鏡 6は、体腔内に挿入される揷入部 11と、揷入部 11の基端側に設けられ
た操作部 12とを有して構成されている。また、挿入部 11内の基端側から、挿入部 11 内の先端側の先端部 14にかけての部分には、光源装置 7から供給される照明光を 伝送するためのライトガイド 13が挿通されている。 [0022] The endoscope 6 is provided on the insertion portion 11 to be inserted into the body cavity and on the proximal end side of the insertion portion 11. And an operation unit 12. A light guide 13 for transmitting illumination light supplied from the light source device 7 is inserted into a portion from the proximal end side in the insertion portion 11 to the distal end portion 14 on the distal end side in the insertion portion 11. Yes.
[0023] 前記ライトガイド 13は、先端側が内視鏡 6の先端部 14に配置されると共に、後端側 が前記光源装置 7に接続される。 The light guide 13 has a distal end side disposed at the distal end portion 14 of the endoscope 6 and a rear end side connected to the light source device 7.
[0024] ライトガイド 13がこのような構成を有することにより、光源装置 7から供給される照明 光は、ライトガイド 13により伝送された後、揷入部 11の先端部 14の先端面に設けら れた、図示しない照明窓から出射される。そして、図示しない照明窓から照明光が出 射されることにより、被写体としての生体組織等が照明される。 Since the light guide 13 has such a configuration, the illumination light supplied from the light source device 7 is transmitted by the light guide 13 and then provided on the distal end surface of the distal end portion 14 of the insertion portion 11. The light is emitted from an illumination window (not shown). Then, illumination light is emitted from an illumination window (not shown) to illuminate a living tissue or the like as a subject.
[0025] 内視鏡 6の先端部 14には、図示しない照明窓に隣接する図示しない観察窓に取り 付けられた対物光学系 15と、対物光学系 15の結像位置に配置され、例えば、 CCD (電荷結合素子)等により構成される撮像素子 16とを有する撮像部 17が設けられて いる。このような構成により、対物光学系 15により結像された被写体の像は、撮像素 子 16により撮像された後、撮像信号として出力される。なお、撮像素子 16は、 CCD に限らず、例えば C— MOSセンサにより構成してもよい。 [0025] At the distal end portion 14 of the endoscope 6, an objective optical system 15 attached to an observation window (not shown) adjacent to an illumination window (not shown) and an imaging position of the objective optical system 15 are arranged. An imaging unit 17 having an imaging device 16 constituted by a CCD (charge coupled device) or the like is provided. With such a configuration, the subject image formed by the objective optical system 15 is captured by the imaging element 16 and then output as an imaging signal. Note that the image sensor 16 is not limited to a CCD, and may be composed of, for example, a C-MOS sensor.
[0026] 前記撮像素子 16は、信号線を介して CCU8に接続されている。そして、撮像素子 1 6は、 CCU8から出力される駆動信号に基づいて駆動すると共に、 CCU8に対し、撮 像した被写体の像に応じた撮像信号を出力する。 [0026] The image sensor 16 is connected to the CCU 8 via a signal line. The image sensor 16 is driven based on the drive signal output from the CCU 8 and outputs an image signal corresponding to the imaged subject to the CCU 8.
[0027] また、 CCU8に入力された撮像信号は、 CCU8の内部に設けられた図示しない信 号処理回路において信号処理されることにより、 2次元画像の映像信号として変換さ れて出力される。 CCU8から出力された 2次元画像の映像信号は、モニタ 9及び医療 用画像処理装置 3に対して出力される。これにより、モニタ 9は、 CCU8から出力され る映像信号に基づく被写体の像を 2次元画像として表示する。 [0027] The imaging signal input to the CCU 8 is converted and output as a video signal of a two-dimensional image by performing signal processing in a signal processing circuit (not shown) provided in the CCU 8. The video signal of the two-dimensional image output from the CCU 8 is output to the monitor 9 and the medical image processing device 3. As a result, the monitor 9 displays the subject image based on the video signal output from the CCU 8 as a two-dimensional image.
[0028] The medical image processing apparatus 3 comprises an image input unit 21 that applies A/D conversion to the two-dimensional video signal output from the medical observation apparatus 2 and outputs the converted signal, a CPU 22 serving as a central processing unit that performs image processing on the video signal output from the image input unit 21, a processing program storage unit 23 in which processing programs related to the image processing are written, an image storage unit 24 that stores the video signal and other data output from the image input unit 21, and an analysis information storage unit 25 that stores calculation results and other information produced by the image processing performed by the CPU 22.

[0029] The medical image processing apparatus 3 further comprises a storage device interface (I/F) 26; a hard disk 27 serving as a storage device that stores, via the storage device I/F 26, image data resulting from the image processing by the CPU 22 and various data used by the CPU 22 in the image processing; a display processing unit 28 that, based on the image data resulting from the image processing by the CPU 22, performs display processing for displaying the image data on the monitor 4 and outputs the processed image data as an image signal; and an input operation unit 29, constituted by a keyboard, a pointing device such as a mouse, or the like, through which the user can input parameters for the image processing performed by the CPU 22 and operation instructions for the medical image processing apparatus 3. The monitor 4 displays an image based on the image signal output from the display processing unit 28.

[0030] The image input unit 21, the CPU 22, the processing program storage unit 23, the image storage unit 24, the analysis information storage unit 25, the storage device interface 26, the display processing unit 28, and the input operation unit 29 of the medical image processing apparatus 3 are connected to one another via a data bus 30. As shown in Fig. 2, the CPU 22 is made up of the following functional units: a three-dimensional model estimation unit 22a serving as three-dimensional model estimating means, a detection target region setting unit 22b, a shape feature value calculation unit 22c, a model correction unit 22d serving as line width calculating means and correcting means, a three-dimensional shape detection unit 22e, and a polyp determination unit 22f.

[0031] In the present embodiment, these functional units are realized by software executed by the CPU 22. The detailed operation of these functional units will be described later.

[0032] As shown in Fig. 3, the hard disk 27 has a plurality of storage areas for storing the various data generated by the processing performed by the CPU 22. Specifically, the hard disk 27 has an edge image storage area 27a, an edge-thinned image storage area 27b, a three-dimensional point sequence data storage area 27c, a correspondence table storage area 27d, and a detected lesion storage area 27e. The data stored in each of these storage areas will be described in detail later.
[0033] Next, the operation of the endoscope system 1 of the present embodiment configured as described above will be described using the flowcharts of Figs. 4, 5, and 10 and with reference to Figs. 6 to 9, 11, and 12.

[0034] First, after turning on the power of each part of the endoscope system 1, the user inserts the insertion portion 11 of the endoscope 6 into the body cavity of the subject, for example into the large intestine.

[0035] When the insertion portion 11 is inserted into the large intestine of the subject by the user, an image of an object present in the large intestine, such as living tissue, for example the polyp 500 shown in Fig. 26, is picked up by the imaging unit 17 provided at the distal end portion 14. The image of the object picked up by the imaging unit 17 is output to the CCU 8 as an imaging signal.

[0036] In the signal processing circuit (not shown), the CCU 8 performs signal processing on the imaging signal output from the imaging device 16 of the imaging unit 17, thereby converting the imaging signal into a video signal of a two-dimensional image and outputting it. Based on the video signal output from the CCU 8, the monitor 9 displays the image of the object picked up by the imaging unit 17 as a two-dimensional image. The CCU 8 also outputs the two-dimensional video signal obtained by this signal processing to the medical image processing apparatus 3.

[0037] The two-dimensional video signal output to the medical image processing apparatus 3 undergoes A/D conversion in the image input unit 21 and is then input to the CPU 22.

[0038] As shown in Fig. 4, in step S1 the three-dimensional model estimation unit 22a of the CPU 22 estimates a three-dimensional model corresponding to the two-dimensional image output from the image input unit 21 by applying processing such as geometric conversion based on the luminance information of the two-dimensional image, using, for example, the "Shape From Shading" method. At this time, the three-dimensional model estimation unit 22a generates a correspondence table that associates the three-dimensional data point sequence representing the intestinal surface of the large intestine with the two-dimensional image data, and stores the table in the correspondence table storage area 27d of the hard disk 27.
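For illustration only, the pairing of two-dimensional image data with the estimated three-dimensional point sequence can be pictured with a short Python/NumPy sketch such as the following; the luminance-to-depth rule used here is merely a stand-in for the "Shape From Shading" estimation, and all function and variable names are hypothetical:

```python
import numpy as np

def build_correspondence_table(gray_2d, scale=1.0):
    """Hypothetical sketch of the table stored in area 27d: each 2D pixel is
    paired with the 3D point estimated for it. The depth used here is a crude
    luminance-based stand-in for the Shape From Shading estimation, chosen
    only to keep the example self-contained."""
    h, w = gray_2d.shape
    gray = gray_2d.astype(np.float64)
    # Assumption for illustration: darker pixels are treated as farther away.
    z = scale * (gray.max() - gray)
    table = {}
    for v in range(h):
        for u in range(w):
            table[(u, v)] = (float(u), float(v), float(z[v, u]))
    return table
```

The table is later consulted whenever a point on the two-dimensional image has to be mapped back to its three-dimensional data point, as in the correction processing described below.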
[0039] Then, in step S2, the model correction unit 22d of the CPU 22 applies the model correction processing described later to the three-dimensional model estimated in step S1, and stores the coordinates of each data point of the corrected three-dimensional model in the three-dimensional point sequence data storage area 27c of the hard disk 27 via the storage device I/F 26.

[0040] Next, in step S3, the detection target region setting unit 22b of the CPU 22 detects color tone change in the two-dimensional image output from the image input unit 21 and protrusion change in the three-dimensional model corrected by the processing of step S2, and thereby sets a detection target region, that is, the region to which the processing for detecting a lesion having a raised shape in the three-dimensional model is to be applied.

[0041] Specifically, the detection target region setting unit 22b of the CPU 22 separates the two-dimensional image output from the image input unit 21 into, for example, R (red), G (green), and B (blue) plane images, detects protrusion change based on the data of the three-dimensional model estimated from the R image, and detects color tone change based on the chromaticity of the R image and the G image. Based on these two detection results, the detection target region setting unit 22b then sets, as the target region, the region in which both the protrusion change and the color tone change are detected.
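A minimal sketch of how step S3 could combine the two detection results, assuming the protrusion-change result is already available as a boolean mask; the chromaticity measure and its threshold are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

def set_target_region(ridge_mask, r_plane, g_plane, chroma_thresh=0.55):
    """Sketch of step S3 under assumed inputs: ridge_mask is a boolean map of
    protrusion change derived from the R-image-based 3D model, and the colour
    tone change is approximated here by the R/(R+G) chromaticity exceeding a
    threshold."""
    r = r_plane.astype(np.float64)
    g = g_plane.astype(np.float64)
    chroma = r / np.maximum(r + g, 1e-6)
    tone_mask = chroma > chroma_thresh
    # The target region is where both kinds of change were detected.
    return ridge_mask & tone_mask
```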
[0042] Thereafter, in step S4, the shape feature value calculation unit 22c of the CPU 22 calculates local partial derivatives for the target region. Specifically, for the calculated three-dimensional shape, the shape feature value calculation unit 22c calculates, in a local region (curved surface) including a three-dimensional position of interest (x, y, z), the first-order partial derivatives fx, fy, fz and the second-order partial derivatives fxx, fyy, fzz, fxy, fyz, fxz of the R pixel value f.

[0043] Furthermore, in step S5, the shape feature value calculation unit 22c of the CPU 22 calculates, for each data point in the processing target region of the three-dimensional model, a Shape Index value and a Curvedness value as shape feature values of the three-dimensional shape, based on the local partial derivatives.

[0044] That is, using these local partial derivatives, the shape feature value calculation unit 22c of the CPU 22 calculates the Gaussian curvature K and the mean curvature H.

[0045] The principal curvatures k1 and k2 (k1 ≥ k2) of the curved surface are expressed, using the Gaussian curvature K and the mean curvature H, as

k1 = H + (H^2 − K)^(1/2), k2 = H − (H^2 − K)^(1/2)  (1)

[0046] The Shape Index SI and the Curvedness CV, which are feature values describing the shape of the curved surface, are respectively

SI = 1/2 − (1/π) arctan[(k1 + k2)/(k1 − k2)]  (2)

CV = ((k1^2 + k2^2)/2)^(1/2)  (3)

[0047] In this way, the shape feature value calculation unit 22c of the CPU 22 calculates the Shape Index SI and the Curvedness CV of each three-dimensional curved surface as three-dimensional shape information and stores them in the analysis information storage unit 25.
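Equations (1) to (3) map directly onto a few lines of code. The sketch below assumes that the Gaussian curvature K and the mean curvature H have already been computed from the local partial derivatives of step S4; that part is not reproduced here:

```python
import numpy as np

def shape_index_and_curvedness(K, H):
    """Direct transcription of equations (1)-(3): principal curvatures from the
    Gaussian curvature K and mean curvature H, then Shape Index and Curvedness."""
    disc = np.sqrt(np.maximum(H * H - K, 0.0))     # guard against small negatives
    k1 = H + disc
    k2 = H - disc
    # arctan2 handles the k1 == k2 case that a plain division would not.
    si = 0.5 - (1.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)
    cv = np.sqrt((k1 * k1 + k2 * k2) / 2.0)
    return si, cv
```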
[0048] The Shape Index value described above indicates the state of concavity or convexity at each data point of the three-dimensional model, and is expressed as a numerical value in the range from 0 to 1. Specifically, for an individual data point in the three-dimensional model, a Shape Index value close to 0 suggests the presence of a concave shape, and a Shape Index value close to 1 suggests the presence of a convex shape.

[0049] The Curvedness value described above indicates the curvature at each data point of the three-dimensional model. Specifically, for an individual data point in the three-dimensional model, the smaller the Curvedness value, the more sharply curved the surface suggested, and the larger the Curvedness value, the more gently curved the surface suggested.

[0050] Next, in step S6, the three-dimensional shape detection unit 22e of the CPU 22 compares the Shape Index value and the Curvedness value at each data point in the target region of the three-dimensional model with predetermined thresholds T1 and T2 set in advance, and thereby detects, from among those data points, a data group having a raised shape. Specifically, the CPU 22 detects, as the data group having a raised shape, the data points in the processing target region of the three-dimensional model whose Shape Index value is, for example, larger than the threshold T1 and whose Curvedness value is larger than the threshold T2.
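A minimal sketch of the comparison in step S6, with the thresholds T1 and T2 left as inputs because their concrete values are not given in the text:

```python
import numpy as np

def detect_raised_points(si, cv, t1, t2):
    """Sketch of step S6: keep the data points whose Shape Index exceeds the
    threshold T1 and whose Curvedness exceeds the threshold T2."""
    mask = (si > t1) & (cv > t2)
    return np.argwhere(mask)        # indices of the raised-shape data group
```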
[0051] Then, in step S7, the polyp determination unit 22f of the CPU 22 performs raised-shape determination processing to determine whether each of the data points detected in the three-dimensional model as the data group having a raised shape corresponds to a raised shape derived from a lesion such as a polyp.

[0052] Thereafter, in step S8, the polyp determination unit 22f of the CPU 22 determines, as a polyp region, a region having the data group consisting of the data points corresponding to a raised shape derived from a lesion, and thereby detects the polyp as a lesion region.

[0053] The CPU 22 then stores the detection result, for example in the detected lesion storage area 27e of the hard disk 27 shown in Fig. 1, in association with the endoscopic image subjected to detection, and displays it on the monitor 4 via the display processing unit 28, for example side by side with that endoscopic image.

[0054] As a result, the monitor 4 displays an image of the three-dimensional model of the object in such a way that the user can easily recognize the position where a raised shape derived from a lesion such as a polyp exists.

[0055] Next, the model correction processing performed by the model correction unit 22d in step S2 described above will be described. As shown in Fig. 5, in step S21 the model correction unit 22d of the CPU 22 performs edge extraction processing on the input two-dimensional image to generate an edge image, and stores it in the edge image storage area 27a of the hard disk 27. Also in step S21, the model correction unit 22d applies thinning processing to the generated edge image to generate an edge-thinned image, and stores it in the edge-thinned image storage area 27b of the hard disk 27.

[0056] Assuming that the two-dimensional edge image is as shown in Fig. 6, the edge-thinned image is an image in which each edge region of the edge image has been thinned to a line width of one pixel by the thinning processing.
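The edge extraction and thinning of step S21 are not tied to a particular algorithm in the text; the following sketch uses a Sobel gradient threshold and scikit-image's skeletonize purely as stand-ins:

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

def edge_and_thinned_images(gray_2d, edge_thresh):
    """Sketch of step S21: an edge image (storage area 27a) and a one-pixel-wide
    thinned version of it (storage area 27b). The edge detector and threshold
    are implementation assumptions."""
    g = gray_2d.astype(np.float64)
    grad = np.hypot(ndimage.sobel(g, axis=1), ndimage.sobel(g, axis=0))
    edge_image = grad > edge_thresh          # boolean edge regions
    thinned_image = skeletonize(edge_image)  # each edge thinned to 1-pixel width
    return edge_image, thinned_image
```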
[0057] Next, in step S23 the model correction unit 22d of the CPU 22 sets the parameters i and j to 1. Subsequently, in step S24 the model correction unit 22d of the CPU 22 acquires the i-th edge line Li of the edge-thinned image, and in step S25 acquires the j-th edge point Pi,j (the point of interest) on the i-th edge line Li.

[0058] Then, in step S26 the model correction unit 22d of the CPU 22 generates an edge-orthogonal line Hi,j that passes through the j-th edge point Pi,j and is orthogonal to the i-th edge line Li. Subsequently, in step S27 the model correction unit 22d of the CPU 22 acquires from the correspondence table the three-dimensional data point sequence corresponding to each point (xi,j, yi,j, zi,j) of the two-dimensional image on the edge-orthogonal line Hi,j. As described above, this correspondence table was stored in the correspondence table storage area 27d of the hard disk 27 by the three-dimensional model estimation unit 22a of the CPU 22 in step S1.

[0059] Then, in step S28 the model correction unit 22d of the CPU 22 determines the edge endpoints Ai,j and Bi,j (see the enlarged view of Fig. 6) on the edge-orthogonal line Hi,j. Specifically, in the present embodiment, as shown in Fig. 7, the pixel value corresponding to each point on the two-dimensional image is acquired, and a point whose pixel value is smaller than a predetermined threshold is judged to be a point on the edge; by applying this judgment one point at a time in both directions from each point on the thinned edge, the edge endpoints Ai,j and Bi,j are obtained.
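A sketch of the endpoint search of step S28 under the stated rule (a pixel belongs to the edge while its value is below the threshold); the step bound is an added safety measure, not part of the described processing:

```python
import numpy as np

def edge_endpoints(gray_2d, p, normal, pix_thresh, max_steps=50):
    """Sketch of step S28: from the thinned-edge point p = (u, v), walk one
    pixel at a time in both directions along the edge-orthogonal line Hi,j,
    continuing while the pixel value stays below the threshold. The last such
    pixels on either side are returned as the endpoints A and B."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)

    def walk(direction):
        last = np.asarray(p, dtype=float)
        for step in range(1, max_steps):
            q = np.asarray(p, dtype=float) + step * direction
            u, v = int(round(q[0])), int(round(q[1]))
            if not (0 <= v < gray_2d.shape[0] and 0 <= u < gray_2d.shape[1]):
                break
            if gray_2d[v, u] >= pix_thresh:   # stepped off the edge region
                break
            last = q
        return last

    return walk(n), walk(-n)                  # endpoints Ai,j and Bi,j
```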
[0060] Next, in step S29 the model correction unit 22d of the CPU 22 obtains the distance Di,j between the edge endpoints Ai,j and Bi,j. Then, in step S30 the model correction unit 22d of the CPU 22 determines whether the distance Di,j between the edge endpoints Ai,j and Bi,j is smaller than a predetermined value D0 (Di,j < D0).

[0061] If the distance Di,j between the edge endpoints Ai,j and Bi,j is determined to be smaller than the predetermined value D0 (Di,j < D0), the model correction unit 22d of the CPU 22 judges in step S31 that the portion between the edge endpoints Ai,j and Bi,j is a portion expressed as an edge region, corrects the three-dimensional point sequence data, and proceeds to step S33. The details of the correction processing of the three-dimensional point sequence data in step S31 will be described later.

[0062] If the distance Di,j between the edge endpoints Ai,j and Bi,j is determined to be equal to or larger than the predetermined value D0, the model correction unit 22d of the CPU 22 judges in step S32 that the portion between the edge endpoints Ai,j and Bi,j is a portion expressed as an occlusion region, deletes the three-dimensional point sequence data between the edge endpoints Ai,j and Bi,j, and proceeds to step S33. Specifically, when the model correction unit 22d determines in step S32 that the distance Di,j between the edge endpoints Ai,j and Bi,j is equal to or larger than the predetermined value D0, as shown in Fig. 8, it deletes the three-dimensional point sequence data between the edge endpoints Ai,j and Bi,j from the three-dimensional point sequence data line representing the intestinal surface, as shown in Fig. 9, and generates a three-dimensional point sequence data line corrected as an occlusion region.
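The branch taken in steps S30 to S32 can be summarised as follows; the segment point list and the smoothing routine are placeholders for the data and the step S31 processing:

```python
import numpy as np

def classify_and_correct(a, b, d0, segment_points_3d, smooth_fn):
    """Sketch of steps S29-S32: the endpoint distance Di,j decides whether the
    stretch between A and B is treated as an edge (groove) region to be
    corrected, or as an occlusion region whose 3D points are removed."""
    d = np.linalg.norm(np.asarray(a, float) - np.asarray(b, float))
    if d < d0:
        return smooth_fn(segment_points_3d)   # groove: correct the points
    return []                                 # occlusion: delete the points
```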
[0063] Then, in step S33 the model correction unit 22d of the CPU 22 determines whether the parameter j is less than the total number of points Nj on the edge line Li; if the parameter j is less than Nj, it increments the parameter j in step S34 and returns to step S25, and the above steps S25 to S34 are repeated for all points on the edge line Li.

[0064] When the above steps S25 to S33 have been carried out for all points on the edge line Li, the model correction unit 22d of the CPU 22 determines in step S35 whether the parameter i is less than the total number of edge lines Ni; if the parameter i is less than Ni, it increments the parameter i in step S36 and returns to step S24, and the above steps S24 to S36 are repeated for all edge lines.

[0065] Next, the correction processing of the three-dimensional point sequence data in step S31 described above will be explained. In this correction processing, as shown in Fig. 10, the model correction unit 22d of the CPU 22 forms, in step S41, an N x N x N cubic region centered on the edge point Pi,j (the point of interest) in the three-dimensional space containing the three-dimensional point sequence data, and calculates the average value of the coordinates of the three-dimensional point sequence data contained in this N x N x N cubic region. Then, in step S42, the model correction unit 22d of the CPU 22 smooths the coordinate data of the three-dimensional point sequence data on the point sequence data line of the edge portion using the calculated average value.

[0066] To describe the processing of Fig. 10 concretely: for example, as shown in Fig. 11, with N = 5, a 5 x 5 x 5 cubic region centered on the edge point Pi,j (the point of interest) is formed in the three-dimensional space containing the three-dimensional point sequence data. The average value of the coordinates of the three-dimensional point sequence data contained in this 5 x 5 x 5 cubic region is then calculated, and by smoothing the coordinate data of the three-dimensional point sequence data on the point sequence data line of the edge portion using the calculated average value, a corrected point sequence data line as shown in Fig. 12 can be obtained.
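A sketch of the N x N x N averaging of steps S41 and S42, assuming the point sequence is held as an (M, 3) array:

```python
import numpy as np

def smooth_edge_point(points_3d, center, n=5):
    """Sketch of steps S41-S42: form an N x N x N cube centred on the 3D point
    for the edge point of interest, average the coordinates of all point-
    sequence data falling inside the cube, and use that average as the
    corrected coordinate. N = 5 matches the example in the text."""
    pts = np.asarray(points_3d, dtype=float)
    c = np.asarray(center, dtype=float)
    inside = np.all(np.abs(pts - c) <= n / 2.0, axis=1)
    if not inside.any():
        return c                       # nothing to average; keep the point as is
    return pts[inside].mean(axis=0)    # smoothed coordinate for the edge point
```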
[0067] The correction processing of the three-dimensional point sequence data is not limited to the processing of Fig. 10; the correction processing of modifications 1 and 2 below may be carried out instead.

[0068] (Modification 1)
In the correction processing of the three-dimensional point sequence data of modification 1, as shown in Fig. 13, in step S41a the model correction unit 22d of the CPU 22 forms, with N = 5, a 5 x 5 square region centered on the edge point Pi,j (the point of interest) on the two-dimensional image output from the image input unit 21. Then, in step S42a, the model correction unit 22d of the CPU 22 calculates the average of the pixel values (gray levels) of the coordinate points of the two-dimensional image contained in this 5 x 5 square region.

[0069] In this modification, after all the averaging (smoothing) processing has been carried out, the three-dimensional point sequence data is recalculated for the edge region and the three-dimensional point sequence data is updated.
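A sketch of the two-dimensional averaging of modification 1; using a uniform filter for the local means is an implementation assumption:

```python
import numpy as np
from scipy import ndimage

def smooth_edge_pixels_2d(gray_2d, edge_mask, n=5):
    """Sketch of modification 1: replace each edge pixel by the mean pixel value
    of the N x N square region around it; the 3D point sequence for the edge
    region is then re-estimated from the smoothed image (that re-estimation is
    the same step S1 processing and is not repeated here)."""
    means = ndimage.uniform_filter(gray_2d.astype(np.float64), size=n)
    out = gray_2d.astype(np.float64).copy()
    out[edge_mask] = means[edge_mask]
    return out
```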
[0070] (Modification 2)
In the correction processing of the three-dimensional point sequence data of modification 2, as shown in Fig. 14, in step S41b the model correction unit 22d of the CPU 22 takes, from the edge endpoints Ai,j and Bi,j, the coordinates of N consecutive points including each endpoint (the N points in the direction away from the edge point Pi,j, the point of interest), approximates each set by a straight line, projects the two approximating lines onto the x = 0 plane, and obtains the intersection of the two projected lines. Specifically, with N = 3 for example, as shown in Fig. 15, straight lines are fitted through the points including the edge endpoints Ai,j and Bi,j respectively, and the intersection Qi,j of the two approximating lines projected onto the x = 0 plane is obtained.

[0071] Subsequently, in step S42b, the model correction unit 22d of the CPU 22 compares a coordinate of each point in the edge region, for example its y coordinate, with the same coordinate of the intersection Qi,j, and decides which approximating line to use for the correction according to their relative magnitude. Specifically, as shown in Fig. 16, when the y coordinate of a point in the edge region is smaller than the y coordinate of the intersection Qi,j, the approximating line fitted using coordinate values smaller than the y coordinate of Qi,j (in Fig. 16, the line on the endpoint Ai,j side) is used; conversely, when the y coordinate of a point in the edge region is larger than the y coordinate of Qi,j, the approximating line fitted using coordinate values larger than the y coordinate of Qi,j (in Fig. 16, the line on the endpoint Bi,j side) is used. After the approximating line to be used has been decided, the correction point is determined by substituting the y coordinate of the edge-region point to be corrected into the approximating line.
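Because the text mixes y and z coordinates in this description, the following sketch fixes one consistent reading and should be taken only as a rough illustration of the line-fitting idea; the array layouts and the assignment of side A to the smaller-y side are assumptions:

```python
import numpy as np

def correct_with_fitted_lines(side_a_yz, side_b_yz, edge_y):
    """Loose sketch of modification 2: fit a line z = m*y + c to the N points on
    each side of the edge (already projected onto the x = 0 plane), find the
    intersection Q, and re-assign each edge point a z value from whichever
    line lies on its side of Q in y."""
    ma, ca = np.polyfit(side_a_yz[:, 0], side_a_yz[:, 1], 1)
    mb, cb = np.polyfit(side_b_yz[:, 0], side_b_yz[:, 1], 1)
    if np.isclose(ma, mb):
        raise ValueError("fitted lines are parallel; no intersection Q")
    qy = (cb - ca) / (ma - mb)                 # y coordinate of intersection Q
    corrected = []
    for y in edge_y:
        m, c = (ma, ca) if y < qy else (mb, cb)
        corrected.append((y, m * y + c))
    return np.asarray(corrected)
```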
[0072] As described above, in the present embodiment and in modifications 1 and 2, the threshold is corrected using the position (Z coordinate) of the point of interest in the three-dimensional data, so a threshold that excludes the influence of the reflection/scattering characteristics of the object and of secondary light on the object can be used in the polyp detection processing, and the detection accuracy of polyp candidates can be improved. This makes it possible to help the user improve the polyp candidate detection rate in colonoscopy.

Embodiment 2

[0073] Figs. 17 to 25 relate to Embodiment 2 of the present invention. Fig. 17 is a diagram showing the configuration of information stored in the hard disk. Fig. 18 is a first diagram for explaining Embodiment 2. Fig. 19 is a second diagram for explaining Embodiment 2. Fig. 20 is a third diagram for explaining Embodiment 2. Fig. 21 is a flowchart showing the flow of processing by the CPU according to Embodiment 2. Fig. 22 is a flowchart showing the flow of the correction processing of the three-dimensional model data of Fig. 21. Fig. 23 is a first diagram for explaining the processing of Fig. 22. Fig. 24 is a second diagram for explaining the processing of Fig. 22. Fig. 25 is a third diagram for explaining the processing of Fig. 22.

[0074] Since the configuration of the present embodiment is almost the same as that of Embodiment 1, only the differences will be described.

[0075] In the present embodiment, as shown in Fig. 17, the hard disk 27 has, in addition to the edge image storage area 27a, the edge-thinned image storage area 27b, the three-dimensional point sequence data storage area 27c, the correspondence table storage area 27d, and the detected lesion storage area 27e, a morphology parameter map storage area 27f. The rest of the configuration is the same as in Embodiment 1.

[0076] As described above, the "Shape From Shading" method has the problem that, when an edge exists in the two-dimensional image, the pixel values on the edge take low values, so the estimated values are calculated as if a groove existed at that edge portion; even if a correction such as that of Embodiment 1 is carried out, spike-like or edge-like noise may remain because of portions missed by the correction.
[0077] In general, the combination of dilation processing and erosion processing, which are morphological transformations, is widely used as a noise removal method.

[0078] This morphological transformation is a transformation that outputs the center of a sphere as the sphere is rolled over a three-dimensional surface; the dilation processing rolls the sphere over the front surface of the intestinal tract, and the erosion processing rolls the sphere over the back surface of the intestinal tract.

[0079] The size of the sphere used in the above morphological transformation is determined by the size of the target noise. For example, as shown in Fig. 18, when a sphere 300a of, say, diameter 5 is rolled over a surface containing noise one pixel wide, the locus of its center is a straight line, so the noise can be filled in by rolling a sphere of the same size along the back side of the surface formed by that locus.

[0080] However, as shown in Fig. 19, when the same sphere 300a of diameter 5 is rolled over a surface containing noise five pixels wide, a noise depression also remains in the surface formed by the locus of its center, so the noise cannot be removed even by rolling the sphere along the back side of the surface.

[0081] When fine speckle-like noise lies over the whole of the three-dimensional data, that noise is small, but the size of the spike-like or edge-like noise present on an edge depends on the thickness of the edge. Therefore, at positions where an edge exists in the two-dimensional image, the diameter of the sphere used for smoothing, that is, the smoothing parameter, needs to be changed.

[0082] For example, as shown in Fig. 20, if a sufficiently large sphere 300b is used for the morphological transformation, noise ranging from the size best suited to that sphere size down to smaller sizes can be smoothed; however, the processing speed decreases as the sphere size increases, so the processing speed suffers unless a sphere size best suited to the noise size is determined before the processing is executed.
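The dilation/erosion pair can be sketched as a grey-scale closing whose structuring-element size stands in for the sphere diameter; scipy's box-shaped element is a simplification of the sphere described above:

```python
from scipy import ndimage

def remove_edge_noise(depth_map, ball_size):
    """Sketch of the dilation + erosion pair used for noise removal. ball_size
    is meant to come from the edge width via the table of Fig. 25; a box-shaped
    structuring element is used here instead of a true sphere."""
    dilated = ndimage.grey_dilation(depth_map, size=(ball_size, ball_size))
    return ndimage.grey_erosion(dilated, size=(ball_size, ball_size))
```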
[0083] In the present embodiment, as shown in Fig. 21, after the model correction processing of step S2 described in Embodiment 1, the model correction unit 22d of the CPU 22 executes, in step S10, morphology processing using the morphology parameter Wi, and the processing then moves on to step S3 described in Embodiment 1.

[0084] Specifically, in the present embodiment, in the model correction processing of step S2 the model correction unit 22d of the CPU 22, as shown in Fig. 22, divides in steps S100 and S101 the three-dimensional space shown in Fig. 23 into cubic small regions at regular intervals, creates a map in which a morphology parameter can be stored for each small region, and stores in the morphology parameter map storage area 27f of the hard disk 27 a correspondence table, shown in Fig. 25, of the morphology transformation parameter Wi,j against the edge line width Di,j shown in Fig. 24.

[0085] That is, in step S100 the model correction unit 22d selects and acquires one edge of the edge-thinned image as a processing target, and calculates the edge line width Di,j at one selected point on that edge line. Subsequently, the model correction unit 22d obtains the morphology transformation parameter W using the correspondence table of the edge line width Di,j and the morphology transformation parameter Wi,j of Fig. 25.

[0086] Then, in step S101 the model correction unit 22d acquires, from the correspondence point table recorded on the hard disk 27, the three-dimensional coordinates corresponding to the selected point on the edge line processed in step S100, and obtains the coordinate position in the three-dimensional small-region map corresponding to the acquired three-dimensional coordinates. The model correction unit 22d then adds the morphology transformation parameter W into the obtained coordinate position of the three-dimensional small-region map and increments the count value at that coordinate position by one.

[0087] Here, the edge line width Di,j is, as shown in Fig. 24, the width of the corresponding edge in the edge image when a line orthogonal to the edge line is drawn at a point on the thinned edge line of the edge-thinned image. The morphology transformation parameter Wi,j for the edge image indicates the diameter of the sphere described above (see Figs. 18 to 20).

[0088] Then, in step S10 described above, the model correction unit 22d determines the morphology transformation parameter W for the morphological transformation on the basis of the three-dimensional coordinates, successively executes the dilation processing and the erosion processing that constitute the morphological transformation, and thereby performs the noise removal processing.

[0089] After the above processing has been carried out for all points of all edges in the image, the morphology transformation parameter W at each coordinate position of the three-dimensional small-region map is averaged by the count value at that coordinate position. Through this processing, the parameter best suited to the smoothing can be obtained by referring to the three-dimensional small-region map on the basis of a three-dimensional coordinate position.
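A sketch of how the cubic small-region map and the per-region averaging might be organised; the region size and the width-to-parameter lookup are assumptions standing in for Fig. 23 and the table of Fig. 25:

```python
import numpy as np
from collections import defaultdict

def build_morphology_parameter_map(edge_points, width_to_param, cell=8.0):
    """Sketch of steps S100-S101 and the final averaging: for every edge point
    (given here as (3D coordinate, edge line width) pairs) the morphology
    parameter W looked up from the width is accumulated, with a count, in the
    cubic small region holding that point; the accumulated values are then
    averaged per region."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for xyz, width in edge_points:
        key = tuple((np.asarray(xyz, dtype=float) // cell).astype(int))
        sums[key] += width_to_param(width)
        counts[key] += 1
    return {k: sums[k] / counts[k] for k in sums}   # smoothing parameter per cell
```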
[0090] The present invention is not limited to the embodiments described above, and various changes and modifications are possible without departing from the gist of the present invention.

[0091] The present application is filed claiming priority based on Japanese Patent Application No. 2006-279236 filed in Japan on October 12, 2006, and the above disclosure is incorporated by reference into the present specification, claims, and drawings.
[0091] This application is filed on the basis of the priority claim of Japanese Patent Application No. 2006-279236, filed in Japan on October 12, 2006. It shall be cited in the claims and drawings.
Claims
[1] A medical image processing apparatus comprising:
three-dimensional model estimation means for estimating a three-dimensional model of living tissue from a two-dimensional image of the living tissue in a body cavity input from a medical imaging apparatus;
image boundary line detection means for detecting an image boundary line of an image region constituting the two-dimensional image;
line width calculation means for calculating a line width of the image boundary line; and
correction means for correcting, based on the line width, the result of estimation of the three-dimensional model of the image boundary line by the three-dimensional model estimation means.

[2] The medical image processing apparatus according to claim 1, wherein the correction means compares the value of the line width with a predetermined value, sets the image boundary line as a first correction target image if the value of the line width is not more than the predetermined value, sets the image boundary line as a second correction target image if the value of the line width exceeds the predetermined value, and corrects the result of estimation of the three-dimensional model of the image boundary line by the three-dimensional model estimation means in accordance with the first correction target image or the second correction target image.

[3] The medical image processing apparatus according to claim 2, wherein the first correction target image is a groove image in which the image boundary line is regarded as a groove, and the second correction target image is an occlusion image in which the image boundary line is regarded as an occlusion.
[4] The medical image processing apparatus according to claim 1, wherein the correction means corrects the result of estimation of the three-dimensional model of the image boundary line by the three-dimensional model estimation means based on an average of pixel values of the image boundary line.

[5] The medical image processing apparatus according to claim 2, wherein the correction means corrects the result of estimation of the three-dimensional model of the image boundary line by the three-dimensional model estimation means based on an average of pixel values of the image boundary line.

[6] The medical image processing apparatus according to claim 3, wherein the correction means corrects the result of estimation of the three-dimensional model of the image boundary line by the three-dimensional model estimation means based on an average of pixel values of the image boundary line.

[7] The medical image processing apparatus according to claim 1, wherein the correction means corrects the result of estimation of the three-dimensional model of the image boundary line by the three-dimensional model estimation means by approximating the pixel values of the image boundary line.

[8] The medical image processing apparatus according to claim 2, wherein the correction means corrects the result of estimation of the three-dimensional model of the image boundary line by the three-dimensional model estimation means by approximating the pixel values of the image boundary line.

[9] The medical image processing apparatus according to claim 3, wherein the correction means corrects the result of estimation of the three-dimensional model of the image boundary line by the three-dimensional model estimation means by approximating the pixel values of the image boundary line.
[10] A medical image processing method comprising:
a three-dimensional model estimation step of estimating a three-dimensional model of living tissue from a two-dimensional image of the living tissue in a body cavity input from a medical imaging apparatus;
an image boundary line detection step of detecting an image boundary line of an image region constituting the two-dimensional image;
a line width calculation step of calculating a line width of the image boundary line; and
a correction step of correcting, based on the line width, the result of estimation of the three-dimensional model of the image boundary line by the three-dimensional model estimation step.

[11] The medical image processing method according to claim 10, wherein the correction step compares the value of the line width with a predetermined value, sets the image boundary line as a first correction target image if the value of the line width is not more than the predetermined value, sets the image boundary line as a second correction target image if the value of the line width exceeds the predetermined value, and corrects the result of estimation of the three-dimensional model of the image boundary line in the three-dimensional model estimation step in accordance with the first correction target image or the second correction target image.
11. The estimation result of the three-dimensional model of the image boundary line in the three-dimensional model estimation unit is corrected according to the first correction target image or the second correction target image. The medical image processing method as described.
[12] The medical image processing method according to claim 11, wherein the first correction target image is a groove image in which the image boundary line is regarded as a groove, and the second correction target image is an occlusion image in which the image boundary line is regarded as an occlusion.

[13] The medical image processing method according to claim 10, wherein the correction step corrects the result of estimation of the three-dimensional model of the image boundary line by the three-dimensional model estimation step based on an average of pixel values of the image boundary line.

[14] The medical image processing method according to claim 11, wherein the correction step corrects the result of estimation of the three-dimensional model of the image boundary line by the three-dimensional model estimation step based on an average of pixel values of the image boundary line.

[15] The medical image processing method according to claim 12, wherein the correction step corrects the result of estimation of the three-dimensional model of the image boundary line by the three-dimensional model estimation step based on an average of pixel values of the image boundary line.

[16] The medical image processing method according to claim 10, wherein the correction step corrects the result of estimation of the three-dimensional model of the image boundary line by the three-dimensional model estimation step by approximating the pixel values of the image boundary line.

[17] The medical image processing method according to claim 11, wherein the correction step corrects the result of estimation of the three-dimensional model of the image boundary line by the three-dimensional model estimation step by approximating the pixel values of the image boundary line.

[18] The medical image processing method according to claim 12, wherein the correction step corrects the result of estimation of the three-dimensional model of the image boundary line by the three-dimensional model estimation step by approximating the pixel values of the image boundary line.
18. The correction step corrects the estimation result of the three-dimensional model of the image boundary line by the three-dimensional model estimation step by approximating pixel values of the image boundary line. 12. The medical image processing method according to 12.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2006279236A JP2008093213A (en) | 2006-10-12 | 2006-10-12 | Medical image processing apparatus and medical image processing method |
JP2006-279236 | 2006-10-12 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2008044365A1 true WO2008044365A1 (en) | 2008-04-17 |
Family
ID=39282576
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2007/061628 WO2008044365A1 (en) | 2006-10-12 | 2007-06-08 | Medical image processing device and medical image processing method |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP2008093213A (en) |
WO (1) | WO2008044365A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112788978A (en) * | 2019-03-28 | 2021-05-11 | Hoya株式会社 | Processor for endoscope, information processing device, endoscope system, program, and information processing method |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4902735B2 (en) * | 2007-04-24 | 2012-03-21 | オリンパスメディカルシステムズ株式会社 | Medical image processing apparatus and medical image processing method |
JP5658931B2 (en) * | 2010-07-05 | 2015-01-28 | オリンパス株式会社 | Image processing apparatus, image processing method, and image processing program |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH11337845A (en) * | 1998-05-25 | 1999-12-10 | Mitsubishi Electric Corp | Endoscope device |
JP2005506140A (en) * | 2001-10-16 | 2005-03-03 | ザ・ユニバーシティー・オブ・シカゴ | Computer-aided 3D lesion detection method |
JP2005192880A (en) * | 2004-01-08 | 2005-07-21 | Olympus Corp | Method for image processing |
- 2006-10-12 JP JP2006279236A patent/JP2008093213A/en active Pending
- 2007-06-08 WO PCT/JP2007/061628 patent/WO2008044365A1/en active Application Filing
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH11337845A (en) * | 1998-05-25 | 1999-12-10 | Mitsubishi Electric Corp | Endoscope device |
JP2005506140A (en) * | 2001-10-16 | 2005-03-03 | ザ・ユニバーシティー・オブ・シカゴ | Computer-aided 3D lesion detection method |
JP2005192880A (en) * | 2004-01-08 | 2005-07-21 | Olympus Corp | Method for image processing |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112788978A (en) * | 2019-03-28 | 2021-05-11 | Hoya株式会社 | Processor for endoscope, information processing device, endoscope system, program, and information processing method |
US11869183B2 (en) | 2019-03-28 | 2024-01-09 | Hoya Corporation | Endoscope processor, information processing device, and endoscope system |
Also Published As
Publication number | Publication date |
---|---|
JP2008093213A (en) | 2008-04-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4994737B2 (en) | Medical image processing apparatus and medical image processing method | |
US8515141B2 (en) | Medical image processing apparatus and method for detecting locally protruding lesion | |
US7830378B2 (en) | Medical image processing apparatus and medical image processing method | |
JP4832927B2 (en) | Medical image processing apparatus and medical image processing method | |
US8165367B2 (en) | Medical image processing apparatus and medical image processing method having three-dimensional model estimating | |
JP5276225B2 (en) | Medical image processing apparatus and method of operating medical image processing apparatus | |
JP4902735B2 (en) | Medical image processing apparatus and medical image processing method | |
US20090074269A1 (en) | Medical image processing device and medical image processing method | |
US8121369B2 (en) | Medical image processing apparatus and medical image processing method | |
WO2012153568A1 (en) | Medical image processing device and medical image processing method | |
JP4981335B2 (en) | Medical image processing apparatus and medical image processing method | |
WO2008044365A1 (en) | Medical image processing device and medical image processing method | |
EP1992274B1 (en) | Medical image processing device and medical image processing method | |
JP2008023266A (en) | Medical image processing apparatus and medical image processing method | |
JP5148096B2 (en) | Medical image processing apparatus and method of operating medical image processing apparatus | |
WO2024024022A1 (en) | Endoscopic examination assistance device, endoscopic examination assistance method, and recording medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 07767068 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 07767068 Country of ref document: EP Kind code of ref document: A1 |