
CN111369662A - Method and system for reconstructing 3D model of blood vessels in CT images - Google Patents


Info

Publication number
CN111369662A
Authority
CN
China
Prior art keywords
dimensional
image
model
segmentation
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811591907.4A
Other languages
Chinese (zh)
Other versions
CN111369662B (en)
Inventor
罗园明
范志伟
邹纯锐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201811591907.4A
Publication of CN111369662A
Application granted
Publication of CN111369662B
Active legal status
Anticipated expiration

Classifications

    • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06T 7/11: Region-based segmentation
    • G06T 2207/10012: Stereo images
    • G06T 2207/10081: Computed x-ray tomography [CT]
    • G06T 2207/30101: Blood vessel; Artery; Vein; Vascular
    • Y02T 10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method and system for reconstructing a three-dimensional model of blood vessels in CT images. The three-dimensional model reconstruction method includes: establishing an image segmentation model based on a neural network, the image segmentation model being used to segment the blood vessels in CT images and output two-dimensional segmentation maps of the blood vessels; inputting the CT images to be modeled into the image segmentation model to obtain a plurality of two-dimensional segmentation maps; and performing three-dimensional reconstruction on the two-dimensional segmentation maps. The image segmentation model constructed by the invention can quickly and accurately segment the blood vessels in CT images and output two-dimensional segmentation maps of the blood vessels, so that an accurate three-dimensional model of the blood vessels can be constructed.

Description

Method and System for Reconstructing a 3D Model of Blood Vessels in CT Images

Technical Field

The invention relates to the field of medical imaging, and in particular to a method and system for reconstructing a three-dimensional model of blood vessels from CT (computed tomography) images.

Background

Current CT technology already enables 2D (planar) visualization of diseased regions inside the human body; however, it is difficult to infer the three-dimensional structure of diseased tissue from 2D CT images. Constructing a three-dimensional CAD (computer-aided design) model can help doctors make more accurate judgments about a patient's condition.

In the prior art, three-dimensional reconstruction from CT images is usually performed with manual labeling: personnel with a certain level of professional experience label the CT images by hand, and further segmentation is then performed with dedicated software. The obvious drawbacks of this approach are the additional cost of manual labeling, the time it takes, and the errors that human operators may introduce. The latter two points can delay medical diagnosis and reduce its accuracy, thereby postponing the patient's further diagnosis and treatment, with potentially serious consequences.

In recent years, automatic image segmentation based on pixel-contrast differences has developed rapidly, but it is only suitable for images in which the object to be segmented differs clearly in color and contrast from its surroundings. In blurry CT images, the object to be segmented (for example, a blood vessel) may itself be blurred, so contrast-based segmentation performs very poorly and may even segment multiple unwanted objects.

It can be seen that segmenting CT images with blurred target objects using prior-art automatic segmentation techniques yields very low accuracy, which does not meet the quality requirements of subsequent three-dimensional model reconstruction.

Summary of the Invention

The technical problem to be solved by the present invention is to overcome the defect that prior-art automatic image segmentation techniques achieve very low accuracy when segmenting CT images whose target objects are blurred, by providing a method and system for reconstructing a three-dimensional model of blood vessels in CT images.

The present invention solves the above technical problem through the following technical solutions:

A method for reconstructing a three-dimensional model of blood vessels in CT images, the method comprising:

establishing an image segmentation model based on a neural network, the image segmentation model being used to segment the blood vessels in a CT image and output two-dimensional segmentation maps of the blood vessels;

inputting the CT images to be modeled into the image segmentation model to obtain a plurality of two-dimensional segmentation maps;

performing three-dimensional reconstruction on the two-dimensional segmentation maps.

Preferably, the step of establishing an image segmentation model based on a neural network specifically includes:

acquiring labeled CT images as training samples;

inputting the training samples into a neural network model and training the neural network model according to its loss function to obtain the image segmentation model.

Preferably, the loss function is:

L = 1 − (2·SI + η) / (Sum(Mpredict) + Sum(Mtrue) + η);

SI = ∑i,j Mpredict(i,j) × Mtrue(i,j);

Sum(Mpredict) = ∑i,j |Mpredict(i,j)|;

Sum(Mtrue) = ∑i,j |Mtrue(i,j)|;

where L denotes the loss function; Mtrue(i,j) denotes the pixel value at position (i,j) in the two-dimensional segmentation map of the training sample; Mpredict(i,j) denotes the pixel value at position (i,j) in the two-dimensional segmentation map output by the neural network model; and η denotes a smoothing parameter.

Preferably, the step of performing three-dimensional reconstruction on the two-dimensional segmentation maps specifically includes:

converting the two-dimensional segmentation maps into a three-dimensional image in the binary Analyze format;

performing three-dimensional image reconstruction on the Analyze-format three-dimensional image based on the marching cubes algorithm;

fitting the reconstructed three-dimensional image based on NURBS (non-uniform rational B-splines).

Preferably, before the step of fitting the reconstructed three-dimensional image, the method further includes:

smoothing the reconstructed three-dimensional image.

A system for reconstructing a three-dimensional model of blood vessels in CT images, the system comprising:

a modeling module for establishing an image segmentation model based on a neural network, the image segmentation model being used to segment the blood vessels in a CT image and output two-dimensional segmentation maps of the blood vessels;

a three-dimensional reconstruction module for inputting the CT images to be modeled into the image segmentation model to obtain a plurality of two-dimensional segmentation maps, and performing three-dimensional reconstruction on the two-dimensional segmentation maps.

Preferably, the modeling module specifically includes:

a data acquisition unit for acquiring labeled CT images as training samples;

a model training unit for inputting the training samples into a neural network model and training the neural network model according to its loss function to obtain the image segmentation model.

Preferably, the loss function is:

L = 1 − (2·SI + η) / (Sum(Mpredict) + Sum(Mtrue) + η);

SI = ∑i,j Mpredict(i,j) × Mtrue(i,j);

Sum(Mpredict) = ∑i,j |Mpredict(i,j)|;

Sum(Mtrue) = ∑i,j |Mtrue(i,j)|;

where L denotes the loss function; Mtrue(i,j) denotes the pixel value at position (i,j) in the two-dimensional segmentation map of the training sample; Mpredict(i,j) denotes the pixel value at position (i,j) in the two-dimensional segmentation map output by the neural network model; and η denotes a smoothing parameter.

Preferably, the three-dimensional reconstruction module specifically includes:

a format conversion unit for converting the two-dimensional segmentation maps into a three-dimensional image in the binary Analyze format;

a three-dimensional reconstruction unit for performing three-dimensional image reconstruction on the Analyze-format three-dimensional image based on the marching cubes algorithm;

a fitting unit for fitting the reconstructed three-dimensional image based on NURBS.

Preferably, the three-dimensional reconstruction module further includes:

a smoothing unit for smoothing the reconstructed three-dimensional image and then invoking the fitting unit.

The positive effect of the present invention is that the image segmentation model constructed by the present invention can quickly and accurately segment the blood vessels in CT images and output two-dimensional segmentation maps of the blood vessels, so that a three-dimensional model of the blood vessels can be constructed accurately.

Brief Description of the Drawings

FIG. 1 is a first flowchart of the method for reconstructing a three-dimensional model of blood vessels in CT images according to Embodiment 1 of the present invention.

FIG. 2 is a second flowchart of the method for reconstructing a three-dimensional model of blood vessels in CT images according to Embodiment 1 of the present invention.

FIG. 3 is a schematic block diagram of the system for reconstructing a three-dimensional model of blood vessels in CT images according to Embodiment 2 of the present invention.

Detailed Description

The present invention is further described below by way of embodiments, but the present invention is not thereby limited to the scope of the described embodiments.

Embodiment 1

This embodiment provides a method for reconstructing a three-dimensional model of blood vessels in CT images. As shown in FIG. 1, the method includes the following steps:

Step 101: establish an image segmentation model based on a neural network.

The image segmentation model is used to segment a CT image and output a two-dimensional segmentation map of the blood vessels in the CT image. In this embodiment, a fully convolutional neural network is adopted as the basic architecture for automatic blood vessel segmentation of CT images.

Referring to FIG. 2, the process of establishing the image segmentation model is described below:

Step 101-1: acquire labeled CT images as training samples.

In step 101-1, a large number of relevant, high-precision manually labeled CT images are obtained. One part serves as training samples for training the neural network model; another part serves as test samples for evaluating the accuracy of the neural network model after each training iteration.

Step 101-2: input the training samples into the neural network model and train it according to its loss function to obtain the image segmentation model.

In this embodiment, a dedicated convolutional-network loss function is designed for the specific task of blood vessel segmentation. The loss function L may be, but is not limited to:

L = 1 − (2·SI + η) / (Sum(Mpredict) + Sum(Mtrue) + η);

SI = ∑i,j Mpredict(i,j) × Mtrue(i,j);

Sum(Mpredict) = ∑i,j |Mpredict(i,j)|;

Sum(Mtrue) = ∑i,j |Mtrue(i,j)|;

where Mtrue denotes the ground-truth segmentation map (a two-dimensional segmentation map) of a training-sample CT image, obtained by a trained annotator labeling the original CT image with appropriate labeling software; Mpredict denotes the predicted segmentation map (a two-dimensional segmentation map) output by the neural network model during training; Mtrue(i,j) denotes the pixel value at position (i,j) in the ground-truth segmentation map; Mpredict(i,j) denotes the pixel value at position (i,j) in the predicted segmentation map; SI denotes the segmentation intersection of the ground-truth and predicted segmentation maps; and η denotes a smoothing parameter.

It should be noted that η can be set according to actual needs; in this embodiment, for reasons of numerical stability, the smoothing parameter is set to 1. As for the neural network model, the architecture used for training can be chosen according to actual needs (for example, CT images of different cases, required accuracy, etc.), with correspondingly different model parameters.

Take one training sample (a 2D CT image of size W×L, where W denotes the width of the CT image and L denotes its length) as an example. In the ground-truth segmentation map, pixels inside the segmented region (i.e., the region where the blood vessels are located) are white, and pixels outside the segmented region are black. Denoting the ground-truth segmentation map as a matrix Mtrue ∈ R^(W×L), Mtrue(i,j)=1 indicates a pixel inside the segmented region (a white pixel), and Mtrue(i,j)=0 indicates a pixel outside the segmented region (a black pixel). Similarly, denoting the predicted segmentation map output by the neural network model as Mpredict ∈ R^(W×L), Mpredict(i,j)=1 indicates a pixel inside the predicted segmented region (a white pixel), and Mpredict(i,j)=0 indicates a pixel outside it (a black pixel).
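As a minimal sketch, the smoothed overlap-based loss and the binary-mask conventions above can be written in NumPy. The overall form L = 1 − (2·SI + η)/(Sum(Mpredict) + Sum(Mtrue) + η) is an assumption implied by the definitions of SI, the sums, and the smoothing parameter, not code taken from the patent:

```python
import numpy as np

def dice_loss(m_predict: np.ndarray, m_true: np.ndarray, eta: float = 1.0) -> float:
    """Smoothed Dice-style loss between a predicted and a ground-truth binary mask.

    SI is the elementwise intersection, Sum(.) the sum of absolute pixel
    values, and eta a smoothing parameter (set to 1 in the embodiment).
    """
    si = np.sum(m_predict * m_true)          # segmentation intersection SI
    sum_pred = np.sum(np.abs(m_predict))     # Sum(Mpredict)
    sum_true = np.sum(np.abs(m_true))        # Sum(Mtrue)
    return 1.0 - (2.0 * si + eta) / (sum_pred + sum_true + eta)

# Identical masks give a loss of 0; disjoint masks approach 1.
m = np.array([[1, 1], [0, 0]])
print(dice_loss(m, m))  # 0.0 for a perfect prediction
```

A perfect overlap drives the ratio to 1 and the loss to 0, which is why this form is well suited to thin structures such as vessels, where foreground pixels are rare.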

During the model training of step 101-2, the structure of the convolutional neural network is adjusted with reference to the accuracy observed throughout training and testing (the model's accuracy is computed on the labeled test samples). Iteration stops when the loss function converges to a local optimum and satisfactory performance is observed on the test data, thereby improving training accuracy.

Step 102: input the CT images to be modeled into the image segmentation model to obtain a plurality of two-dimensional segmentation maps of the blood vessels.

Step 103: perform three-dimensional reconstruction on the two-dimensional segmentation maps.

In this embodiment, step 103 specifically includes:

Step 103-1: convert each two-dimensional segmentation map into a three-dimensional image in the binary Analyze format (a medical imaging format).

Specifically, ITK (an image-analysis software toolkit) is used to convert the two-dimensional segmentation maps into a three-dimensional image in the binary Analyze format.

The generated three-dimensional image contains multiple segmented regions that are not connected to one another. ITK is used to identify the connected region with the largest volume and to remove the segmented pixels that are not connected to it, i.e., to remove isolated pixels, keeping only the part of the image on which three-dimensional geometric reconstruction is to be performed. The result is the three-dimensional image.
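The largest-connected-region step can be sketched with SciPy's `ndimage.label` standing in for the ITK call used in the patent; the substitution is for illustration only:

```python
import numpy as np
from scipy import ndimage

def keep_largest_component(volume: np.ndarray) -> np.ndarray:
    """Zero out every voxel not belonging to the largest connected region
    of a binary 3D volume (the patent uses ITK for this step; SciPy is a
    stand-in here)."""
    labels, num = ndimage.label(volume)
    if num == 0:
        return volume
    # Voxel count per label; index 0 is the background, so ignore it.
    sizes = np.bincount(labels.ravel())
    sizes[0] = 0
    return (labels == sizes.argmax()).astype(volume.dtype)

# Two blobs: a 2x2x2 block and a single isolated voxel; only the block survives.
vol = np.zeros((4, 4, 4), dtype=np.uint8)
vol[0:2, 0:2, 0:2] = 1
vol[3, 3, 3] = 1
print(keep_largest_component(vol).sum())  # 8
```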

Step 103-2: perform three-dimensional image reconstruction on the Analyze-format three-dimensional image based on the marching cubes algorithm.

In this embodiment, the marching cubes algorithm consists of two main parts:

The first part uses a divide-and-conquer approach to locate the surface of the segmented object in the image. For each cube, the algorithm determines whether and how the object surface intersects it, then moves on to the next cube position. Each vertex of a cube is assigned 1 or 0 according to whether it lies inside or outside the segmented object. A cube has 8 vertices, and the different combinations of vertex assignments represent the different ways in which the object surface can intersect the cube. After removing symmetries, there are 14 distinct intersection patterns. For each pattern, the algorithm determines where the surface crosses an edge by interpolating the pixel values at the edge's vertices.

The second part of the marching cubes algorithm computes the unit normal vectors of the triangles formed by the intersection of the object surface with each cube. The normal vectors are used to distinguish the inside and outside of the surface. To determine a triangle's normal vector, the algorithm interpolates the gradient vectors at the cube vertices. The gradient vector at each vertex is obtained by central differences:

Gx(i,j,k) = (D(i+1,j,k) − D(i−1,j,k)) / Δx;

Gy(i,j,k) = (D(i,j+1,k) − D(i,j−1,k)) / Δy;

Gz(i,j,k) = (D(i,j,k+1) − D(i,j,k−1)) / Δz;

where D(i,j,k) denotes the pixel value at position (i,j,k) in the three-dimensional image, and Δx, Δy and Δz denote the edge lengths of the cube. Gx(i,j,k), Gy(i,j,k) and Gz(i,j,k) denote the gradient components at the vertex in the x, y and z directions, respectively. The algorithm obtains the gradient vector at an intersection point by interpolating between the two vertices of the intersected edge; by marching the cube across the whole image it recovers the surface of the entire segmented object, yielding the node coordinates of a three-dimensional mesh.
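As a minimal sketch, the central-difference gradient at an interior grid vertex can be computed in NumPy (unit grid spacing is assumed by default):

```python
import numpy as np

def vertex_gradient(D: np.ndarray, i: int, j: int, k: int,
                    dx: float = 1.0, dy: float = 1.0, dz: float = 1.0):
    """Central-difference gradient (Gx, Gy, Gz) at interior vertex (i, j, k)
    of a 3D scalar field D, as used for the marching cubes normals."""
    gx = (D[i + 1, j, k] - D[i - 1, j, k]) / dx
    gy = (D[i, j + 1, k] - D[i, j - 1, k]) / dy
    gz = (D[i, j, k + 1] - D[i, j, k - 1]) / dz
    return gx, gy, gz

# For the field D(i,j,k) = i, the x difference spans two cells, so the
# gradient under this convention is (2, 0, 0).
D = np.fromfunction(lambda i, j, k: i, (5, 5, 5))
print(tuple(float(g) for g in vertex_gradient(D, 2, 2, 2)))  # (2.0, 0.0, 0.0)
```

In practice the resulting vertex gradients are normalized before being interpolated along the intersected edge, so the missing 1/2 factor of the usual central difference does not affect the surface normals.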

Step 103-3: smooth the reconstructed three-dimensional image.

In this embodiment, in order to make the surface of the generated segmented object smoother, or in other words to reduce large variations of curvature on the surface, a smoothing algorithm consisting of two consecutive Gaussian smoothing steps is adopted.

In the Gaussian smoothing process, the position of each vertex of the triangular elements of the three-dimensional mesh is recomputed from the vertices of its neighboring triangular elements. The neighboring vertices Vn of a vertex with coordinate vector Vm are all triangle vertices that share an edge or a triangular element with Vm, excluding Vm itself. If the neighboring vertices of Vm are defined as Sm = {Vn : n = 1, 2, 3, …, p}, where p is the total number of neighboring vertices, an averaged displacement vector ΔVm is computed as:

ΔVm = ∑o wmo (Vo − Vm), o = 1, …, p;

where wmo is a weighting parameter whose value is set to wmo = 1/p for 1 ≤ o ≤ p. The coordinates of vertex Vm are then updated as:

Vm′ = Vm + λΔVm;

where the extension parameter λ takes a value in the interval from 0 to 1.

In this embodiment, the first Gaussian smoothing step uses a positive extension parameter λ, and the second uses a negative extension parameter μ, with the two parameters satisfying 0 < λ < −μ < 1. Assuming the vertex coordinates obtained from the first Gaussian smoothing step are Vm′, the second step is:

Vm″ = Vm′ + μΔVm′;

where ΔVm′ is the averaged displacement vector computed from the new vertex coordinates after the first smoothing step, and Vm″ are the new vertex coordinates after the second smoothing step. By repeatedly applying this two-step Gaussian smoothing to the surface, a sufficiently smooth surface is obtained.
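A minimal sketch of the two-pass smoothing on a mesh given as vertex coordinates plus a neighbor list. Uniform weights wmo = 1/p are assumed, and the example values λ = 0.5, μ = −0.53 are illustrative only (the patent does not fix them):

```python
import numpy as np

def smooth_step(V, neighbors, factor):
    """One Gaussian smoothing pass: V'_m = V_m + factor * mean(V_o - V_m)."""
    V_new = V.copy()
    for m, nbrs in enumerate(neighbors):
        if nbrs:
            delta = np.mean(V[nbrs] - V[m], axis=0)  # ΔV_m with w_mo = 1/p
            V_new[m] = V[m] + factor * delta
    return V_new

def two_step_smooth(V, neighbors, lam=0.5, mu=-0.53):
    """Two consecutive passes with 0 < lam < -mu < 1 (Taubin-style)."""
    return smooth_step(smooth_step(V, neighbors, lam), neighbors, mu)

# Toy example: a zigzag of 3 points; the first pass pulls the middle spike
# toward its neighbors' average, and the negative second pass pushes back
# slightly so the shape does not shrink as much as with plain averaging.
V = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
neighbors = [[1], [0, 2], [1]]
print(two_step_smooth(V, neighbors))
```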

Step 103-4: fit the smoothed three-dimensional image based on NURBS.

The image fitting process is further described below:

(1) Principal direction analysis of the point cloud in three-dimensional space

For convenience of notation, the set Vm″ of three-dimensional points of the smoothed image is redefined as a point cloud X{X1, X2, …, Xr} ∈ R³, and the first three principal directions obtained by PCA (principal component analysis) at the start of the NURBS CAD three-dimensional reconstruction are defined as the point cloud coordinate system E{E1, E2, E3}. The solution process is briefly described as follows:

The adjoint (covariance) matrix is constructed as

Σ = ∑k=1..r (Xk − X̄)(Xk − X̄)ᵀ, where X̄ = (1/r) ∑k=1..r Xk;

and an eigenvalue decomposition of this matrix is performed, i.e., ΣE = λE is solved. The first three eigenvectors of E are taken as the initial point cloud coordinate system E{E1, E2, E3}.
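The principal-direction analysis can be sketched as an eigen-decomposition of the centered covariance matrix in NumPy:

```python
import numpy as np

def principal_directions(X: np.ndarray) -> np.ndarray:
    """Return the eigenvectors of the covariance of an (r, 3) point cloud,
    sorted by decreasing eigenvalue; the three columns form the
    point-cloud coordinate system E{E1, E2, E3}."""
    Xc = X - X.mean(axis=0)             # center on the centroid
    cov = Xc.T @ Xc / len(X)            # 3x3 covariance matrix
    evals, evecs = np.linalg.eigh(cov)  # symmetric eigen-decomposition
    order = np.argsort(evals)[::-1]     # largest eigenvalue first
    return evecs[:, order]

# A cloud stretched along x: the dominant principal direction is close to ±x.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3)) * np.array([10.0, 1.0, 0.1])
E = principal_directions(X)
print(np.abs(E[:, 0]))  # dominant direction, approximately the x axis
```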

(2) Rigid-body rotation of the point cloud in three-dimensional space

Because the conditions under which the point clouds for vessel reconstruction are acquired vary, both the spatial orientation and the position are undetermined at the start of the NURBS CAD three-dimensional reconstruction. To ensure that every surface enters the surface reconstruction stage with the same coordinate system, the point cloud must first be rigidly rotated into a unified global coordinate system. The method adopted in this embodiment is SVD (singular value decomposition). Let the global coordinate system be e{e1, e2, e3}; a correlation matrix between the two coordinate systems is then constructed as

H = ∑k=1..3 Ek ekᵀ;

H can be regarded as a matrix admitting a singular value decomposition, and is decomposed as H = UΣVᵀ. From this decomposition of H, the rotation matrix R is obtained as UVᵀ, and the rigid-body translation t is obtained from the centroids of the point cloud before and after rotation. The rotated point cloud x{x1, x2, …, xr} ∈ R³ is then x = RX + t.
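A hedged sketch of the SVD-based rigid alignment, written as the standard Kabsch procedure for corresponding point sets; the patent only specifies that R comes from the SVD factors, so the centroid handling and the reflection guard below are assumptions:

```python
import numpy as np

def rigid_align(X: np.ndarray, Y: np.ndarray):
    """Find R, t such that R @ X_i + t matches Y_i for corresponding
    (n, 3) point sets (Kabsch-style; R is built from the SVD factors)."""
    Xc, Yc = X.mean(axis=0), Y.mean(axis=0)
    H = (X - Xc).T @ (Y - Yc)          # 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T                     # rotation mapping X onto Y
    if np.linalg.det(R) < 0:           # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = Yc - R @ Xc                    # translation from the centroids
    return R, t

# Rotate and translate a cloud by a known transform, then recover it.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Y = X @ R_true.T + np.array([1.0, 2.0, 3.0])
R, t = rigid_align(X, Y)
print(np.allclose(R, R_true, atol=1e-8))  # True
```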

(3) NURBS curve fitting of the two-dimensional point clouds

This patent adopts a layer-by-layer NURBS surface fitting: each layer is fitted as a closed NURBS curve. The curve equation is:

C(u) = [ ∑i=0..n Ni,p(u) ωi Pi ] / [ ∑j=0..n Nj,p(u) ωj ];

where Pi are the control point coordinate vectors and ωi, ωj are the corresponding weights. The basis functions N are obtained by the following recursion:

Ni,0(u) = 1 if ui ≤ u < ui+1, and 0 otherwise;

Ni,p(u) = (u − ui)/(ui+p − ui) · Ni,p−1(u) + (ui+p+1 − u)/(ui+p+1 − ui+1) · Ni+1,p−1(u);

where u = {u0, …, um} is the knot vector, an arbitrarily chosen set of non-negative, uniform and monotonically increasing knots, and p is the curve degree, whose value can be set according to actual needs (p = 2 in this embodiment). The fitted curves of all layers share the same knot vector. The control points Pi are obtained by solving the least-squares system NᵀNP = NᵀX.
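The Cox-de Boor recursion and the least-squares solve NᵀNP = NᵀX can be sketched as follows. Clamped uniform knots, unit weights (a non-rational basis) and an open rather than closed curve are simplifying assumptions of this sketch:

```python
import numpy as np

def bspline_basis(i: int, p: int, u: float, knots) -> float:
    """Cox-de Boor recursion for the basis function N_{i,p}(u)."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) \
            * bspline_basis(i, p - 1, u, knots)
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) \
            * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

def fit_curve(points: np.ndarray, n_ctrl: int, p: int = 2):
    """Least-squares fit of control points, i.e. the solution of
    (N^T N) P = N^T X (solved here via np.linalg.lstsq)."""
    knots = np.concatenate([np.zeros(p),
                            np.linspace(0, 1, n_ctrl - p + 1),
                            np.ones(p)])               # clamped uniform knots
    us = np.linspace(0, 1 - 1e-9, len(points))          # parameter values
    N = np.array([[bspline_basis(i, p, u, knots) for i in range(n_ctrl)]
                  for u in us])
    P, *_ = np.linalg.lstsq(N, points, rcond=None)
    return P, knots

# Fit 20 samples of a parabola with 6 quadratic control points; a global
# quadratic lies in the spline space, so the fit is essentially exact.
ts = np.linspace(0, 1, 20)
pts = np.stack([ts, ts ** 2], axis=1)
P, knots = fit_curve(pts, n_ctrl=6)
print(P.shape)  # (6, 2)
```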

(4) Constructing the NURBS surface by tensor product

Since every curve in every layer uses the same knot vector, a new knot vector v = {v0, …, vm} can be established along the layer direction for the corresponding knots. The NURBS surface is then computed as the tensor product of the NURBS curves in the two directions. The surface equation is

S(u, v) = ∑_{i} ∑_{j} Ri,j(u, v) Pi,j,

where the rational basis is

Ri,j(u, v) = Ni,n(u) Nj,m(v) ωi,j / ( ∑_{k} ∑_{l} Nk,n(u) Nl,m(v) ωk,l );

Pi,j is the value of the i-th control point of the j-th layer along the newly constructed v direction; n and m are the curve orders in the u and v directions, respectively. The values of n and m can be set as required (n = 2 and m = 2 in this embodiment).

In this embodiment, the constructed image segmentation model can quickly and accurately segment the blood vessels in a CT image and output two-dimensional segmentation maps of the vessels, so that an accurate three-dimensional model of the vessels can be constructed.

Embodiment 2

As shown in FIG. 3, the system for reconstructing a three-dimensional model of blood vessels in CT images of this embodiment comprises a modeling module 1 and a three-dimensional reconstruction module 2.

The modeling module 1 is used to establish an image segmentation model based on a neural network. The image segmentation model segments the blood vessels in a CT image and outputs a two-dimensional segmentation map of the vessels. In this embodiment, a fully convolutional neural network is used as the basic architecture for automatic vessel segmentation of CT images.

In this embodiment, the modeling module 1 specifically comprises a data acquisition unit 11 and a model training unit 12.

The data acquisition unit 11 acquires labeled CT images as training samples. Specifically, it collects a large number of relevant, high-precision, manually annotated CT images; one part serves as training samples for training the neural network model, and the other part serves as test samples for evaluating the accuracy of the model after each training iteration.

The model training unit 12 feeds the training samples into the neural network model and trains it according to the model's loss function to obtain the image segmentation model.

During training, the model training unit 12 adjusts the structure of the convolutional neural network with reference to the accuracy observed throughout training and testing (the model's accuracy is computed on the labeled test samples), and stops iterating once the loss function converges to a local optimum and satisfactory performance is observed on the test data, thereby improving training accuracy.

In this embodiment, a dedicated convolutional neural network loss function is designed for the specific task of vessel segmentation. The loss function L can be, but is not limited to:

L = 1 − (2·SI + η) / ( Sum(Mpredict) + Sum(Mtrue) + η );

SI = ∑i,j Mpredict(i,j) × Mtrue(i,j);

Sum(Mpredict) = ∑i,j |Mpredict(i,j)|;

Sum(Mtrue) = ∑i,j |Mtrue(i,j)|;

Here Mtrue denotes the ground-truth segmentation map (two-dimensional segmentation map) of a training-sample CT image, obtained by a trained annotator labeling the original CT image with the corresponding labeling software; Mpredict denotes the predicted segmentation map (two-dimensional segmentation map) output by the neural network model during training; Mtrue(i,j) is the pixel value at position (i,j) in the ground-truth map; Mpredict(i,j) is the pixel value at position (i,j) in the predicted map; SI is the segmentation intersection of the ground-truth and predicted maps; and η is a smoothing parameter.

It should be noted that η can be set as required; in this embodiment the smoothing parameter is set to 1 for numerical reasons. As for the neural network model, the architecture used for training can be chosen according to actual needs (for example, CT images of different cases, required accuracy, and so on), with correspondingly different model parameters.

Take one training sample (a 2D CT image of size W × L, where W is the width and L is the length of the image) as an example. In the ground-truth map, pixels inside the segmented region (that is, the region where the vessel lies) are white and pixels outside it are black. Representing the ground-truth map as a matrix Mtrue ∈ R^(W×L), Mtrue(i,j) = 1 marks a pixel inside the segmented region (a white pixel) and Mtrue(i,j) = 0 a pixel outside it (a black pixel). Likewise, representing the predicted map output by the neural network model as Mpredict ∈ R^(W×L), Mpredict(i,j) = 1 marks a pixel inside the predicted region (white) and Mpredict(i,j) = 0 a pixel outside it (black).
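The behavior of such a loss on binary masks can be sketched with NumPy. The exact equation image is not reproduced in the text; a standard Dice-style formulation consistent with the SI and Sum definitions above is assumed here:

```python
import numpy as np

def dice_loss(m_predict, m_true, eta=1.0):
    """Dice-style loss built from the SI and Sum terms defined above."""
    si = np.sum(m_predict * m_true)                       # segmentation intersection SI
    total = np.sum(np.abs(m_predict)) + np.sum(np.abs(m_true))
    return 1.0 - (2.0 * si + eta) / (total + eta)         # eta smooths the ratio
```

A perfect prediction gives a loss of 0, while fully disjoint masks give a loss close to 1, which is the property a segmentation loss of this form is meant to have.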

The three-dimensional reconstruction module 2 inputs the CT images to be modeled into the image segmentation model, obtains a plurality of two-dimensional segmentation maps, and performs three-dimensional reconstruction on them.

In this embodiment, the three-dimensional reconstruction module 2 specifically comprises a format conversion unit 21, a three-dimensional reconstruction unit 22, a smoothing unit 23 and a fitting unit 24.

The format conversion unit 21 converts the two-dimensional segmentation maps into a three-dimensional binary image in the Analyze format and outputs it to the three-dimensional reconstruction unit 22. Specifically, the format conversion unit 21 uses ITK (an image-analysis software toolkit) to convert the two-dimensional segmentation maps into a three-dimensional binary Analyze-format image.

The generated three-dimensional image contains a number of mutually disconnected segmented regions. ITK is used to find the connected region with the largest volume and to remove all segmented pixels not connected to it, that is, to remove isolated pixels, keeping only the part of the image on which three-dimensional geometric reconstruction is to be performed. The result is the three-dimensional image.
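The largest-connected-component step (performed with ITK in the embodiment) can be sketched with a plain breadth-first search over 6-connected voxels; this stand-in is illustrative and is not the ITK API:

```python
import numpy as np
from collections import deque

def keep_largest_component(mask):
    """Return a mask containing only the largest 6-connected foreground component."""
    labels = np.zeros(mask.shape, dtype=int)
    best_label, best_size, current = 0, 0, 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue                      # voxel already belongs to a labeled component
        current += 1
        labels[start] = current
        queue, size = deque([start]), 0
        while queue:
            i, j, k = queue.popleft()
            size += 1
            for di, dj, dk in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
                nb = (i + di, j + dj, k + dk)
                if (0 <= nb[0] < mask.shape[0] and 0 <= nb[1] < mask.shape[1]
                        and 0 <= nb[2] < mask.shape[2] and mask[nb] and not labels[nb]):
                    labels[nb] = current
                    queue.append(nb)
        if size > best_size:
            best_label, best_size = current, size
    return labels == best_label if best_label else np.zeros(mask.shape, dtype=bool)
```

Applied to a volume holding one large blob and one isolated voxel, only the blob survives, mirroring the "remove isolated pixels" step described above.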

The three-dimensional reconstruction unit 22 performs three-dimensional image reconstruction on the Analyze-format image based on the marching cubes algorithm and outputs the result to the smoothing unit 23.

In this embodiment, the marching cubes algorithm consists of two main parts.

The first part uses a divide-and-conquer approach to locate the surface of the segmented object in the image. For each cube, the algorithm determines whether and how the object surface intersects it, then moves on to the next cube position. Each vertex of the cube is assigned 1 or 0 according to whether it lies inside or outside the segmented object. A cube has 8 vertices, and the different combinations of vertex assignments represent the different ways in which the object surface can intersect the cube; after symmetry is taken into account, 14 distinct intersection patterns remain. For each pattern, the algorithm determines where each intersected edge crosses the surface by interpolating the pixel values at the edge's vertices.

The second part of the marching cubes algorithm computes the unit normal vectors of the triangles formed where the object surface cuts each cube. The normal vectors are used to distinguish the inside and outside of the surface. To determine a triangle's normal, the algorithm interpolates the gradient vectors at the cube's vertices. The gradient vector at each vertex is obtained by the central-difference method:

Gx(i,j,k) = ( D(i+1,j,k) − D(i−1,j,k) ) / (2Δx);

Gy(i,j,k) = ( D(i,j+1,k) − D(i,j−1,k) ) / (2Δy);

Gz(i,j,k) = ( D(i,j,k+1) − D(i,j,k−1) ) / (2Δz);

Here D(i,j,k) is the pixel value at position (i,j,k) in the three-dimensional image; Δx, Δy and Δz are the side lengths of the cube; and Gx(i,j,k), Gy(i,j,k) and Gz(i,j,k) are the gradient components at the vertex in the x, y and z directions, respectively. The algorithm obtains the gradient vector at an intersection point by interpolating between the two vertices of the intersected edge. By finally marching the cube across the whole image domain, the surface of the entire segmented object is obtained, together with the node coordinates of the three-dimensional mesh.
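The central-difference gradients above can be written compactly with array slicing; this sketch computes them at interior voxels only:

```python
import numpy as np

def vertex_gradients(D, dx=1.0, dy=1.0, dz=1.0):
    """Central-difference gradients G_x, G_y, G_z at the interior voxels of D."""
    Gx = (D[2:, 1:-1, 1:-1] - D[:-2, 1:-1, 1:-1]) / (2.0 * dx)
    Gy = (D[1:-1, 2:, 1:-1] - D[1:-1, :-2, 1:-1]) / (2.0 * dy)
    Gz = (D[1:-1, 1:-1, 2:] - D[1:-1, 1:-1, :-2]) / (2.0 * dz)
    return Gx, Gy, Gz
```

On a linear intensity field the central difference recovers the exact gradient, which is a convenient correctness check.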

The smoothing unit 23 smooths the reconstructed three-dimensional image and then invokes the fitting unit 24.

In this embodiment, to make the generated object surface smoother, that is, to reduce regions of large curvature variation, the smoothing algorithm is implemented as two consecutive Gaussian smoothing steps.

In Gaussian smoothing, the position of each vertex of the triangular mesh is recomputed from the vertices of its neighboring triangular elements. For a vertex with coordinate vector Vm, the neighboring vertex coordinate vectors Vn are those of all triangle vertices that share an edge or a triangle with Vm, excluding Vm itself. If the neighbors of Vm are denoted Sm = {Vn : n = 1, 2, 3, …, p}, where p is the total number of neighboring vertices, an average vector ΔVm is computed as:

ΔVm = ∑_{Vo∈Sm} wmo (Vo − Vm);

where wmo is a weighting parameter whose value is set to wmo = 1/p for 1 ≤ o ≤ p. The coordinates of vertex Vm are then updated as:

Vm′ = Vm + λΔVm;

where the scale parameter λ takes a value in the interval 0 to 1.

In this embodiment, the first Gaussian smoothing step uses a positive scale parameter λ and the second step uses a negative scale parameter μ, the two parameters satisfying 0 < λ < −μ < 1. If the vertex coordinates obtained by the first smoothing step are Vm′, the second smoothing step is:

Vm″ = Vm′ + μΔVm′;

where ΔVm′ is the average vector computed from the new vertex coordinates after the first smoothing step, and Vm″ are the vertex coordinates produced by the second smoothing step. By repeatedly applying this two-step Gaussian smoothing to the surface, a sufficiently smooth surface is obtained.
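The two-step λ|μ update can be sketched as follows. The equal neighbor weights wmo = 1/p and the particular pairing λ = 0.5, μ = −0.53 (which satisfies 0 < λ < −μ < 1) are illustrative assumptions:

```python
import numpy as np

def average_vectors(V, neighbors):
    """Delta V_m = sum_o w_mo (V_o - V_m) with equal weights w_mo = 1/p."""
    dV = np.zeros_like(V)
    for m, nb in enumerate(neighbors):
        w = 1.0 / len(nb)
        for o in nb:
            dV[m] += w * (V[o] - V[m])
    return dV

def two_step_smooth(V, neighbors, lam=0.5, mu=-0.53, iterations=5):
    """Shrink step with positive lam, then inflate step with negative mu."""
    V = np.asarray(V, dtype=float).copy()
    for _ in range(iterations):
        V = V + lam * average_vectors(V, neighbors)   # V'  = V  + lam * dV
        V = V + mu * average_vectors(V, neighbors)    # V'' = V' + mu  * dV'
    return V
```

Applied to a polyline whose y-coordinates alternate between +1 and −1, the high-frequency zigzag is flattened almost immediately, which is exactly the behavior the shrink/inflate pair is designed for.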

The fitting unit 24 fits the reconstructed three-dimensional image based on NURBS.

The working principle of the image fitting performed by the fitting unit 24 is further described below.

(1) Principal-direction analysis of the point cloud in three-dimensional space

For convenience of presentation, the three-dimensional point-cloud set Vm″ of the smoothed three-dimensional image is redefined as the point cloud X{X1, X2, …, Xr} ∈ R³, and the first three principal directions obtained by PCA (principal component analysis) decomposition at the start of NURBS CAD three-dimensional reconstruction are defined as the point-cloud coordinate system E{E1, E2, E3}. The solution procedure is briefly as follows.

Construct the covariance matrix

∑ = (1/r) ∑_{i=1}^{r} (Xi − X̄)(Xi − X̄)^T, where X̄ = (1/r) ∑_{i=1}^{r} Xi is the centroid of the point cloud.

Perform an eigenvalue decomposition of this matrix, that is, solve ∑E = λE, and take the first three eigenvectors as the initial point-cloud coordinate system E{E1, E2, E3}.
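The principal-direction step can be sketched as an eigendecomposition of the covariance matrix; the function name is illustrative:

```python
import numpy as np

def principal_directions(X):
    """Columns of the returned matrix are the principal axes, largest variance first."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(X)           # covariance matrix Sigma
    vals, vecs = np.linalg.eigh(cov)   # solve Sigma E = lambda E
    order = np.argsort(vals)[::-1]     # sort by decreasing eigenvalue
    return vecs[:, order], vals[order]
```

For a cloud stretched strongly along one axis, the first returned eigenvector aligns with that axis.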

(2) Rigid-body rotation of the point cloud in three-dimensional space

Because the imaging conditions under which the vessel point clouds are captured vary, their spatial orientation and position are undetermined when NURBS CAD three-dimensional reconstruction begins. To ensure that all surfaces share the same coordinate system when entering the surface-reconstruction stage, the point cloud must first be rigidly rotated into a unified global coordinate system. The method adopted in this embodiment is SVD (singular value decomposition). Let the global coordinate system be e{e1, e2, e3}; the cross-covariance matrix between the two coordinate systems is constructed as

H = ∑_{i=1}^{3} Ei ei^T,

where E{E1, E2, E3} is the point-cloud coordinate system obtained by PCA. H can be regarded as a matrix admitting the singular value decomposition H = UΣV^T. From this decomposition, the rotation matrix is obtained as R = UV^T, and the rigid-body displacement as t = x̄ − RX̄, where X̄ and x̄ are the centroids of the point cloud before and after alignment. The rotated point cloud x{x1, x2, …, xr} ∈ R³ is then x = RX + t.
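The SVD alignment can be sketched in the standard Kabsch form. Note that, depending on how the cross-covariance is transposed, the rotation comes out as UV^T or VU^T; the sketch below maps a cloud X onto a target cloud Y and is an illustration, not the patent's exact procedure:

```python
import numpy as np

def rigid_align(X, Y):
    """Find R, t such that Y ~= R X + t, via SVD of the cross-covariance matrix."""
    Xm, Ym = X.mean(axis=0), Y.mean(axis=0)
    H = (X - Xm).T @ (Y - Ym)          # cross-covariance between the two clouds
    U, S, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against an improper reflection
        Vt[-1] *= -1.0
        R = Vt.T @ U.T
    t = Ym - R @ Xm                    # displacement from the two centroids
    return R, t
```

Given a cloud and a rotated-plus-translated copy of it, the recovered R and t reproduce the transform exactly.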

(3) NURBS curve fitting of the two-dimensional point clouds

This patent adopts layered NURBS-surface fitting: each layer is fitted as a closed (ring-shaped) NURBS curve. The curve equation is

C(u) = ∑_{i=0}^{n} Ni,p(u) ωi Pi / ∑_{j=0}^{n} Nj,p(u) ωj,

where Pi is the control-point coordinate vector and ωi, ωj are the corresponding weights. The basis functions N are obtained from the following recursion:

Ni,0(u) = 1 if ui ≤ u < ui+1, and 0 otherwise;

Ni,p(u) = (u − ui)/(ui+p − ui) · Ni,p−1(u) + (ui+p+1 − u)/(ui+p+1 − ui+1) · Ni+1,p−1(u);

where u = {u0, …, um} is the knot vector, an arbitrarily chosen set of non-negative, uniform and monotonically increasing knots; p is the curve order, which can be set as required (p = 2 in this embodiment). The fitted curves of all layers share the same knot vector. The control points Pi are obtained by solving the least-squares system N^T N P = N^T X.

(4) Constructing the NURBS surface by tensor product

Since every curve in every layer uses the same knot vector, a new knot vector v = {v0, …, vm} can be established along the layer direction for the corresponding knots. The NURBS surface is then computed as the tensor product of the NURBS curves in the two directions. The surface equation is

S(u, v) = ∑_{i} ∑_{j} Ri,j(u, v) Pi,j,

where the rational basis is

Ri,j(u, v) = Ni,n(u) Nj,m(v) ωi,j / ( ∑_{k} ∑_{l} Nk,n(u) Nl,m(v) ωk,l );

Pi,j is the value of the i-th control point of the j-th layer along the newly constructed v direction; n and m are the curve orders in the u and v directions, respectively. The values of n and m can be set as required (n = 2 and m = 2 in this embodiment).

In this embodiment, the constructed image segmentation model can quickly and accurately segment the blood vessels in a CT image and output two-dimensional segmentation maps of the vessels, so that an accurate three-dimensional model of the vessels can be constructed.

Although specific embodiments of the present invention are described above, those skilled in the art should understand that they are illustrative only, and that the scope of protection of the present invention is defined by the appended claims. Those skilled in the art can make various changes or modifications to these embodiments without departing from the principle and essence of the present invention, and all such changes and modifications fall within the scope of protection of the present invention.

Claims (10)

1. A three-dimensional model reconstruction method of a blood vessel in a CT image, the three-dimensional model reconstruction method comprising:
establishing an image segmentation model based on a neural network; the image segmentation model is used for segmenting blood vessels in the CT image and outputting a two-dimensional segmentation map of the blood vessels;
inputting a CT image to be modeled into the image segmentation model to obtain a plurality of two-dimensional segmentation maps;
and performing three-dimensional reconstruction on the two-dimensional segmentation map.
2. The method for reconstructing a three-dimensional model of a blood vessel in a CT image according to claim 1, wherein the step of establishing the image segmentation model based on the neural network specifically comprises:
acquiring the identified CT image as a training sample;
and inputting the training sample into a neural network model, and training the neural network model according to a loss function of the neural network model to obtain the image segmentation model.
3. The method of reconstructing a three-dimensional model of a blood vessel in a CT image according to claim 2, wherein the loss function is:
L = 1 − (2·SI + η) / ( Sum(Mpredict) + Sum(Mtrue) + η );
SI = ∑i,j Mpredict(i,j) × Mtrue(i,j);
Sum(Mpredict) = ∑i,j |Mpredict(i,j)|;
Sum(Mtrue) = ∑i,j |Mtrue(i,j)|;
wherein L represents the loss function; Mtrue(i,j) represents the pixel value at the (i,j) position in the two-dimensional segmentation map of the training sample; Mpredict(i,j) represents the pixel value at the (i,j) position in the two-dimensional segmentation map output by the neural network model; and η represents the smoothing parameter.
4. The method for reconstructing a three-dimensional model of a blood vessel in a CT image according to claim 1, wherein the step of reconstructing the two-dimensional segmentation map in three dimensions specifically comprises:
converting the two-dimensional segmentation map into a three-dimensional binary image in the Analyze format;
carrying out three-dimensional image reconstruction on the three-dimensional image in the Analyze format based on a marching cube algorithm;
fitting the reconstructed three-dimensional image based on NURBS.
5. The method of reconstructing a three-dimensional model of a blood vessel in a CT image as set forth in claim 4, wherein the step of fitting the reconstructed three-dimensional image is preceded by the steps of:
and smoothing the reconstructed three-dimensional image.
6. A three-dimensional model reconstruction system for a blood vessel in a CT image, the three-dimensional model reconstruction system comprising:
the modeling module is used for establishing an image segmentation model based on a neural network; the image segmentation model is used for segmenting blood vessels in the CT image and outputting a two-dimensional segmentation map of the blood vessels;
and the three-dimensional reconstruction module is used for inputting the CT image to be modeled into the image segmentation model to obtain a plurality of two-dimensional segmentation maps and performing three-dimensional reconstruction on the two-dimensional segmentation maps.
7. The system for reconstructing a three-dimensional model of a blood vessel in a CT image according to claim 6, wherein the modeling module comprises:
the data acquisition unit is used for acquiring the identified CT image as a training sample;
and the model training unit is used for inputting the training samples into a neural network model, and training the neural network model according to the loss function of the neural network model to obtain the image segmentation model.
8. The system for reconstructing a three-dimensional model of a blood vessel in a CT image according to claim 7, wherein the loss function is:
L = 1 − (2·SI + η) / ( Sum(Mpredict) + Sum(Mtrue) + η );
SI = ∑i,j Mpredict(i,j) × Mtrue(i,j);
Sum(Mpredict) = ∑i,j |Mpredict(i,j)|;
Sum(Mtrue) = ∑i,j |Mtrue(i,j)|;
wherein L represents the loss function; Mtrue(i,j) represents the pixel value at the (i,j) position in the two-dimensional segmentation map of the training sample; Mpredict(i,j) represents the pixel value at the (i,j) position in the two-dimensional segmentation map output by the neural network model; and η represents the smoothing parameter.
9. The system for reconstructing a three-dimensional model of a blood vessel in a CT image according to claim 6, wherein the three-dimensional reconstruction module comprises:
the format conversion unit is used for converting the two-dimensional segmentation map into a three-dimensional binary image in the Analyze format;
the three-dimensional reconstruction unit is used for reconstructing a three-dimensional image in the Analyze format based on a marching cube algorithm;
and the fitting unit is used for fitting the reconstructed three-dimensional image based on NURBS.
10. The system for reconstructing a three-dimensional model of a blood vessel in a CT image according to claim 9, wherein the three-dimensional reconstruction module further comprises:
and the smoothing unit is used for calling the fitting unit after smoothing the reconstructed three-dimensional image.
CN201811591907.4A 2018-12-25 2018-12-25 Method and system for reconstructing three-dimensional model of blood vessels in CT images Active CN111369662B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811591907.4A CN111369662B (en) 2018-12-25 2018-12-25 Method and system for reconstructing three-dimensional model of blood vessels in CT images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811591907.4A CN111369662B (en) 2018-12-25 2018-12-25 Method and system for reconstructing three-dimensional model of blood vessels in CT images

Publications (2)

Publication Number Publication Date
CN111369662A true CN111369662A (en) 2020-07-03
CN111369662B CN111369662B (en) 2025-02-28

Family

ID=71211472

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811591907.4A Active CN111369662B (en) 2018-12-25 2018-12-25 Method and system for reconstructing three-dimensional model of blood vessels in CT images

Country Status (1)

Country Link
CN (1) CN111369662B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101271574A (en) * 2008-03-20 2008-09-24 华南师范大学 Method and device for three-dimensional visualization
CN101819679A (en) * 2010-04-19 2010-09-01 李楚雅 Three-dimensional medical image segmentation method
CN103679810A (en) * 2013-12-26 2014-03-26 海信集团有限公司 Method for three-dimensional reconstruction of liver computed tomography (CT) image
CN105844693A (en) * 2016-04-29 2016-08-10 青岛大学附属医院 Liver 3D CT reconstruction data information processing system
CN106296653A (en) * 2016-07-25 2017-01-04 浙江大学 Brain CT image hemorrhagic areas dividing method based on semi-supervised learning and system
WO2017028519A1 (en) * 2015-08-18 2017-02-23 青岛海信医疗设备股份有限公司 Hepatic vascular classification method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
谷宇 等: "多模态3D 卷积神经网络脑部胶质瘤分割方法", 《科学技术与工程》, vol. 18, no. 7, 31 March 2018 (2018-03-31), pages 18 - 23 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113436322A (en) * 2021-07-08 2021-09-24 中国科学院宁波材料技术与工程研究所慈溪生物医学工程研究所 Three-dimensional reconstruction method, device, equipment and storage medium for fundus blood vessels
CN114359317A (en) * 2021-12-17 2022-04-15 浙江大学滨江研究院 Blood vessel reconstruction method based on small sample identification
CN114332135A (en) * 2022-03-10 2022-04-12 之江实验室 Semi-supervised medical image segmentation method and device based on dual-model interactive learning
CN114332135B (en) * 2022-03-10 2022-06-10 之江实验室 A semi-supervised medical image segmentation method and device based on dual-model interactive learning

Also Published As

Publication number Publication date
CN111369662B (en) 2025-02-28

Similar Documents

Publication Publication Date Title
CN110599528B (en) Unsupervised three-dimensional medical image registration method and system based on neural network
JP7634017B2 (en) Tooth Segmentation Using Neural Networks
CN109741343B (en) A T1WI-fMRI Image Tumor Collaborative Segmentation Method Based on 3D-Unet and Graph Theory Segmentation
Zhang et al. Patient-specific vascular NURBS modeling for isogeometric analysis of blood flow
CN109272510B (en) A segmentation method for tubular structures in 3D medical images
CN105957066B (en) CT image liver segmentation method and system based on automatic context model
WO2024021523A1 (en) Graph network-based method and system for fully automatic segmentation of cerebral cortex surface
CN106780518A (en) A kind of MR image three-dimensional interactive segmentation methods of the movable contour model cut based on random walk and figure
WO2012017375A2 (en) In-plane and interactive surface mesh adaptation
Wu et al. Segmentation and reconstruction of vascular structures for 3D real-time simulation
Gharleghi et al. Deep learning for time averaged wall shear stress prediction in left main coronary bifurcations
Tobon-Gomez et al. Automatic training and reliability estimation for 3D ASM applied to cardiac MRI segmentation
Wang et al. Left atrial appendage segmentation based on ranking 2-D segmentation proposals
CN111369662A (en) Method and system for reconstructing 3D model of blood vessels in CT images
Gou et al. Automatic image annotation and deep learning for tooth CT image segmentation
Gsaxner et al. PET-train: Automatic ground truth generation from PET acquisitions for urinary bladder segmentation in CT images using deep learning
Nowinski et al. A 3D model of human cerebrovasculature derived from 3T magnetic resonance angiography
CN117036428A (en) Multitasking abdominal organ registration method based on mutual attention and semantic sharing
CN107516314A (en) Medical image supervoxel segmentation method and device
CN115359002B (en) A system and method for automatically detecting plaques in carotid artery ultrasound images
CN116797726A (en) Organ three-dimensional reconstruction method, device, electronic equipment and storage medium
CN112785562A (en) System for evaluating based on neural network model and related products
CN110689080B (en) Planar atlas construction method of blood vessel structure image
CN103903255B (en) A kind of ultrasonic image division method and system
CN114549396B (en) An interactive and automatic spine segmentation refinement method based on graph neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant