
CN119295491B - Domain-adaptive cross-modal medical image segmentation method, device, equipment and medium based on boundary comparison - Google Patents


Info

Publication number
CN119295491B
CN119295491B (application CN202411804970.7A)
Authority
CN
China
Prior art keywords
image
domain
source
boundary
representing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202411804970.7A
Other languages
Chinese (zh)
Other versions
CN119295491A (en)
Inventor
张乐
林喜
黄晨曦
王斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen Cardiovascular Hospital Xiamen University
Original Assignee
Xiamen Cardiovascular Hospital Xiamen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen Cardiovascular Hospital Xiamen University filed Critical Xiamen Cardiovascular Hospital Xiamen University
Priority to CN202411804970.7A priority Critical patent/CN119295491B/en
Publication of CN119295491A publication Critical patent/CN119295491A/en
Application granted granted Critical
Publication of CN119295491B publication Critical patent/CN119295491B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06T7/11 Region-based segmentation (G06T7/00 Image analysis; G06T7/10 Segmentation; Edge detection)
    • G06N3/045 Combinations of networks (G06N3/04 Architecture, e.g. interconnection topology)
    • G06N3/088 Non-supervised learning, e.g. competitive learning (G06N3/08 Learning methods)
    • G06N3/09 Supervised learning (G06N3/08 Learning methods)
    • G06V10/764 Image or video recognition using classification, e.g. of video objects
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/82 Image or video recognition using neural networks
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract


The present invention provides a domain-adaptive cross-modal medical image segmentation method, device, equipment and medium based on boundary contrast, and relates to the technical field of medical image segmentation. The method comprises: obtaining source samples and target domain images from source domain and target domain data sets respectively; using a first image segmentation model as a teacher model to assign weights to pseudo labels of target domain images; obtaining training samples through bidirectional cross-domain cutmix; using a first model with learnable weights as a student model to perform supervision and self-training; combining supervision loss and contrast loss during training to optimize parameters with the goal of pulling query samples toward prototype positive samples and pushing away boundary negative samples, obtaining a second image segmentation model; and finally using the model to segment a second modality medical image. This method has significant advantages in cross-modal medical image segmentation and can effectively improve segmentation performance.

Description

Domain-adaptive cross-modal medical image segmentation method, device, equipment and medium based on boundary contrast
Technical Field
The invention relates to the technical field of medical image segmentation, and in particular to a domain-adaptive cross-modal medical image segmentation method, device, equipment and medium based on boundary contrast.
Background
In recent years, deep convolutional neural networks have demonstrated good performance in medical image segmentation. However, when a model is applied to a realistic clinical scenario, performance often drops significantly because the features of medical images acquired by different scan protocols differ. To address the domain shift challenge, unsupervised domain adaptation (UDA) is an effective solution that transfers knowledge from a labeled source domain (e.g., MRI) to an unlabeled target domain (e.g., CT).
In UDA studies of medical image segmentation, the distributions of the two domains are in many cases aligned using adversarial training. For example, image appearance is transformed to align distributions with a cycle-consistent generative adversarial network (CycleGAN), domain shift is addressed with joint image and feature alignment, or the domain gap is minimized by adversarial entropy minimization. However, these methods suffer from problems such as difficult network convergence and training collapse. Furthermore, contrastive learning aims at learning a discriminative representation space, and some works extend it to semantic segmentation tasks.
Meanwhile, self-training achieves knowledge transfer by exploiting consistency constraints between the target predictions of a student model and a teacher model; for example, domain adaptation via cross-domain mixed sampling (DACS) performs excellently by constructing mixed training samples for self-training. However, current state-of-the-art UDA methods typically focus on the overall segmentation performance of the entire object and ignore object boundaries. In cross-modal medical image segmentation this phenomenon is more severe, because the inter-modality distribution gap is large and the intensity contrast between different organ structures is low, leading to poor segmentation performance.
Disclosure of Invention
The invention provides a domain-adaptive cross-modal medical image segmentation method, device, equipment and medium based on boundary contrast, so as to alleviate at least one of the above technical problems.
In a first aspect, the present invention provides a domain-adaptive cross-modality medical image segmentation method based on boundary contrast, which includes steps S1 to S7.
S1, acquiring a source sample from a source domain data set of a first modality. The source sample contains a source domain image and a source tag.
S2, acquiring a target domain image corresponding to the source domain image from a target domain data set of a second modality.
S3, taking a first image segmentation model trained on the first-modality data as a teacher model, inputting the target domain image into the teacher model to obtain a pseudo label, and, based on a dynamic weight allocation strategy, assigning different weights to the pseudo label according to the prediction entropy of each pixel.
S4, mixing the source domain image and the target domain image into a mixed image through a bidirectional cross-domain cutmix, mixing the source label and the pseudo label into a mixed label, and obtaining a training sample.
S5, taking the first image segmentation model with the weight capable of being learned as a student model, inputting the source sample into the student model for supervision training, and inputting the training sample into the student model for self training.
And S6, during training, acquiring query samples based on the features extracted by the student model, acquiring prototype positive samples based on the features extracted by the teacher model from the source domain images, and acquiring boundary negative samples based on the boundary features extracted at the boundary of each object using the source labels; performing iterative training by combining the supervision loss and the contrast loss, optimizing the parameters of the student model with the goal of pulling query samples toward the prototype positive samples and pushing them away from the boundary negative samples, thereby enhancing the discrimination capability of the model in boundary regions, and acquiring a second image segmentation model.
S7, segmenting the medical image of the second modality according to the second image segmentation model, and acquiring the segmented medical image of the second modality.
In an alternative embodiment, the dynamic weight allocation policy is:
$$w_{t,i}^{(j)} = \begin{cases} 1, & E_{t,i}^{(j)} < H_t^{low} \\ \lambda, & H_t^{low} \le E_{t,i}^{(j)} \le H_t^{high} \\ 0, & E_{t,i}^{(j)} > H_t^{high} \end{cases}$$

$$E_{t,i}^{(j)} = -\sum_{c=1}^{C} p_{t,i}^{(j,c)} \log p_{t,i}^{(j,c)}$$

wherein, $w_{t,i}^{(j)}$ is the confidence weight, $t$ represents the number of the current training iteration, $i$ is the serial number of the image, $j$ is the serial number of a pixel in the image, $E_{t,i}^{(j)}$ is the prediction entropy, $H_t^{low}$ and $H_t^{high}$ are respectively the first entropy threshold and the second entropy threshold of the $t$-th iteration, $\lambda$ is the predefined confidence weight for the target unsupervised loss, $C$ indicates the number of label categories, $c$ is the serial number of the label category, $p_{t,i}^{(j,c)}$ is the prediction probability of the $c$-th label category for the $j$-th pixel of the $i$-th image at the $t$-th iteration, and $H$ and $W$ represent the height and width of the image, respectively.
$H_t^{low}$ and $H_t^{high}$ are obtained by calculating quantiles with the ratios $\alpha_t$ and $\beta_t$, respectively:

$$H_t^{low} = \mathrm{np.percentile}\left(\mathrm{flatten}(E_t),\ 100\,\alpha_t\right)$$

$$H_t^{high} = \mathrm{np.percentile}\left(\mathrm{flatten}(E_t),\ 100\,(1-\beta_t)\right)$$

$$\alpha_t = \alpha_0\left(1 - \frac{t}{t_{max}}\right)$$

$$\beta_t = \beta_0\left(1 - \frac{t}{t_{max}}\right)$$

wherein, np.percentile is a function in the numpy library for calculating quantiles, $E_t$ is the pixel-level entropy map, $\mathrm{flatten}(\cdot)$ represents converting a two-dimensional array into a one-dimensional array for the quantile calculation, $\alpha_t$ and $\beta_t$ are respectively the first parameter and the second parameter related to dynamic weight allocation, $\alpha_0$ and $\beta_0$ are respectively the initial ratios of $\alpha_t$ and $\beta_t$, and $t_{max}$ represents the maximum number of iterations.
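A minimal numpy sketch of this dynamic weight allocation strategy; the function names, the softmax input shape, and the exact threshold direction (low entropy treated as reliable) are assumptions reconstructed from the description above:

```python
import numpy as np

def entropy_map(prob):
    """Pixel-wise prediction entropy from softmax probabilities.
    prob: array of shape (C, H, W); returns an (H, W) entropy map."""
    eps = 1e-12
    return -np.sum(prob * np.log(prob + eps), axis=0)

def dynamic_weights(prob, t, t_max, alpha0=0.2, beta0=0.2, lam=0.5):
    """Assign a confidence weight to every pixel of a pseudo label.

    Pixels with entropy below the low threshold get weight 1 (reliable),
    pixels above the high threshold get weight 0 (discarded), and the
    band in between gets the predefined weight `lam`.  Thresholds are
    quantiles of the flattened entropy map whose ratios shrink linearly
    with the iteration count t."""
    ent = entropy_map(prob)
    alpha_t = alpha0 * (1.0 - t / t_max)   # low-quantile ratio
    beta_t = beta0 * (1.0 - t / t_max)     # high-quantile ratio
    h_low = np.percentile(ent.flatten(), 100 * alpha_t)
    h_high = np.percentile(ent.flatten(), 100 * (1.0 - beta_t))
    w = np.full(ent.shape, lam)
    w[ent < h_low] = 1.0    # most confident pixels
    w[ent > h_high] = 0.0   # least confident pixels
    return w
```

The per-pixel weights can then multiply the unsupervised cross-entropy term pixel by pixel instead of using one scalar image-level weight.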
In an alternative embodiment, step S4 specifically includes steps S41 to S43.
S41, generating a zero-center mask $M$, which is used for bidirectionally mixing the source image and the target image to obtain a mixed image:

$$x_i^{mix} = M \odot x_i^{s} + (1 - M) \odot x_i^{t}$$

wherein, $x_i^{mix}$ is the $i$-th mixed image, the superscript $mix$ representing a mixture, $x_i^{s}$ is the $i$-th source domain image, $i$ representing the serial number of an image, $x_i^{t}$ is the $i$-th target domain image, the superscript $t$ representing the target domain, $M \in \{0,1\}^{H \times W}$ is the zero-center mask, $\odot$ represents element-wise multiplication, and $H$ and $W$ represent the height and width of the image, respectively.
S42, mixing the source label and the pseudo label in the same way as the mixed image, obtaining the mixed label $y_i^{mix}$.
S43, acquiring the training samples $\{(x_i^{mix}, y_i^{mix})\}_{i=1}^{N_m}$ according to the mixed images and the mixed labels, wherein $i$ represents the serial number of an image and $N_m$ represents the number of mixed images.
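The bidirectional cutmix of steps S41 to S43 can be sketched as follows for single-channel images; the mask ratio and function names are illustrative assumptions:

```python
import numpy as np

def zero_center_mask(h, w, ratio=0.5):
    """Binary mask that is 0 in a centred rectangle and 1 elsewhere."""
    m = np.ones((h, w))
    rh, rw = int(h * ratio), int(w * ratio)
    top, left = (h - rh) // 2, (w - rw) // 2
    m[top:top + rh, left:left + rw] = 0
    return m

def bidirectional_cutmix(x_s, x_t, m):
    """Bidirectional cross-domain cutmix: paste the centre patch of the
    target into the source frame and vice versa, yielding two mixed
    images that act as intermediate-domain training samples."""
    x_s2t = m * x_s + (1 - m) * x_t   # source frame, target centre patch
    x_t2s = m * x_t + (1 - m) * x_s   # target frame, source centre patch
    return x_s2t, x_t2s
```

Labels are mixed with exactly the same mask, so pixel provenance in image and label always matches.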
In an alternative embodiment, the query sample is obtained based on the features extracted by the student model, which specifically includes:
For each category other than the background category, reliable pixels in the current mini-batch whose prediction entropy is below a preset entropy threshold are taken as query candidates.
In an alternative embodiment, obtaining a prototype positive sample based on features extracted from a source domain image by a teacher model specifically includes:
initializing the class-level prototype $\rho_c^{(0)}$ using the class center of the original source-domain pixel features:

$$\rho_c^{(0)} = \frac{\sum_{i=1}^{N_s}\sum_{j=1}^{h \times w} \mathbb{1}\!\left[\hat{y}_i^{s,(j)} = c\right] h_\xi(x_i^{s})^{(j)}}{\sum_{i=1}^{N_s}\sum_{j=1}^{h \times w} \mathbb{1}\!\left[\hat{y}_i^{s,(j)} = c\right]}$$

wherein, $N_s$ represents the number of samples of the source domain dataset, the superscript $s$ representing the source domain, $i$ is the serial number of the image, $h$ and $w$ represent the height and width of the feature map, $j$ is the serial number of a pixel, $h_\xi$ is the indicator head of the teacher model, $h_\xi(x_i^{s})^{(j)} \in \mathbb{R}^{D}$ is the output of the teacher indicator head corresponding to the $j$-th pixel, $\hat{y}_i^{s}$ is the real label $y_i^{s}$ downsampled to the output resolution, $\mathbb{R}$ represents the real number set, $D$ is the channel dimension of the feature, $c$ is the serial number of the label category, and $\mathbb{1}[\cdot]$ is an indicator function whose value is 1 when the condition holds and 0 otherwise.
The class-level prototype is updated in each iteration in a progressive refinement manner, wherein the $c$-th class prototype at the $t$-th iteration, $\rho_c^{(t)}$, is defined by the class-level mean vector of the pixel features in the mini-batch:

$$\rho_c^{(t)} = \mu\,\rho_c^{(t-1)} + (1-\mu)\,\frac{1}{|B_c|}\sum_{j \in B_c} h_\xi^{(j)}$$

wherein, $\mu$ is the momentum coefficient, $\rho_c^{(t-1)}$ represents the $c$-th category prototype at the $(t-1)$-th iteration, $B_c$ represents the set of pixels belonging to the $c$-th category in the current mini-batch, and $|B_c|$ represents its size.
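A minimal numpy sketch of the prototype initialisation and momentum update described above; flattened pixel features and integer labels are assumed:

```python
import numpy as np

def init_prototypes(feats, labels, num_classes):
    """Initialise one prototype per class as the mean feature vector of
    all source pixels carrying that (downsampled) label.
    feats: (N, D) pixel features; labels: (N,) integer class ids."""
    protos = np.zeros((num_classes, feats.shape[1]))
    for c in range(num_classes):
        sel = labels == c
        if sel.any():
            protos[c] = feats[sel].mean(axis=0)
    return protos

def ema_update(protos, feats, labels, mu=0.99):
    """Progressively refine class prototypes with the class-level mean
    of the current mini-batch, using momentum coefficient mu."""
    new = protos.copy()
    for c in range(protos.shape[0]):
        sel = labels == c
        if sel.any():
            new[c] = mu * protos[c] + (1 - mu) * feats[sel].mean(axis=0)
    return new
```

Classes absent from a mini-batch simply keep their previous prototype, which matches the progressive-refinement reading of the text.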
In an alternative embodiment, the boundary negative sample is obtained based on the boundary features extracted from the boundary of each object by the source tag, specifically including:
Pixels which do not belong to the current object class are sampled from the periphery of each object region, their corresponding feature vectors are used as boundary negative samples, and class-level memory banks are used to store the boundary negative samples. Specifically, for the $c$-th class object in the output $h_\xi(x_i^{s})$ of the indicator head of the teacher model, a binary boundary mask $B_c^{bnd}$ is obtained by performing a morphological operation on the downsampled label $\hat{y}_i^{s}$; then the feature vectors are extracted using $B_c^{bnd}$ and the downsampled label $\hat{y}_i^{s}$ to obtain the boundary negative samples.
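A sketch of the boundary negative sampling described above, using a numpy-only binary dilation in place of a library morphological operator (the exact structuring element and dilation width are assumptions, since the text does not name them):

```python
import numpy as np

def dilate(mask, it=1):
    """Binary dilation with a 3x3 structuring element via array shifts."""
    out = mask.astype(bool)
    for _ in range(it):
        p = np.pad(out, 1)
        out = (p[1:-1, 1:-1]
               | p[:-2, 1:-1] | p[2:, 1:-1] | p[1:-1, :-2] | p[1:-1, 2:]
               | p[:-2, :-2] | p[:-2, 2:] | p[2:, :-2] | p[2:, 2:])
    return out

def boundary_negative_mask(label, c, width=1):
    """Ring of pixels just outside the class-c region: dilate the binary
    object mask and subtract the object itself.  Feature vectors sampled
    at these positions serve as boundary negative samples."""
    obj = label == c
    return dilate(obj, width) & ~obj
```

The resulting mask indexes the teacher feature map; the selected vectors are then pushed into the class-level memory bank.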
In an alternative embodiment, the loss function $\mathcal{L}$ during training is:

$$\mathcal{L} = \mathcal{L}_{sup} + \mathcal{L}_{mix} + \lambda_1 \mathcal{L}_{ctr}^{s} + \lambda_2 \mathcal{L}_{ctr}^{mix}$$

wherein, $\mathcal{L}_{sup}$ is the supervision loss for the labeled source domain images, $\mathcal{L}_{mix}$ represents minimization of the weighted cross entropy loss between the prediction and the mixed pseudo label, $\lambda_1$ and $\lambda_2$ are balance coefficients, $\mathcal{L}_{ctr}^{s}$ is the pixel-level contrastive learning loss of the source domain images, and $\mathcal{L}_{ctr}^{mix}$ is the pixel-level contrastive learning loss of the mixed images.
$$\mathcal{L}_{sup} = \frac{1}{N_s}\sum_{i=1}^{N_s}\left[\mathcal{L}_{ce}\!\left(p_i^{s}, y_i^{s}\right) + \mathcal{L}_{dice}\!\left(p_i^{s}, y_i^{s}\right)\right]$$

wherein, $N_s$ represents the number of samples of the source domain dataset, $i$ is the serial number of the image, $\mathcal{L}_{ce}$ is the cross entropy loss, $\mathcal{L}_{dice}$ is the Dice loss, $y_i^{s}$ is the real label, and $p_i^{s}$ is the prediction of the student model.
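The supervised term (cross entropy plus Dice) can be sketched per image as follows; the array shapes and the softmax-probability input are assumptions:

```python
import numpy as np

def ce_dice_loss(prob, onehot, eps=1e-6):
    """Per-image supervised loss combining cross entropy and Dice.
    prob, onehot: (C, H, W) arrays, prob softmax-normalised over the
    class axis, onehot the one-hot ground-truth label."""
    # pixel-averaged cross entropy
    ce = float(-(onehot * np.log(prob + eps)).sum(axis=0).mean())
    # soft Dice, averaged over classes
    inter = (prob * onehot).sum(axis=(1, 2))
    dice = 1.0 - float(((2 * inter + eps)
                        / (prob.sum(axis=(1, 2))
                           + onehot.sum(axis=(1, 2)) + eps)).mean())
    return ce + dice
```

Cross entropy drives per-pixel accuracy while the Dice term counters class imbalance, which is why the two are summed.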
$$\mathcal{L}_{mix} = -\frac{1}{N_m H W}\sum_{i=1}^{N_m}\sum_{j=1}^{H \times W} w_i^{mix,(j)} \sum_{c=1}^{C} \mathbb{1}\!\left[y_i^{mix,(j)} = c\right] \log p_i^{mix,(j,c)}$$

wherein, $N_m$ is the number of mixed images, the subscript or superscript $mix$ representing a mixed image, $H$ and $W$ respectively represent the height and width of the image, $w_i^{mix,(j)}$ is the weight of the $j$-th pixel in the mixed image, $j$ is the serial number of a pixel in the image, $\mathbb{1}[\cdot]$ is an indicator function, $y_i^{mix,(j)}$ is the mixed label of the $j$-th pixel of the mixed image, and $p_i^{mix,(j,c)}$ is the prediction probability of the student model for the mixed image.
For query samples in the source domain dataset $D_s$ and the training sample set $D_{mix}$, the pixel-level contrast loss is calculated using the prototype positive samples and the boundary negative samples, yielding the pixel-level contrastive learning loss $\mathcal{L}_{ctr}^{s}$ of the source domain images and the pixel-level contrastive learning loss $\mathcal{L}_{ctr}^{mix}$ of the mixed images.
The pixel-level contrast loss $\mathcal{L}_{ctr}$ is:

$$\mathcal{L}_{ctr} = -\frac{1}{C \times Q}\sum_{c=1}^{C}\sum_{q=1}^{Q} \log \frac{\exp\!\left(z_{c,q}\cdot\rho_c/\tau\right)}{\exp\!\left(z_{c,q}\cdot\rho_c/\tau\right) + \sum_{n=1}^{N^{-}}\exp\!\left(z_{c,q}\cdot z_{n}^{-}/\tau\right)}$$

wherein, $C$ is the number of label categories, $c$ is the serial number of the label category, $Q$ is the number of query samples, $\exp(\cdot)$ is the natural exponential function, $z_{c,q}$ is the $q$-th query sample of category $c$, $\rho_c$ represents the positive prototype, $\tau$ is the temperature, $N^{-}$ is the number of negative samples, and $z_{n}^{-}$ represents a negative sample.
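The contrast term can be sketched for a single class as the InfoNCE-style loss below, with the prototype as the positive and boundary features as negatives; L2-normalised feature vectors are assumed:

```python
import numpy as np

def contrastive_loss(queries, proto, negatives, tau=0.1):
    """Pixel contrast for one class: pull each query towards the
    prototype positive and push it away from boundary negatives.
    queries: (Q, D), proto: (D,), negatives: (K, D)."""
    pos = np.exp(queries @ proto / tau)                  # (Q,) positive similarities
    neg = np.exp(queries @ negatives.T / tau).sum(axis=1)  # (Q,) summed negatives
    return float(-np.log(pos / (pos + neg)).mean())
```

Averaging this quantity over all classes and query samples gives the loss in the formula above; queries aligned with the prototype yield a near-zero loss, queries aligned with a boundary negative a large one.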
In an alternative embodiment, the source domain image of the first modality is one of a magnetic resonance imaging (MRI) image and a computed tomography (CT) image. The target domain image of the second modality is the other of the MRI image and the CT image.
In a second aspect, the invention provides a domain-adaptive cross-modal medical image segmentation device based on boundary contrast, which comprises a source sample acquisition module, a target domain image acquisition module, a pseudo tag acquisition module, a mixing module, a student model module, a training module and a segmentation module.
And the source sample acquisition module is used for acquiring a source sample from the source domain data set of the first modality. The source sample contains a source domain image and a source tag.
And the target domain image acquisition module is used for acquiring a target domain image corresponding to the source domain image from the target domain data set of the second modality.
The pseudo tag acquisition module is used for taking a first image segmentation model trained based on first modal data as a teacher model, inputting a target domain image into the teacher model to acquire a pseudo tag, and distributing different weights for the pseudo tag according to the prediction entropy of the pixel based on a dynamic weight distribution strategy.
And the mixing module is used for mixing the source domain image and the target domain image into a mixed image through a bidirectional cross-domain cutmix, mixing the source label and the pseudo label into a mixed label, and obtaining a training sample.
And the student model module is used for taking the first image segmentation model with the weight capable of being learned as a student model, inputting the source sample into the student model for supervision training, and inputting the training sample into the student model for self training.
And the training module is used for acquiring query samples based on the features extracted by the student model, acquiring prototype positive samples based on the features extracted by the teacher model from the source domain images, acquiring boundary negative samples based on the boundary features extracted at the boundary of each object using the source labels, performing iterative training by combining the supervision loss and the contrast loss, optimizing the parameters of the student model with the goal of pulling query samples toward the prototype positive samples and pushing them away from the boundary negative samples, enhancing the discrimination capability of the model in boundary regions, and acquiring a second image segmentation model.
And the segmentation module is used for segmenting the medical image of the second modality according to the second image segmentation model and acquiring the segmented medical image of the second modality.
In a third aspect, the invention provides a boundary contrast-based domain-adaptive cross-modality medical image segmentation apparatus comprising a processor, a memory, and a computer program stored within the memory. The computer program is executable by the processor to implement a domain-adaptive cross-modality medical image segmentation method based on boundary contrast as described in any of the paragraphs of the first aspect.
In a fourth aspect, the present invention provides a computer readable storage medium, the computer readable storage medium including a stored computer program, wherein the computer program when run controls a device in which the computer readable storage medium is located to perform a method for domain-adaptive cross-modality medical image segmentation based on boundary contrast according to any one of the first aspects.
By adopting the technical scheme, the invention can obtain the following technical effects:
Compared with traditional frameworks, the boundary-contrast-based domain-adaptive cross-modal medical image segmentation method of the present invention actively focuses on fuzzy boundary regions and achieves an excellent segmentation effect at class boundaries. When segmenting cross-modal cardiac datasets, the entropy at the boundaries of different cardiac structure categories is low and the confidence is high, so the segmentation performance is effectively improved. Secondly, unlike previous methods that mix samples unidirectionally, the method creates mixed images from two directions for self-training; by pasting patches of the source domain onto the target domain and patches of the target domain onto the source domain, it greatly helps the intermediate domain learn domain-invariant features. Moreover, the proposed strategy can finely adjust the confidence weights of the pseudo labels, effectively preventing unstable training and early performance degradation.
Drawings
In order to more clearly illustrate the technical solutions of the present invention, the drawings that are required to be used in the embodiments of the present invention will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person having ordinary skill in the art.
FIG. 1 is a flow diagram of a domain-adaptive cross-modality medical image segmentation method based on boundary contrast.
Fig. 2 is a logic block diagram of a domain-adaptive cross-modality medical image segmentation method based on boundary contrast.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It will be apparent that the described embodiments are only some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1 and 2, a first embodiment of the present invention provides a domain-adaptive cross-modality medical image segmentation method based on boundary contrast, which includes steps S1 to S7.
S1, acquiring a source sample from a source domain data set of a first modality. The source sample contains a source domain image and a source tag.
S2, acquiring a target domain image corresponding to the source domain image from a target domain data set of a second modality.
In this embodiment, the source domain image of the first modality is one of a magnetic resonance imaging (MRI) image and a computed tomography (CT) image, and the target domain image of the second modality is the other of the MRI image and the CT image. In other embodiments, the first modality and the second modality may be other medical image acquisition modes, which are not particularly limited by the present invention.
The invention first defines a labeled source domain dataset $D_s = \{(x_i^{s}, y_i^{s})\}_{i=1}^{N_s}$ and an unlabeled target domain dataset $D_t = \{x_i^{t}\}_{i=1}^{N_t}$. Our goal is to train a segmentation model that migrates the knowledge of $D_s$ to $D_t$, where $x_i^{s}$ is a source domain image, $y_i^{s}$ is a source label, $N_s$ is the number of source samples, $x_i^{t}$ is a target domain image, and $N_t$ is the number of target domain samples.
Fig. 2 shows the overall framework of the unsupervised domain adaptation for medical image segmentation of the present invention. The whole framework has two segmentation models, namely a student model with learnable weights and a teacher model whose weights are computed as an exponential moving average (EMA) of the student weights. Each model consists of an encoder, a classifier and an indicator head.
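The teacher-weight EMA update mentioned above can be sketched as follows; representing the models as lists of parameter arrays and the decay value are illustrative assumptions:

```python
import numpy as np

def ema_teacher(teacher_w, student_w, decay=0.999):
    """Exponential-moving-average update of the teacher parameters from
    the learnable student parameters, one array per layer.  The teacher
    is never trained by gradient descent; it only tracks the student."""
    return [decay * t + (1 - decay) * s
            for t, s in zip(teacher_w, student_w)]
```

A high decay keeps the teacher a slowly-moving, temporally smoothed copy of the student, which stabilises the pseudo labels it produces.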
In the training process, the target data is input into the teacher model to obtain the prediction probability $p^{t} \in \mathbb{R}^{H \times W \times C}$ and the pseudo label $\hat{y}^{t}$, and weights are dynamically assigned as described below, wherein $H$ and $W$ represent the height and width of the image, respectively, and $C$ represents the number of categories. The training samples are then constructed by mixing the images of the two domains, and the source domain labels with the target pseudo labels, using bidirectional cross-domain cutmix. The specific operations are as in steps S3 to S6.
S3, taking a first image segmentation model trained based on the first modal data as a teacher model, inputting the target domain image into the teacher model, obtaining a pseudo tag, and distributing different weights for the pseudo tag according to the prediction entropy of the pixel based on a dynamic weight distribution strategy. Specifically, the invention provides a strategy to fine tune the confidence weight of the pseudo tag to prevent training instability and early performance degradation.
Preferably, the dynamic weight allocation policy is:
$$w_{t,i}^{(j)} = \begin{cases} 1, & E_{t,i}^{(j)} < H_t^{low} \\ \lambda, & H_t^{low} \le E_{t,i}^{(j)} \le H_t^{high} \\ 0, & E_{t,i}^{(j)} > H_t^{high} \end{cases}$$

$$E_{t,i}^{(j)} = -\sum_{c=1}^{C} p_{t,i}^{(j,c)} \log p_{t,i}^{(j,c)}$$

wherein, $w_{t,i}^{(j)}$ is the confidence weight, $t$ represents the number of the current training iteration, $i$ is the serial number of the image, $j$ is the serial number of a pixel in the image, $E_{t,i}^{(j)}$ is the prediction entropy, $H_t^{low}$ and $H_t^{high}$ are respectively the first entropy threshold and the second entropy threshold of the $t$-th iteration, $\lambda$ is the predefined confidence weight for the target unsupervised loss, $C$ indicates the number of label categories, $c$ is the serial number of the label category, $p_{t,i}^{(j,c)}$ is the prediction probability of the $c$-th label category for the $j$-th pixel of the $i$-th image at the $t$-th iteration, and $H$ and $W$ represent the height and width of the image, respectively.
$H_t^{low}$ and $H_t^{high}$ are obtained by calculating quantiles with the ratios $\alpha_t$ and $\beta_t$, respectively:

$$H_t^{low} = \mathrm{np.percentile}\left(\mathrm{flatten}(E_t),\ 100\,\alpha_t\right)$$

$$H_t^{high} = \mathrm{np.percentile}\left(\mathrm{flatten}(E_t),\ 100\,(1-\beta_t)\right)$$

wherein, np.percentile is a function in the numpy library for calculating quantiles, $E_t$ is the pixel-level entropy map, $\mathrm{flatten}(\cdot)$ represents converting a two-dimensional array into a one-dimensional array for the quantile calculation, and $\alpha_t$ and $\beta_t$ are respectively the first parameter and the second parameter related to dynamic weight allocation. The first parameter and the second parameter are dynamically changed during the training process to determine different entropy thresholds.
In an alternative embodiment, the quantile ratios are adjusted dynamically in a linear fashion:

$$\alpha_t = \alpha_0\left(1 - \frac{t}{t_{max}}\right)$$

$$\beta_t = \beta_0\left(1 - \frac{t}{t_{max}}\right)$$

wherein, $\alpha_0$ and $\beta_0$ are respectively the initial ratios of $\alpha_t$ and $\beta_t$, $t$ represents the number of the current training iteration, and $t_{max}$ represents the maximum number of iterations.
The present invention proposes a new strategy to dynamically assign weights to different parts of the pseudo label to prevent early performance degradation. In particular, because pseudo labels are typically noisy, domain adaptation via cross-domain mixed sampling (DACS) defines a single confidence weight for the target unsupervised loss, namely the proportion of pixels in the whole image that exceed a threshold. However, this approach gives every pixel in the image the same confidence weight, meaning that it neither removes unreliable pixels nor assigns higher weights to reliable pixels, resulting in confirmation bias. In this case, model degradation will occur as training proceeds. To solve this problem, we assign different weights to the pixels based on entropy.
S4, mixing the source domain image and the target domain image into a mixed image through bidirectional cross-domain cutmix, mixing the source label and the pseudo label into a mixed label, and obtaining a training sample. Specifically, unlike conventional methods that mix samples unidirectionally, the present invention creates mixed images for training from two directions, namely pasting patches of the source domain onto the target domain and pasting patches of the target domain onto the source domain. This helps the intermediate domain better learn domain-invariant features.
On the basis of the above embodiment, in an alternative embodiment of the present invention, step S4 specifically includes steps S41 to S43.
S41, generating a zero-center mask $M$, which is used for bidirectionally mixing the source image and the target image to obtain a mixed image:

$$x_i^{mix} = M \odot x_i^{s} + (1 - M) \odot x_i^{t}$$

wherein, $x_i^{mix}$ is the $i$-th mixed image, the superscript $mix$ representing a mixture, $x_i^{s}$ is the $i$-th source domain image, $i$ representing the serial number of an image, $x_i^{t}$ is the $i$-th target domain image, the superscript $t$ representing the target domain, $M \in \{0,1\}^{H \times W}$ is the zero-center mask, $\odot$ represents element-wise multiplication, and $H$ and $W$ represent the height and width of the image, respectively.
S42, mixing the source label and the pseudo label in the same way as the mixed image, obtaining the mixed label $y_i^{mix}$.
S43, acquiring the training samples $\{(x_i^{mix}, y_i^{mix})\}_{i=1}^{N_m}$ according to the mixed images and the mixed labels, wherein $i$ represents the serial number of an image and $N_m$ represents the number of mixed images.
Specifically, as shown in FIG. 2, two source images $x_1^{s}, x_2^{s}$ and two target images $x_1^{t}, x_2^{t}$ are randomly selected from the training set in each iteration, and a zero-center mask $M$ is generated. The source images and the target images are bidirectionally copy-pasted as follows:

$$x_i^{t \leftarrow s} = M \odot x_i^{t} + (1 - M) \odot x_i^{s}$$

$$x_i^{s \leftarrow t} = M \odot x_i^{s} + (1 - M) \odot x_i^{t}$$

wherein, $i$ is the image serial number.
The source labels and the target pseudo labels are mixed in the same way to generate the mixed labels $y_1^{mix}$ and $y_2^{mix}$, thus obtaining the training samples $\{(x_i^{mix}, y_i^{mix})\}$. Furthermore, the confidence weights of pixels coming from the source images are set to 1, and they are mixed with the weights $w$ calculated by the dynamic weight allocation strategy to obtain the mixed weight $w^{mix}$.
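For one mixing direction, the label and confidence-weight mixing just described can be sketched as follows; the function name is an assumption, while the source-pixel weight of 1 follows the description above:

```python
import numpy as np

def mix_labels_and_weights(y_s, y_t_pseudo, w_t, m):
    """Blend source labels with target pseudo labels using the same
    zero-centre mask as the images, and blend the per-pixel confidence
    weights.  Pixels taken from the source image are fully trusted
    (weight 1); pixels from the target keep their dynamic weight."""
    y_mix = np.where(m == 1, y_s, y_t_pseudo)
    w_mix = np.where(m == 1, 1.0, w_t)
    return y_mix, w_mix
```

The mirrored direction swaps the roles of the two domains, exactly as for the images.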
According to the invention, the self-training is carried out by constructing a bidirectional cross-domain mixed sample, and the domain invariant feature is better learned by utilizing the intermediate domain. Specifically, the patch of the source image is pasted onto the target image, while the patch of the target image is also pasted onto the source image.
S5, taking the first image segmentation model with the weight capable of being learned as a student model, inputting the source sample into the student model for supervision training, and inputting the training sample into the student model for self training.
And S6, during training, acquiring a query sample based on the features extracted by the student model, acquiring a prototype positive sample based on the features extracted by the teacher model from the source-domain image, and acquiring a boundary negative sample based on the boundary features extracted around the boundary of each object according to the source label; performing iterative training by combining the supervision loss and the contrast loss, taking pulling the query sample toward the prototype positive sample and pushing it away from the boundary negative sample as the target, optimizing the parameters of the student model, enhancing the discrimination capability of the model in the boundary region, and acquiring a second image segmentation model.
The invention constructs prototype positive samples and boundary negative samples for contrastive adaptation, so that the model actively focuses on ambiguous boundary regions. Compared with traditional domain-adaptive segmentation frameworks in the medical image field, this boundary-contrast domain adaptation framework for cross-modal medical image segmentation actively attends to fuzzy boundary regions to improve segmentation performance. When the method is used for cross-modal cardiac dataset segmentation, the entropy at the boundaries between different cardiac structure categories is comparatively lower, the confidence is higher, and the segmentation effect at category boundaries is better.
In an alternative embodiment of the present invention based on the above embodiment, the obtaining a prototype positive sample based on the features extracted from the source domain image by the teacher model specifically includes:
initializing the class-level prototype ρ_c using the class center of the original source-domain pixel features:

ρ_c = ( Σ_{i=1}^{|D_s|} Σ_{j=1}^{h·w} 1[ȳ_ij = c] · f_ij ) / ( Σ_{i=1}^{|D_s|} Σ_{j=1}^{h·w} 1[ȳ_ij = c] );

wherein |D_s| denotes the number of samples of the source-domain dataset, the subscript s denotes the source domain, i is the serial number of the image, h and w denote the height and width of the feature, j is the serial number of a pixel in the image, h_t is the projection head of the teacher model, f_ij ∈ R^d is the output of the teacher-model projection head corresponding to the j-th pixel, ȳ is the downsampling of the corresponding real label y to the output size, R denotes the set of real numbers, d is the channel dimension of f_ij, c is the serial number of the label category, and 1[·] is the indicator function, whose value is 1 when the condition ȳ_ij = c is met and 0 otherwise.
To improve the domain-invariant representation capability of the prototypes, we update the class-level prototypes in a progressive refinement fashion in each iteration, wherein the c-th class prototype ρ_c^(t) at the t-th iteration is defined by the class-level mean vector of the pixel features in a mini-batch:

ρ_c^(t) = μ·ρ_c^(t−1) + (1 − μ)·(1/|B_c|)·Σ_{f_ij ∈ B_c} f_ij;

wherein μ is the momentum coefficient, ρ_c^(t−1) denotes the c-th class prototype at the (t−1)-th iteration, B_c denotes the set of pixels belonging to the c-th class in the mini-batch, and |B_c| denotes the size of that mini-batch set.
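A minimal sketch of prototype initialization and momentum refinement, assuming flattened per-pixel features of shape (N, d) and integer class labels (function and variable names are illustrative, and the momentum value is an assumption):

```python
import numpy as np

def init_prototypes(features, labels, num_classes):
    """Class-level prototypes: per-class mean of source pixel features."""
    protos = np.zeros((num_classes, features.shape[1]))
    for c in range(num_classes):
        sel = labels == c
        if sel.any():
            protos[c] = features[sel].mean(axis=0)
    return protos

def ema_update(protos, batch_features, batch_labels, momentum=0.99):
    """Progressive refinement: blend each prototype with the
    mini-batch class mean using a momentum coefficient."""
    new = protos.copy()
    for c in range(protos.shape[0]):
        sel = batch_labels == c
        if sel.any():
            new[c] = momentum * protos[c] + (1 - momentum) * batch_features[sel].mean(axis=0)
    return new

feats = np.array([[1.0, 0.0], [3.0, 0.0], [0.0, 2.0]])
labs = np.array([0, 0, 1])
protos = init_prototypes(feats, labs, num_classes=2)
updated = ema_update(protos, np.array([[4.0, 0.0]]), np.array([0]), momentum=0.5)
```

Classes absent from a mini-batch keep their previous prototype, which keeps the update stable.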
Based on the above embodiments, in an optional embodiment of the present invention, obtaining a boundary negative sample based on boundary features extracted from a boundary of each object by a source tag specifically includes:
In order to enhance the discriminability of the boundary region, pixels not belonging to the current object class are sampled from the periphery of each object region, and their corresponding feature vectors are taken as boundary negative samples.
For the c-th class object in the output f of the projection head of the teacher model, a morphological operation is performed with the downsampled label ȳ to obtain a binary boundary mask B_c; then, B_c and the downsampled label ȳ are used to extract feature vectors, obtaining the boundary negative samples N_c.

The acquisition model of the boundary negative samples is:

N_c = { f_ij | B_c,ij = 1 and ȳ_ij ≠ c }.
To keep the number of negative samples stable, a class-level memory bank is used to store the boundary negative samples.
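One possible way to realize this sampling — dilate the class mask a few steps and keep the features of surrounding pixels belonging to other classes — is sketched below with a hand-rolled 4-neighbour dilation (in practice a library routine such as `scipy.ndimage.binary_dilation` would serve; all names here are illustrative):

```python
import numpy as np

def dilate(mask):
    """One step of 4-neighbour binary dilation."""
    m = mask.astype(bool)
    out = m.copy()
    out[1:, :] |= m[:-1, :]
    out[:-1, :] |= m[1:, :]
    out[:, 1:] |= m[:, :-1]
    out[:, :-1] |= m[:, 1:]
    return out

def boundary_negatives(features, label, cls, iters=1):
    """Features of pixels in a thin band just outside the class-`cls`
    region, restricted to pixels whose label differs from `cls`."""
    region = label == cls
    band = region
    for _ in range(iters):
        band = dilate(band)
    ring = band & ~region            # periphery only, object interior excluded
    return features[ring & (label != cls)]

label = np.zeros((3, 3), dtype=int); label[1, 1] = 1
feats = np.arange(9, dtype=float).reshape(3, 3, 1)
negs = boundary_negatives(feats, label, cls=1, iters=1)  # 4 neighbours of the center
```

The `iters` parameter controls how wide the peripheral band is; the sampled features would then be pushed into the class-level memory bank.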
Based on the foregoing embodiments, in an optional embodiment of the present invention, obtaining a query sample based on features extracted by a student model specifically includes:
for each category other than the background category, reliable pixels with low entropy in the current mini-batch are taken as query candidates, i.e., pixels whose prediction entropy is smaller than a preset entropy threshold γ_q.
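A sketch of this entropy-based query selection, under assumed shapes (per-pixel softmax probability maps) and an arbitrary threshold value:

```python
import numpy as np

def pixel_entropy(probs):
    """probs: (H, W, C) softmax probabilities -> (H, W) entropy map."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=-1)

def select_queries(features, probs, pred, cls, gamma):
    """Reliable (low-entropy) pixels predicted as `cls` become queries."""
    sel = (pred == cls) & (pixel_entropy(probs) < gamma)
    return features[sel]

probs = np.array([[[0.99, 0.01], [0.5, 0.5]]])  # one confident, one uncertain pixel
feats = np.array([[[1.0], [2.0]]])
pred = probs.argmax(-1)
queries = select_queries(feats, probs, pred, cls=0, gamma=0.2)
```

Only the confident pixel survives the threshold; the near-uniform pixel (entropy ≈ ln 2) is discarded.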
Based on the above embodiments, in an alternative embodiment of the present invention, the loss function L during training is:

L = L_sup + L_self + λ1·L_cl^s + λ2·L_cl^m;

wherein L_sup is the supervision loss for the labeled source-domain images, L_self represents the minimization of the weighted cross-entropy loss between the predictions and the mixed pseudo labels, λ1 and λ2 are balance coefficients, L_cl^s is the pixel-level contrast learning loss of the source-domain images, and L_cl^m is the pixel-level contrast learning loss of the mixed images.
L_sup = (1/|D_s|) Σ_{i=1}^{|D_s|} [ L_ce(p_s^i, y_s^i) + L_dice(p_s^i, y_s^i) ];

wherein |D_s| denotes the number of samples of the source-domain dataset, i is the serial number of the image, L_ce is the cross-entropy loss, L_dice is the Dice loss, y_s^i is the real label, and p_s^i is the label predicted by the student model.
L_self = − (1/N_m) Σ_{i=1}^{N_m} (1/(H·W)) Σ_{j=1}^{H·W} w_m^{ij} Σ_{c=1}^{C} 1[y_m^{ij} = c] · log p_m^{ij,c};

wherein N_m is the number of mixed images, the subscript or superscript m denotes a mixed image, H and W respectively denote the height and width of the image, w_m^{ij} is the weight of the j-th pixel in the i-th mixed image, j is the serial number of a pixel in the image, 1[·] is the indicator function, y_m^{ij} is the mixed label of the j-th pixel of the i-th mixed image, and p_m^{ij,c} is the prediction probability of the student model for the mixed image.
Each query sample is paired with a positive prototype ρ_c and N_n negative samples n_k. On the source-domain dataset D_s and the training sample set D_m, the pixel-level contrast loss is calculated for the query samples by using the prototype positive samples and the boundary negative samples, obtaining the pixel-level contrast learning loss L_cl^s of the source-domain images and the pixel-level contrast learning loss L_cl^m of the mixed images.
The pixel-level contrast loss model L_cl is:

L_cl = − (1/C) Σ_{c=1}^{C} (1/N_q) Σ_{q=1}^{N_q} log [ exp(z_c^q · ρ_c / τ) / ( exp(z_c^q · ρ_c / τ) + Σ_{k=1}^{N_n} exp(z_c^q · n_k / τ) ) ];

wherein C is the number of label categories, c is the serial number of the label category, N_q is the number of query samples, exp is the natural exponential function, z_c^q is the q-th query sample of category c, ρ_c denotes the positive prototype, τ is the temperature, N_n is the number of negative samples, and n_k denotes a negative sample.
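The class-wise term of such a loss can be sketched as an InfoNCE-style computation; the sketch below assumes dot-product similarity and toy dimensions (the temperature value and all names are assumptions):

```python
import numpy as np

def boundary_contrastive_loss(queries, prototype, negatives, tau=0.1):
    """Pull queries toward the class prototype, push them away from
    boundary negatives. queries: (Nq, d), prototype: (d,), negatives: (Nn, d)."""
    pos = np.exp(queries @ prototype / tau)                # similarity to the positive
    neg = np.exp(queries @ negatives.T / tau).sum(axis=1)  # summed negative similarities
    return float(-np.log(pos / (pos + neg)).mean())

proto = np.array([1.0, 0.0])
negs = np.array([[-1.0, 0.0]])
loss_aligned = boundary_contrastive_loss(np.array([[1.0, 0.0]]), proto, negs, tau=1.0)
loss_opposed = boundary_contrastive_loss(np.array([[-1.0, 0.0]]), proto, negs, tau=1.0)
# A query matching its prototype should incur a much smaller loss.
```

Averaging this term over all classes (and over both the source and mixed sample sets) gives the overall pixel-level contrast learning loss.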
Current state-of-the-art Unsupervised Domain Adaptation (UDA) methods typically focus on the overall segmentation performance over the whole object while ignoring object boundaries. The segmentation results given by previous methods generally have higher entropy and lower confidence in boundary regions, resulting in poor segmentation performance there. This problem is more severe in cross-modal medical image segmentation, where the large distribution gap between modalities lowers the intensity contrast between different classes of organ structures.
The embodiment of the invention provides a new unsupervised domain adaptation framework based on boundary comparison. First, the centroid of the feature on the source domain is calculated to obtain a class prototype feature. Meanwhile, boundary features are extracted from the boundary of each object according to the real tags and stored in a class-level repository. Contrast learning is then introduced into the domain adaptation process. A set of pixel-level representations (queries) are pulled closer to their respective prototypes (positive samples) and farther from their respective boundary features (negative samples). In this way we explicitly enhance the similarity between pixel features and corresponding prototypes to reduce class-level distribution differences between domains while increasing discrimination capability at boundaries.
S7, dividing the medical image of the second mode according to the second image dividing model, and acquiring the divided medical image of the second mode.
It should be noted that in the medical imaging field, different imaging modalities (such as MRI, CT, etc.) generally provide different image information, but when a model trained on one modality is applied to images of a different modality, its performance tends to degrade significantly. Retraining is then typically required to effectively predict the new-modality images. This not only consumes a large amount of computational and memory resources, but also requires pixel-level annotation of the new-modality data by experienced radiologists, which is costly, time-consuming, and impractical. Currently, Unsupervised Domain Adaptation (UDA) for medical image segmentation mostly uses adversarial training to align the distributions between the two domains. Although effective, these methods suffer from problems such as difficulty in network convergence and easy collapse during training.
As shown in FIG. 2, the domain-adaptive cross-modal medical image segmentation method based on boundary contrast works as follows. First, the source data are directly input into the student model for supervised training. Second, hybrid training samples are constructed for self-training through a bidirectional cross-domain cutmix, and weights are dynamically assigned. Third, the query samples are pulled toward their respective prototypes and pushed away from the respective boundary features, where the query samples all come from the student model, the positive and negative samples come from the teacher model, and the negative samples are stored in a class-level memory bank. According to the method, the blurred boundary regions are actively attended to so as to improve segmentation performance; meanwhile, training samples are built through the bidirectional cross-domain cutmix to further reduce the domain gap, and a dynamic weight allocation strategy is introduced to prevent early performance degradation of the model.
Compared with traditional frameworks, the domain-adaptive cross-modal medical image segmentation method based on boundary contrast actively focuses on fuzzy boundary regions and achieves an excellent segmentation effect at class boundaries. When the cross-modal cardiac dataset is segmented, the entropy at the boundaries between different cardiac structure categories is low and the confidence is high, so that segmentation performance is effectively improved. Second, unlike previous methods that mix samples in a single direction, the method creates training samples from two directions for self-training; by pasting patches of the source domain onto the target domain and patches of the target domain onto the source domain, it greatly helps the intermediate domain learn domain-invariant features. Moreover, the proposed strategy can finely adjust the confidence weights of the pseudo labels, effectively preventing unstable training and early performance degradation.
The second embodiment of the invention provides a domain adaptation cross-modal medical image segmentation device based on boundary contrast, which comprises a source sample acquisition module, a target domain image acquisition module, a pseudo tag acquisition module, a mixing module, a training module and a segmentation module.
And the source sample acquisition module is used for acquiring a source sample from the source domain data set of the first modality. The source sample contains a source domain image and a source tag.
And the target domain image acquisition module is used for acquiring a target domain image corresponding to the source domain image from the target domain data set of the second modality.
The pseudo tag acquisition module is used for taking a first image segmentation model trained based on first modal data as a teacher model, inputting a target domain image into the teacher model to acquire a pseudo tag, and distributing different weights for the pseudo tag according to the prediction entropy of the pixel based on a dynamic weight distribution strategy.
And the mixing module is used for mixing the source domain image and the target domain image into a mixed image through a bidirectional cross-domain cutmix, mixing the source label and the pseudo label into a mixed label, and obtaining a training sample.
And the student model module is used for taking the first image segmentation model with the weight capable of being learned as a student model, inputting the source sample into the student model for supervision training, and inputting the training sample into the student model for self training.
And the training module is used for acquiring a query sample based on the features extracted by the student model, acquiring a prototype positive sample based on the features extracted by the teacher model from the source-domain image, and acquiring a boundary negative sample based on the boundary features extracted around the boundary of each object according to the source label; performing iterative training by combining the supervision loss and the contrast loss, taking pulling the query sample toward the prototype positive sample and pushing it away from the boundary negative sample as the target, optimizing the parameters of the student model, enhancing the discrimination capability of the model in the boundary region, and acquiring a second image segmentation model.
And the segmentation module is used for segmenting the medical image of the second modality according to the second image segmentation model and acquiring the segmented medical image of the second modality.
The third embodiment provides a domain adaptive cross-modal medical image segmentation device based on boundary contrast, which comprises a processor, a memory and a computer program stored in the memory. The computer program is executable by the processor to implement a boundary contrast based domain-adaptive cross-modality medical image segmentation method as set forth in any one of the embodiments.
The fourth embodiment of the present invention provides a computer readable storage medium, where the computer readable storage medium includes a stored computer program, and when the computer program runs, controls a device where the computer readable storage medium is located to execute a domain adaptation cross-modal medical image segmentation method based on boundary contrast as described in any one of the first embodiments.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus and method embodiments described above are merely illustrative, for example, flow diagrams and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present invention may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied, in essence or in the part contributing to the prior art or in part, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, an electronic device, or a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The storage medium includes a U disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing program codes. It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" as used herein merely describes an association relationship between associated objects and means that there may be three relationships; e.g., A and/or B may indicate three cases: A exists alone, A and B exist together, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
The term "if" as used herein may be interpreted as "when" or "upon" or "in response to a determination" or "in response to detection", depending on the context. Similarly, the phrase "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined" or "in response to determining" or "when (the stated condition or event) is detected" or "in response to detecting (the stated condition or event)", depending on the context.
References to "first\second" in the embodiments merely distinguish similar objects and do not represent a particular ordering of the objects; it should be understood that "first\second" may be interchanged in a particular order or sequence where permitted, so that the embodiments described herein can be implemented in an order other than that illustrated or described herein.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. The domain-adaptive cross-mode medical image segmentation method based on boundary contrast is characterized by comprising the following steps of:
acquiring a source sample from a source domain data set of a first modality, wherein the source sample comprises a source domain image and a source tag;
Acquiring a target domain image corresponding to the source domain image from a target domain data set of a second modality;
taking a first image segmentation model trained based on first modal data as a teacher model, inputting a target domain image into the teacher model, obtaining a pseudo tag, and distributing different weights for the pseudo tag according to the prediction entropy of pixels based on a dynamic weight distribution strategy;
Mixing the source domain image and the target domain image into a mixed image through a bidirectional cross-domain cutmix, mixing the source tag and the pseudo tag into a mixed tag, and obtaining a training sample;
Taking a first image segmentation model with a learnable weight as a student model, inputting the source sample into the student model for supervision training, and inputting the training sample into the student model for self training;
During training, acquiring a query sample based on the features extracted by the student model, acquiring a prototype positive sample based on the features extracted by the teacher model from the source-domain image, and acquiring a boundary negative sample based on the boundary features extracted around the boundary of each object according to the source label; performing iterative training by combining the supervision loss and the contrast loss, taking pulling the query sample toward the prototype positive sample and pushing it away from the boundary negative sample as the target, optimizing the parameters of the student model, enhancing the discrimination capability of the model in the boundary region, and obtaining a second image segmentation model;
Dividing the medical image of the second mode according to the second image dividing model, and acquiring the divided medical image of the second mode;
mixing the source domain image and the target domain image into a mixed image and mixing the source tag and the pseudo tag into a mixed tag through a bidirectional cross-domain cutmix to obtain a training sample, wherein the training sample comprises the following specific steps:
generating a zero-center mask M ∈ {0,1}^{H×W}, which is used for bidirectionally mixing the source image and the target image to obtain a mixed image:

x_m^i = x_s^i ⊙ M + x_t^i ⊙ (1 − M);

wherein x_m^i is the i-th mixed image, the subscript m denotes mixing, x_s^i is the i-th source-domain image, the subscript s denotes the source domain, i denotes the serial number of the image, x_t^i is the i-th target-domain image, the subscript t denotes the target domain, M is the zero-center mask, ⊙ denotes element-wise multiplication, and H and W respectively denote the height and width of the image;
mixing the source tag and the pseudo tag in the same mixing mode as the mixed image to obtain the mixed label y_m^i;

acquiring training samples D_m = {(x_m^i, y_m^i)}_{i=1}^{N_m} according to the mixed image and the mixed label, wherein i denotes the serial number of an image and N_m denotes the number of mixed images.
2. The boundary contrast-based domain-adaptive cross-modality medical image segmentation method of claim 1, wherein the dynamic weight distribution strategy is:
w_ij^(t) = 1, if E_ij^(t) ≤ γ1^(t); w_ij^(t) = w_u, if γ1^(t) < E_ij^(t) ≤ γ2^(t); w_ij^(t) = 0, otherwise;

E_ij^(t) = − Σ_{c=1}^{C} p_ij^{(t),c} · log p_ij^{(t),c};

wherein w_ij^(t) is the confidence weight, t denotes the number of the current training iteration, i is the serial number of the image, j is the serial number of a pixel in the image, E_ij^(t) is the prediction entropy, γ1^(t) and γ2^(t) are respectively the first entropy threshold and the second entropy threshold of the t-th iteration, w_u is the predefined confidence weight for the target unsupervised loss, C denotes the number of label categories, c is the serial number of the label category, p is the prediction probability, p_ij^{(t),c} is the prediction probability of the c-th label category for the j-th pixel of the i-th image at the t-th iteration, and H and W respectively denote the height and width of the image;
γ1^(t) and γ2^(t) are respectively obtained by calculating quantiles from α_t and β_t:

γ1^(t) = percentile(E^(t).flatten(), 100·(1 − α_t));

γ2^(t) = percentile(E^(t).flatten(), 100·(1 − β_t));

α_t = α_0 · (1 − t / t_max);

β_t = β_0 · (1 − t / t_max);

wherein percentile(·) is a function in the numpy library for calculating quantiles, E^(t) is the pixel-level entropy map, flatten(·) represents converting a two-dimensional array into a one-dimensional array for quantile calculation, α_t and β_t are respectively the first parameter and the second parameter related to dynamic weight allocation, α_0 and β_0 are respectively the initial ratios of α_t and β_t, and t_max represents the maximum number of iterations.
3. The domain-adaptive cross-modal medical image segmentation method based on boundary contrast according to claim 1, wherein obtaining a prototype positive sample based on features extracted from a source domain image by a teacher model specifically comprises:
initializing the class-level prototype ρ_c using the class center of the original source-domain pixel features:

ρ_c = ( Σ_{i=1}^{|D_s|} Σ_{j=1}^{h·w} 1[ȳ_ij = c] · f_ij ) / ( Σ_{i=1}^{|D_s|} Σ_{j=1}^{h·w} 1[ȳ_ij = c] );

wherein |D_s| denotes the number of samples of the source-domain dataset, the subscript s denotes the source domain, i is the serial number of the image, h and w denote the height and width of the feature, j is the serial number of a pixel in the image, h_t is the projection head of the teacher model, f_ij ∈ R^d is the output of the teacher-model projection head corresponding to the j-th pixel, ȳ is the downsampling of the corresponding real label y to the output size, R denotes the set of real numbers, d is the channel dimension of f_ij, c is the serial number of the label category, and 1[·] is the indicator function, whose value is 1 when the condition ȳ_ij = c is met and 0 otherwise;
updating the class-level prototype in each iteration by adopting a progressive refinement mode, wherein the c-th class prototype ρ_c^(t) at the t-th iteration is defined by the class-level mean vector of the pixel features in a mini-batch:

ρ_c^(t) = μ·ρ_c^(t−1) + (1 − μ)·(1/|B_c|)·Σ_{f_ij ∈ B_c} f_ij;

wherein μ is the momentum coefficient, ρ_c^(t−1) denotes the c-th class prototype at the (t−1)-th iteration, B_c denotes the set of pixels belonging to the c-th class in the mini-batch, and |B_c| denotes the size of that mini-batch set.
4. A domain-adaptive cross-modality medical image segmentation method based on boundary contrast as claimed in claim 3, wherein the obtaining of the boundary negative sample from the boundary features extracted from the boundary of each object based on the source tag comprises:
Sampling pixels not belonging to the current object class from the periphery of each object region, taking the corresponding feature vectors as boundary negative samples, and storing the boundary negative samples by using a class-level memory bank; wherein, for the c-th class object in the output f of the projection head of the teacher model, a morphological operation is performed with the downsampled label ȳ to obtain a binary boundary mask B_c; then, B_c and the downsampled label ȳ are used to extract feature vectors, obtaining the boundary negative samples N_c, wherein N_c = { f_ij | B_c,ij = 1 and ȳ_ij ≠ c }.
5. The boundary contrast-based domain-adaptive cross-modality medical image segmentation method of any one of claims 1 to 4, wherein the source domain image of the first modality is one of an MRI image and a CT image;
obtaining a query sample based on the extracted features of the student model specifically comprises:
For each category other than the background category, reliable pixels in the current mini-batch whose prediction entropy is below a preset entropy threshold are taken as query candidates.
6. A domain-adaptive cross-modality medical image segmentation method based on boundary contrast as claimed in any one of claims 1 to 4, wherein the loss function L during training is:

L = L_sup + L_self + λ1·L_cl^s + λ2·L_cl^m;

wherein L_sup is the supervision loss for the labeled source-domain images, L_self represents the minimization of the weighted cross-entropy loss between the predictions and the mixed pseudo labels, λ1 and λ2 are balance coefficients, L_cl^s is the pixel-level contrast learning loss of the source-domain images, and L_cl^m is the pixel-level contrast learning loss of the mixed images;
L_sup = (1/|D_s|) Σ_{i=1}^{|D_s|} [ L_ce(p_s^i, y_s^i) + L_dice(p_s^i, y_s^i) ];

wherein |D_s| denotes the number of samples of the source-domain dataset, i is the serial number of the image, L_ce is the cross-entropy loss, L_dice is the Dice loss, y_s^i is the real label, and p_s^i is the label predicted by the student model;
L_self = − (1/N_m) Σ_{i=1}^{N_m} (1/(H·W)) Σ_{j=1}^{H·W} w_m^{ij} Σ_{c=1}^{C} 1[y_m^{ij} = c] · log p_m^{ij,c};

wherein N_m is the number of mixed images, the subscript or superscript m denotes a mixed image, H and W respectively denote the height and width of the image, w_m^{ij} is the weight of the j-th pixel in the i-th mixed image, j is the serial number of a pixel in the image, 1[·] is the indicator function, y_m^{ij} is the mixed label of the j-th pixel of the i-th mixed image, and p_m^{ij,c} is the prediction probability of the student model for the mixed image;
On the source-domain dataset D_s and the training sample set D_m, the pixel-level contrast loss is calculated for the query samples by using the prototype positive samples and the boundary negative samples, obtaining the pixel-level contrast learning loss L_cl^s of the source-domain images and the pixel-level contrast learning loss L_cl^m of the mixed images;
the pixel-level contrast loss model L_cl is:

L_cl = − (1/C) Σ_{c=1}^{C} (1/N_q) Σ_{q=1}^{N_q} log [ exp(z_c^q · ρ_c / τ) / ( exp(z_c^q · ρ_c / τ) + Σ_{k=1}^{N_n} exp(z_c^q · n_k / τ) ) ];

wherein C is the number of label categories, c is the serial number of the label category, N_q is the number of query samples, exp is the natural exponential function, z_c^q is the q-th query sample of category c, ρ_c denotes the positive prototype, τ is the temperature, N_n is the number of negative samples, and n_k denotes a negative sample.
7. A domain-adaptive cross-modality medical image segmentation apparatus based on boundary contrast, comprising:
the system comprises a source sample acquisition module, a source sample acquisition module and a source analysis module, wherein the source sample acquisition module is used for acquiring a source sample from a source domain data set of a first modality;
The target domain image acquisition module is used for acquiring a target domain image corresponding to the source domain image from a target domain data set of a second modality;
The pseudo tag acquisition module is used for taking a first image segmentation model trained based on first modal data as a teacher model, inputting a target domain image into the teacher model to acquire a pseudo tag, and distributing different weights for the pseudo tag according to the prediction entropy of pixels based on a dynamic weight distribution strategy;
the mixing module is configured to mix the source domain image and the target domain image into a mixed image through a bidirectional cross-domain cutmix, mix the source tag and the pseudo tag into a mixed tag, and obtain a training sample;
The student model module is used for taking a first image segmentation model with a learnable weight as a student model, inputting the source sample into the student model for supervision training, and inputting the training sample into the student model for self training;
the training module is used for acquiring a query sample based on the features extracted by the student model, acquiring a prototype positive sample based on the features extracted by the teacher model from the source-domain image, and acquiring a boundary negative sample based on the boundary features extracted around the boundary of each object according to the source label; performing iterative training by combining the supervision loss and the contrast loss, taking pulling the query sample toward the prototype positive sample and pushing it away from the boundary negative sample as the target, optimizing the parameters of the student model, enhancing the discrimination capability of the model in the boundary area, and acquiring a second image segmentation model;
the segmentation module is used for segmenting the medical image of the second modality according to the second image segmentation model to obtain a segmented medical image of the second modality;
The mixing module is specifically used for:
generating a zero-center mask M ∈ {0,1}^{H×W}, which is used for bidirectionally mixing the source image and the target image to obtain a mixed image:

x_m^i = x_s^i ⊙ M + x_t^i ⊙ (1 − M);

wherein x_m^i is the i-th mixed image, the subscript m denotes mixing, x_s^i is the i-th source-domain image, the subscript s denotes the source domain, i denotes the serial number of the image, x_t^i is the i-th target-domain image, the subscript t denotes the target domain, M is the zero-center mask, ⊙ denotes element-wise multiplication, and H and W respectively denote the height and width of the image;
mixing the source tag and the pseudo tag in the same mixing mode as the mixed image to obtain the mixed label y_m^i;

acquiring training samples D_m = {(x_m^i, y_m^i)}_{i=1}^{N_m} according to the mixed image and the mixed label, wherein i denotes the serial number of an image and N_m denotes the number of mixed images.
8. A boundary contrast based domain adapted cross-modality medical image segmentation apparatus comprising a processor, a memory, and a computer program stored in the memory, the computer program being executable by the processor to implement a boundary contrast based domain adapted cross-modality medical image segmentation method as claimed in any one of claims 1 to 6.
9. A computer readable storage medium, characterized in that the computer readable storage medium comprises a stored computer program, wherein the computer program, when run, controls a device in which the computer readable storage medium is located to perform a boundary contrast based domain-adaptive cross-modality medical image segmentation method as claimed in any one of claims 1 to 6.
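The boundary-contrast training objective described in the claims (pull the query toward the prototype positive, push it away from boundary negatives) can be sketched as an InfoNCE-style loss. This is a hedged NumPy sketch under assumed conventions, not the patented loss: the temperature `tau`, cosine similarity, and the (N, D)/(K, D) feature shapes are illustrative assumptions.

```python
import numpy as np

def boundary_contrast_loss(query, proto_pos, boundary_negs, tau=0.1):
    """InfoNCE-style contrast loss.
    query:         (N, D) per-pixel query features from the student model
    proto_pos:     (N, D) class-prototype positives from the teacher model
    boundary_negs: (K, D) boundary features extracted via the source label
    Minimizing it pulls each query toward its prototype and away from
    the boundary negatives."""
    def norm(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)
    q, p, n = norm(query), norm(proto_pos), norm(boundary_negs)
    pos = np.sum(q * p, axis=1, keepdims=True) / tau   # (N, 1) positive logit
    neg = q @ n.T / tau                                # (N, K) negative logits
    logits = np.concatenate([pos, neg], axis=1)        # positive at index 0
    logits -= logits.max(axis=1, keepdims=True)        # stable softmax
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[:, 0].mean()                      # cross-entropy vs. index 0
```

In training this term would be added to the supervision loss; a query aligned with its prototype and orthogonal to the boundary negatives yields a near-zero loss, while a query sitting on a boundary feature is penalized heavily.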
CN202411804970.7A 2024-12-10 2024-12-10 Domain-adaptive cross-modal medical image segmentation method, device, equipment and medium based on boundary comparison Active CN119295491B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411804970.7A CN119295491B (en) 2024-12-10 2024-12-10 Domain-adaptive cross-modal medical image segmentation method, device, equipment and medium based on boundary comparison

Publications (2)

Publication Number Publication Date
CN119295491A CN119295491A (en) 2025-01-10
CN119295491B true CN119295491B (en) 2025-03-25

Family

ID=94165533

Country Status (1)

Country Link
CN (1) CN119295491B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119943297B (en) * 2025-04-03 2025-07-22 磐技(上海)信息科技有限公司 Medical image data processing method and system based on deep learning

Citations (2)

Publication number Priority date Publication date Assignee Title
EP3163509A1 (en) * 2015-10-30 2017-05-03 Xiaomi Inc. Method for region extraction, method for model training, and devices thereof
CN117975017A (en) * 2024-02-29 2024-05-03 陕西科技大学 Semi-supervised medical image segmentation method based on collaborative contrastive learning and mixed perturbation

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
US9730643B2 (en) * 2013-10-17 2017-08-15 Siemens Healthcare Gmbh Method and system for anatomical object detection using marginal space deep neural networks
CN110443818B (en) * 2019-07-02 2021-09-07 中国科学院计算技术研究所 A Graffiti-based Weakly Supervised Semantic Segmentation Method and System
CN117058159A (en) * 2023-08-08 2023-11-14 深圳大学 Medical image segmentation method, device, equipment and medium based on boundary point labeling
CN118570229A (en) * 2024-05-24 2024-08-30 西北大学 Remote sensing glacier segmentation method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant