US20070081706A1 - Systems and methods for computer aided diagnosis and decision support in whole-body imaging
- Publication number
- US20070081706A1 (Application US11/505,081)
- Authority
- US
- United States
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/40—Analysis of texture
- G06T7/41—Analysis of texture based on statistical description of texture
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H70/00—ICT specially adapted for the handling or processing of medical references
- G16H70/60—ICT specially adapted for the handling or processing of medical references relating to pathologies
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10104—Positron emission tomography [PET]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20092—Interactive image processing based on input by user
- G06T2207/20101—Interactive definition of point of interest, landmark or seed
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
Abstract
A system for providing automatic diagnosis and decision support includes: a medical image database; generative learning and modeling modules that build distributional appearance models and spatial relational models of organs or structures using images from the medical image database; a statistical whole-body atlas that includes one or more distributional appearance models and spatial relational models of organs or structures, in one or more whole-body imaging modalities, built by the generative learning and modeling modules; and discriminative learning and modeling modules that build two-class or multi-class classifiers for performing at least one of organ, structure or disease detection or segmentation.
Description
- This application claims the benefit of U.S. Provisional Application Ser. No. 60/721,457 (Attorney Docket No. 2005P17641US), filed Sep. 28, 2005 and entitled “Method and Apparatus for Computer Aided Diagnosis and Therapy Decision Support in Whole Body Imaging”, the content of which is herein incorporated by reference in its entirety.
- 1. Technical Field
- The present disclosure relates to systems and methods for providing automated diagnosis and decision support in medical imaging and, more particularly, to systems and methods for providing automated diagnosis and decision support in whole-body imaging.
- 2. Discussion of Related Art
- Medical imaging is generally recognized as important for diagnosis and patient care. In recent years, medical imaging has experienced an explosive growth due to advances in imaging modalities such as x-rays, computed tomography (CT), magnetic resonance imaging (MRI) and ultrasound. Many existing and emerging whole-body imaging modalities are showing great potential for supporting new or improved pre-clinical and clinical applications and workflows. These modalities include whole-body MRI, positron emission tomography (PET), single-photon emission computed tomography (SPECT), PET/CT, SPECT/CT, whole-body CT (MDCT), and PET/MR.
- Whole-body imaging modalities can be useful for non-organ-specific oncologic staging. For example, studies have shown that whole-body PET and whole-body MRI are more sensitive and specific than traditional skeletal scintigraphy in the assessment of metastatic bone disease. In general, whole-body PET can identify the extent and severity of disease more accurately than CT, MR or other imaging modalities. PET can be useful for detecting disease before it has grown to a detectable size for CT or MR.
- Challenges in whole-body imaging include the increased data volume, anatomical or functional variability throughout the body, breathing/motion artifacts and joint articulations, and the possibly lower spatial resolution as compared to focused organ/sectional scans. For example, a whole-body scan yields a large data volume, which requires more reading time. In a non-organ-specific whole-body scan, the reader, such as a physician, a radiologist, or a technologist, may not know precisely what to look for or at what location, or how to differentiate normal versus abnormal, that is, physiological versus pathological uptakes in PET. In many cases, whole-body screening has lower spatial resolution than a focused image study of a particular organ. Non-rigid image matching or registration is difficult for sectional/organ images such as images of the brain or the lungs, and it becomes even more difficult for whole-body scans because of the additional articulated motion of body parts. Qualitative changes may be easier to see, but quantitative changes over time are difficult to report, particularly when accurate deformable whole-body matching is not readily achievable.
- There is a vast amount of literature relating to the topics of computer aided detection, diagnosis, and decision support for medical imaging applications. Most of this work is organ-specific, focusing on a single organ such as the breast, brain, heart, lung, or colon. Image analysis applications that employ active shape models, active motion models and active appearance models generally use over-simplified statistical assumptions, such as the Gaussian assumption. Existing solutions for image segmentation and registration rely on predefined procedures, such as edge/corner detection followed by a Hough transform, or on other hand-crafted features that are not readily scalable or adaptable to changing patient sample bases, evolving disease statistics, or ever-advancing hardware technologies.
- According to an exemplary embodiment of the present invention, a system for providing automatic diagnosis and decision support includes: a medical image database; generative learning and modeling modules that build distributional appearance models and spatial relational models of organs or structures using images from the medical image database; a statistical whole-body atlas that includes one or more distributional appearance models and spatial relational models of organs or structures, in one or more whole-body imaging modalities, built by the generative learning and modeling modules; and discriminative learning and modeling modules that build two-class or multi-class classifiers for performing at least one of organ, structure, or disease detection or segmentation.
- According to an exemplary embodiment of the present invention, a method is provided for providing automatic diagnosis and decision support in whole-body imaging. The method includes: using whole-body imaging to obtain a first set of image data of a patient; fitting a statistical whole-body atlas using the first set of image data, wherein the statistical whole-body atlas includes at least one of statistics on voxel intensities, statistics on global and local shape deformations, or statistics on joint articulations; and using the statistical whole-body atlas to characterize pathological findings in terms of a diagnosis.
- According to an exemplary embodiment of the present invention, a method is provided for providing automatic diagnosis and decision support in whole-body imaging. The method includes: detecting and segmenting one or more selected regions of interest in whole-body images; detecting one or more abnormalities by automatically interpreting whole-body images of the selected regions of interest for pathological findings; and characterizing the pathological findings in terms of a diagnosis.
- According to an exemplary embodiment of the present invention, there is provided a program storage device readable by machine, tangibly embodying a program of instructions executable by the machine to perform method steps for providing automatic diagnosis and decision support, the method steps comprising: using whole-body imaging to obtain a first set of image data of a patient; fitting a statistical whole-body atlas using the first set of image data, wherein the statistical whole-body atlas includes at least one of statistics on voxel intensities, statistics on global and local shape deformations, or statistics on joint articulations; and using the statistical whole-body atlas to characterize pathological findings in terms of a diagnosis.
- According to an exemplary embodiment of the present invention, there is provided a program storage device readable by machine, tangibly embodying a program of instructions executable by the machine to perform method steps for providing automatic diagnosis and decision support, the method steps comprising: detecting and segmenting one or more selected regions of interest in whole-body images; detecting one or more abnormalities by automatically interpreting whole-body images of the selected regions of interest for pathological findings; and characterizing the pathological findings in terms of a diagnosis.
- The present invention will become more apparent to those of ordinary skill in the art when descriptions of exemplary embodiments thereof are read with reference to the accompanying drawings.
- FIG. 1 illustrates a learning-based and database-guided framework to support automatic diagnosis and decision support in whole-body imaging, according to an exemplary embodiment of the present invention.
- FIG. 2 illustrates a whole-body computer aided detection and diagnosis decision support system for oncology using PET/CT images, according to an exemplary embodiment of the present invention.
- FIG. 3 is a flowchart showing a method of providing automatic diagnosis and decision support in whole-body imaging, according to an exemplary embodiment of the present invention.
- FIG. 4 is a flowchart showing a method of providing automatic diagnosis and decision support in whole-body imaging, according to an exemplary embodiment of the present invention.
- Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings.
- In exemplary embodiments of the present invention, systems and methods for providing automatic diagnosis and decision support in whole-body imaging use advanced image analysis tools and machine intelligence to aid the image reading process with automatic anatomical and functional interpretation, automatic pathology detection and disease diagnosis. In exemplary embodiments of the present invention, systems and methods for providing automatic diagnosis and decision support in whole-body imaging enhance the current workflow with accurate disease characterization, change quantification, progression monitoring and disease management, and speed up the clinical workflow.
- FIG. 1 illustrates a learning-based and database-guided framework to support automatic diagnosis and decision support in whole-body imaging, according to an exemplary embodiment of the present invention. Referring to FIG. 1, the learning-based and database-guided framework 100 includes a training phase and a testing phase, as indicated by the dashed boxes.
- The training phase shown in FIG. 1 includes a database 110 with associated meta data, generative learning and modeling 120, a statistical whole body atlas 130, discriminative learning and modeling 140, and landmark/object detectors 150.
- The database 110 with associated meta data may comprise a medical image database. Examples of images include positron emission tomography (PET) images, computed tomography (CT) images, magnetic resonance imaging (MRI) images, single-photon emission computed tomography (SPECT) images, etc. Examples of meta data include annotations of organs; annotations of pathological findings, for example, by a location vector or by lines, curves and/or surfaces; and additional information associated with the images, such as, for example, patient demographics, clinical history, etc.
- Generative learning and modeling 120 is based on annotated training data and can extract statistics, find clusters, and build distributional appearance models or spatial relational models of organs or structures, or spatial or conceptual dependency models for diseases.
- The statistical whole body atlas 130 may comprise mathematical appearance and/or spatial models of organs or structures in one or multiple whole body imaging modalities. Each model may comprise one or more organs, along with their appearance models and a distributional model that captures their relative locations.
- Discriminative learning and modeling 140 is based on annotated training data and can formulate the problem of organ detection/segmentation as a discriminative learning problem, design/select discriminative features, and build two-class or multi-class learning machines, such as classifiers.
- Landmark/object detectors 150 may comprise software modules, which can take whole body images as input and output the location of one or multiple landmark points, for example, an upper corner of the left or right lung or the center of the left or right kidney, as well as the location of one or more organs.
- The testing phase shown in FIG. 1 includes whole body scans/images 160 to be analyzed by the system, automatic landmarking 170, segmentation, fusion and tracking 180, and whole-body CAD 190.
- The automatic landmarking 170 uses both the landmark/object detectors 150 and the statistical whole body atlas 130. The output of this module is an annotated "map" of the whole-body images, with multiple organs localized jointly, based on initial detection by the detector module and verification by the statistical whole body atlas 130. The segmentation, fusion and tracking module 180 combines the results from the landmark/object detectors 150, the statistical whole body atlas 130 and the automatic landmarking 170, and outputs the segmentation, for example, in the form of a bounding box, a bounding surface, and/or a segmentation mask, of the detected/localized organs.
- In whole-body CAD 190, the results of the automatic landmarking 170 and segmentation, fusion and tracking 180 are used to support any of the following: disease detection in each organ of interest, for example, by comparison with the normal appearance, shape, and location of an organ encoded in the statistical whole body atlas 130; diagnosis, for example, by comparison with different models of the detected disease; characterization, such as, for example, size, type, stage, etc.; and change quantification across time points.
- In an exemplary embodiment of the present invention, a system for providing automatic diagnosis and decision support in whole-body imaging includes a medical image database, generative learning and modeling modules, a statistical whole-body atlas and discriminative learning and modeling modules. The medical image database comprises 2-D image data, 3-D image data and/or higher-dimensional image data.
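- As an illustration of the joint localization described above, where initial detector responses are verified against the spatial statistics encoded in the atlas, the following sketch treats each organ detector's output as a per-voxel likelihood map and the atlas entry for that organ as a Gaussian prior over its expected center, and picks the location that maximizes their product. This is a minimal assumed realization for illustration only, not the patent's implementation, and the function and parameter names are hypothetical.

```python
import numpy as np

def localize_organ(detector_scores, atlas_mean, atlas_cov):
    """Combine a detector likelihood volume with an atlas location prior.

    detector_scores : 3-D array of per-voxel detector responses in [0, 1].
    atlas_mean      : expected (z, y, x) organ center from the statistical atlas.
    atlas_cov       : 3x3 covariance of that center, learned from the database.
    Returns the voxel index maximizing detector likelihood times atlas prior.
    """
    coords = np.stack(np.meshgrid(*[np.arange(s) for s in detector_scores.shape],
                                  indexing="ij"), axis=-1).reshape(-1, 3)
    diff = coords - np.asarray(atlas_mean)
    inv_cov = np.linalg.inv(atlas_cov)
    # Gaussian spatial prior evaluated at every voxel (up to a constant factor).
    prior = np.exp(-0.5 * np.einsum("ij,jk,ik->i", diff, inv_cov, diff))
    posterior = detector_scores.reshape(-1) * prior
    return np.unravel_index(np.argmax(posterior), detector_scores.shape)
```

- In practice each organ or landmark would have its own detector and prior, and the maximization could be restricted to the atlas search range rather than the full volume.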
- The generative learning and modeling modules build distributional appearance models and spatial relational models of organs or structures using images from the medical image database.
- The statistical whole-body atlas includes one or more distributional appearance models and spatial relational models of organs or structures, in one or more whole-body imaging modalities, built by the generative learning and modeling modules. For each given modality, such as, for example, PET, CT, or MR, a statistical whole-body atlas contains one or more 3D canonical whole body scans, each with associated statistics on voxel intensities, statistics on global and local shape deformations, and statistics on joint articulations. Each canonical whole body scan can be the average of multiple scans from a database that are rigidly and/or non-rigidly aligned by organs and body structures.
- The associated statistics on voxel intensities can be a non-parametric or parametric representation of the underlying database. As a special case, if the distribution is represented as a Gaussian, the associated statistics on voxel intensities can be stored in the form of a whole body scan, with each voxel coding the variance of the distribution. The statistics on global and local shape deformations can be represented as deformation fields, or as search ranges (in the X, Y, and Z directions) coded at each voxel location. These statistics can be learned from ground truth annotations in the form of landmark points, lines, curves, or surfaces.
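- As a concrete illustration of the Gaussian special case just described, the sketch below stores the per-voxel mean of the aligned database scans as the canonical scan and the per-voxel standard deviation alongside it, and then scores a new, already-aligned scan voxel by voxel. It is an assumed minimal realization for illustration; the function names are hypothetical and not part of the patent.

```python
import numpy as np

def build_intensity_atlas(aligned_scans):
    """Per-voxel Gaussian statistics from co-registered whole-body scans.

    aligned_scans : array of shape (num_scans, Z, Y, X), already rigidly and/or
    non-rigidly aligned by organs and body structures.
    Returns the canonical (mean) scan and the per-voxel standard deviation.
    """
    mean_scan = aligned_scans.mean(axis=0)
    std_scan = aligned_scans.std(axis=0) + 1e-6  # avoid division by zero later
    return mean_scan, std_scan

def abnormality_map(scan, mean_scan, std_scan):
    """Voxel-wise z-score of a new, aligned scan against the atlas statistics.

    Large absolute values flag intensities that are unlikely under the
    per-voxel Gaussian model and are candidates for pathological findings.
    """
    return (scan - mean_scan) / std_scan
```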
- The statistical whole-body atlas is statistical in the sense that it encodes variations in the database in terms of appearance, deformations, and articulations. The statistics on joint articulations can be represented in a table, recording either the distributions of each joint articulation in the database, the means and variances of articulations, or simply the range of articulations.
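- One possible encoding of such an articulation table, sketched under the assumption that a simple per-joint range representation suffices, is a mapping from joint name to the range of articulation angles observed in the database; means and variances could be stored the same way. The joint names and numeric ranges below are hypothetical placeholders.

```python
# Hypothetical articulation table: per-joint angle ranges (in degrees) observed
# in the training database.
ARTICULATION_RANGES = {
    "left_shoulder_flexion": (-40.0, 170.0),
    "right_knee_flexion": (0.0, 140.0),
}

def articulation_is_plausible(joint, angle):
    """Check a measured articulation against the range stored in the atlas."""
    low, high = ARTICULATION_RANGES[joint]
    return low <= angle <= high
```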
- The statistical atlas is a form of descriptive or generative modeling of the database. It encodes anatomical or functional variations of the whole body or body parts (either physiological or pathological). The statistical whole-body atlas may include models and priors for both generative and discriminative learning in later stages. The statistical whole-body atlas comprises one or more three-dimensional canonical whole-body scans, each with associated statistics on voxel intensities, statistics on global and local shape deformations, and statistics on joint articulations.
- The discriminative learning and modeling modules according to an exemplary embodiment of the present invention build two-class or multi-class classifiers for performing at least one of organ or structure detection or segmentation. The generative learning and modeling modules may extract statistics and find clusters using images from the medical image database. The statistics comprise at least one of statistics on voxel intensities, statistics on global and local shape deformations, or statistics on joint articulations. The discriminative learning and modeling modules may formulate organ detection and segmentation as discriminative learning and at least one of design or select discriminative features.
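- One hedged illustration of how such a two-class learning machine might be built follows; the feature set and the choice of a boosted classifier are assumptions made for this sketch, not the patent's design. Annotated patches around an organ or structure and random background patches are turned into feature vectors and fed to an AdaBoost classifier.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def patch_features(patch):
    """Hypothetical discriminative features for a 3-D image patch:
    simple intensity statistics plus coarse gradient energy."""
    grad = np.gradient(patch.astype(float))
    return np.array([patch.mean(), patch.std(),
                     np.percentile(patch, 90),
                     sum(np.abs(g).mean() for g in grad)])

def train_organ_classifier(positive_patches, negative_patches):
    """Two-class classifier: organ/structure patches versus background patches.

    positive_patches, negative_patches : lists of 3-D numpy arrays cropped
    from annotated training images.
    """
    X = np.array([patch_features(p) for p in positive_patches + negative_patches])
    y = np.array([1] * len(positive_patches) + [0] * len(negative_patches))
    clf = AdaBoostClassifier(n_estimators=200)
    return clf.fit(X, y)
```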
- In an exemplary embodiment of the present invention, a system for providing automatic diagnosis and decision support in whole-body imaging further includes software modules to output a location of one or more landmark points and/or a location of one or more organs, using one or more images from the medical image database. For example, landmark points may comprise an upper corner of a left lung, an upper corner of a right lung, a tip of a left kidney or a tip of a right kidney. Automatic landmarking, in accordance with an exemplary embodiment of the present invention, comprises discriminative learning and modeling of local appearance and shape. Discriminative techniques, including but not limited to Boosting, AdaBoosting, Support Vector Machines and Decision Trees, may be used for the modeling, detection, and localization of whole-body landmarks.
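- A corresponding localization step is sketched below under the same assumptions: it reuses patch_features and the classifier from the previous block, assumes the atlas-provided search range keeps the sampling window inside the volume, and simply keeps the highest-probability position. A real detector would be far more efficient, but the structure, scanning a learned discriminative model over an atlas-constrained region, is the point being illustrated.

```python
import numpy as np

def detect_landmark(volume, classifier, search_range, patch_size=9):
    """Scan an atlas-provided search range for a landmark such as a lung corner.

    volume       : 3-D image array.
    classifier   : trained two-class model with predict_proba (e.g., the
                   AdaBoost sketch above); an assumption, not prescribed here.
    search_range : ((z0, z1), (y0, y1), (x0, x1)) taken from the statistical atlas.
    """
    half = patch_size // 2
    best_score, best_pos = -np.inf, None
    (z0, z1), (y0, y1), (x0, x1) = search_range
    for z in range(z0, z1):
        for y in range(y0, y1):
            for x in range(x0, x1):
                patch = volume[z - half:z + half + 1,
                               y - half:y + half + 1,
                               x - half:x + half + 1]
                score = classifier.predict_proba([patch_features(patch)])[0, 1]
                if score > best_score:
                    best_score, best_pos = score, (z, y, x)
    return best_pos, best_score
```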
- FIG. 2 illustrates a whole-body computer aided detection and diagnosis decision support system for oncology using PET/CT images, according to an exemplary embodiment of the present invention. As indicated by the dashed box labeled with reference numeral 210, the input to the system 200 is PET and CT scans of the whole body of a patient. For example, the CT scan may include thin slices 220 and/or thick slices 230.
- The anatomical segmentation module 240 segments the CT images into anatomical organs, with labels for each. Each of the organs can be processed by an organ-specific CAD (computer aided detection/diagnosis) module 250, for example, a LungCAD module. It is to be understood that the organ-specific CAD module 250 can be a colon CAD, a cardiac CAD, etc.
- The module 240 shown in FIG. 2 performs the linking between the CT and the PET data and the corresponding segmentation results. That is, the segmentation results for each data set are enhanced by using information coming from the other. The functional interpretation module 270 segments and interprets the PET image, for example, locating the brain and the bladder, incorporating information from the scan itself as well as information from learned models and statistics.
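- The CT-to-PET linking can be pictured with a small sketch. Assuming, purely for illustration, that the anatomical segmentation has already produced an organ label volume resampled to the PET grid, per-organ uptake statistics follow directly; the function and label names are hypothetical.

```python
import numpy as np

def per_organ_uptake(pet_volume, organ_labels, label_names):
    """Summarize PET uptake inside each CT-derived organ mask.

    pet_volume   : 3-D PET array, assumed resampled to the same grid as the labels.
    organ_labels : 3-D integer label array from the anatomical segmentation.
    label_names  : dict mapping integer label -> organ name (hypothetical).
    """
    stats = {}
    for label, name in label_names.items():
        values = pet_volume[organ_labels == label]
        if values.size:
            stats[name] = {"mean": float(values.mean()), "max": float(values.max())}
    return stats

# Example usage with hypothetical labels:
# stats = per_organ_uptake(pet, labels, {1: "liver", 2: "lung", 3: "kidney"})
```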
- The PET-CAD module 280 performs lesion detection and characterization, incorporating information from the CT segmentation if available. The output of the organ CAD module and the PET-CAD module 280 is combined for an overall decision on the disease. Here, the output is shown as a decision for cancer staging. For example, is the lesion cancerous or benign; is it primary cancer, lymph node involvement, or metastasis?
- FIG. 3 is a flowchart showing a method of providing automatic diagnosis and decision support in whole-body imaging, according to an exemplary embodiment of the present invention.
- Referring to FIG. 3, in step 310, whole-body imaging is used to obtain a first set of image data of a patient.
- The first set of image data is obtained using one or more imaging modalities. For example, the first set of image data may be obtained using whole-body positron emission tomography (PET) and one or more imaging modalities other than whole-body PET. For example, the first set of image data may be obtained using whole-body PET, CT, MRI, SPECT, PET/CT, SPECT/CT, and/or PET/MRI. The first set of image data may comprise 2-D image data, 3-D image data and/or higher-dimensional image data.
- In step 320, a statistical whole-body atlas is fitted using the first set of image data. The statistical whole-body atlas includes statistics on voxel intensities, statistics on global and local shape deformations, and/or statistics on joint articulations. The statistical whole-body atlas may comprise distributional appearance models and/or spatial relational models of organs or structures in one or more whole-body imaging modalities.
- In an exemplary embodiment of the present invention, the first set of image data includes PET data, and fitting the statistical whole-body atlas comprises automatically outlining selected regions of interest with pathological tracer uptakes while discounting physiological uptakes in the PET data. The first set of image data may include image data acquired by one or more imaging modalities other than PET. Using the statistical whole-body atlas to characterize pathological findings in terms of a diagnosis may comprise detecting anatomical or functional abnormalities of the whole body or body parts using the first set of image data.
- Using the statistical whole-body atlas to characterize pathological findings in terms of a diagnosis may comprise automatically outlining selected regions of interest in the first set of image data and automatically extracting candidate features of interest from the selected regions of interest. Each of the candidate features of interest may be automatically contoured and characterized. For example, automatic contouring and characterization may be based on standard uptake value (SUV) and/or brain-uptake-normalized data.
- In step 330, the statistical whole-body atlas is used to characterize pathological findings in terms of a diagnosis.
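- Standard uptake value, mentioned above as one basis for automatic contouring and characterization, is commonly computed by normalizing the measured activity concentration by the injected dose per unit body weight; brain-uptake normalization divides by the uptake of a reference region instead. The sketch below is an editorial illustration of these conventions, not part of the patent text.

```python
def suv(activity_kbq_per_ml, injected_dose_mbq, body_weight_kg):
    """Body-weight SUV: tissue activity / (injected dose / body weight).

    Uses the common convention that 1 g of tissue occupies about 1 mL, so the
    result is dimensionless. The injected dose is assumed to be decay-corrected
    to the scan time beforehand.
    """
    dose_kbq = injected_dose_mbq * 1000.0
    weight_g = body_weight_kg * 1000.0
    return activity_kbq_per_ml / (dose_kbq / weight_g)

def brain_normalized_uptake(activity_kbq_per_ml, mean_brain_activity_kbq_per_ml):
    """Uptake expressed relative to a reference region, here the brain."""
    return activity_kbq_per_ml / mean_brain_activity_kbq_per_ml

# Example: a voxel measuring 12 kBq/mL in a 70 kg patient injected with 370 MBq
# gives SUV = 12 / (370000 / 70000), approximately 2.27.
```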
- In an exemplary embodiment of the present invention, a method for providing automatic diagnosis and decision support in whole-body imaging further includes using whole-body imaging to obtain a third set of image data of the patient, using the whole-body atlas for pathological findings, and updating pathological findings based on the statistical whole-body atlas using the third set of image data. The third set of image data is obtained using one or more imaging modalities. For example, the third set of image data may be obtained using whole-body PET, CT, MRI, SPECT, PET/CT, SPECT/CT, and/or PET/MRI. Updating pathological findings based on the statistical whole-body atlas may comprise encoding anatomical or functional variations of the whole body or body parts using the third set of image data.
-
FIG. 4 is a flowchart showing a method of providing automatic diagnosis and decision support in whole-body imaging, according to an exemplary embodiment of the present invention. - Referring to
FIG. 4, in step 410, one or more selected regions of interest are detected and segmented. Segmenting the selected regions of interest may be accomplished using bounding boxes, centroid locations, bounding surfaces and/or a bounding mask. Segmenting the selected regions of interest may be performed automatically.
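For illustration, the sketch below derives bounding boxes, centroid locations and per-region masks from a binary segmentation using SciPy's labeling utilities. It is one plausible realization of the descriptors listed above, under the assumption that a binary organ or lesion mask is already available; it is not the claimed method.

```python
import numpy as np
from scipy import ndimage

def roi_descriptors(binary_mask):
    """Derive region-of-interest descriptors (bounding boxes, centroid
    locations and per-region masks) from a binary segmentation."""
    labels, n_regions = ndimage.label(binary_mask)
    bounding_boxes = ndimage.find_objects(labels)      # list of slice tuples
    centroids = ndimage.center_of_mass(binary_mask, labels,
                                       range(1, n_regions + 1))
    region_masks = [labels == i for i in range(1, n_regions + 1)]
    return bounding_boxes, centroids, region_masks
```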
- Although not shown in FIG. 4, a method of providing automatic diagnosis and decision support in whole-body imaging in accordance with an exemplary embodiment of the present invention may include detecting hotspots in the selected regions of interest from PET or SPECT images. The hotspots may be localized based on anatomical dependencies, and may be segmented using organ-specific thresholds. For example, to detect whole-body metastatic spread in bone, one needs to identify the bones first. To detect and characterize tumor involvement of lymph nodes, one may want to detect the great vessels, since lymph nodes appear near these structures. In a general sense, the CT in a PET/CT scan may be regarded as the source of anatomical context for functional hotspots revealed by PET.
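A minimal sketch of organ-specific hotspot thresholding follows, with a CT-derived organ label volume supplying the anatomical context. The threshold values and the mapping of labels to organ names are placeholders for illustration only; they are neither clinically validated nor taken from the disclosure.

```python
import numpy as np

# Illustrative, non-validated organ-specific SUV thresholds (placeholders).
ORGAN_SUV_THRESHOLDS = {"liver": 3.0, "lung": 2.5, "bone": 2.0}

def detect_hotspots(suv, organ_labels, organ_names):
    """Detect hotspots per organ: the CT-derived label volume provides the
    anatomical context, and each organ gets its own uptake threshold.
    `organ_names` maps integer label values to organ name strings."""
    hotspots = np.zeros(suv.shape, dtype=bool)
    for label_value, name in organ_names.items():
        threshold = ORGAN_SUV_THRESHOLDS.get(name)
        if threshold is None:
            continue
        region = organ_labels == label_value
        hotspots |= region & (suv > threshold)
    return hotspots
```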
- In step 420, one or more abnormalities are detected by automatically interpreting whole-body images of the selected regions of interest for pathological findings. The whole-body images may be obtained using one or more imaging modalities. For example, the whole-body images may be obtained using whole-body PET, CT, MRI, SPECT, PET/CT, SPECT/CT, and/or PET/MRI. The selected regions of interest may comprise cells, tissues, organs and/or organ systems. For example, the selected regions of interest may comprise a liver, a lung or a kidney.
- In step 430, the pathological findings are characterized in terms of a diagnosis. When longitudinal data is available, clinical analysis of the longitudinal data may be performed, and changes may be output in a clinically meaningful way. Because whole-body scans are often used for longitudinal studies, for example in oncology, change quantification is useful during, for example, therapy response monitoring. Change quantification may be regarded as pattern detection in the context of time. Clinical priors or predispositions, such as genetic predisposition, are sometimes helpful for interpreting whole-body images. For example, knowledge of the primary tumor location can help the assessment of regional lymph node involvement. Likewise, osteoarthritis will affect the PET uptake level in affected joints, the menstrual cycle affects breast uptake, and radiation therapy can result in elevated uptake levels in the axial skeleton and in neck fat. - It is to be understood that the present invention may be implemented in various forms of hardware, software, firmware, special purpose processors, or a combination thereof. In one embodiment, the present invention may be implemented in software as an application program tangibly embodied on a program storage device. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture.
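Referring back to the change quantification discussed above, the sketch below compares per-lesion SUV statistics between two registered time points. The summary measures and the assumption of prior registration to a common space are illustrative choices, not requirements of the disclosure.

```python
import numpy as np

def change_quantification(baseline_suv, followup_suv, lesion_mask):
    """Quantify longitudinal change inside a lesion mask, assuming both SUV
    volumes have already been registered to a common space."""
    base = baseline_suv[lesion_mask]
    follow = followup_suv[lesion_mask]
    return {
        "suv_max_change_pct": 100.0 * (follow.max() - base.max()) / base.max(),
        "suv_mean_change_pct": 100.0 * (follow.mean() - base.mean()) / base.mean(),
    }
```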
- According to an exemplary embodiment of the present invention, there is provided a program storage device readable by machine, tangibly embodying a program of instructions executable by the machine to perform method steps for providing automatic diagnosis and decision support, the method steps comprising: using whole-body imaging to obtain a first set of image data of a patient; fitting a statistical whole-body atlas using the first set of image data, wherein the statistical whole-body atlas includes at least one of statistics on voxel intensities, statistics on global and local shape deformations, or statistics on joint articulations; and using the statistical whole-body atlas to characterize pathological findings in terms of a diagnosis.
- According to an exemplary embodiment of the present invention, there is provided a program storage device readable by machine, tangibly embodying a program of instructions executable by the machine to perform method steps for providing automatic diagnosis and decision support, the method steps comprising: detecting and segmenting one or more selected regions of interest in whole-body images; detecting one or more abnormalities by automatically interpreting whole-body images of the selected regions of interest for pathological findings; and characterizing the pathological findings in terms of a diagnosis.
- Although exemplary embodiments of the present invention have been described in detail with reference to the accompanying drawings for the purpose of illustration, it is to be understood that the inventive processes and systems are not to be construed as limited thereby. It will be readily apparent to those of reasonable skill in the art that various modifications to the foregoing exemplary embodiments may be made without departing from the scope of the invention as defined by the appended claims, with equivalents of the claims to be included therein.
Claims (29)
1. A system for providing automatic diagnosis and decision support in whole-body imaging, comprising:
a medical image database;
generative learning and modeling modules that build distributional appearance models and spatial relational models of organs or structures using images from the medical image database;
a statistical whole-body atlas that includes one or more distributional appearance models and spatial relational models of organs or structures, in one or more whole-body imaging modalities, built by the generative learning and modeling modules; and
discriminative learning and modeling modules that build two-class or multi-class classifiers for performing at least one of organ, structure or disease detection or segmentation.
2. The system of claim 1 , wherein the generative learning and modeling modules extract statistics and find clusters using images from the medical image database.
3. The system of claim 2, wherein the statistics comprise at least one of statistics on voxel intensities, statistics on global and local shape deformations, or statistics on joint articulations.
4. The system of claim 1, wherein the statistical whole-body atlas comprises one or more three-dimensional canonical whole-body scans, each with associated statistics on voxel intensities, statistics on global and local shape deformations, and statistics on joint articulations.
5. The system of claim 1 , wherein the discriminative learning and modeling modules formulate organ detection and segmentation as discriminative learning and at least one of design or select discriminative features.
6. The system of claim 1 , further comprising software modules to output at least one of a location of one or more landmark points or a location of one or more organs, using images from the medical image database.
7. The system of claim 6 , wherein landmark points comprise at least one of an upper corner of a left lung, an upper corner of a right lung, a center of a left kidney or a center of a right kidney.
8. A method for providing automatic diagnosis and decision support in whole-body imaging, comprising:
using whole-body imaging to obtain a first set of image data of a patient;
fitting a statistical whole-body atlas using the first set of image data, wherein the statistical whole-body atlas includes at least one of statistics on voxel intensities, statistics on global and local shape deformations, or statistics on joint articulations; and
using the statistical whole-body atlas to characterize pathological findings in terms of a diagnosis.
9. The method of claim 8 , wherein the first set of image data is obtained using one or more imaging modalities.
10. The method of claim 8 , wherein the first set of image data is obtained using whole-body positron emission tomography (PET) and one or more imaging modalities other than whole-body PET.
11. The method of claim 8 , wherein the statistical whole-body atlas comprises at least one of distributional appearance models or spatial relational models of organs or structures in one or more whole-body imaging modalities.
12. The method of claim 8 , wherein the first set of image data includes PET data, and wherein fitting the statistical whole-body atlas comprises automatically outlining selected regions of interest with pathological tracer uptakes while discounting physiological uptakes in the PET data.
13. The method of claim 8 , wherein the first set of image data further includes image data acquired by one or more imaging modalities other than PET, and wherein using the statistical whole-body atlas to characterize pathological findings in terms of a diagnosis comprises detecting anatomical or functional abnormalities of the whole body or body parts using the first set of image data.
14. The method of claim 8 , wherein using the statistical whole-body atlas to characterize pathological findings in terms of a diagnosis comprises automatically outlining selected regions of interest in the first set of image data and automatically extracting candidate features of interest from the selected regions of interest.
15. The method of claim 14 , wherein using the statistical whole-body atlas to characterize pathological findings in terms of a diagnosis further comprises automatically contouring and characterizing each of the candidate features of interest.
16. The method of claim 15 , wherein automatic contouring and characterizing is based on at least one of a standard uptake value or brain uptake normalized data.
17. The method of claim 8 , wherein the first set of image data comprises at least one of 2-D image data, 3-D image data or higher-dimensional image data.
18. The method of claim 8 further comprising:
using whole-body imaging to obtain a second set of image data of the patient; and
updating pathological findings based on the statistical whole-body atlas using the second set of image data.
19. The method of claim 18 , wherein updating pathological findings based on the statistical whole-body atlas comprises encoding anatomical or functional variations of the whole body or body parts using the second set of image data.
20. The method of claim 18 further comprising:
using whole-body imaging to obtain a third set of image data of the patient; and
updating pathological findings based on the statistical whole-body atlas using the third set of image data.
21. The method of claim 20 , wherein updating pathological findings based on the statistical whole-body atlas comprises encoding anatomical or functional variations of the whole body or body parts using the third set of image data.
22. A method for providing automatic diagnosis and decision support in whole body imaging, comprising:
detecting and segmenting one or more selected regions of interest in whole-body images;
detecting one or more abnormalities by automatically interpreting whole-body images of the selected regions of interest for pathological findings; and
characterizing the pathological findings in terms of a diagnosis.
23. The method of claim 22 , wherein segmenting the selected regions of interest is accomplished using at least one of bounding boxes, centroid locations, bounding surfaces, or a bounding mask.
24. The method of claim 22 , further comprising detecting hotspots in the selected regions of interest.
25. The method of claim 24 , wherein the hotspots are segmented using organ-specific thresholds.
26. The method of claim 22 , wherein the whole-body images are obtained using at least one of whole-body PET, CT, MRI, SPECT, PET/CT, SPECT/CT, or PET/MRI.
27. The method of claim 22, further comprising, when longitudinal data is available, performing clinical analysis of the longitudinal data and outputting changes in a clinically meaningful way.
28. A program storage device readable by machine, tangibly embodying a program of instructions executable by the machine to perform method steps for providing automatic diagnosis and decision support, the method steps comprising:
using whole-body imaging to obtain a first set of image data of a patient;
fitting a statistical whole-body atlas using the first set of image data, wherein the statistical whole-body atlas includes at least one of statistics on voxel intensities, statistics on global and local shape deformations, or statistics on joint articulations; and
using the statistical whole-body atlas to characterize pathological findings in terms of a diagnosis.
29. A program storage device readable by machine, tangibly embodying a program of instructions executable by the machine to perform method steps for providing automatic diagnosis and decision support, the method steps comprising:
detecting and segmenting one or more selected regions of interest in whole-body images;
detecting one or more abnormalities by automatically interpreting whole-body images of the selected regions of interest for pathological findings; and
characterizing the pathological findings in terms of a diagnosis.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/505,081 US20070081706A1 (en) | 2005-09-28 | 2006-08-16 | Systems and methods for computer aided diagnosis and decision support in whole-body imaging |
PCT/US2006/032760 WO2007037848A2 (en) | 2005-09-28 | 2006-08-22 | Systems and methods for computer aided diagnosis and decision support in whole-body imaging |
US12/968,492 US8588495B2 (en) | 2005-09-28 | 2010-12-15 | Systems and methods for computer aided diagnosis and decision support in whole-body imaging |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US72145705P | 2005-09-28 | 2005-09-28 | |
US11/505,081 US20070081706A1 (en) | 2005-09-28 | 2006-08-16 | Systems and methods for computer aided diagnosis and decision support in whole-body imaging |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/968,492 Division US8588495B2 (en) | 2005-09-28 | 2010-12-15 | Systems and methods for computer aided diagnosis and decision support in whole-body imaging |
Publications (1)
Publication Number | Publication Date |
---|---|
US20070081706A1 true US20070081706A1 (en) | 2007-04-12 |
Family
ID=37563173
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/505,081 Abandoned US20070081706A1 (en) | 2005-09-28 | 2006-08-16 | Systems and methods for computer aided diagnosis and decision support in whole-body imaging |
US12/968,492 Active US8588495B2 (en) | 2005-09-28 | 2010-12-15 | Systems and methods for computer aided diagnosis and decision support in whole-body imaging |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/968,492 Active US8588495B2 (en) | 2005-09-28 | 2010-12-15 | Systems and methods for computer aided diagnosis and decision support in whole-body imaging |
Country Status (2)
Country | Link |
---|---|
US (2) | US20070081706A1 (en) |
WO (1) | WO2007037848A2 (en) |
Cited By (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080144940A1 (en) * | 2006-12-19 | 2008-06-19 | Fujifilm Corporation | Method and apparatus of using probabilistic atlas for feature removal/positioning |
US20080159611A1 (en) * | 2006-11-22 | 2008-07-03 | Xiaodong Tao | System and method for automated patient anatomy localization |
US20080232658A1 (en) * | 2005-01-11 | 2008-09-25 | Kiminobu Sugaya | Interactive Multiple Gene Expression Map System |
US20080298659A1 (en) * | 2007-05-31 | 2008-12-04 | Spence Jeffrey S | Systems and Methods for Processing Medical Image Data to Facilitate Comparisons Among Groups of Subjects |
US20080317314A1 (en) * | 2007-06-20 | 2008-12-25 | Schwartz Lawrence H | Automated Determination of Lymph Nodes in Scanned Images |
US20090028443A1 (en) * | 2007-07-26 | 2009-01-29 | Palo Alto Research Center Incorporated | Innovative ocr systems and methods that combine a template based generative model with a discriminative model |
US20090245609A1 (en) * | 2006-09-25 | 2009-10-01 | Fujiflim Corporation | Anatomical illustration selecting method, anatomical illustration selecting device, and medical network system |
US20090257635A1 (en) * | 2008-02-20 | 2009-10-15 | Simon Harvey | System for defining volumes of interest with reference to anatomical features |
US20090326336A1 (en) * | 2008-06-25 | 2009-12-31 | Heinz Ulrich Lemke | Process for comprehensive surgical assist system by means of a therapy imaging and model management system (TIMMS) |
US20100077358A1 (en) * | 2005-01-11 | 2010-03-25 | Kiminobu Sugaya | System for Manipulation, Modification and Editing of Images Via Remote Device |
WO2010099360A1 (en) * | 2009-02-25 | 2010-09-02 | Mohamed Rashwan Mahfouz | Customized orthopaedic implants and related methods |
US20100259263A1 (en) * | 2007-11-14 | 2010-10-14 | Dominic Holland | Longitudinal registration of anatomy in magnetic resonance imaging |
US20100310181A1 (en) * | 2006-12-22 | 2010-12-09 | Art Advanced Research Technologies Inc. | Registration of optical images of turbid media |
US20110182493A1 (en) * | 2010-01-25 | 2011-07-28 | Martin Huber | Method and a system for image annotation |
WO2011137370A2 (en) * | 2010-04-30 | 2011-11-03 | The Johns Hopkins University | Intelligent atlas for automatic image analysis of magnetic resonance imaging |
WO2012058310A2 (en) * | 2010-10-26 | 2012-05-03 | The Johns Hopkins University | A computer-aided-detection (cad) system for personalized disease detection, assessment, and tracking, in medical imaging based on user selectable criteria |
WO2012096882A1 (en) * | 2011-01-11 | 2012-07-19 | Rutgers, The State University Of New Jersey | Method and apparatus for segmentation and registration of longitudinal images |
US20120207268A1 (en) * | 2011-01-04 | 2012-08-16 | Edda Technology (Suzhou) Ltd. | System and methods for functional analysis of soft organ segments in spect-ct images |
US20120283546A1 (en) * | 2011-05-05 | 2012-11-08 | Siemens Medical Solutions Usa, Inc. | Automatic or Semi-Automatic Whole Body MR Scanning System |
US8831302B2 (en) | 2007-08-17 | 2014-09-09 | Mohamed Rashwan Mahfouz | Implant design analysis suite |
US9078755B2 (en) | 2009-02-25 | 2015-07-14 | Zimmer, Inc. | Ethnic-specific orthopaedic implants and custom cutting jigs |
US9082193B2 (en) * | 2013-04-19 | 2015-07-14 | Siemens Medical Solutions Usa, Inc. | Shape-based image segmentation |
US9129372B2 (en) | 2012-07-30 | 2015-09-08 | General Electric Company | Methods and systems for determining a transformation function to automatically register different modality medical images |
US20150363963A1 (en) * | 2014-06-12 | 2015-12-17 | Siemens Medical Solutions Usa, Inc. | Visualization With Anatomical Intelligence |
CN105451663A (en) * | 2013-06-28 | 2016-03-30 | 皇家飞利浦有限公司 | Ultrasound acquisition feedback guidance to a target view |
US9404986B2 (en) | 2011-05-06 | 2016-08-02 | The Regents Of The University Of California | Measuring biological tissue parameters using diffusion magnetic resonance imaging |
US9568580B2 (en) | 2008-07-01 | 2017-02-14 | The Regents Of The University Of California | Identifying white matter fiber tracts using magnetic resonance imaging (MRI) |
US9741131B2 (en) | 2013-07-17 | 2017-08-22 | Siemens Medical Solutions Usa, Inc. | Anatomy aware articulated registration for image segmentation |
US20170296032A1 (en) * | 2015-03-06 | 2017-10-19 | Fujifilm Corporation | Branching structure determination apparatus, method, and program |
US20180061091A1 (en) * | 2016-08-31 | 2018-03-01 | International Business Machines Corporation | Anatomy segmentation through low-resolution multi-atlas label fusion and corrective learning |
WO2019132067A1 (en) * | 2017-12-28 | 2019-07-04 | (재)대구포교성베네딕도수녀회 | Medical information providing system |
US10357218B2 (en) * | 2016-06-30 | 2019-07-23 | Shanghai United Imaging Healthcare Co., Ltd. | Methods and systems for extracting blood vessel |
JP2019149093A (en) * | 2018-02-28 | 2019-09-05 | 富士フイルム株式会社 | Diagnostic assistance system, diagnostic assistance method, and program |
US20200211692A1 (en) * | 2018-12-31 | 2020-07-02 | GE Precision Healthcare, LLC | Facilitating artificial intelligence integration into systems using a distributed learning platform |
CN111602173A (en) * | 2017-10-23 | 2020-08-28 | 布莱恩欧米克斯有限公司 | Tomographic data analysis |
US11183280B2 (en) * | 2011-10-03 | 2021-11-23 | Emerge Clinical Solutions, LLC | System and method for optimizing nuclear imaging appropriateness decisions |
Families Citing this family (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7672497B2 (en) * | 2005-12-29 | 2010-03-02 | Carestream Health, Inc. | Computer aided disease detection system for multiple organ systems |
JP4960024B2 (en) * | 2006-06-07 | 2012-06-27 | オリンパスメディカルシステムズ株式会社 | Medical image management method and medical image management apparatus using the same |
US8160323B2 (en) * | 2007-09-06 | 2012-04-17 | Siemens Medical Solutions Usa, Inc. | Learning a coarse-to-fine matching pursuit for fast point search in images or volumetric data using multi-class classification |
US8165361B2 (en) * | 2008-01-14 | 2012-04-24 | General Electric Company | System and method for image based multiple-modality cardiac image alignment |
US20110161854A1 (en) * | 2009-12-28 | 2011-06-30 | Monica Harit Shukla | Systems and methods for a seamless visual presentation of a patient's integrated health information |
US20120116804A1 (en) * | 2010-11-04 | 2012-05-10 | International Business Machines Corporation | Visualization of social medical data |
DE102011079270B4 (en) * | 2011-07-15 | 2016-11-03 | Siemens Healthcare Gmbh | Method and a CT system for recording and distributing whole-body CT data of a polytraumatized patient |
JP6426608B2 (en) * | 2012-08-30 | 2018-11-21 | ザ リージェンツ オブ ユニバーシティー オブ ミシガン | Analytical Morphomics: A Fast Medical Image Analysis Method |
JP5977158B2 (en) * | 2012-11-30 | 2016-08-24 | ジーイー・メディカル・システムズ・グローバル・テクノロジー・カンパニー・エルエルシー | Detection apparatus, magnetic resonance apparatus, detection method, and program |
US9113781B2 (en) * | 2013-02-07 | 2015-08-25 | Siemens Aktiengesellschaft | Method and system for on-site learning of landmark detection models for end user-specific diagnostic medical image reading |
EP2881916B1 (en) * | 2013-12-06 | 2018-01-31 | Siemens Healthcare GmbH | Query-specific generation and retrieval of medical volume images |
US9990433B2 (en) | 2014-05-23 | 2018-06-05 | Samsung Electronics Co., Ltd. | Method for searching and device thereof |
US11314826B2 (en) | 2014-05-23 | 2022-04-26 | Samsung Electronics Co., Ltd. | Method for searching and device thereof |
DE102014108357B4 (en) * | 2014-06-13 | 2019-06-19 | FotoFinder Systems GmbH | Whole body imaging and image processing system and method of operation |
CA2967003C (en) | 2014-11-07 | 2023-08-08 | Joel KULLBERG | Whole body image registration method and method for analyzing images thereof |
US10249041B2 (en) | 2015-02-26 | 2019-04-02 | Brainlab Ag | Adaptation of image data sets to an updated atlas-based reference system |
US11026649B2 (en) | 2018-06-25 | 2021-06-08 | Siemens Medical Solutions Usa, Inc. | Method and system for determining tumor burden in medical images |
US10937172B2 (en) | 2018-07-10 | 2021-03-02 | International Business Machines Corporation | Template based anatomical segmentation of medical images |
WO2020132713A1 (en) * | 2018-12-24 | 2020-07-02 | Body Composition Technologies Pty Ltd | Analysing a body |
WO2021254427A1 (en) * | 2020-06-17 | 2021-12-23 | 谈斯聪 | Integrated robot and platform for ultrasound image data acquisition, analysis, and recognition |
CN112991478B (en) * | 2021-02-25 | 2024-02-02 | 复旦大学附属中山医院 | Method for analyzing multi-time different characteristic region parameters based on deep learning image |
CN113392890B (en) * | 2021-06-08 | 2024-10-15 | 南京大学 | Data enhancement-based abnormal sample detection method outside distribution |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040030246A1 (en) * | 1999-10-14 | 2004-02-12 | Cti Pet Systems, Inc. | Combined PET and X-ray CT tomograph |
US20050010445A1 (en) * | 2003-06-27 | 2005-01-13 | Arun Krishnan | CAD (computer-aided decision) support for medical imaging using machine learning to adapt CAD process with knowledge collected during routine use of CAD system |
US6947579B2 (en) * | 2002-10-07 | 2005-09-20 | Technion Research & Development Foundation Ltd. | Three-dimensional face recognition |
US20060004282A1 (en) * | 2004-06-22 | 2006-01-05 | Fuji Photo Film Co., Ltd. | Image generation apparatus, image generation method, and program therefor |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5920319A (en) * | 1994-10-27 | 1999-07-06 | Wake Forest University | Automatic analysis in virtual endoscopy |
AU4196299A (en) * | 1998-05-23 | 1999-12-13 | Eolas Technologies, Incorporated | Identification of features of multi-dimensional image data in hypermedia systems |
US6694046B2 (en) * | 2001-03-28 | 2004-02-17 | Arch Development Corporation | Automated computerized scheme for distinction between benign and malignant solitary pulmonary nodules on chest images |
US7283859B2 (en) * | 2001-04-20 | 2007-10-16 | Brigham And Womens' Hospital, Inc. | Artifact suppression in dynamic magnetic resonance imaging |
CN1639721A (en) * | 2002-04-04 | 2005-07-13 | 佳能株式会社 | Cooperative diagnosis system |
US7697972B2 (en) * | 2002-11-19 | 2010-04-13 | Medtronic Navigation, Inc. | Navigation system for cardiac therapies |
US7668351B1 (en) * | 2003-01-17 | 2010-02-23 | Kestrel Corporation | System and method for automation of morphological segmentation of bio-images |
US8090164B2 (en) * | 2003-08-25 | 2012-01-03 | The University Of North Carolina At Chapel Hill | Systems, methods, and computer program products for analysis of vessel attributes for diagnosis, disease staging, and surgical planning |
US6952097B2 (en) * | 2003-10-22 | 2005-10-04 | Siemens Aktiengesellschaft | Method for slice position planning of tomographic measurements, using statistical images |
US7447345B2 (en) * | 2003-12-16 | 2008-11-04 | General Electric Company | System and method for generating PET-CT images |
Cited By (77)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100077358A1 (en) * | 2005-01-11 | 2010-03-25 | Kiminobu Sugaya | System for Manipulation, Modification and Editing of Images Via Remote Device |
US8774560B2 (en) * | 2005-01-11 | 2014-07-08 | University Of Central Florida Research Foundation, Inc. | System for manipulation, modification and editing of images via remote device |
US20080232658A1 (en) * | 2005-01-11 | 2008-09-25 | Kiminobu Sugaya | Interactive Multiple Gene Expression Map System |
US20090245609A1 (en) * | 2006-09-25 | 2009-10-01 | Fujiflim Corporation | Anatomical illustration selecting method, anatomical illustration selecting device, and medical network system |
US20080159611A1 (en) * | 2006-11-22 | 2008-07-03 | Xiaodong Tao | System and method for automated patient anatomy localization |
US8045771B2 (en) * | 2006-11-22 | 2011-10-25 | General Electric Company | System and method for automated patient anatomy localization |
US8135199B2 (en) * | 2006-12-19 | 2012-03-13 | Fujifilm Corporation | Method and apparatus of using probabilistic atlas for feature removal/positioning |
US20080144940A1 (en) * | 2006-12-19 | 2008-06-19 | Fujifilm Corporation | Method and apparatus of using probabilistic atlas for feature removal/positioning |
US8620051B2 (en) * | 2006-12-22 | 2013-12-31 | Salim Djerizi | Registration of optical images of turbid media |
US20100310181A1 (en) * | 2006-12-22 | 2010-12-09 | Art Advanced Research Technologies Inc. | Registration of optical images of turbid media |
US7961922B2 (en) | 2007-05-31 | 2011-06-14 | The Board Of Regents Of The University Of Texas System | Systems and methods for processing medical image data to facilitate comparisons among groups of subjects |
US20080298659A1 (en) * | 2007-05-31 | 2008-12-04 | Spence Jeffrey S | Systems and Methods for Processing Medical Image Data to Facilitate Comparisons Among Groups of Subjects |
US20080317314A1 (en) * | 2007-06-20 | 2008-12-25 | Schwartz Lawrence H | Automated Determination of Lymph Nodes in Scanned Images |
US8355552B2 (en) * | 2007-06-20 | 2013-01-15 | The Trustees Of Columbia University In The City Of New York | Automated determination of lymph nodes in scanned images |
US20090028443A1 (en) * | 2007-07-26 | 2009-01-29 | Palo Alto Research Center Incorporated | Innovative ocr systems and methods that combine a template based generative model with a discriminative model |
US7945101B2 (en) * | 2007-07-26 | 2011-05-17 | Palo Alto Research Center Incorporated | Innovative OCR systems and methods that combine a template based generative model with a discriminative model |
US8831302B2 (en) | 2007-08-17 | 2014-09-09 | Mohamed Rashwan Mahfouz | Implant design analysis suite |
US20100259263A1 (en) * | 2007-11-14 | 2010-10-14 | Dominic Holland | Longitudinal registration of anatomy in magnetic resonance imaging |
US20090257635A1 (en) * | 2008-02-20 | 2009-10-15 | Simon Harvey | System for defining volumes of interest with reference to anatomical features |
US20090326336A1 (en) * | 2008-06-25 | 2009-12-31 | Heinz Ulrich Lemke | Process for comprehensive surgical assist system by means of a therapy imaging and model management system (TIMMS) |
US9568580B2 (en) | 2008-07-01 | 2017-02-14 | The Regents Of The University Of California | Identifying white matter fiber tracts using magnetic resonance imaging (MRI) |
US11026799B2 (en) | 2009-02-25 | 2021-06-08 | Zimmer, Inc. | Ethnic-specific orthopaedic implants and custom cutting jigs |
US9078755B2 (en) | 2009-02-25 | 2015-07-14 | Zimmer, Inc. | Ethnic-specific orthopaedic implants and custom cutting jigs |
US9937046B2 (en) | 2009-02-25 | 2018-04-10 | Zimmer, Inc. | Method of generating a patient-specific bone shell |
US9675461B2 (en) | 2009-02-25 | 2017-06-13 | Zimmer Inc. | Deformable articulating templates |
US11806242B2 (en) | 2009-02-25 | 2023-11-07 | Zimmer, Inc. | Ethnic-specific orthopaedic implants and custom cutting jigs |
WO2010099360A1 (en) * | 2009-02-25 | 2010-09-02 | Mohamed Rashwan Mahfouz | Customized orthopaedic implants and related methods |
US10052206B2 (en) | 2009-02-25 | 2018-08-21 | Zimmer Inc. | Deformable articulating templates |
CN102438559A (en) * | 2009-02-25 | 2012-05-02 | 穆罕默德·拉什万·马赫福兹 | Custom orthopedic implants and related methods |
US10070960B2 (en) | 2009-02-25 | 2018-09-11 | Zimmer, Inc. | Method of generating a patient-specific bone shell |
US10130478B2 (en) | 2009-02-25 | 2018-11-20 | Zimmer, Inc. | Ethnic-specific orthopaedic implants and custom cutting jigs |
US8884618B2 (en) | 2009-02-25 | 2014-11-11 | Zimmer, Inc. | Method of generating a patient-specific bone shell |
US8989460B2 (en) | 2009-02-25 | 2015-03-24 | Mohamed Rashwan Mahfouz | Deformable articulating template (formerly: customized orthopaedic implants and related methods) |
US10213311B2 (en) | 2009-02-25 | 2019-02-26 | Zimmer Inc. | Deformable articulating templates |
US9895230B2 (en) | 2009-02-25 | 2018-02-20 | Zimmer, Inc. | Deformable articulating templates |
US11219526B2 (en) | 2009-02-25 | 2022-01-11 | Zimmer, Inc. | Method of generating a patient-specific bone shell |
US20110182493A1 (en) * | 2010-01-25 | 2011-07-28 | Martin Huber | Method and a system for image annotation |
WO2011137370A3 (en) * | 2010-04-30 | 2011-12-15 | The Johns Hopkins University | Intelligent atlas for automatic image analysis of magnetic resonance imaging |
US10984527B2 (en) | 2010-04-30 | 2021-04-20 | The Johns Hopkins University | Intelligent atlas for automatic image analysis of magnetic resonance imaging |
WO2011137370A2 (en) * | 2010-04-30 | 2011-11-03 | The Johns Hopkins University | Intelligent atlas for automatic image analysis of magnetic resonance imaging |
US20160335768A1 (en) * | 2010-04-30 | 2016-11-17 | The Johns Hopkins University | Intelligent atlas for automatic image analysis of magnetic resonance imaging |
US9171369B2 (en) * | 2010-10-26 | 2015-10-27 | The Johns Hopkins University | Computer-aided detection (CAD) system for personalized disease detection, assessment, and tracking, in medical imaging based on user selectable criteria |
WO2012058310A3 (en) * | 2010-10-26 | 2012-06-21 | The Johns Hopkins University | A computer-aided-detection (cad) system for personalized disease detection, assessment, and tracking, in medical imaging based on user selectable criteria |
US20130208963A1 (en) * | 2010-10-26 | 2013-08-15 | The Johns Hopkins University | Computer-aided detection (cad) system for personalized disease detection, assessment, and tracking, in medical imaging based on user selectable criteria |
WO2012058310A2 (en) * | 2010-10-26 | 2012-05-03 | The Johns Hopkins University | A computer-aided-detection (cad) system for personalized disease detection, assessment, and tracking, in medical imaging based on user selectable criteria |
US9058651B2 (en) * | 2011-01-04 | 2015-06-16 | Edda Technology, Inc. | System and methods for functional analysis of soft organ segments in SPECT-CT images |
US20120207268A1 (en) * | 2011-01-04 | 2012-08-16 | Edda Technology (Suzhou) Ltd. | System and methods for functional analysis of soft organ segments in spect-ct images |
US9721338B2 (en) | 2011-01-11 | 2017-08-01 | Rutgers, The State University Of New Jersey | Method and apparatus for segmentation and registration of longitudinal images |
WO2012096882A1 (en) * | 2011-01-11 | 2012-07-19 | Rutgers, The State University Of New Jersey | Method and apparatus for segmentation and registration of longitudinal images |
US9295406B2 (en) * | 2011-05-05 | 2016-03-29 | Siemens Medical Solutions Usa, Inc. | Automatic or semi-automatic whole body MR scanning system |
US20120283546A1 (en) * | 2011-05-05 | 2012-11-08 | Siemens Medical Solutions Usa, Inc. | Automatic or Semi-Automatic Whole Body MR Scanning System |
US9404986B2 (en) | 2011-05-06 | 2016-08-02 | The Regents Of The University Of California | Measuring biological tissue parameters using diffusion magnetic resonance imaging |
US20220122703A1 (en) * | 2011-10-03 | 2022-04-21 | Emerge Clinical Solutions, LLC | System and Method for Optimizing Nuclear Imaging Appropriateness Decisions |
US11183280B2 (en) * | 2011-10-03 | 2021-11-23 | Emerge Clinical Solutions, LLC | System and method for optimizing nuclear imaging appropriateness decisions |
US9342885B2 (en) | 2012-07-30 | 2016-05-17 | General Electric Company | Method of generating a multi-modality anatomical atlas |
US9129372B2 (en) | 2012-07-30 | 2015-09-08 | General Electric Company | Methods and systems for determining a transformation function to automatically register different modality medical images |
US9082193B2 (en) * | 2013-04-19 | 2015-07-14 | Siemens Medical Solutions Usa, Inc. | Shape-based image segmentation |
US20160143627A1 (en) * | 2013-06-28 | 2016-05-26 | Koninklijke Philips N.V. | Ultrasound acquisition feedback guidance to a target view |
CN105451663A (en) * | 2013-06-28 | 2016-03-30 | 皇家飞利浦有限公司 | Ultrasound acquisition feedback guidance to a target view |
RU2683720C2 (en) * | 2013-06-28 | 2019-04-01 | Конинклейке Филипс Н.В. | Ultrasound acquisition feedback guidance to target view |
US10702248B2 (en) * | 2013-06-28 | 2020-07-07 | Koninklijke Philips N.V. | Ultrasound acquisition feedback guidance to a target view |
US9741131B2 (en) | 2013-07-17 | 2017-08-22 | Siemens Medical Solutions Usa, Inc. | Anatomy aware articulated registration for image segmentation |
US20150363963A1 (en) * | 2014-06-12 | 2015-12-17 | Siemens Medical Solutions Usa, Inc. | Visualization With Anatomical Intelligence |
US10460508B2 (en) * | 2014-06-12 | 2019-10-29 | Siemens Healthcare Gmbh | Visualization with anatomical intelligence |
US20170296032A1 (en) * | 2015-03-06 | 2017-10-19 | Fujifilm Corporation | Branching structure determination apparatus, method, and program |
US10357218B2 (en) * | 2016-06-30 | 2019-07-23 | Shanghai United Imaging Healthcare Co., Ltd. | Methods and systems for extracting blood vessel |
US11344273B2 (en) * | 2016-06-30 | 2022-05-31 | Shanghai United Imaging Healthcare Co., Ltd. | Methods and systems for extracting blood vessel |
US10410384B2 (en) * | 2016-08-31 | 2019-09-10 | International Business Machines Corporation | Anatomy segmentation through low-resolution multi-atlas label fusion and corrective learning |
US10614599B2 (en) * | 2016-08-31 | 2020-04-07 | International Business Machines Corporation | Anatomy segmentation through low-resolution multi-atlas label fusion and corrective learning |
US10304220B2 (en) * | 2016-08-31 | 2019-05-28 | International Business Machines Corporation | Anatomy segmentation through low-resolution multi-atlas label fusion and corrective learning |
US20180061091A1 (en) * | 2016-08-31 | 2018-03-01 | International Business Machines Corporation | Anatomy segmentation through low-resolution multi-atlas label fusion and corrective learning |
CN111602173A (en) * | 2017-10-23 | 2020-08-28 | 布莱恩欧米克斯有限公司 | Tomographic data analysis |
WO2019132067A1 (en) * | 2017-12-28 | 2019-07-04 | (재)대구포교성베네딕도수녀회 | Medical information providing system |
JP2019149093A (en) * | 2018-02-28 | 2019-09-05 | 富士フイルム株式会社 | Diagnostic assistance system, diagnostic assistance method, and program |
US10957442B2 (en) * | 2018-12-31 | 2021-03-23 | GE Precision Healthcare, LLC | Facilitating artificial intelligence integration into systems using a distributed learning platform |
US20200211692A1 (en) * | 2018-12-31 | 2020-07-02 | GE Precision Healthcare, LLC | Facilitating artificial intelligence integration into systems using a distributed learning platform |
US12040075B2 (en) | 2018-12-31 | 2024-07-16 | GE Precision Healthcare, LLC | Facilitating artificial intelligence integration into systems using a distributed learning platform |
Also Published As
Publication number | Publication date |
---|---|
WO2007037848A2 (en) | 2007-04-05 |
US8588495B2 (en) | 2013-11-19 |
US20110142320A1 (en) | 2011-06-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8588495B2 (en) | Systems and methods for computer aided diagnosis and decision support in whole-body imaging | |
US7876938B2 (en) | System and method for whole body landmark detection, segmentation and change quantification in digital images | |
Bi et al. | Automatic detection and classification of regions of FDG uptake in whole-body PET-CT lymphoma studies | |
EP2126609B1 (en) | Tools for aiding in the diagnosis of neurodegenerative diseases | |
US7672491B2 (en) | Systems and methods providing automated decision support and medical imaging | |
EP1851722B1 (en) | Image processing device and method | |
JP5954769B2 (en) | Medical image processing apparatus, medical image processing method, and abnormality detection program | |
Kang et al. | Heart chambers and whole heart segmentation techniques | |
US7653263B2 (en) | Method and system for volumetric comparative image analysis and diagnosis | |
US11443433B2 (en) | Quantification and staging of body-wide tissue composition and of abnormal states on medical images via automatic anatomy recognition | |
US8170347B2 (en) | ROI-based assessment of abnormality using transformation invariant features | |
Liu | Symmetry and asymmetry analysis and its implications to computer-aided diagnosis: A review of the literature | |
US20080021301A1 (en) | Methods and Apparatus for Volume Computer Assisted Reading Management and Review | |
Peña-Solórzano et al. | Findings from machine learning in clinical medical imaging applications–Lessons for translation to the forensic setting | |
Ben-Cohen et al. | Liver lesion detection in CT using deep learning techniques | |
Giaccone et al. | PET images atlas-based segmentation performed in native and in template space: a radiomics repeatability study in mouse models | |
Dubey et al. | The brain MR image segmentation techniques and use of diagnostic packages | |
Dhalia Sweetlin et al. | Patient-Specific Model Based Segmentation of Lung Computed Tomographic Images. | |
Liu et al. | Propagation graph fusion for multi-modal medical content-based retrieval | |
Walluscheck et al. | MR-CT multi-atlas registration guided by fully automated brain structure segmentation with CNNs | |
Wakchaure et al. | The Detection and Visualization of Brain Tumors on T2-Weighted MRI Images Using Multiparameter Feature Blocks | |
Vinutha et al. | A comprehensive survey on tools for effective Alzheimer’s disease detection | |
Exarchos et al. | Handbook of research on advanced techniques in diagnostic imaging and biomedical applications | |
Malik | Evaluation of automated organ segmentation for total-body PET-CT | |
Hsu et al. | Multi-Modal Fusion in Thermal Imaging and MRI for Early Cancer Detection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: SIEMENS MEDICAL SOLUTIONS USA, INC., PENNSYLVANIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ZHOU, XIANG;KRISHNAN, ARUN;GUPTA, ALOK;REEL/FRAME:018426/0064;SIGNING DATES FROM 20061019 TO 20061020 |
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |