CN119169179B - Three-dimensional reconstruction method, device, image processing equipment and endoscope system - Google Patents


Info

Publication number
CN119169179B
CN119169179B (application CN202311711391.3A)
Authority
CN
China
Prior art keywords: target, dimensional reconstruction, physiological, target physiological, image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311711391.3A
Other languages
Chinese (zh)
Other versions
CN119169179A (en)
Inventor
曾好
张贻彤
邓君坪
Current Assignee
Changzhou Lianying Zhirong Medical Technology Co., Ltd.
Original Assignee
Changzhou Lianying Zhirong Medical Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Changzhou Lianying Zhirong Medical Technology Co., Ltd.
Priority to CN202311711391.3A
Publication of CN119169179A
Application granted
Publication of CN119169179B
Legal status: Active
Anticipated expiration

Classifications

    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00 — Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; illuminating arrangements therefor
    • A61B 1/00002 — Operational features of endoscopes
    • A61B 1/00004 — Operational features of endoscopes characterised by electronic signal processing
    • A61B 1/00009 — Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
    • A61B 1/00043 — Operational features of endoscopes provided with output arrangements
    • A61B 1/00045 — Display arrangement
    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 7/00 — Image analysis
    • G06T 7/50 — Depth or shape recovery
    • G06T 7/55 — Depth or shape recovery from multiple images
    • G06T 7/586 — Depth or shape recovery from multiple images from multiple light sources, e.g. photometric stereo
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 — Arrangements for image or video recognition or understanding
    • G06V 10/70 — Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/762 — Arrangements using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 — Image acquisition modality
    • G06T 2207/10028 — Range image; depth image; 3D point clouds


Abstract

The application relates to the technical field of medical treatment and provides a three-dimensional reconstruction method and device, image processing equipment, and an endoscope system. The three-dimensional reconstruction method comprises: acquiring a plurality of target images captured of a target physiological region containing a lesion while the region is irradiated with light of a plurality of different wavebands, where the penetration depths of the different wavebands into the target physiological region are not all the same and at least one target image is captured under each waveband; and performing three-dimensional reconstruction of the target physiological region from the acquired target images. Embodiments of the application can reflect the contour of lesion tissue more accurately.

Description

Three-dimensional reconstruction method, three-dimensional reconstruction device, image processing apparatus, and endoscope system
Technical Field
The application belongs to the technical field of medical treatment, and particularly relates to a three-dimensional reconstruction method, a three-dimensional reconstruction device, image processing equipment and an endoscope system.
Background
In the field of lesion screening, some doctors extract lesion tissue by biopsy to determine the infiltration depth and type of cancer; this approach is difficult to perform, costly, and may stimulate lesion activity. Therefore, when performing an endoscopic submucosal dissection (ESD), doctors at present usually determine the operating range by injecting a staining agent, marking with an electrotome, or similar means, and then carry out the dissection. A staining agent injected during the operation, however, can only show the planar extent of the lesion; it cannot reflect the lesion's depth, which easily compromises the accuracy and efficiency of the operation.
Therefore, a way to reflect the contour of lesion tissue more accurately is needed.
Disclosure of Invention
Embodiments of the application provide a three-dimensional reconstruction method and device, image processing equipment, and an endoscope system, which can reflect the contour of lesion tissue more accurately.
A first aspect of the embodiments of the application provides a three-dimensional reconstruction method, comprising: acquiring a plurality of target images captured of a target physiological region containing a lesion while the region is irradiated with light of a plurality of different wavebands, where the penetration depths of the different wavebands into the target physiological region are not all the same and at least one target image is captured under each waveband; and performing three-dimensional reconstruction of the target physiological region from the acquired target images.
In some embodiments of the first aspect, performing the three-dimensional reconstruction of the target physiological region from the acquired target images comprises, after acquiring feedback signals of the target physiological region in response to the optical signals of the different wavebands, determining, from the acquired target images and the feedback signals corresponding to the respective wavebands, the range occupied by the lesion at each penetration depth of the target physiological region, so as to reconstruct the lesion in three dimensions from that range.
In some embodiments of the first aspect, determining the range of the lesion at each penetration depth of the target physiological region from the acquired target images comprises: determining, from the acquired target images, the physiological tissue to which each position belongs at each penetration depth of the target physiological region; determining the distribution of blood vessels at each penetration depth from the physiological tissue to which each position belongs; and determining the range of the lesion at each penetration depth from the distribution of blood vessels.
In some embodiments of the first aspect, the distribution of blood vessels comprises at least one of the distance between blood vessels and the number of blood vessels per unit area.
In some embodiments of the first aspect, the three-dimensional reconstruction of the target physiological region from the acquired target images comprises: determining point cloud data at each penetration depth of the target physiological region from the acquired target images; clustering the point cloud data by physiological tissue; and reconstructing the target physiological region in three dimensions from the clustering result.
In some embodiments of the first aspect, the point cloud data comprise a characteristic value for the point cloud at each position in the target physiological region. The characteristic value is related to a signal intensity, namely the intensity with which the physiological tissue at the corresponding position reflects the light.
In some embodiments of the first aspect, the three-dimensional reconstruction method further comprises determining three-dimensional size data of the lesion, including its length, width, and height, from the result of the three-dimensional reconstruction, and transmitting the three-dimensional size data to a display device so that they are superimposed on a two-dimensional image displayed by the display device in real time.
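As a rough, non-authoritative sketch of how the length, width, and height data mentioned above could be derived, the axis-aligned extent of a reconstructed lesion point cloud can be measured directly; the function name, coordinate convention, and millimetre units here are illustrative assumptions, not the patented method:

```python
def lesion_size_mm(points):
    """Axis-aligned length, width, and height (mm) of a lesion point cloud.

    `points` is a list of (x, y, z) tuples produced by the reconstruction;
    the name, units, and data layout are purely illustrative.
    """
    xs, ys, zs = zip(*points)
    return (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))

# Example: sample points spanning a 10 x 6 x 2 mm lesion
cloud = [(0, 0, 0), (10, 0, 0), (0, 6, 0), (0, 0, 2), (10, 6, 2), (5, 3, 1)]
print(lesion_size_mm(cloud))  # (10, 6, 2)
```

A real system would likely use an oriented bounding box or mesh-based measurement rather than this axis-aligned extent, but the principle of reducing the reconstruction to three size values for overlay display is the same.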
In some embodiments of the first aspect, the two-dimensional image or the result of the three-dimensional reconstruction is used for surgical planning, where the surgical planning at least includes determining the cutting range of the lesion.
In some embodiments of the first aspect, the three-dimensional reconstruction method further comprises: determining the range of a target blood vessel at each penetration depth of the target physiological region from the acquired target images, where the target blood vessel is a blood vessel whose diameter exceeds a preset diameter or a blood vessel of a preset type; and generating, from the range of the target blood vessel and the range of the lesion, prompt information indicating the position of the target blood vessel relative to the lesion.
In some embodiments of the first aspect, the lesion includes cancerous cells, and before the three-dimensional reconstruction of the target physiological region the method further comprises: acquiring the target image corresponding to the longest-wavelength light among the plurality of different wavebands, the longest-wavelength light corresponding to the deepest penetration depth into the target physiological region; performing cancer-cell detection at the deepest penetration depth from that target image; and, if cancerous cells are detected at the deepest penetration depth, stopping the three-dimensional reconstruction and issuing a prompt on the degree of cancerization.
In some embodiments of the first aspect, the longest wavelength among the plurality of different wavebands does not exceed 950 nm.
In some embodiments of the first aspect, acquiring the plurality of target images captured of the target physiological region containing the lesion comprises acquiring target images captured while the region is irradiated sequentially with the light of the different wavebands. Alternatively, the light irradiating the target physiological region comprises the plurality of different wavebands together, and the three-dimensional reconstruction comprises decomposing the target images into images to be processed, one corresponding to each waveband, and reconstructing the target physiological region in three dimensions from the images to be processed.
A second aspect of the embodiments of the application provides a three-dimensional reconstruction device comprising an image acquisition unit and a three-dimensional reconstruction unit. The image acquisition unit is configured to acquire a plurality of target images captured of a target physiological region containing a lesion while the region is irradiated with light of a plurality of different wavebands, where the penetration depths of the different wavebands into the target physiological region are not all the same and at least one target image is captured under each waveband. The three-dimensional reconstruction unit is configured to reconstruct the target physiological region in three dimensions from the acquired target images.
A third aspect of the embodiments of the present application provides an image processing apparatus, including a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the three-dimensional reconstruction method described above when executing the computer program.
A fourth aspect of the embodiments of the present application provides an image processing system, including an image processing apparatus and a camera, the camera being configured to capture an image of a target, the image processing apparatus being configured to perform the steps of the three-dimensional reconstruction method described above.
A fifth aspect of the embodiments of the present application provides an endoscope system, including a light source, a scope, an image processing apparatus, and a display apparatus, where the image processing apparatus is configured to perform the steps of the three-dimensional reconstruction method described above.
A sixth aspect of the embodiments of the present application provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the three-dimensional reconstruction method described above.
A seventh aspect of the embodiments of the present application provides a computer program product for causing an image processing apparatus to perform the steps of the three-dimensional reconstruction method described above when the computer program product is run on the image processing apparatus.
In embodiments of the application, while a target physiological region containing a lesion is irradiated with light of a plurality of different wavebands, a plurality of target images captured of the region are acquired, and the region is reconstructed in three dimensions from them. Because the penetration depths of the different wavebands into the target physiological region are not all the same, and at least one target image is captured under each waveband, the state of the region can be analysed at different penetration depths from the target images, better reflecting the contour of the lesion at those depths. With the three-dimensional reconstruction provided by the application, a doctor can excise the lesion accurately based on its contour at different penetration depths, which helps improve the accuracy and efficiency of the operation.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed for the embodiments or for the description of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of an image processing system according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an endoscope system according to an embodiment of the present application;
FIG. 3 is a schematic flowchart of a three-dimensional reconstruction method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of light transmitted through the wall of the digestive tract according to an embodiment of the present application;
FIG. 5 is a schematic flowchart of a specific implementation of determining the lesion range according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a three-dimensional reconstruction device according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application. All other embodiments, which can be made by a person skilled in the art without any inventive effort, are intended to be protected by the present application based on the embodiments of the present application.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
In the description of the present specification and the appended claims, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
In the field of lesion screening, some doctors extract lesion tissue by biopsy to determine the infiltration depth and type of cancer; this approach is difficult to perform, costly, and may stimulate lesion activity. Therefore, at present, a doctor performing endoscopic submucosal dissection usually determines the operating range by injecting a staining agent, marking with an electrotome, or similar means, and then carries out the dissection. A staining agent injected during the operation, however, can only show the planar extent of the lesion; it cannot reflect the lesion's depth, which easily compromises the accuracy and efficiency of the operation.
In view of this, the application provides a three-dimensional reconstruction method that reconstructs a physiological region in three dimensions from images obtained with light of different penetrating abilities, so as to reproduce the contour of a lesion within the region at different depths. This reflects the contour of lesion tissue more accurately and thereby helps doctors improve the accuracy and efficiency of surgery.
In order to illustrate the technical scheme of the application, the following description is made by specific examples.
Referring to fig. 1, fig. 1 illustrates an image processing system provided by the present application. The image processing system may include a camera and an image processing device.
Wherein the camera may be used to capture an image of the target. In an embodiment of the present application, the camera may be an optical camera, a video camera, or other types of cameras, to which the present application is not limited.
The target image is an image of the target physiological region captured by the camera. The target physiological region may be any physiological region of any human tissue. In embodiments of the application, a target image specifically refers to an image captured of a target physiological region containing a lesion while the region is irradiated with light of a plurality of different wavebands. The penetration depths of the different wavebands into the target physiological region are not completely the same, and at least one target image is captured under each waveband.
The image processing device can acquire a target image shot by the camera and reconstruct the target physiological area in three dimensions.
In some embodiments of the present application, the image processing system may further include a light source that can emit light of a plurality of different wavebands toward the target physiological region, so that target images can be captured under each waveband. It should be noted that the light source may emit the different wavebands sequentially or simultaneously. When several wavebands are emitted at the same time, they form mixed light, such as white light. The application is not limited in this regard.
In some scenarios, the image processing system described above may specifically be an endoscope system. As shown in FIG. 2, the endoscope system may include an image processing device, a light source, a display device, and a scope. The camera may be mounted on the scope body, enter the human body with it, and capture the target images, which may be displayed on the display device in real time.
In some embodiments, a control component may also be disposed on the mirror body, and the control component may be used to control the camera to take images.
Of course, fig. 1 and fig. 2 are merely schematic illustrations of an image processing system and an endoscope system, and in practical applications, the image processing system and the endoscope system may further include more or fewer devices, for example, may further include a bus, a power supply device, and the like, which is not a limitation of the present application.
Referring to fig. 3, fig. 3 shows a schematic implementation flow chart of a three-dimensional reconstruction method according to an embodiment of the present application, where the method may be applied to an image processing apparatus, and may be applicable to a situation where a focal tissue contour needs to be reflected more accurately.
The image processing device may be an intelligent device such as a computer or a mobile phone, or a device dedicated to image processing in a medical system, for example the image processing device in an endoscope system; the application is not limited in this regard.
Specifically, the three-dimensional reconstruction method may include the following steps S301 to S302.
Step S301: acquiring a plurality of target images captured of a target physiological region containing a lesion while the region is irradiated with light of a plurality of different wavebands.
The target physiological region may be any physiological region of any human tissue that includes a lesion. In some embodiments of the present application, the approximate area where the lesion is located may first be determined by a lesion screening means such as an endoscope, and that area is taken as the target physiological region to be modelled in three dimensions, so that the lesion excision can then be carried out according to the three-dimensional model of the region.
In order to obtain a rough outline of the target physiological region, in embodiments of the present application the region containing the lesion may be irradiated with light of a plurality of different wavebands. The penetration depths of the different wavebands into the target physiological region are not all the same: the longer the wavelength, the deeper the penetration.
For ease of understanding, please refer to FIG. 4, which shows schematically how light of different wavebands is transmitted into the target physiological region. The wavelengths of lights A, B, and C increase in sequence. In FIG. 4, the curve labelled 41 may represent the skin or the wall of the digestive tract, through which lights A, B, and C penetrate to depths a', b', and c' respectively, with a' the shallowest and c' the deepest. The physiological tissue at penetration depth a' absorbs and reflects light A, and the reflected signal, once captured, yields the target image under light A. Similarly, the tissue at depth b' absorbs and reflects light B to yield the target image under light B, and the tissue at depth c' absorbs and reflects light C to yield the target image under light C.
In practice, the light of the different wavebands may be narrowband light, for example narrowband light in the 400-480 nm, 540-580 nm, and 600-900 nm bands. The 400-480 nm band reflects information from the superficial mucosa (penetration depth about 0-0.5 mm), the 540-580 nm band reflects information from the venous vessels in the middle layer of the mucosa (about 0.5-1.5 mm), and the 600-900 nm band reflects information from the deep mucosa down to the submucosa (about 1.5-3 mm).
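The waveband-to-depth correspondence described above can be sketched as a simple lookup table. The band boundaries and approximate depth ranges are taken from the text; the function and constant names are hypothetical:

```python
# Map a narrowband centre wavelength (nm) to the mucosal layer and
# approximate penetration depth (mm) described in the text.
# Band boundaries come from the document; the lookup itself is a sketch.
BANDS = [
    ((400, 480), "superficial mucosa", (0.0, 0.5)),
    ((540, 580), "mid-mucosa (venous vessels)", (0.5, 1.5)),
    ((600, 900), "deep mucosa to submucosa", (1.5, 3.0)),
]

def layer_for_wavelength(nm):
    """Return (layer name, (min_depth_mm, max_depth_mm)) or None."""
    for (lo, hi), layer, depth_mm in BANDS:
        if lo <= nm <= hi:
            return layer, depth_mm
    return None

print(layer_for_wavelength(560))  # mid-mucosa band, 0.5-1.5 mm
```

In a real endoscope system this mapping would be calibrated per device and per tissue type rather than hard-coded.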
As an embodiment of the present application, when the lesion is cancerous, the longest wavelength among the different wavebands may be kept below 950 nm, because cancerous cells that have infiltrated beyond a certain depth must be treated surgically and are no longer suited to early-cancer treatment; the wavebands used (i.e., the detection depths) are thereby matched to early cancer treatment. In other words, if a lesion is still present at the penetration depth corresponding to a wavelength beyond 950 nm, a surgical operation is required.
In embodiments of the present application, at least one target image is captured under each waveband. As the description of FIG. 4 shows, the target image under a single waveband reflects the state of the physiological tissue at the corresponding penetration depth. Target images under different wavebands therefore reflect the tissue at different penetration depths, and hence the outline of an object (such as a lesion or a blood vessel) at each depth.
Step S302, performing three-dimensional reconstruction on the target physiological area according to the acquired target image.
In embodiments of the present application, the target image under a single waveband reflects two-dimensional planar information at a single penetration depth. From the target images under the different wavebands, two-dimensional planar information at different depths of the target physiological region can be obtained, so the image processing device can reconstruct the region in three dimensions from the acquired target images.
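As an illustrative sketch (not the patented method itself), the per-band two-dimensional images can be ordered by wavelength, since longer wavelengths penetrate deeper, and stacked into a coarse depth-ordered volume; the wavelengths, intensity grids, and names below are invented for demonstration:

```python
# Sketch: stack per-band 2D images (one per penetration depth) into a
# simple layered volume, ordered from shallow to deep.
def stack_volume(images_by_wavelength):
    """images_by_wavelength: {wavelength_nm: 2D list of intensities}.
    Longer wavelengths penetrate deeper, so sorting by wavelength
    orders the layers from shallow to deep."""
    return [img for _, img in sorted(images_by_wavelength.items())]

shallow = [[0.9, 0.8], [0.7, 0.9]]   # e.g. 450 nm image
middle  = [[0.5, 0.4], [0.6, 0.5]]   # e.g. 560 nm image
deep    = [[0.2, 0.3], [0.1, 0.2]]   # e.g. 800 nm image
volume = stack_volume({800: deep, 450: shallow, 560: middle})
print(volume[0] is shallow, volume[2] is deep)  # True True
```

A full reconstruction would of course register the images spatially and interpolate between depth layers rather than simply stacking them.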
It should be understood that the three-dimensional reconstruction may cover the entire target physiological region or only certain specific physiological tissues within it, for example only the lesion and the blood vessels; the application is not limited in this regard.
The three-dimensional model obtained by the reconstruction can be applied in different scenarios, for example surgical planning, lesion-degree analysis, and lesion display; the application is not limited in this regard.
In embodiments of the application, while a target physiological region containing a lesion is irradiated with light of a plurality of different wavebands, a plurality of target images captured of the region are acquired, and the region is reconstructed in three dimensions from them. Because the penetration depths of the different wavebands into the target physiological region are not all the same, and at least one target image is captured under each waveband, the state of the region can be analysed at different penetration depths from the target images, better reflecting the contour of the lesion at those depths. With the three-dimensional reconstruction provided by the application, a doctor can excise the lesion accurately based on its contour at different penetration depths, which helps improve the accuracy and efficiency of the operation.
In some embodiments of the present application, step S302 may include: determining point cloud data at each penetration depth of the target physiological region from the acquired target images, clustering the point cloud data into different physiological tissues, and reconstructing the target physiological region in three dimensions from the clustering result.
Specifically, the image processing apparatus can determine the point cloud data at the corresponding penetration depth from the target image of a single-band light. The point cloud data may include a characteristic value for the point cloud at each location in the target physiological region. The characteristic value may be a gray value obtained after image processing and is related to the signal intensity, i.e. the intensity with which the physiological tissue at the corresponding location reflects the light. Because the components of the same physiological tissue are the same or similar, its reflected-light intensity is also the same or similar; when the point cloud data are clustered, points belonging to the same physiological tissue can be grouped into a set according to their characteristic values and positions, yielding a clustering result. Triangular patches (triangle mesh) may then be generated from the clustering result to reconstruct the physiological tissues of the target physiological region in three dimensions.
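The clustering step described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the greedy grouping rule, the tolerances `d_tol` and `f_tol`, and the function name are all introduced for the example.

```python
import numpy as np

def cluster_point_cloud(points, features, d_tol=2.0, f_tol=10.0):
    """Greedily group points: a point joins an existing cluster when its
    characteristic (gray) value is close to the cluster's mean value and
    it lies within d_tol of some cluster member; otherwise it seeds a new
    cluster. Returns a list of index lists, one per physiological tissue."""
    clusters = []
    for i in range(len(points)):
        for c in clusters:
            feat_ok = abs(features[i] - np.mean(features[c])) <= f_tol
            dist_ok = min(np.linalg.norm(points[i] - points[j]) for j in c) <= d_tol
            if feat_ok and dist_ok:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters
```

Each resulting cluster could then be meshed independently (e.g. with a surface-reconstruction routine) to obtain the per-tissue triangle meshes mentioned above.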
To facilitate surgical planning or lesion analysis, in step S302, the image processing device may perform three-dimensional reconstruction of lesions in the target physiological region.
Specifically, in some embodiments of the present application, step S302 may include determining a range of the lesion at each penetration depth of the target physiological region according to the acquired target image, so as to reconstruct the lesion in three dimensions according to the range of the lesion.
At a single penetration depth, the range of the focus at that depth can be determined from the target image captured under the light corresponding to that depth. Then, from the target images corresponding to the different wave bands, the range of the focus can be determined at each penetration depth of the target physiological region, giving the two-dimensional contour of the focus at each depth, and the focus can be reconstructed in three dimensions from these ranges.
In some embodiments of the application, the image processing device may determine the range of the lesion based on the distribution of the tissue.
Specifically, referring to fig. 5, determining the range of the lesion at each penetration depth of the target physiological region according to the obtained target image may include steps S501 to S503.
Step S501, determining the physiological tissue of each position at each penetration depth of the target physiological region according to the acquired target image.
Specifically, the pixel information of the pixel points in the target image may reflect the condition of the physiological tissue at the corresponding position, for example, different pixel values may correspond to different physiological tissues. Based on the target image under the light of a single wave band, the physiological tissue of each position can be determined on a two-dimensional plane of the corresponding penetration depth.
Wherein the physiological tissue can include but is not limited to mucous membranes and blood vessels.
Step S502, determining the distribution condition of blood vessels according to the physiological tissues of each position at each penetration depth of the target physiological region.
In an embodiment of the application, for a single penetration depth, the distribution of the blood vessels over the penetration depth can be determined from the tissue to which the individual locations belong.
The distribution of the blood vessels may include at least one of a distance between the blood vessels, the number of blood vessels in a unit range (or referred to as density), and the like.
Step S503, according to the distribution of the blood vessels, determining the range of the focus on each penetration depth of the target physiological area.
Because lesions such as cancerous cells require a large amount of oxygen, blood vessels within the range of a lesion are denser than around normal cells and tend to be in an adhered state. Therefore, from the distribution of the blood vessels, such as the distance between vessels and the number of vessels per unit range, it can be determined whether the vessels are adhered, and the area where the vessels are adhered is the range of the focus. For example, vascular adhesion may be confirmed when the distance between blood vessels is less than a distance threshold, or when the number of blood vessels per unit range is greater than a number threshold.
In the embodiment of the present application, determining the range of the lesion at each penetration depth of the target physiological region depends mainly on the distribution of the capillaries.
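Steps S501 to S503 culminate in the thresholding decision above, which might be sketched as follows. The grid-cell partition, the `density_thr` and `dist_thr` values and the function name are illustrative assumptions; real thresholds would come from clinical calibration.

```python
import numpy as np

def lesion_mask_from_vessels(vessel_centers, grid_shape, cell=8,
                             density_thr=3, dist_thr=2.0):
    """Flag grid cells as lesion where vessels appear adhered: either the
    vessel count in a cell exceeds density_thr, or some pair of vessels in
    the cell lies closer than dist_thr. Returns a 2-D boolean lesion mask
    for one penetration depth."""
    mask = np.zeros(grid_shape, dtype=bool)
    cells = {}
    for (x, y) in vessel_centers:
        cells.setdefault((int(x // cell), int(y // cell)), []).append((x, y))
    for (cx, cy), pts in cells.items():
        dense = len(pts) > density_thr
        close = any(np.hypot(a[0] - b[0], a[1] - b[1]) < dist_thr
                    for i, a in enumerate(pts) for b in pts[i + 1:])
        if dense or close:
            mask[cx * cell:(cx + 1) * cell, cy * cell:(cy + 1) * cell] = True
    return mask
```

Running this per wave band yields the per-depth lesion range discussed in step S503.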
Once the range of the focus at each penetration depth is obtained, that is, the two-dimensional contour of the focus at each depth, three-dimensional reconstruction can be performed according to these ranges to obtain a three-dimensional model of the focus.
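Reconstructing the focus from its per-depth ranges can be illustrated, under the assumption that each range is available as a binary mask, by stacking the masks into a voxel volume and measuring its bounding box; the meshing step (e.g. via marching cubes) is omitted from this sketch.

```python
import numpy as np

def stack_lesion_volume(masks_by_depth):
    """Stack per-depth 2-D lesion masks (ordered shallow to deep) into a
    3-D boolean voxel volume. masks_by_depth: equal-shape 2-D arrays."""
    return np.stack([np.asarray(m, dtype=bool) for m in masks_by_depth], axis=0)

def lesion_extent(volume, voxel_mm=(1.0, 1.0, 1.0)):
    """Height/length/width of the lesion bounding box in millimetres,
    given the physical size of one voxel along (depth, row, col)."""
    idx = np.argwhere(volume)
    span = idx.max(axis=0) - idx.min(axis=0) + 1
    return tuple(span * np.asarray(voxel_mm))
```

A surface mesh could then be extracted from the volume with a standard routine to obtain the three-dimensional model of the focus.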
In order to facilitate the doctor's surgical planning, after obtaining the range of the focus, the image processing device can also reflect, in the two-dimensional image, the ranges of the focus at the different penetration depths.
Specifically, in some embodiments of the present application, the image processing apparatus may determine three-dimensional size data of the lesion based on the result of the three-dimensional reconstruction, and transmit the three-dimensional size data to the display apparatus to superimpose and display the three-dimensional size data on a two-dimensional image displayed in real time by the display apparatus.
The three-dimensional size data may include the length, width and height of the focus. The two-dimensional image may be a white-light image obtained by photographing the target physiological region. By superimposing the three-dimensional size data, which reflects the ranges of the focus at different penetration depths, on the plane presented by the two-dimensional image, the depth profile of the focus can be shown on the two-dimensional image.
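A minimal sketch of superimposing the size data on the two-dimensional image follows; the box-drawing approach, the caption format and the function name are assumptions for illustration, not the display device's actual interface.

```python
import numpy as np

def overlay_size_annotation(image, bbox, size_mm):
    """Draw a 1-px rectangle around the lesion's surface extent on a copy
    of the grayscale white-light image, and build a caption carrying the
    3-D size. bbox = (row0, col0, row1, col1); size_mm = (L, W, H)."""
    out = image.copy()
    r0, c0, r1, c1 = bbox
    out[r0, c0:c1 + 1] = out[r1, c0:c1 + 1] = 255   # top/bottom edges
    out[r0:r1 + 1, c0] = out[r0:r1 + 1, c1] = 255   # left/right edges
    caption = "lesion %.1f x %.1f x %.1f mm" % size_mm
    return out, caption
```

In a real system the annotated frame and caption would be refreshed on every displayed video frame.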
Accordingly, the two-dimensional image or the result of the three-dimensional reconstruction can be used for surgical planning, where the surgical planning at least includes determining the cutting range of the focus. In other words, when viewing the white-light image, the doctor can take into account the lower-layer regions that extend beyond the surface range when finally confirming the cutting range, so that the cutting range covers the range of the focus at every penetration depth. Of course, the endoscope system may also determine the cutting range of the focus from the two-dimensional image or the three-dimensional reconstruction result, and mark the cutting range on the two-dimensional image for the doctor's reference.
Considering that the surgical procedure needs to avoid larger blood vessels to avoid major bleeding risk, in some embodiments of the present application, the image processing apparatus may further determine, according to the obtained target image, the range of the target blood vessel at each penetration depth of the target physiological region, respectively. Then, according to the range of the target blood vessel and the range of the focus, generating prompt information.
The target blood vessel may specifically refer to a blood vessel having a blood vessel diameter larger than a preset diameter. Or the target vessel is a vessel of a preset type, such as an arterial vessel.
In embodiments of the present application, the prompt information may be used to indicate the position of the target vessel relative to the focus. The prompt information may be marked in the two-dimensional image, or output as text or voice; the present application is not limited in this respect. Based on the prompt information, the doctor can avoid the target blood vessel during the operation, reducing the risk of massive hemorrhage.
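Generating the prompt from the two ranges might look like the following sketch, which compares bounding-box centers as a simple stand-in for "position relative to the focus"; the bounding-box representation and the wording are assumptions.

```python
def vessel_prompt(vessel_bbox, lesion_bbox):
    """Describe where the target vessel lies relative to the lesion by
    comparing bounding-box centers. Boxes are (row0, col0, row1, col1)
    in image coordinates (row grows downward)."""
    vc = [(a + b) / 2 for a, b in zip(vessel_bbox[:2], vessel_bbox[2:])]
    lc = [(a + b) / 2 for a, b in zip(lesion_bbox[:2], lesion_bbox[2:])]
    vert = "above" if vc[0] < lc[0] else "below"
    horz = "left of" if vc[1] < lc[1] else "right of"
    return "target vessel %s and %s the lesion" % (vert, horz)
```

The returned string could be rendered on the two-dimensional image or passed to a text-to-speech output, per the embodiments above.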
As described above, once cancerous cells have penetrated beyond a certain depth they need to be treated by surgical operation. Therefore, in some embodiments of the present application, before the three-dimensional reconstruction of the target physiological region, the three-dimensional reconstruction method may further include: acquiring the target image corresponding to the light of the longest wave band; performing cell-canceration detection at the deepest penetration depth according to that target image; and, if canceration is detected at the deepest penetration depth, stopping the three-dimensional reconstruction and prompting the degree of canceration.
The light of the longest wave band is the light with the longest wavelength among the plurality of different wave bands, and it corresponds to the deepest penetration depth of the target physiological region.
The method for detecting cell canceration can refer to the method for determining the range of the focus, which is not repeated here. The canceration-degree prompt indicates that the current lesion is not an early-stage cancer.
That is, if cell canceration is detected at the deepest penetration depth, the cancerous cells have penetrated to a depth beyond early-stage cancer and treatment by surgical operation is required. Considering that the three-dimensional reconstruction mainly serves the treatment of early-stage cancer, the image processing apparatus may stop the reconstruction and issue the canceration-degree prompt. In this way, the processing load of the image processing apparatus is reduced while the advanced-cancer prompt is delivered quickly, so that the patient can receive surgical treatment as early as possible.
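The early-stop logic can be expressed as a small gate in front of the reconstruction. The dictionary layout and the `detect_cancer`/`reconstruct` callbacks are placeholders standing in for the detection and reconstruction steps described above, not APIs from the application.

```python
def reconstruct_with_gate(band_images, detect_cancer, reconstruct):
    """Before full reconstruction, inspect the longest-band image (the
    deepest penetration). If canceration is present there, skip the
    reconstruction and return an advanced-cancer prompt instead.
    band_images: list of {"wavelength_nm": int, "image": ...} records."""
    deepest = max(band_images, key=lambda b: b["wavelength_nm"])
    if detect_cancer(deepest["image"]):
        return {"model": None,
                "prompt": "canceration at deepest depth: not early-stage"}
    return {"model": reconstruct(band_images), "prompt": None}
```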
In addition, in step S301 and step S302, when the light source adopts different illumination modes, the present application can provide different processing modes.
Specifically, in some embodiments of the present application, step S301 may include acquiring a plurality of target images obtained by photographing a target physiological region while sequentially irradiating the target physiological region with a plurality of different wavelength bands of light.
The application does not limit the irradiation order of the light.
In some embodiments, the irradiation may proceed in order of wavelength from low to high. For example, the light source may emit narrow-band light from the 400nm band to the 900nm band, increasing the wavelength by 10nm every 50 milliseconds. In this way, if no cell canceration is detected at a given penetration depth, the acquisition of target images at deeper penetration depths, or the image processing of those deeper images, can be stopped, improving the efficiency of the three-dimensional reconstruction.
In other embodiments, the irradiation may proceed in order of wavelength from high to low. For example, the light source may emit narrow-band light from the 900nm band to the 400nm band, decreasing the wavelength by 10nm every 50 milliseconds. In this way, if cell canceration is detected at the deepest penetration depth, the three-dimensional reconstruction can be stopped and the canceration degree prompted.
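The two sweep orders can be produced by one helper; the 400-900nm range and 10nm step follow the examples in the text, while the function signature is an assumption (the 50 ms dwell per band would be handled by the light-source driver, not here).

```python
def sweep_bands(start_nm=400, stop_nm=900, step_nm=10, ascending=True):
    """Return the narrow-band center wavelengths in illumination order:
    low-to-high when ascending=True (stop early once no canceration is
    found), high-to-low otherwise (gate on the deepest depth first)."""
    bands = list(range(start_nm, stop_nm + 1, step_nm))
    return bands if ascending else bands[::-1]
```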
When the light source is controlled to irradiate the different wave bands in sequence, each acquired target image corresponds to one wave band, and the three-dimensional reconstruction can then be completed in the manner described above for reconstructing from the target images.
In other embodiments of the present application, the light illuminating the target physiological region may contain light of a plurality of different wave bands at once; for example, the light source may output mixed light covering 400nm to 900nm.
At this time, in step S302, the image processing device may perform image reconstruction on the target image to obtain the images to be processed corresponding to each of the different wave bands, and then reconstruct the target physiological region in three dimensions from the images to be processed.
Specifically, in this case the image information of a specific wave band must be computed from the target image that contains all the wave bands, based on the reflection coefficient L of the camera, the filter transmittance F, the wave-band distribution E of the light source, the reflectance O of the physiological tissue to light of different wave bands, and the response coefficient S of the camera to each wavelength. The formula can be expressed as:
v_i = E_i × F_i × L_i × S_i × O_i;
where i denotes the wavelength of the image to be processed, and v_i is the image to be processed for that wave band.
After the images to be processed corresponding to each wave band are obtained, the target physiological region can be reconstructed in three dimensions from them; the method can refer to the description of three-dimensional reconstruction from the target images and is not repeated here.
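The per-band relation v_i = E_i × F_i × L_i × S_i × O_i can be written directly in code. Note that in practice the unmixing runs the other way, recovering the band images from the mixed capture, so this forward form is only an illustration of the formula.

```python
import numpy as np

def band_image(E_i, F_i, L_i, S_i, O_i):
    """Per-band image v_i = E_i * F_i * L_i * S_i * O_i, where O_i is the
    tissue reflectance map (2-D array) at wavelength i and the remaining
    factors are scalar band coefficients of the light source, filter,
    camera light path and sensor response."""
    return E_i * F_i * L_i * S_i * np.asarray(O_i, dtype=float)
```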
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present application is not limited by the order of acts described, as some steps may occur in other orders in accordance with the application.
Fig. 6 is a schematic structural diagram of a three-dimensional reconstruction device 600 according to an embodiment of the present application, where the three-dimensional reconstruction device 600 is configured on an image processing apparatus.
Specifically, the three-dimensional reconstruction apparatus 600 may include:
An image obtaining unit 601, configured to obtain a plurality of target images captured for a target physiological area including a focus when a plurality of different band lights are used to irradiate the target physiological area, where penetration depths of the different band lights for the target physiological area are not exactly the same, and the number of the target images captured under each band light is greater than or equal to 1;
and the three-dimensional reconstruction unit 602 is configured to perform three-dimensional reconstruction on the target physiological region according to the acquired target image.
In some embodiments of the present application, the three-dimensional reconstruction unit 602 may be specifically configured to determine, according to the obtained target image, a range in which the focus is located at each penetration depth of the target physiological region, so as to perform three-dimensional reconstruction on the focus according to the range in which the focus is located.
In some embodiments of the present application, the three-dimensional reconstruction unit 602 may be specifically configured to determine, according to the obtained target image, a tissue to which each position belongs at each penetration depth of the target physiological region, determine, at each penetration depth of the target physiological region, a distribution of the blood vessels according to the tissue to which each position belongs, and determine, according to the distribution of the blood vessels, a range in which the lesion is located at each penetration depth of the target physiological region.
In some embodiments of the application, the distribution of blood vessels comprises at least one of the distance between the blood vessels, the number of blood vessels in a unit range.
In some embodiments of the present application, the three-dimensional reconstruction unit 602 may be specifically configured to determine, according to the obtained target image, point cloud data at each penetration depth of the target physiological region, perform clustering processing on different physiological tissues on the point cloud data, and perform three-dimensional reconstruction on the target physiological region according to a clustering processing result.
In some embodiments of the present application, the point cloud data may include a characteristic value of the point cloud at each location in the target physiological region, where the characteristic value is related to a signal intensity, and the signal intensity is a signal intensity of reflecting light by the physiological tissue at the corresponding location.
In some embodiments of the present application, the three-dimensional reconstruction apparatus 600 may further include an information processing unit for determining three-dimensional size data of the lesion based on a result of the three-dimensional reconstruction, where the three-dimensional size data includes length, width and height data of the lesion, and transmitting the three-dimensional size data to a display device to be superimposed and displayed in a two-dimensional image displayed in real time by the display device.
In some embodiments of the present application, the above information processing unit may be further specifically configured to determine, according to the obtained target image, a range in which a target blood vessel is located at each penetration depth of the target physiological area, and generate, according to the range in which the target blood vessel is located and the range in which the focus is located, prompt information, where the prompt information is used to prompt an orientation of the target blood vessel relative to the focus.
In some embodiments of the present application, the information processing unit may be further specifically configured to, before the three-dimensional reconstruction of the target physiological area, obtain the target image corresponding to a longest-band light, where the longest-band light is a light with a longest wavelength in the plurality of different-band lights, and the longest-band light corresponds to a deepest penetration depth of the target physiological area, perform cell cancer detection at the deepest penetration depth according to the target image corresponding to the longest-band light, and stop three-dimensional reconstruction if the cell cancer is detected at the deepest penetration depth, and perform a cancer degree prompt.
In some embodiments of the application, the longest wavelength corresponding to the plurality of different wavelength bands of light does not exceed 950nm.
In some embodiments of the present application, the image acquisition unit 601 may be specifically configured to acquire a plurality of target images obtained by capturing the target physiological region while sequentially irradiating the target physiological region with the plurality of different wavelength bands of light.
In some embodiments of the present application, the light irradiated to the target physiological region includes the plurality of different wave band light rays, the three-dimensional reconstruction unit 602 may be specifically configured to reconstruct an image of the target image, obtain an image to be processed corresponding to each wave band light ray of the plurality of different wave band light rays, and reconstruct the target physiological region in three dimensions according to the image to be processed.
It should be noted that, for convenience and brevity, the specific working process of the three-dimensional reconstruction device 600 may refer to the corresponding process of the method described in fig. 1 to 5, which is not described herein again.
Fig. 7 is a schematic diagram of an image processing apparatus 7 according to an embodiment of the present application. In particular, the image processing device 7 may comprise a processor 70, a memory 71 and a computer program 72, such as a three-dimensional reconstruction program, stored in the memory 71 and executable on the processor 70. The processor 70, when executing the computer program 72, implements the steps of the three-dimensional reconstruction method embodiments described above, such as steps S301 to S302 shown in fig. 3. Alternatively, the processor 70, when executing the computer program 72, performs the functions of the modules/units in the above-described device embodiments, such as the functions of the image acquisition unit 601 and the three-dimensional reconstruction unit 602 shown in fig. 6.
The computer program may be divided into one or more modules/units which are stored in the memory 71 and executed by the processor 70 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions for describing the execution of the computer program in the image processing device.
For example, the computer program may be split into an image acquisition unit and a three-dimensional reconstruction unit. The image acquisition unit is used for acquiring a plurality of target images shot on a target physiological area containing a focus under the condition that a plurality of different wave band lights are used for irradiating the target physiological area, wherein the penetration depth of the different wave band lights on the target physiological area is not completely the same, and the number of the target images shot under each wave band light is larger than or equal to 1; and the three-dimensional reconstruction unit is used for carrying out three-dimensional reconstruction on the target physiological region according to the acquired target image.
The image processing device 7 may include, but is not limited to, a processor 70, a memory 71. It will be appreciated by those skilled in the art that fig. 7 is merely an example of an image processing device and is not meant to be limiting, and that more or fewer components than shown may be included, or certain components may be combined, or different components may be included, for example, the image processing device may also include an input-output device, a network access device, a bus, etc.
The processor 70 may be a central processing unit (Central Processing Unit, CPU), or another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 71 may be an internal storage unit of the image processing apparatus, such as a hard disk or a memory of the image processing apparatus. The memory 71 may also be an external storage device of the image processing apparatus, such as a plug-in hard disk provided on the image processing apparatus, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, a flash card (Flash Card), or the like. Further, the memory 71 may include both an internal storage unit and an external storage device of the image processing apparatus. The memory 71 is used to store the computer program and other programs and data required by the image processing apparatus, and may also be used to temporarily store data that has been output or is to be output.
It should be noted that, for convenience and brevity of description, the structure of the image processing apparatus 7 may refer to the specific description of the structure in the method embodiment, which is not repeated herein.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts not detailed or illustrated in one embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/device and method may be implemented in other manners. For example, the apparatus/device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing the relevant hardware through a computer program, which may be stored in a computer readable storage medium; when executed by a processor, the computer program implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content of the computer readable medium may be adjusted as required by legislation and patent practice in each jurisdiction; for example, in certain jurisdictions, in accordance with legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunication signals.
The foregoing embodiments are merely illustrative of the technical solutions of the present application, and not restrictive, and although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those skilled in the art that modifications may still be made to the technical solutions described in the foregoing embodiments or equivalent substitutions of some technical features thereof, and that such modifications or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (13)

1. A three-dimensional reconstruction method, comprising:
acquiring a plurality of target images shot on a target physiological area containing a focus under the condition that the target physiological area is irradiated by utilizing a plurality of different wave band lights, wherein the penetration depth of the different wave band lights on the target physiological area is not completely the same, and the number of the target images shot under each wave band light is greater than or equal to 1;
According to the obtained target image, carrying out three-dimensional reconstruction on the target physiological area;
According to the obtained target image, determining the range of a target blood vessel on each penetration depth of the target physiological region, wherein the target blood vessel is a blood vessel with a blood vessel diameter larger than a preset diameter or is a blood vessel of a preset type;
generating prompt information according to the range of the target blood vessel and the range of the focus, wherein the prompt information is used for prompting the position of the target blood vessel relative to the focus.
2. The three-dimensional reconstruction method according to claim 1, wherein the three-dimensional reconstruction of the target physiological region from the acquired target image comprises:
And determining the range of the focus on each penetration depth of the target physiological region according to the acquired target image, so as to reconstruct the focus in three dimensions according to the range of the focus.
3. The three-dimensional reconstruction method according to claim 2, wherein the determining of the range of the lesion at each penetration depth of the target physiological region according to the acquired target images comprises:
determining the physiological tissue at each location at each penetration depth of the target physiological region according to the acquired target images;
determining the distribution of blood vessels according to the physiological tissue at each location at each penetration depth of the target physiological region; and
determining the range of the lesion at each penetration depth of the target physiological region according to the distribution of the blood vessels.
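The vessel-distribution cue recited in claim 3 can be illustrated with a minimal sketch: flag, at one penetration depth, the locations where the local vessel count exceeds a threshold, since abnormally dense micro-vasculature is one cue for delimiting a lesion. All names, the window size, and the threshold are illustrative assumptions; the claim itself prescribes no implementation.

```python
import numpy as np

def lesion_range_from_vessels(vessel_mask: np.ndarray, win: int = 5,
                              count_thresh: int = 3) -> np.ndarray:
    """Flag locations whose local vessel count meets a threshold.

    vessel_mask: 2-D boolean array for one penetration depth, True where
    a blood vessel was detected.  Returns a boolean mask of the candidate
    lesion range at that depth.
    """
    h, w = vessel_mask.shape
    lesion = np.zeros_like(vessel_mask)
    r = win // 2
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            # dense local vasculature marks this location as lesion candidate
            lesion[y, x] = vessel_mask[y0:y1, x0:x1].sum() >= count_thresh
    return lesion
```

Stacking the per-depth masks then yields the lesion range "at each penetration depth" used for the reconstruction.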
4. The three-dimensional reconstruction method according to claim 3, wherein the distribution of the blood vessels includes at least one of a distance between blood vessels and a number of blood vessels within a unit range.
5. The three-dimensional reconstruction method according to claim 1, wherein the performing of three-dimensional reconstruction of the target physiological region according to the acquired target images comprises:
determining point cloud data of the target physiological region at each penetration depth according to the acquired target images; and
performing clustering of different physiological tissues on the point cloud data, and performing three-dimensional reconstruction of the target physiological region according to the clustering result.
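The clustering step recited in claim 5 (grouping point-cloud characteristic values, such as reflected-signal intensity, into tissue classes) can be sketched with a naive one-dimensional k-means. This is only one possible reading under stated assumptions; the claim names no particular clustering algorithm.

```python
import numpy as np

def cluster_tissues(features: np.ndarray, k: int = 2, iters: int = 20) -> np.ndarray:
    """Naive 1-D k-means over per-point feature values (e.g. reflected
    signal intensity); returns one tissue-cluster label per point."""
    # deterministic initial centers spread over the observed value range
    centers = np.linspace(features.min(), features.max(), k)
    labels = np.zeros(len(features), dtype=int)
    for _ in range(iters):
        # assign each point to its nearest center
        labels = np.argmin(np.abs(features[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean()
    return labels
```

Points sharing a label would then be treated as the same physiological tissue when assembling the three-dimensional model.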
6. The three-dimensional reconstruction method according to claim 5, wherein the point cloud data includes characteristic values of point clouds at respective locations in the target physiological region, the characteristic values being related to signal intensity, and the signal intensity being the intensity of the light reflected by the physiological tissue at the corresponding location.
7. The three-dimensional reconstruction method according to claim 1, further comprising:
determining three-dimensional size data of the lesion based on the result of the three-dimensional reconstruction, the three-dimensional size data including length, width, and height data of the lesion; and
sending the three-dimensional size data to a display device so that it is superimposed in real time on a two-dimensional image shown by the display device.
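The size determination recited in claim 7 admits a very simple reading: take the axis-aligned extent of the reconstructed lesion point cloud. A minimal sketch, assuming the reconstruction result is available as an N×3 array of coordinates in millimetres (an oriented bounding box or mesh measurement would be equally consistent with the claim):

```python
import numpy as np

def lesion_dimensions(points: np.ndarray) -> tuple:
    """Axis-aligned length/width/height of a reconstructed lesion point
    cloud (N x 3 array of x, y, z coordinates, assumed in mm)."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    length, width, height = maxs - mins
    return float(length), float(width), float(height)
```

The resulting triple is what would be sent to the display device for real-time superimposition on the two-dimensional image.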
8. The three-dimensional reconstruction method according to any one of claims 1 to 7, wherein the lesion comprises cell canceration;
before the three-dimensional reconstruction of the target physiological region, the three-dimensional reconstruction method further comprises:
acquiring the target image corresponding to the longest-band light, the longest-band light being the light with the longest wavelength among the plurality of different wavelength bands and corresponding to the deepest penetration depth into the target physiological region;
detecting the cell canceration at the deepest penetration depth according to the target image corresponding to the longest-band light; and
if the cell canceration is detected at the deepest penetration depth, stopping the three-dimensional reconstruction and prompting the degree of canceration.
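The early-termination logic of claim 8 (check the deepest-penetrating band first; if canceration is already visible there, skip the reconstruction and warn) can be sketched as follows. The dictionary keyed by wavelength, the caller-supplied `detect_canceration` predicate, and the returned fields are all hypothetical conveniences, not part of the claim.

```python
def reconstruct_with_canceration_guard(images_by_band: dict, detect_canceration):
    """Guarded reconstruction: inspect the longest-wavelength image
    (deepest penetration) before doing any further work.

    images_by_band maps wavelength in nm -> target image;
    detect_canceration(image) -> bool is supplied by the caller.
    """
    longest = max(images_by_band)  # longest band == deepest penetration depth
    if detect_canceration(images_by_band[longest]):
        # canceration already reaches the deepest imaged layer:
        # stop and prompt instead of reconstructing
        return {"reconstructed": False,
                "prompt": "canceration detected at deepest penetration depth"}
    # ... otherwise proceed with the full multi-band 3-D reconstruction ...
    return {"reconstructed": True, "prompt": None}
```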
9. The three-dimensional reconstruction method according to any one of claims 1 to 7, wherein the longest wavelength among the plurality of different wavelength bands does not exceed 950 nm.
10. The three-dimensional reconstruction method according to any one of claims 1 to 7, wherein the acquiring of a plurality of target images captured of a target physiological region containing a lesion under irradiation with light of a plurality of different wavelength bands comprises: acquiring a plurality of target images captured of the target physiological region while it is irradiated in sequence by the light of the plurality of different wavelength bands;
or, the light irradiated onto the target physiological region simultaneously comprises the light of the plurality of different wavelength bands, and correspondingly, the performing of three-dimensional reconstruction of the target physiological region according to the acquired target images comprises: performing image separation on the target images to obtain to-be-processed images respectively corresponding to each of the plurality of different wavelength bands, and performing three-dimensional reconstruction of the target physiological region according to the to-be-processed images.
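The two acquisition schemes covered by claim 10 (band-by-band illumination with one capture per band, versus one exposure under mixed light followed by per-band separation) can be sketched as below. The `camera` object with `set_band`/`capture`/`split_band` is purely hypothetical; `split_band` stands in for whatever spectral demultiplexing the system actually uses.

```python
def acquire_target_images(camera, bands, mode="sequential"):
    """Acquire one target image per wavelength band.

    mode="sequential": illuminate each band in turn and capture per band.
    mode="simultaneous": one exposure under all bands, then separate the
    frame into per-band images afterwards.
    """
    if mode == "sequential":
        images = {}
        for band in bands:
            camera.set_band(band)        # one band at a time
            images[band] = camera.capture()
        return images
    # simultaneous: single mixed-light exposure, then image separation
    camera.set_band(tuple(bands))
    frame = camera.capture()
    return {band: camera.split_band(frame, band) for band in bands}
```

Either way the downstream reconstruction receives one to-be-processed image per band, matching both branches of the claim.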
11. A three-dimensional reconstruction apparatus, comprising:
an image acquisition unit, configured to acquire a plurality of target images captured of a target physiological region containing a lesion under irradiation with light of a plurality of different wavelength bands, wherein the penetration depths of the different wavelength bands into the target physiological region are not all the same, and the number of target images captured under each wavelength band is greater than or equal to 1;
a three-dimensional reconstruction unit, configured to perform three-dimensional reconstruction of the target physiological region according to the acquired target images; and
an information processing unit, configured to determine, according to the acquired target images, the range of a target blood vessel at each penetration depth of the target physiological region, the target blood vessel being a blood vessel whose diameter is larger than a preset diameter or a blood vessel of a preset type, and to generate prompt information according to the range of the target blood vessel and the range of the lesion, the prompt information being used to indicate the position of the target blood vessel relative to the lesion.
12. An image processing device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the three-dimensional reconstruction method according to any one of claims 1 to 10.
13. An endoscope system, characterized by comprising a light source, a scope body, an image processing device, and a display device;
wherein the image processing device is configured to perform the steps of the three-dimensional reconstruction method according to any one of claims 1 to 10.
CN202311711391.3A 2023-12-12 2023-12-12 Three-dimensional reconstruction method, device, image processing equipment and endoscope system Active CN119169179B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311711391.3A CN119169179B (en) 2023-12-12 2023-12-12 Three-dimensional reconstruction method, device, image processing equipment and endoscope system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311711391.3A CN119169179B (en) 2023-12-12 2023-12-12 Three-dimensional reconstruction method, device, image processing equipment and endoscope system

Publications (2)

Publication Number Publication Date
CN119169179A CN119169179A (en) 2024-12-20
CN119169179B true CN119169179B (en) 2025-09-05

Family

ID=93890305

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311711391.3A Active CN119169179B (en) 2023-12-12 2023-12-12 Three-dimensional reconstruction method, device, image processing equipment and endoscope system

Country Status (1)

Country Link
CN (1) CN119169179B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN120182502B (en) * 2025-05-16 2025-09-23 天津医科大学总医院 Three-dimensional reconstruction method for pathological image features of proliferative sheath tumor

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114599263A (en) * 2019-08-21 2022-06-07 艾科缇弗外科公司 Systems and methods for medical imaging

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6737705B2 (en) * 2013-11-14 2020-08-12 ザ・ジョージ・ワシントン・ユニバーシティThe George Washingtonuniversity Method of operating system for determining depth of injury site and system for generating images of heart tissue
CN106500627B (en) * 2016-10-19 2019-02-01 杭州思看科技有限公司 3-D scanning method and scanner containing multiple and different long wavelength lasers
CN109771052B (en) * 2018-12-28 2021-07-27 合刃科技(深圳)有限公司 Three-dimensional image establishing method and system based on multi-view imaging and multi-polarization state imaging
JP7164890B2 (en) * 2019-06-27 2022-11-02 国立大学法人岩手大学 3D blood vessel recognition method and 3D blood vessel recognition device
CN115715668A (en) * 2022-11-15 2023-02-28 浙江大学 Method and device for detecting lipid plaque by combining OCT imaging and absorption spectrum
CN116958233B (en) * 2023-07-19 2025-10-03 中国科学院深圳先进技术研究院 Skin burn area calculation method based on multi-band infrared structured light system

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114599263A (en) * 2019-08-21 2022-06-07 艾科缇弗外科公司 Systems and methods for medical imaging

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Application of new endoscopic imaging techniques in the diagnosis of early gastrointestinal cancer; Chen Ye; Zheng Jiagang; Surgical Research and New Technique; 2019-12-28; Vol. 8, No. 04; pp. 278-283 *

Also Published As

Publication number Publication date
CN119169179A (en) 2024-12-20

Similar Documents

Publication Publication Date Title
US11457817B2 (en) Systems and methods for hyperspectral analysis of cardiac tissue
CN108471949B (en) Reflection-mode multispectral time-resolved optical imaging method and device for tissue classification
CN104066367B (en) Somatoscopic apparatus
CN119169179B (en) Three-dimensional reconstruction method, device, image processing equipment and endoscope system
JP2007190364A (en) Image processing method and apparatus
JP2011254936A (en) Electronic endoscope system, processor device for electronic endoscope, and tracing method
US11564560B2 (en) Image processing apparatus, operating method of image processing apparatus, and computer-readable recording medium
US10408749B2 (en) Imaging method, imaging apparatus, imaging system, surgery support system, and a storage medium
CN215305780U (en) System for assessing survival of parathyroid glands
US20230122835A1 (en) Methods and systems for generating clarified and enhanced intraoperative imaging data
US20220254043A1 (en) A compression area identification platform and method thereof using content analysis
US10825175B2 (en) Apparatus, method, and medium for reducing pixel values in unnecessary regions in medical images
CN116035697B (en) Ultra-pulse CO2 fractional laser scar automatic treatment device based on skin image
Zhao et al. Deep Learning‐Based Denoising in High‐Speed Portable Reflectance Confocal Microscopy
JP5844447B2 (en) Electronic endoscope system, processor device for electronic endoscope, and method for operating electronic endoscope system
CN114472356B (en) An eschar penetration peeling device based on ultrasonic cleaning
CN112867430B (en) Medical image processing system and learning method
KR102348430B1 (en) Apparatus for training eye movement and method for operating the same
CN118557127B (en) Endoscopic device
CN119131001B (en) Image processing method, device, equipment, medium and product based on image signals with different central wavelengths
CN118141305B (en) A dye endoscope capable of laser treatment and its application
KR20180020191A (en) Apparatus for detecting lesion, method for detecting lesion and lesion diagnosis apparatus
Harrison et al. Photoacoustic imaging of brachytherapy seeds using a channel-domain ultrasound array system
CN119693215A (en) Ultrasonic image processing device, image processing method, system and program
CN120641024A (en) Medical device, medical system, operating method and procedure of medical device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant