WO2018127007A1 - Method and system for acquiring a depth map - Google Patents
Method and system for acquiring a depth map
- Publication number
- WO2018127007A1 WO2018127007A1 PCT/CN2017/119992 CN2017119992W WO2018127007A1 WO 2018127007 A1 WO2018127007 A1 WO 2018127007A1 CN 2017119992 W CN2017119992 W CN 2017119992W WO 2018127007 A1 WO2018127007 A1 WO 2018127007A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- pixel
- parallax
- depth map
- sub
- matching cost
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims abstract description 86
- 238000004364 calculation method Methods 0.000 claims description 21
- 238000005070 sampling Methods 0.000 claims description 16
- 238000005457 optimization Methods 0.000 claims description 13
- 239000011159 matrix material Substances 0.000 claims description 11
- 230000002776 aggregation Effects 0.000 claims description 7
- 238000004220 aggregation Methods 0.000 claims description 7
- 238000005259 measurement Methods 0.000 abstract description 21
- 230000000694 effects Effects 0.000 abstract description 10
- 230000006870 function Effects 0.000 description 36
- 230000008569 process Effects 0.000 description 14
- 238000010586 diagram Methods 0.000 description 8
- 238000013519 translation Methods 0.000 description 3
- 238000006243 chemical reaction Methods 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000008901 benefit Effects 0.000 description 1
- 238000012937 correction Methods 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 238000001914 filtration Methods 0.000 description 1
- 238000009499 grossing Methods 0.000 description 1
- 238000003384 imaging method Methods 0.000 description 1
- 238000013178 mathematical model Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000008447 perception Effects 0.000 description 1
- 238000012545 processing Methods 0.000 description 1
- 230000000750 progressive effect Effects 0.000 description 1
- 238000011084 recovery Methods 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 238000013517 stratification Methods 0.000 description 1
- 230000009466 transformation Effects 0.000 description 1
- 238000000844 transformation Methods 0.000 description 1
- 230000001131 transforming effect Effects 0.000 description 1
- 239000002699 waste material Substances 0.000 description 1
- 230000003313 weakening effect Effects 0.000 description 1
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
Definitions
- the present invention relates to the field of computer vision technology, and in particular, to a method and system for acquiring a depth map.
- Depth precision is one of the most important characteristics of sensors used for distance estimation.
- Depth maps are very commonly used for positioning and dimension measurement in automated systems.
- A stereo camera system uses the pixel-level correspondence between two images taken from different angles to estimate image depth.
- However, the accuracy of a depth map based on integer disparity is not sufficient, because a depth map obtained by integer-pixel stereo matching is discretely distributed in the disparity space and shows an obvious layering effect, so it cannot meet the measurement accuracy requirements of high-precision application scenarios.
- To this end, the depth map expressed in units of integer disparity needs to be optimized so that the depth map information becomes continuous and accurate three-dimensional measurement information can be obtained in the application.
- Existing optimization methods for depth maps mainly focus on depth map processing based on image information. These methods rely on the pixel information, edges, and the like of the depth map, and essentially process the two-dimensional depth image by filtering, classical interpolation, and similar operations. They can improve the appearance of the depth map to some extent, but they do not meet the requirements of measurement applications with very high precision requirements. Therefore, how to obtain a depth map with accurate depth precision is a technical problem that a person skilled in the art needs to solve.
- The object of the present invention is to provide a method and a system for acquiring a depth map, which can solve the pixel locking problem, realize accurate estimation of sub-pixels, and obtain high depth precision. The algorithm requires little memory, involves simple calculations, takes little time, and has good real-time performance.
- the present invention provides a method for obtaining a depth map, including:
- determining a continuous matching cost fitting function based on integer-pixel sampling from the matching cost differences; fitting the integer disparity space multiple times with the continuous matching cost fitting function to obtain a continuous disparity space; and calculating pixel coordinates with sub-pixel precision to obtain a sub-pixel disparity space;
- a depth map is obtained by calculating a depth value from the sub-pixel parallax space.
- the disparity of the left image and the right image is obtained by using a parallax acquisition method, including:
- the disparity of the left image and the right image is obtained by a fast matching method for same-name image points.
- calculating, according to the disparity, the matching cost differences between each pixel point in the depth map and the two pixels adjacent to its same-name point includes:
- C_d is the aggregated matching cost in stereo matching corresponding to the disparity d of the current pixel
- C_{d-1} is the matching cost of the current pixel at disparity d-1
- C_{d+1} is the matching cost of the current pixel at disparity d+1
- LeftDif is the matching cost difference between the current pixel and the pixel to the left of its same-name point
- RightDif is the matching cost difference between the current pixel and the pixel to the right of its same-name point.
- determining, from the matching cost differences, the continuous matching cost fitting function based on integer-pixel sampling includes:
- calculating pixel coordinates of sub-pixel precision to obtain a sub-pixel disparity space including:
- the depth value is calculated according to the sub-pixel parallax space, and the depth map is obtained, including:
- Q is the re-projection matrix
- Z is the depth value after sub-pixel optimization.
- the method further includes:
- the depth map is output through a display.
- the invention also provides a depth map acquisition system, comprising:
- a disparity calculation module configured to acquire a disparity of a left image and a right image by using a parallax acquisition method
- a matching cost difference calculation module configured to calculate, according to the disparity, a matching cost difference between each pixel point in the depth map and two pixels adjacent to the same name point;
- a sub-pixel disparity space obtaining module configured to determine a continuous matching cost fitting function based on integer-pixel sampling from the matching cost differences, fit the integer disparity space multiple times with the continuous matching cost fitting function to obtain a continuous disparity space, and calculate pixel coordinates with sub-pixel precision to obtain a sub-pixel disparity space;
- a depth map obtaining module configured to calculate a depth value according to the sub-pixel parallax space, to obtain a depth map.
- the sub-pixel disparity space acquiring module includes:
- a continuous matching cost fitting function determining unit, configured to determine a fitting variable h and to determine, according to the fitting variable h, a continuous matching cost fitting function f(h) based on integer-pixel sampling;
- LeftDif is the matching cost difference between the current pixel and the pixel to its left
- RightDif is the matching cost difference between the current pixel and the pixel to its right
- d is the integer disparity of the current pixel obtained by stereo matching
- C_d is the matching cost of the current pixel at disparity d.
- the system further includes:
- an output module configured to output the depth map through a display.
- A method for obtaining a depth map according to the present invention includes: obtaining the disparity of a left image and a right image by using a disparity acquisition method; calculating, according to the disparity, the matching cost differences between each pixel in the depth map and the two pixels adjacent to its same-name point; determining a continuous matching cost fitting function based on integer-pixel sampling from the matching cost differences, fitting the integer disparity space multiple times with the continuous matching cost fitting function to obtain a continuous disparity space, and calculating pixel coordinates with sub-pixel precision to obtain a sub-pixel disparity space; and calculating a depth value according to the sub-pixel disparity space to obtain the depth map.
- The method performs sub-pixel fitting directly on the depth map based on the integer disparity space; compared with a depth map algorithm based on stereo matching in sub-pixel space, it greatly reduces the required memory and shortens the running time of the algorithm. The discrete disparity space is fitted and interpolated by a continuous function fitting method to obtain a continuous disparity space, which eliminates the layering effect of the depth map and improves the accuracy of disparity-based three-dimensional measurement.
- The method is therefore suitable for three-dimensional measurement scenes with different measurement accuracy requirements, and especially for scenes that require very high measurement accuracy. The present invention also provides a depth map acquisition system, which has the above-mentioned beneficial effects and is not described again herein.
- FIG. 1 is a flowchart of a method for acquiring a depth map according to an embodiment of the present invention
- FIG. 2 is a schematic diagram of a stereo matching integer parallax space according to an embodiment of the present invention.
- FIG. 3 is a schematic diagram of a depth information space according to an embodiment of the present invention.
- FIG. 4 is a schematic diagram of matching cost curve fitting according to an embodiment of the present invention.
- FIG. 5 is a schematic diagram of a pixel of a sensor according to an embodiment of the present invention.
- FIG. 6 is a structural block diagram of an acquisition system of a depth map according to an embodiment of the present invention.
- FIG. 7 is a structural block diagram of an acquisition system of another depth map according to an embodiment of the present invention.
- The core of the present invention is to provide a method and system for acquiring a depth map, which can solve the pixel locking problem, realize accurate estimation of sub-pixels, and obtain high depth precision. The algorithm requires little memory, involves simple calculations, takes little time, and has good real-time performance.
- Research on depth map acquisition methods usually focuses on simple window-based solutions built on stereo matching.
- The original taxonomy proposed by Scharstein and Szeliski divides stereo algorithms into two main groups: local methods and global methods.
- Local algorithms use a limited support region around each point to calculate the disparity. This approach is based on a selected matching window and usually uses matching cost aggregation to achieve a smoothing effect. Large windows reduce the number of unsuccessful matches and lower the mismatch rate at depth discontinuities.
- the main advantage of the local method is that the computational complexity is small and can be realized in real time.
- The main disadvantage is that only local information near each pixel is used at each step, so these methods cannot handle featureless regions or regions with repeated texture.
- This embodiment provides a method for acquiring a depth map; the method can solve the pixel locking problem, realize accurate estimation of sub-pixels, and obtain high depth precision.
- The algorithm requires little memory, involves simple calculations, takes little time, and has good real-time performance.
- FIG. 1 is a flowchart of a method for acquiring a depth map according to an embodiment of the present invention
- The main purpose of this step is to obtain disparity with pixel-level precision, that is, to obtain the disparity d of the left image and the right image by using a disparity acquisition method.
- This embodiment does not limit the specific disparity calculation method. Since the disparity is the basis of subsequent calculations, in order to ensure the reliability and accuracy of the subsequently calculated values, a highly accurate disparity calculation method, such as a fast matching method for same-name image points, can be selected here.
- When choosing a method, the user should consider not only the accuracy of the disparity calculation but also the computing speed of the hardware and the real-time requirements of the system.
- the camera system calibration is first performed.
- the internal parameters and external parameters of the camera in the binocular camera system are first calibrated, and the camera matrix K, the distortion matrix D, the rotation matrix R, the translation vector T and the re-projection matrix Q of the binocular camera system are obtained.
- the re-projection matrix is Q, wherein T_x is the x component of the translation vector T.
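- As a hedged illustration of this calibration step (not the patent's own procedure), the sketch below uses OpenCV's standard calibration and rectification calls to obtain K, D, R, T and the re-projection matrix Q; `objpoints`, `imgpoints_l`, `imgpoints_r` and `image_size` are assumed to be chessboard correspondences and an image size collected beforehand.

```python
import cv2
import numpy as np

# Minimal sketch of binocular calibration/rectification with OpenCV.
# objpoints / imgpoints_l / imgpoints_r are assumed chessboard correspondences
# gathered beforehand; image_size is (width, height).

def calibrate_stereo(objpoints, imgpoints_l, imgpoints_r, image_size):
    # Per-camera intrinsics K and distortion coefficients D
    _, K1, D1, _, _ = cv2.calibrateCamera(objpoints, imgpoints_l, image_size, None, None)
    _, K2, D2, _, _ = cv2.calibrateCamera(objpoints, imgpoints_r, image_size, None, None)

    # Stereo extrinsics: rotation R and translation T between the two cameras
    _, K1, D1, K2, D2, R, T, _, _ = cv2.stereoCalibrate(
        objpoints, imgpoints_l, imgpoints_r, K1, D1, K2, D2, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)

    # Rectification yields the re-projection matrix Q used later to turn
    # (x, y, d) into 3D coordinates. In OpenCV's convention Q has the form
    # [[1, 0, 0, -c_x], [0, 1, 0, -c_y], [0, 0, 0, f], [0, 0, -1/T_x, (c_x - c_x')/T_x]],
    # where T_x is the x component of T (the patent's own formula is shown
    # only as an image in the original publication).
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, image_size, R, T)
    return K1, D1, K2, D2, R, T, Q
```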
- the pixel-level parallax d is calculated.
- The disparity is the positional difference observed by the left and right cameras when viewing the same target.
- In stereo vision, it is described as the distance along the X axis between the same-name points in the left image and the right image.
- the mathematical description is:
- d = x_l - x_r, where x_l is the position of the same-name point in the left image along the X axis, and x_r is the position of the same-name point in the right image along the X axis.
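- The "fast matching method of the same-name image point" is only named, not detailed, in this text. Purely as a stand-in, the following sketch computes a pixel-level disparity map on rectified images with OpenCV's semi-global block matcher; the parameter values are illustrative assumptions.

```python
import cv2
import numpy as np

# Sketch: pixel-level disparity d = x_l - x_r on rectified grayscale images.
# OpenCV's semi-global block matcher stands in for the patent's
# "fast matching method of the same-name image point".

def pixel_level_disparity(left_gray, right_gray, num_disp=128):
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=num_disp,   # must be a multiple of 16
        blockSize=5,
        P1=8 * 5 * 5,              # smoothness penalty for small disparity changes
        P2=32 * 5 * 5,             # smoothness penalty for large disparity changes
        uniquenessRatio=10,
    )
    # StereoSGBM returns fixed-point disparities scaled by 16;
    # integer division recovers the pixel-level (integer) disparity d.
    disp16 = matcher.compute(left_gray, right_gray)
    return (disp16 // 16).astype(np.int16)
```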
- S120: determining a continuous matching cost fitting function based on integer-pixel sampling from the matching cost differences, fitting the integer disparity space with the continuous matching cost fitting function to obtain a continuous disparity space, and calculating pixel coordinates with sub-pixel precision to obtain a sub-pixel disparity space;
- The precise position, in the world coordinate system, of an ideal point falling on a sensor pixel cannot be fully reflected in the image.
- The pixel coordinate obtained from the image corresponds to only part of the pixel's position information (for example, its center point), and the image information represented by this single coordinate cannot reflect the image information of the entire pixel. This fundamentally introduces a pixel coordinate positioning error, that is, an image recognition error, into the image.
- As a result, the integer disparity obtained by stereo matching is discontinuous.
- In FIG. 2, x is the image-plane x axis and y is the image-plane y axis.
- The disparity d is discontinuous; the disparity layers are distributed, from near to far, as layer 0, layer 1, layer 2, ..., layer d-1, layer d.
- the conversion relationship between integer parallax and depth obtained by stereo matching is:
- Since the integer disparity is discontinuous, the depth information converted from the integer disparity after three-dimensional recovery is also distributed in discrete layers; that is, the corresponding depth information space is also discontinuous, as shown in FIG. 3.
- the distance between the d+1 layer and the d layer is:
- ⁇ is the pixel size, that is, the spatial size occupied by each pixel
- b is the baseline length
- z is the depth value. It can be seen that the larger the depth value (and thus the smaller the disparity), the more obvious the layering effect and the larger the resulting error; the smaller the depth value (and thus the larger the disparity), the smaller the error.
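- The spacing formula itself appears only as an image in the original publication. Purely as a hedged reconstruction, assuming the usual pinhole stereo relation z_d = b·f/(d·δ) with a focal length f (an assumption; f is not defined in the surrounding text), the gap between layers d and d+1 works out as follows.

```latex
% Assumed relation: z_d = \frac{bf}{d\,\delta} (f: focal length, not defined above)
\Delta z = z_d - z_{d+1}
         = \frac{bf}{\delta}\left(\frac{1}{d}-\frac{1}{d+1}\right)
         = \frac{bf}{\delta\,d\,(d+1)}
         \approx \frac{\delta\,z^{2}}{bf}
```

This reconstruction is consistent with the qualitative statement above: the layer spacing grows roughly quadratically with depth (small disparity) and shrinks as the disparity increases.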
- Sub-pixel optimization based on the depth map is generally implemented using a matching cost curve fitting method based on integer sampling. Briefly: curve fitting is applied using the point to be optimized and the matching costs of its two adjacent points. As shown in FIG. 4, the minimum point C_s can be obtained by curve fitting of points C_1, C_2 and C_3; point C_s corresponds to the matching cost at the sub-pixel-level disparity, that is, the optimized version of point C_2.
- Steps S110 to S120 constitute the sub-pixel optimization process performed by curve fitting: they mainly include a matching cost difference calculation process, a fitting variable determination process, a continuous matching cost fitting function determination process, and a sub-pixel-precision pixel coordinate calculation process. This embodiment does not limit the specific implementation of these processes; as long as the matching cost fit is performed with the continuous matching cost fitting function, the sub-pixel disparity space can be determined.
- Calculating the matching cost differences between each pixel in the depth map and its two adjacent pixel points (that is, the matching cost difference calculation process) may include:
- C_d is the aggregated matching cost in stereo matching corresponding to the disparity d of the current pixel
- C_{d-1} is the matching cost of the current pixel at disparity d-1
- C_{d+1} is the matching cost of the current pixel at disparity d+1
- LeftDif is the matching cost difference between the current pixel and the pixel to the left of its same-name point
- RightDif is the matching cost difference between the current pixel and the pixel to the right of its same-name point.
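- The formulas for LeftDif and RightDif appear as images in the original publication and are not reproduced above. The sketch below assumes they are the differences between the aggregated cost at the winning disparity d and the costs at d-1 and d+1 (the sign convention is an assumption); `cost_volume` and `disp` are hypothetical arrays produced by the matching stage.

```python
import numpy as np

# Sketch: matching cost differences around the winning integer disparity.
# cost_volume has shape (H, W, D) with aggregated costs C_d per pixel;
# disp holds the integer disparity d chosen for each pixel.
# The exact sign convention of LeftDif/RightDif is an assumption here.

def cost_differences(cost_volume, disp):
    H, W, D = cost_volume.shape
    ys, xs = np.mgrid[0:H, 0:W]
    d = np.clip(disp, 1, D - 2)            # need valid neighbours d-1 and d+1
    c_d   = cost_volume[ys, xs, d]
    c_dm1 = cost_volume[ys, xs, d - 1]     # cost at disparity d-1
    c_dp1 = cost_volume[ys, xs, d + 1]     # cost at disparity d+1
    left_dif  = c_dm1 - c_d                # LeftDif  (assumed convention)
    right_dif = c_dp1 - c_d                # RightDif (assumed convention)
    return left_dif, right_dif
```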
- The method of continuous function fitting is used to fit and interpolate the discrete disparity space to obtain a continuous disparity space, thereby weakening the layering effect of the depth map and improving the accuracy of disparity-based three-dimensional measurement. The method is suitable for three-dimensional measurement scenes with different measurement accuracy requirements, especially scenes where the required measurement accuracy is very high.
- A cosine function is used as the continuous matching cost fitting function for the matching cost fit. That is, determining the continuous matching cost fitting function based on integer-pixel sampling from the matching cost differences (that is, the fitting variable and continuous matching cost fitting function determination process) may include:
- Using the cosine function to perform the matching cost fit of the integer-pixel samples can substantially eliminate the pixel locking phenomenon. That is, using the cosine function to interpolate the disparity space reduces the complexity of the disparity space fitting, eliminates the pixel locking effect of the fit, and improves the interpolation precision of the disparity space.
- The obvious layering reflects that integer-pixel precision is insufficient to describe accurate image information; however, the image information acquired by the image acquisition sensor is pixel-based image information.
- In order to obtain accurate image depth information, it is therefore necessary to perform sub-pixel optimization of the depth map based on the integer disparity space, that is, to optimize the integer disparity space.
- the image pixel obtained by the sensor occupies a certain space.
- In integer-pixel stereo matching, the pixel position is only a particular point of the pixel; generally, the coordinate of the pixel's geometric center is taken as the position coordinate of the integer pixel, as shown in FIG. 5.
- In FIG. 5, a is the coordinate of pixel A on the sensor; in reality, however, pixel A also contains points such as b, c and d, so point a alone cannot represent the image information of the entire pixel.
- The adjustment range is within half a pixel of the integer disparity. That is, calculating pixel coordinates with sub-pixel precision to obtain the sub-pixel disparity space may include:
- Sub-pixel optimization is a process of fitting a floating-point disparity space from the integer disparity space: the continuous matching cost fitting function is used to perform multiple fits, and continuous data are obtained from the discrete data by curve fitting.
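- The patent's cosine-based fitting function f(h) and fitting variable h are given only as formula images not reproduced above, so the sketch below uses the widely used three-point parabolic refinement over the same inputs (LeftDif, RightDif) as a plainly named stand-in; the sub-pixel offset is clamped to the half-pixel range described above.

```python
import numpy as np

# Sketch: three-point sub-pixel refinement of the integer disparity.
# The patent fits a cosine function f(h) to C_{d-1}, C_d, C_{d+1}; its exact
# formula is not spelled out above, so the classic parabolic three-point fit
# over the same cost differences stands in for it here.

def subpixel_disparity(disp, left_dif, right_dif, eps=1e-6):
    # Parabola through the three costs gives the offset
    # h = (LeftDif - RightDif) / (2 * (LeftDif + RightDif)),
    # which lies in [-0.5, 0.5] when C_d is the minimum of the three samples
    # (the half-pixel range mentioned above).
    denom = 2.0 * (left_dif + right_dif)
    h = np.where(np.abs(denom) > eps, (left_dif - right_dif) / denom, 0.0)
    h = np.clip(h, -0.5, 0.5)
    return disp.astype(np.float32) + h
```

The patent's cosine-based f(h) replaces this parabolic step precisely because parabolic fits are known to exhibit the pixel-locking bias; the inputs and the half-pixel clamp stay the same.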
- step S110 to step S120 are sub-pixel optimization processes.
- The sub-pixel disparity space is then converted to obtain depth values, thereby obtaining a sub-pixel-optimized depth map.
- This embodiment does not limit the conversion process from the sub-pixel parallax space to the depth map.
- Calculating the depth value according to the sub-pixel disparity space and obtaining the depth map may include:
- Q is the re-projection matrix
- Z is the depth value after sub-pixel optimization.
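- A minimal sketch of this conversion, assuming the standard re-projection [X Y Z W]^T = Q·[x y d 1]^T with depth Z/W, which is what OpenCV's reprojectImageTo3D computes; the patent's own conversion formula is shown only as an image in the original publication.

```python
import cv2
import numpy as np

# Sketch: depth map from the sub-pixel disparity and the re-projection
# matrix Q obtained during rectification. reprojectImageTo3D applies
# [X Y Z W]^T = Q * [x y d 1]^T per pixel and returns (X/W, Y/W, Z/W).

def depth_from_subpixel_disparity(subpix_disp, Q):
    points_3d = cv2.reprojectImageTo3D(subpix_disp.astype(np.float32), Q)
    depth = points_3d[:, :, 2]            # Z channel = depth value after optimization
    depth[~np.isfinite(depth)] = 0.0      # mask pixels with invalid disparity
    return depth
```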
- The method for obtaining the depth map performs sub-pixel fitting directly on the depth map based on the integer disparity space. Compared with a depth map algorithm based on stereo matching in sub-pixel space, it greatly reduces the required memory space and also shortens the running time of the algorithm, and the discrete disparity space is fitted and interpolated by the continuous function fitting method to obtain a continuous disparity space, thereby eliminating the layering effect of the depth map.
- The accuracy of disparity-based three-dimensional measurement is thus improved, making the method suitable for three-dimensional measurement scenes with different measurement accuracy requirements, especially scenes with very high measurement accuracy requirements. Further, using a cosine function for the matching cost fit of the integer-pixel samples can substantially eliminate pixel locking: interpolating the disparity space with the cosine function reduces the complexity of the disparity space fitting, eliminates the pixel locking effect of the fit, and improves the interpolation precision of the disparity space.
- After the depth map is acquired, this embodiment may further include:
- the depth map is output through a display.
- That is, the optimized depth map is displayed. With the method of the above embodiment, the depth map obtained is very dense; that is, the depth information is continuous and highly accurate.
- the display here may be a display device such as a display screen.
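- A minimal display sketch (an illustration, not part of the claimed method): the depth values are normalized to 8 bits and colour-mapped purely for visualization.

```python
import cv2
import numpy as np

# Sketch: show the optimized depth map in a display window.
# Depth values are normalized to 8 bit only for visualization.

def show_depth_map(depth, window="depth map"):
    vis = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    cv2.imshow(window, cv2.applyColorMap(vis, cv2.COLORMAP_JET))
    cv2.waitKey(0)
```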
- the acquisition system of the depth map provided by the embodiment of the present invention is described below.
- The acquisition system of the depth map described below and the acquisition method of the depth map described above correspond to each other and may be cross-referenced.
- FIG. 6 is a structural block diagram of a system for acquiring a depth map according to an embodiment of the present invention.
- the acquiring system may include:
- a disparity calculation module 100 configured to acquire disparity of a left image and a right image by using a disparity acquisition method
- the matching cost difference calculation module 200 is configured to calculate, according to the disparity, a matching cost difference between each pixel point in the depth map and two pixels adjacent to the same name point;
- the sub-pixel disparity space obtaining module 300 is configured to determine a continuous matching cost fitting function based on integer-pixel sampling from the matching cost differences, fit the integer disparity space multiple times with the continuous matching cost fitting function to obtain a continuous disparity space, and calculate pixel coordinates with sub-pixel precision to obtain a sub-pixel disparity space;
- the depth map obtaining module 400 is configured to calculate a depth value according to the sub-pixel parallax space to obtain a depth map.
- the sub-pixel disparity space obtaining module 300 may include:
- a continuous matching cost fitting function determining unit, configured to determine a fitting variable h and to determine, according to the fitting variable h, a continuous matching cost fitting function f(h) based on integer-pixel sampling;
- LeftDif is the matching cost difference between the current pixel and the pixel to its left
- RightDif is the matching cost difference between the current pixel and the pixel to its right
- d is the integer disparity of the current pixel obtained by stereo matching
- C_d is the matching cost of the current pixel at disparity d.
- the acquiring system may further include:
- the output module 500 is configured to output the depth map through a display.
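- Purely as an architectural sketch, the modules 100 to 500 described above can be wired together as follows; the helper functions are the hypothetical ones sketched earlier in this description, and the aggregated cost volume is assumed to be exposed by the matching stage (OpenCV's SGBM does not expose it, so a custom matcher would be needed in practice).

```python
# Sketch: wiring the modules of the acquisition system into one pipeline,
# reusing the hypothetical helpers sketched earlier in this description
# (pixel_level_disparity, cost_differences, subpixel_disparity,
#  depth_from_subpixel_disparity, show_depth_map).

class DepthMapAcquisitionSystem:
    def __init__(self, Q, num_disp=128):
        self.Q = Q                  # re-projection matrix from calibration
        self.num_disp = num_disp

    def run(self, left_gray, right_gray, cost_volume, display=False):
        disp = pixel_level_disparity(left_gray, right_gray, self.num_disp)   # module 100
        left_dif, right_dif = cost_differences(cost_volume, disp)            # module 200
        subpix = subpixel_disparity(disp, left_dif, right_dif)               # module 300
        depth = depth_from_subpixel_disparity(subpix, self.Q)                # module 400
        if display:
            show_depth_map(depth)                                            # module 500
        return depth
```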
- the steps of a method or algorithm described in connection with the embodiments disclosed herein can be implemented directly in hardware, a software module executed by a processor, or a combination of both.
- The software module can be placed in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.
Landscapes
- Measurement Of Optical Distance (AREA)
- Image Processing (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
A method and system for acquiring a depth map are provided, the method comprising: using a disparity acquisition method to obtain the disparity of a left image and a right image; calculating, according to the disparity, the matching cost difference between each pixel point in a depth map and the two pixel points adjacent to its same-name point; using a continuous matching cost fitting function on an integer disparity space to perform fitting a plurality of times to obtain a continuous disparity space, and calculating pixel coordinates with sub-pixel precision to obtain a sub-pixel disparity space; and calculating a depth value according to the sub-pixel disparity space to obtain the depth map. In the present method, sub-pixel fitting is performed directly on a depth map based on a disparity space; compared with a depth map algorithm based on stereo matching in sub-pixel space, the running time of the algorithm is shortened while the required storage space is greatly reduced. A continuous function fitting method is used to fit and interpolate a discrete disparity space to obtain the continuous disparity space, thereby eliminating the layering effect of the depth map, so that the accuracy of parallax-based three-dimensional measurement is increased.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710001725.6A CN106780590B (zh) | 2017-01-03 | 2017-01-03 | 一种深度图的获取方法及系统 |
CN201710001725.6 | 2017-01-03 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018127007A1 true WO2018127007A1 (fr) | 2018-07-12 |
Family
ID=58952072
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/119992 WO2018127007A1 (fr) | 2017-12-29 | Method and system for acquiring a depth map
Country Status (2)
Country | Link |
---|---|
CN (1) | CN106780590B (fr) |
WO (1) | WO2018127007A1 (fr) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110853080A (zh) * | 2019-09-30 | 2020-02-28 | 广西慧云信息技术有限公司 | 一种田间果实尺寸的测量方法 |
CN111145271A (zh) * | 2019-12-30 | 2020-05-12 | 广东博智林机器人有限公司 | 相机参数的精确度的确定方法、装置、存储介质及终端 |
CN111260713A (zh) * | 2020-02-13 | 2020-06-09 | 青岛联合创智科技有限公司 | 一种基于图像的深度计算方法 |
CN111382654A (zh) * | 2018-12-29 | 2020-07-07 | 北京市商汤科技开发有限公司 | 图像处理方法和装置以及存储介质 |
CN111415402A (zh) * | 2019-01-04 | 2020-07-14 | 中国科学院沈阳计算技术研究所有限公司 | 内外相似度聚集的立体匹配算法 |
CN112101209A (zh) * | 2020-09-15 | 2020-12-18 | 北京百度网讯科技有限公司 | 用于路侧计算设备的确定世界坐标点云的方法和装置 |
CN112116641A (zh) * | 2020-09-11 | 2020-12-22 | 南京理工大学智能计算成像研究院有限公司 | 一种基于OpenCL的散斑图像匹配方法 |
CN112712477A (zh) * | 2020-12-21 | 2021-04-27 | 东莞埃科思科技有限公司 | 结构光模组的深度图像评价方法及其装置 |
CN113034568A (zh) * | 2019-12-25 | 2021-06-25 | 杭州海康机器人技术有限公司 | 一种机器视觉深度估计方法、装置、系统 |
CN113936055A (zh) * | 2021-10-19 | 2022-01-14 | 北京安铁软件技术有限公司 | 一种列车闸瓦剩余厚度测量方法、系统、存储介质和计算机设备 |
CN114723967A (zh) * | 2022-03-10 | 2022-07-08 | 北京的卢深视科技有限公司 | 视差图优化方法、人脸识别方法、装置、设备及存储介质 |
CN116188558A (zh) * | 2023-04-27 | 2023-05-30 | 华北理工大学 | 基于双目视觉的立体摄影测量方法 |
CN119048781A (zh) * | 2024-10-29 | 2024-11-29 | 三业电气有限公司 | 一种具有线路覆冰监测功能的监测装置及监测方法 |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106780590B (zh) * | 2017-01-03 | 2019-12-24 | 成都通甲优博科技有限责任公司 | 一种深度图的获取方法及系统 |
CN111465818B (zh) * | 2017-12-12 | 2022-04-12 | 索尼公司 | 图像处理设备、图像处理方法、程序和信息处理系统 |
CN109919991A (zh) * | 2017-12-12 | 2019-06-21 | 杭州海康威视数字技术股份有限公司 | 一种深度信息确定方法、装置、电子设备及存储介质 |
CN108876835A (zh) * | 2018-03-28 | 2018-11-23 | 北京旷视科技有限公司 | 深度信息检测方法、装置和系统及存储介质 |
CN110533701A (zh) * | 2018-05-25 | 2019-12-03 | 杭州海康威视数字技术股份有限公司 | 一种图像视差确定方法、装置及设备 |
WO2021035627A1 (fr) * | 2019-08-29 | Method and device for acquiring a depth map, and computer storage medium |
CN110533703B (zh) * | 2019-09-04 | 2022-05-03 | 深圳市道通智能航空技术股份有限公司 | 一种双目立体视差确定方法、装置及无人机 |
CN110853086A (zh) * | 2019-10-21 | 2020-02-28 | 北京清微智能科技有限公司 | 基于散斑投影的深度图像生成方法及系统 |
CN112749594B (zh) * | 2019-10-31 | 2022-04-22 | 浙江商汤科技开发有限公司 | 信息补全方法、车道线识别方法、智能行驶方法及相关产品 |
CN111179327B (zh) * | 2019-12-30 | 2023-04-25 | 青岛联合创智科技有限公司 | 一种深度图的计算方法 |
CN111402313B (zh) * | 2020-03-13 | 2022-11-04 | 合肥的卢深视科技有限公司 | 图像深度恢复方法和装置 |
CN112184793B (zh) * | 2020-10-15 | 2021-10-26 | 北京的卢深视科技有限公司 | 深度数据的处理方法、装置及可读存储介质 |
CN112348859B (zh) * | 2020-10-26 | 2024-09-06 | 浙江理工大学 | 一种渐近全局匹配的双目视差获取方法和系统 |
CN114820744A (zh) * | 2021-01-29 | 2022-07-29 | 合肥的卢深视科技有限公司 | 场景深度信息获取的方法、电子设备及存储介质 |
CN114897665A (zh) * | 2022-04-01 | 2022-08-12 | 中国科学院自动化研究所 | 可配置实时视差点云计算装置及方法 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101915571A (zh) * | 2010-07-20 | 2010-12-15 | 桂林理工大学 | 基于相位相关的影像匹配初始视差的全自动获取方法 |
US8472699B2 (en) * | 2006-11-22 | 2013-06-25 | Board Of Trustees Of The Leland Stanford Junior University | Arrangement and method for three-dimensional depth image construction |
CN104065947A (zh) * | 2014-06-18 | 2014-09-24 | 长春理工大学 | 一种集成成像系统的深度图获取方法 |
CN105953777A (zh) * | 2016-04-27 | 2016-09-21 | 武汉讯图科技有限公司 | 一种基于深度图的大比例尺倾斜影像测图方法 |
CN106780590A (zh) * | 2017-01-03 | 2017-05-31 | 成都通甲优博科技有限责任公司 | 一种深度图的获取方法及系统 |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100505334B1 (ko) * | 2003-03-28 | 2005-08-04 | (주)플렛디스 | 운동 시차를 이용한 입체 영상 변환 장치 |
JP5337218B2 (ja) * | 2011-09-22 | 2013-11-06 | 株式会社東芝 | 立体画像変換装置、立体画像出力装置および立体画像変換方法 |
CN103106688B (zh) * | 2013-02-20 | 2016-04-27 | 北京工业大学 | 基于双层配准方法的室内三维场景重建方法 |
CN103702098B (zh) * | 2013-12-09 | 2015-12-30 | 上海交通大学 | 一种时空域联合约束的三视点立体视频深度提取方法 |
-
2017
- 2017-01-03 CN CN201710001725.6A patent/CN106780590B/zh active Active
- 2017-12-29 WO PCT/CN2017/119992 patent/WO2018127007A1/fr active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8472699B2 (en) * | 2006-11-22 | 2013-06-25 | Board Of Trustees Of The Leland Stanford Junior University | Arrangement and method for three-dimensional depth image construction |
CN101915571A (zh) * | 2010-07-20 | 2010-12-15 | 桂林理工大学 | 基于相位相关的影像匹配初始视差的全自动获取方法 |
CN104065947A (zh) * | 2014-06-18 | 2014-09-24 | 长春理工大学 | 一种集成成像系统的深度图获取方法 |
CN105953777A (zh) * | 2016-04-27 | 2016-09-21 | 武汉讯图科技有限公司 | 一种基于深度图的大比例尺倾斜影像测图方法 |
CN106780590A (zh) * | 2017-01-03 | 2017-05-31 | 成都通甲优博科技有限责任公司 | 一种深度图的获取方法及系统 |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111382654A (zh) * | 2018-12-29 | 2020-07-07 | 北京市商汤科技开发有限公司 | 图像处理方法和装置以及存储介质 |
CN111382654B (zh) * | 2018-12-29 | 2024-04-12 | 北京市商汤科技开发有限公司 | 图像处理方法和装置以及存储介质 |
CN111415402A (zh) * | 2019-01-04 | 2020-07-14 | 中国科学院沈阳计算技术研究所有限公司 | 内外相似度聚集的立体匹配算法 |
CN111415402B (zh) * | 2019-01-04 | 2023-05-02 | 中国科学院沈阳计算技术研究所有限公司 | 内外相似度聚集的立体匹配算法 |
CN110853080A (zh) * | 2019-09-30 | 2020-02-28 | 广西慧云信息技术有限公司 | 一种田间果实尺寸的测量方法 |
CN113034568A (zh) * | 2019-12-25 | 2021-06-25 | 杭州海康机器人技术有限公司 | 一种机器视觉深度估计方法、装置、系统 |
CN113034568B (zh) * | 2019-12-25 | 2024-03-29 | 杭州海康机器人股份有限公司 | 一种机器视觉深度估计方法、装置、系统 |
CN111145271B (zh) * | 2019-12-30 | 2023-04-28 | 广东博智林机器人有限公司 | 相机参数的精确度的确定方法、装置、存储介质及终端 |
CN111145271A (zh) * | 2019-12-30 | 2020-05-12 | 广东博智林机器人有限公司 | 相机参数的精确度的确定方法、装置、存储介质及终端 |
CN111260713A (zh) * | 2020-02-13 | 2020-06-09 | 青岛联合创智科技有限公司 | 一种基于图像的深度计算方法 |
CN112116641A (zh) * | 2020-09-11 | 2020-12-22 | 南京理工大学智能计算成像研究院有限公司 | 一种基于OpenCL的散斑图像匹配方法 |
CN112116641B (zh) * | 2020-09-11 | 2024-02-20 | 南京理工大学智能计算成像研究院有限公司 | 一种基于OpenCL的散斑图像匹配方法 |
CN112101209A (zh) * | 2020-09-15 | 2020-12-18 | 北京百度网讯科技有限公司 | 用于路侧计算设备的确定世界坐标点云的方法和装置 |
CN112101209B (zh) * | 2020-09-15 | 2024-04-09 | 阿波罗智联(北京)科技有限公司 | 用于路侧计算设备的确定世界坐标点云的方法和装置 |
CN112712477A (zh) * | 2020-12-21 | 2021-04-27 | 东莞埃科思科技有限公司 | 结构光模组的深度图像评价方法及其装置 |
CN113936055A (zh) * | 2021-10-19 | 2022-01-14 | 北京安铁软件技术有限公司 | 一种列车闸瓦剩余厚度测量方法、系统、存储介质和计算机设备 |
CN114723967B (zh) * | 2022-03-10 | 2023-01-31 | 合肥的卢深视科技有限公司 | 视差图优化方法、人脸识别方法、装置、设备及存储介质 |
CN114723967A (zh) * | 2022-03-10 | 2022-07-08 | 北京的卢深视科技有限公司 | 视差图优化方法、人脸识别方法、装置、设备及存储介质 |
CN116188558A (zh) * | 2023-04-27 | 2023-05-30 | 华北理工大学 | 基于双目视觉的立体摄影测量方法 |
CN119048781A (zh) * | 2024-10-29 | 2024-11-29 | 三业电气有限公司 | 一种具有线路覆冰监测功能的监测装置及监测方法 |
Also Published As
Publication number | Publication date |
---|---|
CN106780590B (zh) | 2019-12-24 |
CN106780590A (zh) | 2017-05-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018127007A1 (fr) | Method and system for acquiring a depth map | |
CN106651938B (zh) | 一种融合高分辨率彩色图像的深度图增强方法 | |
CN108596965B (zh) | 一种光场图像深度估计方法 | |
CN102073874B (zh) | 附加几何约束的航天三线阵ccd相机多影像立体匹配方法 | |
WO2021120846A1 (fr) | Procédé et dispositif de reconstruction tridimensionnelle, et support lisible par ordinateur | |
Zhao et al. | Geometric-constrained multi-view image matching method based on semi-global optimization | |
CN110223222B (zh) | 图像拼接方法、图像拼接装置和计算机可读存储介质 | |
WO2020119467A1 (fr) | Procédé et dispositif de génération d'image de profondeur dense à haute précision | |
CN107767440A (zh) | 基于三角网内插及约束的文物序列影像精细三维重建方法 | |
CN106485690A (zh) | 基于点特征的点云数据与光学影像的自动配准融合方法 | |
CN108961383A (zh) | 三维重建方法及装置 | |
CN102831601A (zh) | 基于联合相似性测度和自适应支持权重的立体匹配方法 | |
CN116129037B (zh) | 视触觉传感器及其三维重建方法、系统、设备及存储介质 | |
CN106408513A (zh) | 深度图超分辨率重建方法 | |
CN113129352B (zh) | 一种稀疏光场重建方法及装置 | |
CN106023147B (zh) | 一种基于gpu的快速提取线阵遥感影像中dsm的方法及装置 | |
Shivakumar et al. | Real time dense depth estimation by fusing stereo with sparse depth measurements | |
CN110349249A (zh) | 基于rgb-d数据的实时稠密重建方法及系统 | |
CN111105452A (zh) | 基于双目视觉的高低分辨率融合立体匹配方法 | |
CN113850293B (zh) | 基于多源数据和方向先验联合优化的定位方法 | |
CN106408596A (zh) | 基于边缘的局部立体匹配方法 | |
CN106408531A (zh) | 基于gpu加速的层次化自适应三维重建方法 | |
CN111739071A (zh) | 基于初始值的快速迭代配准方法、介质、终端和装置 | |
CN105466399A (zh) | 快速半全局密集匹配方法和装置 | |
CN116246017A (zh) | 基于双目影像数据的海浪三维重建方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17889854 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 17889854 Country of ref document: EP Kind code of ref document: A1 |