
CN107167811A - The road drivable region detection method merged based on monocular vision with laser radar - Google Patents


Info

Publication number
CN107167811A
Authority
CN
China
Prior art keywords
pixel
super
point
road
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710283453.3A
Other languages
Chinese (zh)
Other versions
CN107167811B (en)
Inventor
郑南宁
余思雨
刘子熠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority to CN201710283453.3A priority Critical patent/CN107167811B/en
Publication of CN107167811A publication Critical patent/CN107167811A/en
Application granted granted Critical
Publication of CN107167811B publication Critical patent/CN107167811B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a road drivable-area detection method based on the fusion of monocular vision and lidar, belonging to the field of intelligent transportation. Existing road detection methods for autonomous vehicles are mainly based on monocular vision, stereo vision, laser sensors, or multi-sensor fusion, and suffer from drawbacks such as sensitivity to illumination, complex 3D matching, sparse laser point clouds, and low overall fusion efficiency. Although some supervised methods achieve good accuracy, their training process is complicated and they generalize poorly. The proposed method fuses superpixels with point cloud data, uses the extracted features to let the machine learn the road region by itself, and fuses the road information obtained from each feature within a Bayesian framework to determine the final region. The method requires neither strong prior assumptions nor a complex training process, offers excellent generalization and robustness, runs very fast with high accuracy, and is therefore easy to deploy in practical applications.

Description

Road drivable-area detection method based on the fusion of monocular vision and lidar

Technical field

The invention belongs to method research in the field of intelligent transportation and relates to a road drivable-area detection method based on the fusion of monocular vision and lidar.

Background

In recent years, road detection has been an important research topic in autonomous driving. Widely used road detection methods include monocular vision, stereo vision, lidar, and fusion-based methods. Monocular vision methods consider only the visual information of the scene and are highly sensitive to illumination and weather; stereo vision methods spend a great deal of time on 3D reconstruction and are unsuitable for practical use; lidar methods suffer from the sparsity of point cloud data. Road detection methods that fuse pixel information with depth information make full use of the texture and color information about the scene provided by the camera, while the depth information from the lidar compensates for the lack of robustness of visual information to the environment; in terms of efficiency, they overcome the problems of non-fusion methods, which are inefficient, hard to run in real time, and hard to apply in practice. Fusion-based road detection has therefore developed rapidly into the method of choice for road detection on autonomous vehicles. It is an optimal form of road detection built on monocular vision, lidar methods, and sensor fusion, and has accordingly been widely used in engineering practice, especially in autonomous driving.

Road detection for autonomous vehicles can also be divided into supervised and unsupervised methods. Because of the diversity of road surfaces, the complexity of scenes, and the variability of illumination and weather, autonomous vehicles place high demands on the robustness and generalization of road detection methods, so unsupervised road detection is also an important research topic in autonomous driving. On the one hand, unsupervised road detection requires neither large amounts of labeled data nor a time-consuming training process, and can learn road information autonomously from the extracted features, giving it strong generalization ability. On the other hand, real-world traffic scenes are complex and changeable; since it is impossible to provide training samples for every scene, supervised methods are dangerous when they encounter driving scenes that differ greatly from the training samples, whereas unsupervised road detection is robust to almost all scenes and is suitable for practical autonomous driving.

Summary of the invention

The object of the present invention is to provide a road drivable-area detection method based on the fusion of monocular vision and lidar.

To achieve the above object, the present invention adopts the following technical solutions.

First, the method fuses superpixels with laser point cloud data: the point cloud is projected onto the superpixel-segmented image according to the laser calibration parameters. The superpixel step makes full use of the texture of the scene, greatly narrows the region in which the road has to be located, and substantially improves efficiency. Second, Delaunay triangulation is used to find the spatial relationships between points; from the resulting spatial-relationship triangles an undirected graph is built and a normal vector is computed for each point, and obstacle points are classified from the undirected graph. Then, a new feature (ray) is defined with a method based on minimum filtering to find an initial candidate road region, further narrowing the detection range and greatly improving efficiency, and another new feature (level) quantifies the drivability of each point from the depth information, making effective use of it. In addition, the fusion method uses an unsupervised fusion step, a self-learning Bayesian framework, to fuse the candidate-road-region probability information learned from each feature (the color, level, normal-vector, and intensity features); this fusion is efficient and robust.

The specific steps of the superpixel and laser point cloud fusion are as follows:

The image captured by the camera is first segmented into N superpixels using an existing improved linear iterative clustering method combined with edge segmentation. Each superpixel p_c = (x_c, y_c, z_c, 1)^T contains several pixels, where x_c, y_c, z_c denote the mean, over all pixels in the superpixel, of their positions in the camera coordinate system; at the same time, the RGB value of every pixel is set to the average RGB of all pixels in that superpixel. Using existing calibration techniques, every lidar point p_l = (x_l, y_l, z_l, 1)^T is then projected onto the superpixel-segmented image, yielding the point set {P_i}, where P_i = (x_i, y_i, z_i, u_i, v_i): x_i, y_i, z_i is the position of the point in the laser coordinate system and (u_i, v_i) is the corresponding position in the camera coordinate system. Finally, a co-point constraint keeps only the laser points projected near superpixel edges.
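A minimal sketch of this fusion step is given below, assuming plain SLIC from scikit-image as a stand-in for the improved edge-aware clustering described above and an assumed 3x4 camera-from-lidar projection matrix; it illustrates the idea rather than the patented implementation.

```python
import numpy as np
from skimage.segmentation import slic, find_boundaries

def fuse_superpixels_and_points(image, lidar_xyz, P_cam_from_lidar, n_segments=500):
    labels = slic(image, n_segments=n_segments, compactness=10)  # superpixel label map
    boundary = find_boundaries(labels, mode='thick')             # mask of superpixel edges

    # Project homogeneous lidar points (x, y, z, 1) into the image plane.
    pts_h = np.hstack([lidar_xyz, np.ones((len(lidar_xyz), 1))])
    uvw = pts_h @ P_cam_from_lidar.T
    front = uvw[:, 2] > 0                                        # keep points in front of the camera
    uv = (uvw[front, :2] / uvw[front, 2:3]).astype(int)
    xyz = lidar_xyz[front]

    h, w = labels.shape[:2]
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    uv, xyz = uv[inside], xyz[inside]

    # Co-point constraint: keep only points projected near superpixel edges.
    near_edge = boundary[uv[:, 1], uv[:, 0]]
    fused = np.hstack([xyz[near_edge], uv[near_edge]])           # rows are P_i = (x, y, z, u, v)
    return labels, fused
```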

A new feature (ray) is defined based on minimum filtering to find the initial candidate road region; the specific steps are as follows:

First, the initial candidate area of the road region is defined over the superpixels, where S_i denotes the set of all pixels contained in superpixel S_i. I_DRM is defined as the "direction ray map", and the (u_i, v_i) coordinates of each point P_i = (x_i, y_i, z_i, u_i, v_i) are converted to polar coordinates whose origin is the center point (P_base) of the last row of the image; the points falling in the h-th angle then form a proper subset of the point set (u_i, v_i), in which the i-th point is assigned to the h-th angle, and the set of obstacle points within that subset is computed as described below and in the flowchart.

Second, to solve the problem of laser ray leakage, minimum filtering is applied to the computed I_DRM to obtain the desired I_DRM, from which the initial candidate region is finally obtained.
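As a rough illustration (not the exact algorithm of Fig. 2, whose details are not reproduced here), the sketch below bins the fused points by angle around P_base, cuts each ray at its nearest obstacle point, and applies a 1-D minimum filter over neighbouring angles to suppress leaked rays; the bin count H and the filter width are assumed parameters.

```python
import numpy as np
from scipy.ndimage import minimum_filter1d

def ray_lengths(points_uv, is_obstacle, p_base, H=180, filter_size=5):
    du = points_uv[:, 0] - p_base[0]
    dv = p_base[1] - points_uv[:, 1]                 # image v grows downward
    ang = np.arctan2(dv, du)                         # angle over the image, roughly [0, pi]
    rad = np.hypot(du, dv)
    bins = np.clip((ang / np.pi * H).astype(int), 0, H - 1)

    reach = np.full(H, rad.max())                    # default: ray reaches the farthest point
    for h in range(H):
        obst = rad[(bins == h) & is_obstacle]
        if obst.size:
            reach[h] = obst.min()                    # stop the ray at the nearest obstacle

    # Minimum filter over adjacent angles removes optimistic "leaked" rays.
    return minimum_filter1d(reach, size=filter_size)
```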

A new feature (level) is defined; the specific steps are as follows:

The level feature of each point is defined by the algorithm given in the flowchart, and is combined with the superpixels to obtain the level feature L(S_i) of superpixel S_i.

The specific steps of the fusion using the self-learning Bayesian framework are as follows:

First, starting from the initial candidate region, the probability of the candidate road region is learned without supervision from each of the four features.

For a superpixel S_i in the initial candidate region, the RGB value of every pixel P_i = (x_i, y_i, z_i, u_i, v_i) contained in S_i has already been unified; the color feature is self-learned using the Gaussian parameters μ_c and σ_c, with the following formula:

θ = 45°

The formula for self-learning the level feature L(S_i) of superpixel S_i using the Gaussian parameters μ_l and σ_l is:

The formula for self-learning the normal-vector feature N(S_i) of superpixel S_i using the Gaussian parameters μ_n and σ_n is:

Sg(S_i) is defined as the number of rays passing through superpixel S_i; the formula for self-learning the intensity feature Sg(S_i) of superpixel S_i is:

Finally, a Bayesian framework is established to fuse the four features, with the following formula:

where p(S_i = R | Obs) denotes the probability that superpixel S_i belongs to the road region, and Obs denotes the observations based on these four features.

The beneficial effects of the present invention are as follows:

First, traditional fusion methods perform global fusion, which greatly limits their practicality and computational efficiency; the present invention instead fuses superpixels with laser point cloud data, which greatly narrows the set of candidate road regions and improves efficiency. Second, the proposed ray feature finds an initial candidate road region, further reducing the detection range and improving efficiency. Third, the proposed level feature quantifies the drivability of each point from the depth information, overcoming the sparsity of the depth data and using it effectively, which contributes substantially to accuracy. Fourth, the proposed intensity feature quantifies the fusion relationship between superpixels and depth information and accounts for the fact that nearby objects appear large and distant ones small, which also contributes substantially to accuracy; the algorithm therefore has considerable research significance and broad engineering value. Fifth, the self-learning Bayesian framework fuses the candidate-region probability information learned from each feature, and is efficient and robust.

Brief description of the drawings

Fig. 1 is a schematic block diagram of the road drivable-area detection method based on the fusion of monocular vision and lidar;

Fig. 2 is a flowchart of the algorithm for obtaining the ray feature;

Fig. 3 shows the initial candidate region obtained without handling ray leakage by minimum filtering (bottom) and with it (top);

Fig. 4 is a flowchart of the algorithm for obtaining the level feature;

Fig. 5 shows the candidate-road-region probability distribution obtained by self-learning from the color feature;

Fig. 6 shows the candidate-road-region probability distribution obtained by self-learning from the level feature;

Fig. 7 shows the candidate-road-region probability distribution obtained by self-learning from the normal-vector feature;

Fig. 8 shows the candidate-road-region probability distribution obtained by self-learning from the intensity feature;

Fig. 9 shows the probability distribution of the final region obtained by the self-learning Bayesian fusion.

Detailed description

Referring to Fig. 1, the image captured by the camera is segmented into N superpixels using an existing improved linear iterative clustering method combined with edge segmentation. Each superpixel p_c = (x_c, y_c, z_c, 1)^T contains several pixels, where x_c, y_c, z_c denote the mean, over all pixels in the superpixel, of their positions in the camera coordinate system; at the same time, the RGB value of every pixel is set to the average RGB of all pixels in that superpixel. Then, using existing calibration results, the rotation matrix and the translation matrix are combined according to formula (1) to obtain the transformation matrix.

The rotation and translation matrices establish the transformation between the two coordinate systems, as in formula (2):

Each point p_l = (x_l, y_l, z_l, 1)^T obtained by the lidar is projected onto the superpixel-segmented image, as in formula (3):

This yields the point set {P_i}, where P_i = (x_i, y_i, z_i, u_i, v_i): x_i, y_i, z_i is the position of the point in the laser coordinate system and (u_i, v_i) is the corresponding position in the camera coordinate system. Finally, only the laser points near the superpixel edges are retained.
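The bodies of formulas (1)-(3) are not legible in this text; the sketch below shows one standard form of such a lidar-to-image projection consistent with the description, with the rotation R, translation t, and camera intrinsics K taken as given inputs.

```python
import numpy as np

def project_lidar_point(p_l, R, t, K):
    """p_l: (x_l, y_l, z_l) in the lidar frame; returns (u, v) in pixels."""
    T = np.eye(4)
    T[:3, :3] = R                       # formula (1): assemble the transform from R and t
    T[:3, 3] = t
    p_c = T @ np.append(p_l, 1.0)       # formula (2): lidar frame -> camera frame
    uvw = K @ p_c[:3]                   # formula (3): camera frame -> image plane
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```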

Obstacle points are classified using the fused data, giving a mapping ob(P_i), where ob(P_i) = 1 indicates that P_i is an obstacle point and 0 otherwise. Delaunay triangulation is applied to the (u_i, v_i) coordinates of the points P_i, producing a set of spatial triangles and an undirected graph, where E denotes the set of edges of the graph incident to node P_i. Edges (P_i, P_j) whose Euclidean distance in the (u_i, v_i) coordinate system does not satisfy formula (4) are removed:

||P_i - P_j|| < ε    (4)

Let Nb(P_i) denote the set of points connected to P_i; the surfaces of the spatial triangles related to P_i are then {(u_j, v_j) | j = i or P_j ∈ Nb(P_i)}. The normal vector of each spatial triangle is computed. Clearly, the flatter and closer to the ground the spatial triangles around P_i are, the more likely P_i is a non-obstacle point, so the average of the normal vectors of the spatial triangles around P_i is taken as the normal vector of P_i. Formula (5) gives the decision rule for ob(P_i).
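Formula (5) itself is not legible here; the sketch below implements the described pipeline (Delaunay triangulation in image coordinates, per-triangle 3-D normals averaged per point) and, as an assumed stand-in for formula (5), classifies a point as an obstacle when its averaged normal tilts away from the vertical by more than a threshold.

```python
import numpy as np
from scipy.spatial import Delaunay

def classify_obstacles(uv, xyz, max_edge_px=30.0, max_tilt_deg=20.0):
    tri = Delaunay(uv)                                     # triangulate in image coordinates
    normals = [[] for _ in range(len(uv))]
    for a, b, c in tri.simplices:
        # drop triangles with an over-long image edge (formula (4): ||P_i - P_j|| < eps)
        if max(np.linalg.norm(uv[a] - uv[b]),
               np.linalg.norm(uv[b] - uv[c]),
               np.linalg.norm(uv[c] - uv[a])) > max_edge_px:
            continue
        n = np.cross(xyz[b] - xyz[a], xyz[c] - xyz[a])     # 3-D triangle normal
        n /= (np.linalg.norm(n) + 1e-9)
        if n[2] < 0:                                       # orient normals upward (z assumed up)
            n = -n
        for v in (a, b, c):
            normals[v].append(n)

    ob = np.zeros(len(uv), dtype=int)
    up = np.array([0.0, 0.0, 1.0])
    for i, ns in enumerate(normals):
        if not ns:
            continue
        mean_n = np.mean(ns, axis=0)
        mean_n /= np.linalg.norm(mean_n)
        tilt = np.degrees(np.arccos(np.clip(mean_n @ up, -1.0, 1.0)))
        ob[i] = int(tilt > max_tilt_deg)                   # tilted local surface -> obstacle point
    return ob
```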

The ray feature is defined based on minimum filtering and the initial candidate road region is found. First, the "direction ray map" I_DRM is obtained according to the algorithm flowchart of Fig. 2, using the obstacle points computed by the classification step above, with each point assigned to the angular bin that contains it. The algorithm converts the (u_i, v_i) coordinates of P_i = (x_i, y_i, z_i, u_i, v_i) to polar coordinates whose origin is the center point (P_base) of the last row of the image, so that the points of the h-th angle form a proper subset of the point set (u_i, v_i). Second, as shown in Fig. 3, the sparsity of the laser data causes ray leakage, which this method handles by applying minimum filtering to the computed I_DRM to obtain the desired I_DRM. Combined with the superpixel segmentation, the initial candidate road region is defined over the selected superpixels, where S_i denotes the set of all pixels contained in superpixel S_i, and the superpixels are finally fused to obtain the initial candidate region.

The level feature is then defined. Fig. 4 gives the algorithm for computing the level feature of each point belonging to the h-th angle, and formula (6) combines the per-point level features with the superpixels to obtain the level feature L(S_i) of superpixel S_i.
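The recurrence of Fig. 4 and the body of formula (6) are not legible here; the sketch below shows one plausible reading in which a point's level counts the obstacle points already passed when walking outward along its ray, and L(S_i) is the mean level of the points inside superpixel S_i. Both choices are assumptions.

```python
import numpy as np

def level_per_ray(radii, is_obstacle):
    """radii, is_obstacle: per-point arrays for ONE angular bin."""
    order = np.argsort(radii)
    level = np.zeros(len(radii))
    passed = 0
    for idx in order:                       # walk outward from P_base
        level[idx] = passed
        passed += int(is_obstacle[idx])     # each obstacle raises the level behind it
    return level

def superpixel_level(point_levels, point_superpixel, n_superpixels):
    L = np.zeros(n_superpixels)
    for s in range(n_superpixels):
        mask = point_superpixel == s
        if mask.any():
            L[s] = point_levels[mask].mean()
    return L
```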

Fig. 5 shows the drivability probability of the candidate road region obtained from the color feature. For a superpixel S_i in the initial candidate region, the RGB value of every pixel P_i = (x_i, y_i, z_i, u_i, v_i) it contains has already been unified. Since RGB color is not robust to illumination and weather conditions, a color-space conversion is used to transform the original RGB image I into the image I_log in an illuminant-invariant color space, as in formula (7):

where I_log(u, v) is the pixel value at image coordinates (u, v), I_R, I_G, I_B are the RGB channels of I, and θ is the invariant angle orthogonal to the line of illumination change. Formula (8) self-learns the color feature with the Gaussian parameters μ_c and σ_c to obtain the drivability probability of the candidate road region.
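The body of formula (7) is not legible here; a common illuminant-invariant transform used in road-detection work, projecting log-chromaticities onto the direction θ (45° in the text above), is sketched below purely as a stand-in.

```python
import numpy as np

def illuminant_invariant(image_rgb, theta_deg=45.0, eps=1e-6):
    """image_rgb: float array (H, W, 3) in [0, 1]; returns the I_log image."""
    r, g, b = image_rgb[..., 0], image_rgb[..., 1], image_rgb[..., 2]
    theta = np.radians(theta_deg)
    return (np.cos(theta) * np.log((r + eps) / (g + eps)) +
            np.sin(theta) * np.log((b + eps) / (g + eps)))
```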

Fig. 6 shows the drivability probability of the candidate road region obtained from the level feature. Formula (9) self-learns the level feature L(S_i) of superpixel S_i with the Gaussian parameters μ_l and σ_l:

Fig. 7 shows the drivability probability of the candidate road region obtained from the normal-vector feature. The normal-vector feature N(S_i) of a superpixel S_i in the candidate region is computed as the height coordinate of the point in S_i with the lowest normal vector, as in formula (10):

Formula (11) self-learns the normal-vector feature N(S_i) of superpixel S_i with the Gaussian parameters μ_n and σ_n:

Fig. 8 shows the drivability probability of the candidate road region obtained from the intensity feature. Sg(S_i) is the number of rays passing through superpixel S_i; the intensity feature Sg(S_i) of superpixel S_i is self-learned as in formula (12).
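The bodies of formulas (8)-(12) are not legible here; the sketch below shows the Gaussian self-learning pattern the text describes, in which the parameters of each feature are estimated from the superpixels inside the initial candidate region and every superpixel is then scored by a Gaussian likelihood. Treating that score directly as the per-feature road likelihood is an assumption.

```python
import numpy as np

def self_learned_likelihood(feature_values, in_initial_region):
    """feature_values: per-superpixel feature (color, level, normal or intensity)."""
    mu = feature_values[in_initial_region].mean()            # self-learned Gaussian mean
    sigma = feature_values[in_initial_region].std() + 1e-6   # self-learned Gaussian spread
    return np.exp(-0.5 * ((feature_values - mu) / sigma) ** 2)
```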

Finally, as shown in Fig. 9, a Bayesian framework is established to fuse the four features, giving the probability distribution of the final region obtained by the self-learning Bayesian fusion, as in formula (13):

where p(S_i = R | Obs) denotes the probability that superpixel S_i belongs to the road region and Obs denotes the observations based on these four features. Fig. 9 shows that the method completes the road detection task well.
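The body of formula (13) is not legible here; under the usual naive-Bayes assumption that the four observations are conditionally independent given the road label, the fusion can be sketched as below, with the road prior taken as an assumed parameter.

```python
import numpy as np

def fuse_features(lik_color, lik_level, lik_normal, lik_strength, prior_road=0.5):
    """Each argument: per-superpixel likelihood in [0, 1]; returns p(S_i = R | Obs)."""
    road = prior_road * lik_color * lik_level * lik_normal * lik_strength
    not_road = ((1 - prior_road) * (1 - lik_color) * (1 - lik_level) *
                (1 - lik_normal) * (1 - lik_strength))
    return road / (road + not_road + 1e-9)
```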

To demonstrate the advantages of the method, it is tested on the ROAD-KITTI benchmark using datasets from three different environments: marked urban (Urban Marked, UM), multiple-marked urban (Urban Multiple Marked, UMM), and unmarked urban (Urban Unmarked, UU). Six metrics are analyzed: maximum F-measure (MaxF), average precision (AP), precision (PRE), recall (REC), false positive rate (FPR), and false negative rate (FNR). In addition, a comparative experiment is conducted against MixedCRF, the published method that so far achieves the best laser-based results on the ROAD-KITTI benchmark, and against the fusion method RES3D-Velo; the comparison results are given in Tables 1 to 4.
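For reference, the sketch below computes the listed metrics from pixel-wise confusion counts on a BEV ground-truth mask; MaxF is the maximum of the F-measure over confidence thresholds, and AP additionally requires averaging precision over the full precision-recall curve, so neither is obtained from a single operating point.

```python
def road_metrics(tp, fp, tn, fn):
    pre = tp / (tp + fp + 1e-9)                    # precision (PRE)
    rec = tp / (tp + fn + 1e-9)                    # recall (REC)
    f = 2 * pre * rec / (pre + rec + 1e-9)         # F-measure at this threshold
    fpr = fp / (fp + tn + 1e-9)                    # false positive rate (FPR)
    fnr = fn / (fn + tp + 1e-9)                    # false negative rate (FNR)
    return {"F": f, "PRE": pre, "REC": rec, "FPR": fpr, "FNR": fnr}
```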

Table 1 gives the comparative experimental results of the present method (Ours Test), MixedCRF, and RES3D-Velo on the UM dataset:

Table 1. Comparative experimental results on the UM dataset.

Table 2 gives the comparative experimental results of the present method (Ours Test), MixedCRF, and RES3D-Velo on the UMM dataset:

Table 2. Comparative experimental results on the UMM dataset.

Table 3 gives the comparative experimental results of the present method (Ours Test), MixedCRF, and RES3D-Velo on the UU dataset:

Table 3. Comparative experimental results on the UU dataset.

Table 4 compares the averages of the results of the present method (Ours Test), MixedCRF, and RES3D-Velo on the URBAN dataset (UM, UMM, and UU considered together):

Table 4. Comparative experimental results on the URBAN dataset.

MixedCRF is a method that requires training; the present method reaches similar accuracy without any training and achieves the highest score on the AP metric, which demonstrates its superiority.

To show the superiority of the self-learning Bayesian fusion adopted by the method, the same three ROAD-KITTI benchmark datasets, marked urban (UM), multiple-marked urban (UMM), and unmarked urban (UU), and the same six metrics (MaxF, AP, PRE, REC, FPR, FNR) are used to compare the accuracy of the initial candidate region obtained from the ray feature alone (Initial), the color feature (Color), the intensity feature (Strength), the level feature, and the normal-vector feature (Normal) against the accuracy of the Bayesian fusion (Fusion); the comparison results are given in Tables 5 to 8.

Table 5 compares the initial candidate region obtained from the ray feature alone (Initial), the color feature (Color), the intensity feature (Strength), the level feature, and the normal-vector feature (Normal) with the Bayesian fusion (Fusion) on the UM dataset:

Table 5. Comparison on UM Training Set (BEV).

Table 6 compares the initial candidate region obtained from the ray feature alone (Initial), the color feature (Color), the intensity feature (Strength), the level feature, and the normal-vector feature (Normal) with the Bayesian fusion (Fusion) on the UMM dataset:

Table 6. Comparison on UMM Training Set (BEV).

Table 7 compares the initial candidate region obtained from the ray feature alone (Initial), the color feature (Color), the intensity feature (Strength), the level feature, and the normal-vector feature (Normal) with the Bayesian fusion (Fusion) on the UU dataset:

Table 7. Comparison on UU Training Set (BEV).

Table 8 compares the averages of the results of the initial candidate region (Initial), the color feature (Color), the intensity feature (Strength), the level feature, and the normal-vector feature (Normal) with the Bayesian fusion (Fusion) on the UM, UMM, and UU datasets:

Table 8. Comparison on URBAN Training Set (BEV).

Tables 4 and 8 show that the road drivable-area detection method based on the fusion of monocular vision and lidar achieves the current highest AP, the most important metric for evaluating detection methods, and also performs well on the other metrics, so the method is suitable for practical application.

The above is a further detailed description of the present invention in conjunction with specific preferred embodiments, and the specific embodiments of the present invention are not limited thereto. Persons of ordinary skill in the art can make several simple deductions or substitutions without departing from the concept of the present invention, and all of these should be regarded as falling within the scope of patent protection determined by the submitted claims.

Claims (5)

1. A road drivable-area detection method based on the fusion of monocular vision and laser radar, characterized in that:
First, the method fuses superpixels with laser point cloud data, projecting the point cloud data onto the superpixel-segmented picture according to the laser calibration parameters;
Secondly, the spatial relationships of the points are found with triangulation; an undirected graph is built from the resulting spatial-relationship triangles, the normal vector of each point is computed, and obstacle points are classified according to the undirected graph;
Then, the initial candidate area of the road area is found using a method based on minimum filtering, further reducing the detection range of the road area; a new feature (level) is defined to quantify the drivability of each point in terms of depth information; in addition, the fusion method also uses an unsupervised fusion method, namely a self-learning Bayesian framework, which fuses the probability information of the candidate road area learned from each feature, namely the color feature, the level feature, the normal-vector feature, and the intensity feature.
2. The road drivable-area detection method based on the fusion of monocular vision and laser radar according to claim 1, wherein the fusion of superpixels with laser point cloud data comprises the following steps:
The picture collected by the camera is segmented into N superpixels using linear iterative clustering. Each superpixel p_c = (x_c, y_c, z_c, 1)^T contains several pixels, where x_c, y_c, z_c denote the mean, over all pixels in the superpixel, of their positions in the camera coordinate system; at the same time, the RGB of these pixels is unified to the average RGB of all pixels in the superpixel. Using the calibration technique, every point p_l = (x_l, y_l, z_l, 1)^T obtained by the laser radar is projected onto the superpixel-segmented picture. During the fusion of the laser point cloud with the image, a co-point constraint is proposed: only the laser points near the superpixel edges are retained, finally giving the point set {P_i}, where P_i = (x_i, y_i, z_i, u_i, v_i), x_i, y_i, z_i denote the position of the point in the laser coordinate system, and (u_i, v_i) denotes the corresponding position of the point in the camera coordinate system.
3. The road drivable-area detection method based on the fusion of monocular vision and laser radar according to claim 1, characterized in that the initial candidate area of the road area is found with a method based on minimum filtering; the specific steps are as follows:
First, the initial candidate area of the road area is defined, wherein S_i denotes the set of all pixels contained in superpixel S_i; I_DRM is defined as the "direction ray map", and the (u_i, v_i) coordinates of the points P_i = (x_i, y_i, z_i, u_i, v_i) are transformed to polar coordinates whose origin is the center point (P_base) of the bottommost row of the picture, so that the points of the h-th angle form a proper subset of the point set (u_i, v_i), in which the i-th point is assigned to the h-th angle, together with the set of obstacle points within it; the calculation proceeds as follows:
1) I_DRM is initialized as an all-zero matrix of the same size as the original input picture;
2) for the point set of the h-th angle, the set of obstacle points in it is found;
3) if the set of obstacle points is non-empty, the corresponding container is constructed from it; otherwise, it is constructed from the full point set of that angle;
4) the container is written into I_DRM;
5) if h = H, terminate; otherwise, return to step 2);
The remaining problem is "ray leakage"; subsequent processing with minimum filtering yields the desired I_DRM, from which the initial candidate area is finally obtained.
4. The road drivable-area detection method based on the fusion of monocular vision and laser radar according to claim 1, characterized in that a new feature (level) is defined; the level feature represents the drivability of the corresponding point, and the calculation proceeds as follows:
1) the level feature of every point in the point set of the h-th angle is initialized;
2) for the i-th point in the set, its level feature is updated if the stated condition holds;
3) if i ≤ N(h), return to step 2);
4) if h = H, terminate; otherwise, return to step 1);
The per-point level features are then combined with the superpixels to obtain the level feature L(S_i) of superpixel S_i.
5. The road drivable-area detection method based on the fusion of monocular vision and laser radar according to claim 1, characterized in that the fusion uses a self-learning Bayesian framework; the specific steps are as follows:
Self-learning is carried out with four features: the color feature, the level feature, the normal-vector feature, and the intensity feature. First, starting from the initial candidate area, the probability of the candidate area is learned without supervision from each of the four features;
For a superpixel S_i in the initial candidate area, the RGB value of each pixel P_i = (x_i, y_i, z_i, u_i, v_i) contained in S_i has been unified; the color feature is self-learned using the Gaussian parameters μ_c and σ_c, with the following formula:
The formula for self-learning the level feature L(S_i) of superpixel S_i using the Gaussian parameters μ_l and σ_l is:
The formula for self-learning the normal-vector feature N(S_i) of superpixel S_i using the Gaussian parameters μ_n and σ_n is:
Sg(S_i) is defined as the number of rays passing through superpixel S_i; the formula for self-learning the intensity feature Sg(S_i) of superpixel S_i is:
Finally, a Bayesian framework is established to fuse the four features, with the following formula:
where p(S_i = R | Obs) denotes the probability that superpixel S_i belongs to the road area, and Obs denotes the observations based on these four features.
CN201710283453.3A 2017-04-26 2017-04-26 A road drivable area detection method based on the fusion of monocular vision and lidar Active CN107167811B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710283453.3A CN107167811B (en) 2017-04-26 2017-04-26 A road drivable area detection method based on the fusion of monocular vision and lidar

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710283453.3A CN107167811B (en) 2017-04-26 2017-04-26 A road drivable area detection method based on the fusion of monocular vision and lidar

Publications (2)

Publication Number Publication Date
CN107167811A true CN107167811A (en) 2017-09-15
CN107167811B CN107167811B (en) 2019-05-03

Family

ID=59813240

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710283453.3A Active CN107167811B (en) 2017-04-26 2017-04-26 A road drivable area detection method based on the fusion of monocular vision and lidar

Country Status (1)

Country Link
CN (1) CN107167811B (en)

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107992850A (en) * 2017-12-20 2018-05-04 大连理工大学 A Classification Method of 3D Color Point Cloud in Outdoor Scenes
CN108509918A (en) * 2018-04-03 2018-09-07 中国人民解放军国防科技大学 Target detection and tracking method fusing laser point cloud and image
CN108519773A (en) * 2018-03-07 2018-09-11 西安交通大学 A path planning method for unmanned vehicles in a structured environment
CN108932475A (en) * 2018-05-31 2018-12-04 中国科学院西安光学精密机械研究所 Three-dimensional target identification system and method based on laser radar and monocular vision
CN109239727A (en) * 2018-09-11 2019-01-18 北京理工大学 A kind of distance measuring method of combination solid-state face battle array laser radar and double CCD cameras
CN109358335A (en) * 2018-09-11 2019-02-19 北京理工大学 A ranging device combining solid-state area array lidar and dual CCD cameras
CN109444911A (en) * 2018-10-18 2019-03-08 哈尔滨工程大学 A kind of unmanned boat waterborne target detection identification and the localization method of monocular camera and laser radar information fusion
CN109543600A (en) * 2018-11-21 2019-03-29 成都信息工程大学 A kind of realization drivable region detection method and system and application
CN109696173A (en) * 2019-02-20 2019-04-30 苏州风图智能科技有限公司 A kind of car body air navigation aid and device
CN109858460A (en) * 2019-02-20 2019-06-07 重庆邮电大学 A kind of method for detecting lane lines based on three-dimensional laser radar
CN109917419A (en) * 2019-04-12 2019-06-21 中山大学 A Dense System and Method for Depth Filling Based on LiDAR and Image
CN110378196A (en) * 2019-05-29 2019-10-25 电子科技大学 A kind of road vision detection method of combination laser point cloud data
CN110488320A (en) * 2019-08-23 2019-11-22 南京邮电大学 A method of vehicle distances are detected using stereoscopic vision
CN110738223A (en) * 2018-07-18 2020-01-31 郑州宇通客车股份有限公司 Point cloud data clustering method and device for laser radars
CN110781720A (en) * 2019-09-05 2020-02-11 国网江苏省电力有限公司 Object identification method based on image processing and multi-sensor fusion
CN111104849A (en) * 2018-10-29 2020-05-05 安波福技术有限公司 Automatic annotation of environmental features in maps during vehicle navigation
CN111582280A (en) * 2020-05-11 2020-08-25 吉林省森祥科技有限公司 Deep data fusion image segmentation method for multispectral rescue robot
CN111898687A (en) * 2020-08-03 2020-11-06 成都信息工程大学 A Radar Reflectance Data Fusion Method Based on Dillonie Triangulation
CN112567259A (en) * 2018-08-16 2021-03-26 标致雪铁龙汽车股份有限公司 Method for determining a confidence index associated with an object detected by a sensor in the environment of a motor vehicle
CN112633326A (en) * 2020-11-30 2021-04-09 电子科技大学 Unmanned aerial vehicle target detection method based on Bayesian multi-source fusion
CN112749662A (en) * 2021-01-14 2021-05-04 东南大学 Method for extracting travelable area in unstructured environment based on laser radar
CN113284163A (en) * 2021-05-12 2021-08-20 西安交通大学 Three-dimensional target self-adaptive detection method and system based on vehicle-mounted laser radar point cloud
CN113421217A (en) * 2020-03-02 2021-09-21 北京京东乾石科技有限公司 Method and device for detecting travelable area
CN114898321A (en) * 2022-06-02 2022-08-12 重庆交通职业学院 Road drivable area detection method, device, equipment, medium and system
CN115984583A (en) * 2022-12-30 2023-04-18 广州沃芽科技有限公司 Data processing method, apparatus, computer device, storage medium and program product


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030117412A1 (en) * 2001-12-21 2003-06-26 General Electric Company Method for high dynamic range image construction based on multiple images with multiple illumination intensities
CN103645480A (en) * 2013-12-04 2014-03-19 北京理工大学 Geographic and geomorphic characteristic construction method based on laser radar and image data fusion
CN103760569A (en) * 2013-12-31 2014-04-30 西安交通大学 Drivable region detection method based on laser radar
CN104569998A (en) * 2015-01-27 2015-04-29 长春理工大学 Laser-radar-based vehicle safety running region detection method and device
CN105989334A (en) * 2015-02-12 2016-10-05 中国科学院西安光学精密机械研究所 Road detection method based on monocular vision
CN106529417A (en) * 2016-10-17 2017-03-22 北海益生源农贸有限责任公司 Visual and laser data integrated road detection method

Cited By (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107992850A (en) * 2017-12-20 2018-05-04 大连理工大学 A Classification Method of 3D Color Point Cloud in Outdoor Scenes
CN108519773A (en) * 2018-03-07 2018-09-11 西安交通大学 A path planning method for unmanned vehicles in a structured environment
CN108509918A (en) * 2018-04-03 2018-09-07 中国人民解放军国防科技大学 Target detection and tracking method fusing laser point cloud and image
CN108509918B (en) * 2018-04-03 2021-01-08 中国人民解放军国防科技大学 Target detection and tracking method fusing laser point cloud and image
CN108932475A (en) * 2018-05-31 2018-12-04 中国科学院西安光学精密机械研究所 Three-dimensional target identification system and method based on laser radar and monocular vision
CN108932475B (en) * 2018-05-31 2021-11-16 中国科学院西安光学精密机械研究所 Three-dimensional target identification system and method based on laser radar and monocular vision
CN110738223B (en) * 2018-07-18 2022-04-08 宇通客车股份有限公司 Point cloud data clustering method and device of laser radar
CN110738223A (en) * 2018-07-18 2020-01-31 郑州宇通客车股份有限公司 Point cloud data clustering method and device for laser radars
CN112567259B (en) * 2018-08-16 2024-02-02 标致雪铁龙汽车股份有限公司 Method for determining a confidence index associated with an object detected by a sensor in the environment of a motor vehicle
CN112567259A (en) * 2018-08-16 2021-03-26 标致雪铁龙汽车股份有限公司 Method for determining a confidence index associated with an object detected by a sensor in the environment of a motor vehicle
CN109358335A (en) * 2018-09-11 2019-02-19 北京理工大学 A ranging device combining solid-state area array lidar and dual CCD cameras
CN109239727B (en) * 2018-09-11 2022-08-05 北京理工大学 A ranging method combining solid-state area array lidar and dual CCD cameras
CN109239727A (en) * 2018-09-11 2019-01-18 北京理工大学 A kind of distance measuring method of combination solid-state face battle array laser radar and double CCD cameras
CN109444911B (en) * 2018-10-18 2023-05-05 哈尔滨工程大学 A method for detecting, identifying and locating unmanned surface targets based on monocular camera and lidar information fusion
CN109444911A (en) * 2018-10-18 2019-03-08 哈尔滨工程大学 A kind of unmanned boat waterborne target detection identification and the localization method of monocular camera and laser radar information fusion
US12140446B2 (en) 2018-10-29 2024-11-12 Motional Ad Llc Automatic annotation of environmental features in a map during navigation of a vehicle
CN111104849B (en) * 2018-10-29 2022-05-31 动态Ad有限责任公司 Automatic annotation of environmental features in a map during navigation of a vehicle
CN111104849A (en) * 2018-10-29 2020-05-05 安波福技术有限公司 Automatic annotation of environmental features in maps during vehicle navigation
US11774261B2 (en) 2018-10-29 2023-10-03 Motional Ad Llc Automatic annotation of environmental features in a map during navigation of a vehicle
CN109543600A (en) * 2018-11-21 2019-03-29 成都信息工程大学 A kind of realization drivable region detection method and system and application
CN109858460A (en) * 2019-02-20 2019-06-07 重庆邮电大学 A kind of method for detecting lane lines based on three-dimensional laser radar
CN109858460B (en) * 2019-02-20 2022-06-10 重庆邮电大学 Lane line detection method based on three-dimensional laser radar
CN109696173A (en) * 2019-02-20 2019-04-30 苏州风图智能科技有限公司 A kind of car body air navigation aid and device
CN109917419A (en) * 2019-04-12 2019-06-21 中山大学 A Dense System and Method for Depth Filling Based on LiDAR and Image
CN109917419B (en) * 2019-04-12 2021-04-13 中山大学 Depth filling dense system and method based on laser radar and image
CN110378196B (en) * 2019-05-29 2022-08-02 电子科技大学 Road visual detection method combining laser point cloud data
CN110378196A (en) * 2019-05-29 2019-10-25 电子科技大学 A kind of road vision detection method of combination laser point cloud data
CN110488320A (en) * 2019-08-23 2019-11-22 南京邮电大学 A method of vehicle distances are detected using stereoscopic vision
CN110488320B (en) * 2019-08-23 2023-02-03 南京邮电大学 A Method of Using Stereo Vision to Detect Vehicle Distance
CN110781720A (en) * 2019-09-05 2020-02-11 国网江苏省电力有限公司 Object identification method based on image processing and multi-sensor fusion
CN110781720B (en) * 2019-09-05 2022-08-19 国网江苏省电力有限公司 Object identification method based on image processing and multi-sensor fusion
CN113421217B (en) * 2020-03-02 2025-01-07 北京京东乾石科技有限公司 Drivable area detection method and device
CN113421217A (en) * 2020-03-02 2021-09-21 北京京东乾石科技有限公司 Method and device for detecting travelable area
CN111582280A (en) * 2020-05-11 2020-08-25 吉林省森祥科技有限公司 Deep data fusion image segmentation method for multispectral rescue robot
CN111582280B (en) * 2020-05-11 2023-10-17 吉林省森祥科技有限公司 A deep data fusion image segmentation method for multispectral rescue robots
CN111898687A (en) * 2020-08-03 2020-11-06 成都信息工程大学 A Radar Reflectance Data Fusion Method Based on Dillonie Triangulation
CN112633326B (en) * 2020-11-30 2022-04-29 电子科技大学 Unmanned aerial vehicle target detection method based on Bayesian multi-source fusion
CN112633326A (en) * 2020-11-30 2021-04-09 电子科技大学 Unmanned aerial vehicle target detection method based on Bayesian multi-source fusion
CN112749662A (en) * 2021-01-14 2021-05-04 东南大学 Method for extracting travelable area in unstructured environment based on laser radar
CN112749662B (en) * 2021-01-14 2022-08-05 东南大学 A lidar-based method for extracting drivable areas in unstructured environments
CN113284163B (en) * 2021-05-12 2023-04-07 西安交通大学 Three-dimensional target self-adaptive detection method and system based on vehicle-mounted laser radar point cloud
CN113284163A (en) * 2021-05-12 2021-08-20 西安交通大学 Three-dimensional target self-adaptive detection method and system based on vehicle-mounted laser radar point cloud
CN114898321A (en) * 2022-06-02 2022-08-12 重庆交通职业学院 Road drivable area detection method, device, equipment, medium and system
CN114898321B (en) * 2022-06-02 2024-10-08 重庆交通职业学院 Road drivable area detection method, device, equipment, medium and system
CN115984583A (en) * 2022-12-30 2023-04-18 广州沃芽科技有限公司 Data processing method, apparatus, computer device, storage medium and program product
CN115984583B (en) * 2022-12-30 2024-02-02 广州沃芽科技有限公司 Data processing method, apparatus, computer device, storage medium, and program product

Also Published As

Publication number Publication date
CN107167811B (en) 2019-05-03

Similar Documents

Publication Publication Date Title
CN107167811A (en) The road drivable region detection method merged based on monocular vision with laser radar
CN111626217B (en) Target detection and tracking method based on two-dimensional picture and three-dimensional point cloud fusion
CN110264468B (en) Point cloud data labeling, segmentation model determination, target detection methods and related equipment
Caltagirone et al. Fast LIDAR-based road detection using fully convolutional neural networks
CN113506318B (en) Three-dimensional target perception method under vehicle-mounted edge scene
Zhou et al. Self‐supervised learning to visually detect terrain surfaces for autonomous robots operating in forested terrain
CN103258203B (en) The center line of road extraction method of remote sensing image
Sarlin et al. Snap: Self-supervised neural maps for visual positioning and semantic understanding
Chen et al. Mvlidarnet: Real-time multi-class scene understanding for autonomous driving using multiple views
CN110533048A (en) The realization method and system of combination semantic hierarchies link model based on panoramic field scene perception
KR101907883B1 (en) Object detection and classification method
Ouyang et al. A cgans-based scene reconstruction model using lidar point cloud
CN113671522A (en) Dynamic environment laser SLAM method based on semantic constraint
Drobnitzky et al. Survey and systematization of 3D object detection models and methods
Yan et al. Sparse semantic map building and relocalization for UGV using 3D point clouds in outdoor environments
Kukolj et al. Road edge detection based on combined deep learning and spatial statistics of LiDAR data
Huang et al. Overview of LiDAR point cloud target detection methods based on deep learning
Laupheimer et al. The importance of radiometric feature quality for semantic mesh segmentation
Gao et al. Toward effective 3d object detection via multimodal fusion to automatic driving for industrial cyber-physical systems
Burger et al. Fast dual decomposition based mesh-graph clustering for point clouds
Delmerico et al. Building facade detection, segmentation, and parameter estimation for mobile robot stereo vision
Berrio et al. Fusing lidar and semantic image information in octree maps
Deng et al. Elc-ois: Ellipsoidal clustering for open-world instance segmentation on lidar data
Shi et al. SS-BEV: multi-camera BEV object detection based on multi-scale spatial structure understanding
Huang et al. A coarse-to-fine LiDAR-based SLAM with dynamic object removal in dense urban areas

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant