
CN117518189A - Laser radar-based camera processing method and device, electronic equipment and medium


Info

Publication number
CN117518189A
Authority
CN
China
Prior art keywords
point cloud
cloud data
obstacle
target
laser radar
Legal status
Pending
Application number
CN202311740328.2A
Other languages
Chinese (zh)
Inventor
卜言跃
马连洋
张硕
钱永强
Current Assignee
Shanghai Mooe Robot Technology Co ltd
Original Assignee
Shanghai Mooe Robot Technology Co ltd
Application filed by Shanghai Mooe Robot Technology Co., Ltd.


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/86: Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S 17/88: Lidar systems specially adapted for specific applications
    • G01S 17/93: Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S 17/931: Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

An embodiment of the invention provides a laser radar-based camera processing method and device, an electronic device, and a medium. The method determines first point cloud data corresponding to a target depth camera, where the first point cloud data are point cloud data describing an obstacle, obtained by a target device performing obstacle detection with the target depth camera in a target travelling direction; determines second point cloud data corresponding to a first laser radar, where the second point cloud data are laser point cloud data obtained by the target device performing obstacle detection with the first laser radar in the target travelling direction, the first point cloud data and the second point cloud data being acquired at the same detection time; and denoises the first point cloud data corresponding to the target depth camera according to the first point cloud data and the second point cloud data. Noise data collected by the depth camera can thus be removed, improving the detection accuracy of the depth camera.

Description

Laser radar-based camera processing method and device, electronic equipment and medium
Technical Field
The present invention relates to the field of environment perception technologies, and in particular to a laser radar-based camera processing method, apparatus, electronic device, and medium.
Background
Depth cameras are widely used in fields such as obstacle detection; in autonomous driving scenarios in particular, they play an important role in detecting obstacles in the travelling direction. For example, point cloud data obtained with a depth camera can be used to perceive the environment, and the point cloud data can then be used to detect obstacles in the target travelling direction. However, when a depth camera is used in daylight, a large number of point cloud noise points appear, so that the nature of the point cloud data collected by the depth camera cannot be determined, which degrades the accuracy of obstacle detection.
Disclosure of Invention
The invention provides a laser radar-based camera processing method, device, electronic device, and medium, which remove noise data collected by a depth camera and improve the detection accuracy of the depth camera.
In a first aspect, an embodiment of the present invention provides a laser radar-based camera processing method, the method including:
determining first point cloud data corresponding to a target depth camera, where the target depth camera is arranged on a target device and is tilted toward the target ground on which the target device operates for detection, and the first point cloud data are point cloud data describing an obstacle, obtained by the target device performing obstacle detection with the target depth camera in a target travelling direction;
determining second point cloud data corresponding to a first laser radar, where the first laser radar is arranged on the target device at a first height above the target ground and at a first distance below the target depth camera, the first height being smaller than a preset height threshold and the first distance being smaller than a first preset distance threshold, the first laser radar detects along a direction parallel to the target ground on which the target device operates, the second point cloud data are laser point cloud data obtained by the target device performing obstacle detection with the first laser radar in the target travelling direction, and the first point cloud data and the second point cloud data are acquired at the same detection time; and
denoising the first point cloud data corresponding to the target depth camera according to the first point cloud data and the second point cloud data.
In a second aspect, an embodiment of the present invention further provides a laser radar-based camera processing device, the device including:
a first determining module, configured to determine first point cloud data corresponding to a target depth camera, where the target depth camera is arranged on a target device and is tilted toward the target ground on which the target device operates for detection, and the first point cloud data are point cloud data describing an obstacle, obtained by the target device performing obstacle detection with the target depth camera in a target travelling direction;
a second determining module, configured to determine second point cloud data corresponding to a first laser radar, where the first laser radar is arranged on the target device at a first height above the target ground and at a first distance below the target depth camera, the first height being smaller than a preset height threshold and the first distance being smaller than a first preset distance threshold, the first laser radar detects along a direction parallel to the target ground on which the target device operates, the second point cloud data are laser point cloud data obtained by the target device performing obstacle detection with the first laser radar in the target travelling direction, and the first point cloud data and the second point cloud data are acquired at the same detection time; and
a processing module, configured to denoise the first point cloud data corresponding to the target depth camera according to the first point cloud data and the second point cloud data.
In a third aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor, to enable the at least one processor to perform the laser radar-based camera processing method of any of the embodiments described above.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable medium storing computer instructions for causing a processor to implement the laser radar-based camera processing method according to any one of the above embodiments.
In the embodiments of the invention, when the target depth camera acquires first point cloud data, second point cloud data corresponding to a first laser radar are determined. The first laser radar is arranged on the target device at a first height above the target ground and at a first distance below the target depth camera, the first height being smaller than a preset height threshold and the first distance being smaller than a first preset distance threshold. The second point cloud data are laser point cloud data obtained by the first laser radar performing obstacle detection in the target travelling direction along a direction parallel to the target ground on which the target device operates, and the first point cloud data and the second point cloud data are acquired at the same detection time. The first point cloud data corresponding to the target depth camera can then be denoised using the first point cloud data and the second point cloud data. In this scheme, the second point cloud data detected by the laser radar are used to detect and process noise points in the first point cloud data collected by the depth camera; by cross-validating against the second point cloud data detected by the laser radar, noise data produced by the depth camera can be removed with higher reliability and accuracy and a reduced amount of data processing, improving both the speed and the accuracy of denoising for the depth camera.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention, nor to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
The above and other features, advantages and aspects of embodiments of the present invention will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1 is a schematic flowchart of a laser radar-based camera processing method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the installation positions of a laser radar and a depth camera on a target device according to an embodiment of the present invention;
Fig. 3 is a schematic flowchart of another laser radar-based camera processing method according to an embodiment of the present invention;
Fig. 4 is a schematic flowchart of still another laser radar-based camera processing method according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of the detection range of a second laser radar according to an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a laser radar-based camera processing device according to an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of an electronic device for implementing a laser radar-based camera processing method according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the invention are shown in the drawings, it should be understood that the invention may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the invention will be understood more thoroughly and completely. It should be understood that the drawings and embodiments of the invention are for illustration purposes only and are not intended to limit the scope of the present invention.
It should be understood that the various steps recited in the method embodiments of the present invention may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the invention is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., including, but not limited to. The term "based on" is based at least in part on. The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments. Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like herein are merely used for distinguishing between different devices, modules, or units and not for limiting the order or interdependence of the functions performed by such devices, modules, or units.
It should be noted that the modifiers "one" and "a plurality" in this disclosure are intended to be illustrative rather than limiting, and those skilled in the art will appreciate that they should be construed as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the devices in the embodiments of the present invention are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Fig. 1 is a schematic flowchart of a laser radar-based camera processing method according to an embodiment of the present invention. The embodiment is suitable for denoising point cloud data detected by a depth camera. The method may be performed by a laser radar-based camera processing device, which may be implemented in software and/or hardware and is generally integrated on any electronic device with a network communication function, such as a mobile terminal, a PC, or a server.
As shown in Fig. 1, the laser radar-based camera processing method according to the embodiment of the invention includes the following steps:
s110, determining first point cloud data corresponding to a target depth camera, wherein the target depth camera is configured on target equipment, the target depth camera deflects to the target ground operated by the target equipment to detect, and the first point cloud data is point cloud data for describing an obstacle, which is obtained by the target equipment performing obstacle detection by adopting the target depth camera in the target travelling direction.
The target device may be any machine capable of moving autonomously, such as an unmanned forklift, an intelligent warehouse forklift, an autonomous forklift, an outdoor carrier vehicle, or a logistics transport robot.
Referring to Fig. 2, to enable the target device to detect obstacles while travelling, a depth camera and laser radars are mounted in advance to detect obstacles in the travelling direction, so that the target device can use them to detect in real time whether an obstacle blocks travel in the travelling direction and perform an obstacle avoidance operation. The obstacle avoidance operation includes instructing the target device to continue moving along the travelling direction or re-planning the travelling path to avoid the obstacle.
Optionally, the depth camera and the laser radars are mounted on the target device by rigid connection. Rigid connection means that when one sensor is displaced or stressed, the other sensors connected to it are not displaced or deformed relative to it; that is, the sensors move as a single unit.
Referring to Fig. 2, a target depth camera 210 is arranged on the target device, and the detection angle of the target depth camera 210 is tilted toward the target ground on which the target device operates. While the target device moves along the target travelling direction, it may use the target depth camera 210, tilted toward the target ground, to detect obstacles, obtaining first point cloud data describing obstacles in the target travelling direction, so as to determine whether an obstacle in the target travelling direction blocks travel. The target ground is the actual ground.
As an optional but non-limiting implementation, determining the first point cloud data corresponding to the target depth camera includes the following steps A1-A2:
Step A1: controlling the target device to move on the target ground in the target travelling direction, and collecting data with the target depth camera tilted toward the target ground on which the target device operates, to obtain third point cloud data.
Step A2: obtaining the first point cloud data describing the obstacle by segmenting and clustering the third point cloud data.
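The patent does not name specific segmentation or clustering algorithms for step A2; the sketch below is a minimal illustration assuming a simple ground-height threshold for segmentation and Euclidean (DBSCAN) clustering, with all function names and parameter values hypothetical.

    import numpy as np
    from sklearn.cluster import DBSCAN

    def extract_obstacle_clusters(third_cloud, ground_tol=0.05, eps=0.3, min_samples=5):
        """third_cloud: (N, 3) depth camera points in the vehicle frame, z up,
        with z = 0 on the target ground."""
        # Segmentation: discard points lying on or very near the target ground.
        non_ground = third_cloud[third_cloud[:, 2] > ground_tol]
        if len(non_ground) == 0:
            return []
        # Clustering: group the remaining points into obstacle candidates; each
        # cluster is one candidate set of first point cloud data.
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(non_ground)
        return [non_ground[labels == k] for k in sorted(set(labels)) if k != -1]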
S120, determining second point cloud data corresponding to a first laser radar, where the first laser radar is arranged on the target device at a first height above the target ground and at a first distance below the target depth camera, the first height being smaller than a preset height threshold and the first distance being smaller than a first preset distance threshold, the first laser radar detects along a direction parallel to the target ground on which the target device operates, the second point cloud data are laser point cloud data obtained by the target device performing obstacle detection with the first laser radar in the target travelling direction, and the first point cloud data and the second point cloud data are acquired at the same detection time.
When the target depth camera is used in daylight, large numbers of point cloud noise points appear, which affects the accuracy of obstacle detection by the target depth camera. Noise points are usually meaningless, irregular points and may be caused by erroneous sensor measurements, environmental interference, and the like. When processing point cloud data, these noise points need to be removed to improve data quality and the effectiveness of subsequent processing.
Referring to Fig. 2, a first laser radar 220 is also mounted on the target device. The first laser radar 220 is arranged at a first height above the target ground and at a first distance below the target depth camera, with the first height smaller than a preset height threshold and the first distance smaller than a first preset distance threshold, so that the mounting height of the first laser radar above the target ground is known.
S130, denoising the first point cloud data corresponding to the target depth camera according to the first point cloud data and the second point cloud data.
In this technical scheme, after the second point cloud data collected by the first laser radar are acquired, the obstacle suspension height corresponding to the first point cloud data is derived from the distance relationship between the second point cloud data and the first point cloud data; the obstacle suspension height is the height of the obstacle above the target ground. The admissible suspension height can be preset according to the obstacles expected in the operating scene of the target device. In one implementation of this scheme, the suspension height of obstacles in the scene does not exceed 2 m; if the suspension height exceeds 2 m, the first point cloud data of the target depth camera are considered noise data. Accordingly, after the obstacle suspension height described by the first point cloud data is determined, whether that height is plausible is further judged, and implausible first point cloud data are removed as noise data.
Optionally, when calculating the obstacle suspension height, the height of the obstacle detected by the depth camera relative to the first laser radar can be determined from the first point cloud data and the second point cloud data, and the suspension height of the obstacle above the target ground is then obtained by adding the first height of the first laser radar above the target ground. Implausible first point cloud data can then be removed as noise data according to this suspension height.
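A minimal sketch of this height bookkeeping, assuming the camera's obstacle points have already been transformed into the first laser radar's coordinate frame; the 0.3 m mounting height is a hypothetical first-height value, and the 2 m bound is the example limit quoted above.

    import numpy as np

    FIRST_LIDAR_HEIGHT = 0.3     # assumed mounting height of the first laser radar (m)
    MAX_SUSPENSION_HEIGHT = 2.0  # suspension heights above 2 m are treated as noise

    def suspension_height_above_ground(obstacle_points):
        """obstacle_points: (N, 3) camera obstacle points in the first laser
        radar's frame, z up, with z = 0 at the radar's scan plane."""
        # Height of the obstacle's lowest point relative to the first laser
        # radar, plus the radar's known height above the target ground.
        return obstacle_points[:, 2].min() + FIRST_LIDAR_HEIGHT

    def is_noise(obstacle_points):
        return suspension_height_above_ground(obstacle_points) > MAX_SUSPENSION_HEIGHT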
This technical scheme exploits the fact that the first laser radar and the depth camera detect the same obstacle at the same position: the second point cloud data detected by the laser radar are used to detect and remove noise points in the first point cloud data collected by the depth camera. Cross-validating against the second point cloud data removes the noise data collected by the depth camera with higher reliability and accuracy and a reduced amount of data processing, and improves the speed and accuracy of denoising for the depth camera.
Fig. 3 is a schematic flowchart of another laser radar-based camera processing method according to an embodiment of the present invention. This embodiment further optimizes the process of denoising the first point cloud data corresponding to the target depth camera according to the first point cloud data and the second point cloud data, on the basis of the foregoing embodiments, and may be combined with each of the alternatives in one or more of the foregoing embodiments.
As shown in Fig. 3, the laser radar-based camera processing method according to the embodiment of the invention includes the following steps:
s310, determining first point cloud data corresponding to a target depth camera, wherein the target depth camera is configured on target equipment, the target depth camera deflects to the target ground operated by the target equipment to detect, and the first point cloud data is point cloud data for describing an obstacle, which is obtained by the target equipment performing obstacle detection by adopting the target depth camera in the target travelling direction.
S320, determining second point cloud data corresponding to a first laser radar, wherein the first laser radar is configured at a first height position on target equipment, which is away from the target ground, and is positioned at a first distance position below a target depth camera, the first height is smaller than a preset height threshold value, the first distance is smaller than a first preset distance threshold value, the first laser radar detects along a direction parallel to the target ground operated by the target equipment, the second point cloud data is laser point cloud data obtained by performing obstacle detection by the first laser radar on the target equipment in the target advancing direction, and the first point cloud data and the second point cloud data are obtained at the same detection time.
S330, determining the convex hull contour of the obstacle, obtained by clustering the first point cloud data alone or by clustering the first point cloud data and the second point cloud data together, and searching the second point cloud data for fourth point cloud data, where the fourth point cloud data are the point cloud data formed by the laser points in the second point cloud data whose distance from the convex hull contour of the obstacle is smaller than a second preset distance threshold.
S340, in the case that fourth point cloud data are found, determining the obstacle suspension height corresponding to the first point cloud data according to the first point cloud data and the fourth point cloud data, where the obstacle suspension height is the height of the obstacle above the target ground.
The point cloud data corresponding to the target depth camera and describing an obstacle A are determined; the point cloud data of obstacle A can be obtained by clustering the first point cloud data collected by the depth camera alone, or by clustering the first point cloud data collected by the depth camera together with the second point cloud data collected by the first laser radar. The laser points described by the second point cloud data that lie within a second preset distance threshold around the convex hull contour of obstacle A (a distance of 3 m was chosen when this technical scheme was implemented) are taken as the fourth point cloud data. Screening out the second point cloud data around the obstacle through the obstacle's convex hull contour as the fourth point cloud data, and discarding the second point cloud data outside the preset distance range around the convex hull contour, reduces the amount of computation in the suspension distance calculation and speeds up noise removal from the depth camera's point cloud data; the suspension height of the obstacle corresponding to the first point cloud data is then calculated from the fourth point cloud data and the first point cloud data.
If the first point cloud data are noise data produced by the depth camera, there is no laser point cloud within a certain distance of the obstacle described by the first point cloud data, so the calculated suspension distance can be arbitrarily large (effectively infinite). If the first point cloud data are an obstacle point cloud produced by the depth camera, the obstacle is detected by the depth camera and the first laser radar simultaneously, and the calculated suspension distance falls within a reasonable range.
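A sketch of the fourth point cloud search in S330, under two simplifying assumptions: the convex hull contour is computed on the (x, y) projection of the clustered obstacle, and the distance to the contour is approximated by the distance to the nearest hull vertex. The 3 m radius is the second preset distance threshold quoted above.

    import numpy as np
    from scipy.spatial import ConvexHull, cKDTree

    SECOND_DIST_THRESHOLD = 3.0  # m, second preset distance threshold

    def find_fourth_cloud(first_cloud, second_cloud):
        """first_cloud: (N, 3) clustered camera obstacle points; second_cloud:
        (M, 3) first laser radar points; both in the same coordinate frame."""
        # Convex hull contour of the obstacle in the ground plane
        # (requires at least three non-collinear points).
        hull = ConvexHull(first_cloud[:, :2])
        hull_vertices = first_cloud[hull.vertices, :2]
        # Laser points closer to the contour (approximated by its nearest
        # vertex) than the threshold form the fourth point cloud data.
        dist, _ = cKDTree(hull_vertices).query(second_cloud[:, :2])
        return second_cloud[dist < SECOND_DIST_THRESHOLD]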
As an optional but non-limiting implementation, determining the suspension height of the obstacle corresponding to the first point cloud data according to the first point cloud data and the fourth point cloud data includes the following steps C1-C2:
Step C1: determining a target distance corresponding to each laser point in the fourth point cloud data, where the target distance is the shortest distance from that laser point to the obstacle points of the obstacle described by the first point cloud data.
Step C2: determining the suspension height of the obstacle corresponding to the first point cloud data according to the target distances corresponding to the laser points in the fourth point cloud data.
Specifically, the shortest distance from each laser point in the fourth point cloud data to the obstacle points of the obstacle described by the first point cloud data is calculated, and the maximum of these shortest distances is taken as the suspension distance; that is, the maximum of the distances from the laser points in the fourth point cloud data to the obstacle described by the first point cloud data is taken as the suspension height of the obstacle.
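A sketch of steps C1-C2, together with the threshold check of steps D1-D2 below, assuming both clouds share one coordinate frame; the 2 m default stands in for the preset suspension height condition.

    import numpy as np
    from scipy.spatial import cKDTree

    def suspension_height(first_cloud, fourth_cloud):
        """Step C1: shortest distance from each laser point in the fourth cloud
        to the obstacle points; step C2: their maximum is the suspension height."""
        dist, _ = cKDTree(first_cloud).query(fourth_cloud)
        return dist.max()

    def keep_first_cloud(first_cloud, fourth_cloud, max_height=2.0):
        """Steps D1-D2: keep the camera cloud only if its suspension height is
        plausible; an empty fourth cloud means no lidar support, i.e. noise."""
        if fourth_cloud is None or len(fourth_cloud) == 0:
            return False
        return suspension_height(first_cloud, fourth_cloud) <= max_height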
As an optional but non-limiting implementation, denoising the first point cloud data corresponding to the target depth camera according to the first point cloud data and the second point cloud data further includes the following step:
in the case that no fourth point cloud data are found, directly removing the first point cloud data corresponding to the target depth camera as noise data.
S350, denoising the first point cloud data corresponding to the target depth camera according to the obstacle suspension height.
As an optional but non-limiting implementation, denoising the first point cloud data corresponding to the target depth camera according to the obstacle suspension height includes the following steps D1-D2:
Step D1: if the obstacle suspension height meets a preset suspension height condition, determining that the first point cloud data corresponding to the target depth camera are not noise data.
Step D2: if the obstacle suspension height does not meet the preset suspension height condition, removing the first point cloud data corresponding to the target depth camera as noise data.
This technical scheme exploits the fact that the first laser radar and the depth camera can detect the same obstacle at the same position: the second point cloud data detected by the laser radar are used to detect and remove noise points in the first point cloud data collected by the depth camera, and cross-validation among multiple sensors avoids the false detections and unsatisfactory denoising that a single sensor is prone to. Determining camera noise points by examining the obstacle suspension distance, with cross-checks among the sensors on the target device, yields higher reliability and accuracy, reduces the amount of data processing, and improves the speed and accuracy of denoising for the depth camera.
Fig. 4 is a schematic flowchart of still another laser radar-based camera processing method according to an embodiment of the present invention. This embodiment further optimizes the process of denoising the first point cloud data corresponding to the target depth camera according to the first point cloud data and the second point cloud data, on the basis of the foregoing embodiments, and may be combined with each of the alternatives in one or more of the foregoing embodiments.
As shown in Fig. 4, the laser radar-based camera processing method according to the embodiment of the invention includes the following steps:
s410, determining first point cloud data corresponding to a target depth camera, wherein the target depth camera is configured on target equipment, the target depth camera deflects to the target ground operated by the target equipment to detect, and the first point cloud data is point cloud data for describing an obstacle, which is obtained by the target equipment performing obstacle detection by adopting the target depth camera in the target travelling direction.
S420, determining second point cloud data corresponding to a first laser radar, wherein the first laser radar is configured at a first height position on target equipment, which is away from the target ground, and is positioned at a first distance position below a target depth camera, the first height is smaller than a preset height threshold value, the first distance is smaller than a first preset distance threshold value, the first laser radar detects along a direction parallel to the target ground operated by the target equipment, the second point cloud data is laser point cloud data obtained by performing obstacle detection by the first laser radar on the target equipment in the target advancing direction, and the first point cloud data and the second point cloud data are obtained at the same detection time.
S430, determining convex hull contours of the obstacles, which are obtained by adopting independent clustering of the first point cloud data or adopting common clustering of the first point cloud data and the second point cloud data, and searching fourth point cloud data in the second point cloud data, wherein the fourth point cloud data is point cloud data which is formed by laser points, which are located in the second point cloud data and are located in the convex hull contours of the obstacles, and are located away from the obstacles, the convex hull contours of the obstacles are smaller than a second preset distance threshold.
S440, detecting whether the obstacle corresponding to each set of first point cloud data is within the detection range of a second laser radar, where the second laser radar is arranged on the target device at a second distance above the target depth camera, the second laser radar detects along a direction parallel to the target ground on which the target device operates, and the second distance is smaller than a third preset distance threshold.
Referring to Fig. 2, a second laser radar 230 is also mounted on the target device. The second laser radar 230 is arranged on the target device at a second distance above the target depth camera and detects along a direction parallel to the target ground on which the target device operates, the second distance being smaller than a third preset distance threshold. The first laser radar may be a single-line laser, and the second laser radar may be a multi-line laser. For the case in which the first laser radar does not scan an obstacle point cloud, the first point cloud data of the target depth camera can be checked again with the second laser radar; the detection range of the second laser radar is shown in Fig. 5. Under normal conditions, if the first point cloud data detected by the target depth camera are an obstacle point cloud, they can be further denoised by judging whether the corresponding obstacle is within the detection range of the second laser radar.
S450, if the obstacle corresponding to the first point cloud data is not within the detection range of the second laser radar, determining that the first point cloud data corresponding to the target depth camera are not noise data, and keeping them.
As an optional but non-limiting implementation, detecting whether the obstacle corresponding to each set of first point cloud data is within the detection range of the second laser radar includes the following steps E1-E4:
Step E1: determining the reference obstacle point height corresponding to each set of first point cloud data, where the reference obstacle point is the highest obstacle point among the obstacle points of the obstacle described by the first point cloud data.
Step E2: determining the reference height corresponding to the second laser radar, where the reference height is the lowest height the second laser radar can detect.
Step E3: if the difference between the reference height and the reference obstacle point height is greater than a first calibration value, determining that the obstacle corresponding to the first point cloud data is not within the detection range of the second laser radar.
Step E4: if the difference between the reference height and the reference obstacle point height is not greater than the first calibration value, determining that the obstacle corresponding to the first point cloud data is within the detection range of the second laser radar.
Referring to Fig. 2, the obstacle corresponding to each set of first point cloud data is polled, and the highest obstacle point among the obstacle points of the obstacle described by the first point cloud data is found; the point with the highest vertical (Z) coordinate in the obstacle is denoted rs_top_point, and the lowest height the second laser radar can detect is denoted height_1. If height_1 - rs_top_point > first calibration value (for example, the first calibration value is set to 0.1), the obstacle corresponding to the first point cloud data is considered not to be within the detection range of the second laser radar; if height_1 - rs_top_point <= first calibration value, the obstacle corresponding to the first point cloud data is considered to be within the detection range of the second laser radar.
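A one-function sketch of this check, reusing the names from the paragraph above (rs_top_point, height_1); the 0.1 m first calibration value is the example given in the text.

    FIRST_CALIBRATION_VALUE = 0.1  # m, example fault-tolerance distance

    def in_second_lidar_range(rs_top_point_z, height_1):
        """rs_top_point_z: Z coordinate of the highest camera obstacle point;
        height_1: lowest height the second laser radar can detect."""
        # Steps E3/E4: the obstacle is out of range if the radar's detection
        # floor sits more than the calibration value above its highest point.
        return height_1 - rs_top_point_z <= FIRST_CALIBRATION_VALUE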
S460, if the obstacle corresponding to the first point cloud data is within the detection range of the second laser radar, determining fifth point cloud data obtained by the target device performing obstacle detection with the second laser radar in the target travelling direction, and denoising the first point cloud data corresponding to the target depth camera according to the fifth point cloud data, where the first point cloud data and the fifth point cloud data are acquired at the same detection time.
As an optional but non-limiting implementation, denoising the first point cloud data corresponding to the target depth camera according to the fifth point cloud data includes the following steps F1-F3:
Step F1: detecting, according to the fifth point cloud data, whether the second laser radar detects an obstacle in the area where the obstacle corresponding to the first point cloud data is located.
Step F2: if the second laser radar detects an obstacle, determining that the first point cloud data corresponding to the target depth camera are not noise data, and keeping them.
Step F3: if the second laser radar does not detect an obstacle, removing the first point cloud data corresponding to the target depth camera as noise data.
If the obstacle detected by the depth camera is not within the detection range of the second laser radar, the obstacle corresponding to the first point cloud data cannot be re-detected by the second laser radar, and the first point cloud data are kept. If the obstacle detected by the depth camera is within the detection range of the second laser radar, the fifth point cloud data collected by the second laser radar are acquired, and whether the second laser radar detects an obstacle in the area of the obstacle corresponding to the first point cloud data is determined from the fifth point cloud data; if the second laser radar also detects it, the obstacle has been re-detected by the second laser radar, and the first point cloud data are kept. If the second laser radar does not detect an obstacle in that area, the second laser radar has not acquired point cloud data describing the obstacle corresponding to the first point cloud data, which indicates that the obstacle corresponding to the first point cloud data is a noise point, and the first point cloud data are filtered out.
As an optional but non-limiting implementation, detecting, according to the fifth point cloud data, whether the second laser radar detects an obstacle in the area where the obstacle corresponding to the first point cloud data is located includes the following steps H1-H4:
Step H1: determining sixth point cloud data from the fifth point cloud data, where the obstacle points described by the sixth point cloud data are located within a fourth preset distance around the reference obstacle point, and the reference obstacle point is the highest obstacle point among the obstacle points of the obstacle described by the first point cloud data.
Step H2: determining the lowest obstacle point among the obstacle points described by the sixth point cloud data.
Step H3: if the height of the lowest obstacle point described by the sixth point cloud data is smaller than or equal to the reference obstacle point height, determining that the second laser radar can detect the obstacle in the area where the obstacle corresponding to the first point cloud data is located.
Step H4: if the height of the lowest obstacle point described by the sixth point cloud data is greater than the reference obstacle point height, determining that the second laser radar cannot detect the obstacle in the area where the obstacle corresponding to the first point cloud data is located.
Specifically, sixth point cloud data within 0.4 m around rs_top_point are searched for in the fifth point cloud data collected by the second laser radar and denoted S, and the height of the lowest obstacle point among the obstacle points described by the sixth point cloud data is calculated and denoted height_2. If height_2 <= the height of rs_top_point, the obstacle corresponding to the first point cloud data is considered to have also been detected by the second laser radar, and the first point cloud data are kept; otherwise, the obstacle corresponding to the first point cloud data has not been detected by the second laser radar, and the first point cloud data are deleted.
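A sketch of steps H1-H4 with the same names (rs_top_point, height_2); the 0.4 m radius is the fourth preset distance quoted above, and both clouds are assumed to share one coordinate frame.

    import numpy as np
    from scipy.spatial import cKDTree

    FOURTH_PRESET_DISTANCE = 0.4  # m, search radius around rs_top_point

    def second_lidar_confirms(fifth_cloud, rs_top_point):
        """fifth_cloud: (M, 3) second laser radar points; rs_top_point: (3,)
        highest camera obstacle point. True if the obstacle is re-detected."""
        # Step H1: sixth point cloud = fifth cloud points within 0.4 m of rs_top_point.
        idx = cKDTree(fifth_cloud).query_ball_point(rs_top_point, r=FOURTH_PRESET_DISTANCE)
        if not idx:
            return False  # no sixth point cloud data: the camera cloud is noise
        # Step H2: lowest of those points; steps H3/H4: compare with rs_top_point.
        height_2 = fifth_cloud[idx, 2].min()
        return height_2 <= rs_top_point[2]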
It will be appreciated that the first calibration value of 0.1 is a fault-tolerance distance set for the second laser radar (for example, if the sensor is mounted at a slight tilt, the height it detects at a given place may deviate). The fourth preset distance of 0.4 m is used because, once the camera's obstacle has been determined to lie within the detection range of the second laser radar, the second laser radar should sweep across that obstacle, so the second laser radar points within 0.4 m around the reference point are checked. Both values can be set as appropriate for the actual situation.
This technical scheme exploits the fact that the first laser radar and the depth camera can detect the same obstacle at the same position: the second point cloud data detected by the laser radar are used to detect and remove noise points in the first point cloud data collected by the depth camera, and cross-validation with the other sensors avoids the false detections and unsatisfactory denoising that a single sensor is prone to. Determining camera noise points by examining the obstacle suspension distance, with cross-checks among the sensors on the target device, yields higher reliability and accuracy, reduces the amount of data processing, and improves the speed and accuracy of denoising for the depth camera.
Fig. 6 is a schematic structural diagram of a laser radar-based camera processing device according to an embodiment of the present invention. The embodiment is suitable for denoising point cloud data detected by a depth camera. The laser radar-based camera processing device may be implemented in software and/or hardware and is generally integrated on any electronic device with a network communication function, such as a mobile terminal, a PC, or a server.
As shown in Fig. 6, the laser radar-based camera processing device according to an embodiment of the present invention includes: a first determining module 610, a second determining module 620, and a processing module 630. Wherein:
a first determining module 610, configured to determine first point cloud data corresponding to a target depth camera, where the target depth camera is arranged on a target device and is tilted toward the target ground on which the target device operates for detection, and the first point cloud data are point cloud data describing an obstacle, obtained by the target device performing obstacle detection with the target depth camera in a target travelling direction;
a second determining module 620, configured to determine second point cloud data corresponding to a first laser radar, where the first laser radar is arranged on the target device at a first height above the target ground and at a first distance below the target depth camera, the first height being smaller than a preset height threshold and the first distance being smaller than a first preset distance threshold, the first laser radar detects along a direction parallel to the target ground on which the target device operates, the second point cloud data are laser point cloud data obtained by the target device performing obstacle detection with the first laser radar in the target travelling direction, and the first point cloud data and the second point cloud data are acquired at the same detection time; and
a processing module 630, configured to denoise the first point cloud data corresponding to the target depth camera according to the first point cloud data and the second point cloud data.
On the basis of the technical solution of the foregoing embodiment, optionally, determining first point cloud data corresponding to the target depth camera includes:
controlling the target device to move on the target ground in the target travelling direction, and collecting data with the target depth camera tilted toward the target ground on which the target device operates, to obtain third point cloud data; and
obtaining the first point cloud data describing the obstacle by segmenting and clustering the third point cloud data.
On the basis of the technical solutions of the foregoing embodiments, optionally, denoising the first point cloud data corresponding to the target depth camera according to the first point cloud data and the second point cloud data includes:
determining the convex hull contour of the obstacle, obtained by clustering the first point cloud data alone or by clustering the first point cloud data and the second point cloud data together, and searching the second point cloud data for fourth point cloud data, where the fourth point cloud data are the point cloud data formed by the laser points in the second point cloud data whose distance from the convex hull contour of the obstacle is smaller than a second preset distance threshold;
in the case that fourth point cloud data are found, determining the obstacle suspension height corresponding to the first point cloud data according to the first point cloud data and the fourth point cloud data, where the obstacle suspension height is the height of the obstacle above the target ground; and
denoising the first point cloud data corresponding to the target depth camera according to the obstacle suspension height.
On the basis of the technical solution of the foregoing embodiment, optionally, determining, according to the first point cloud data and the fourth point cloud data, a suspension height of an obstacle corresponding to the first point cloud data includes:
determining a target distance corresponding to each laser point in the fourth point cloud data, where the target distance is the shortest distance from that laser point to the obstacle points of the obstacle described by the first point cloud data; and
determining the suspension height of the obstacle corresponding to the first point cloud data according to the target distances corresponding to the laser points in the fourth point cloud data.
On the basis of the technical solution of the foregoing embodiment, optionally, denoising the first point cloud data corresponding to the target depth camera according to the suspension height of the obstacle includes:
if the obstacle suspension height meets a preset suspension height condition, determining that the first point cloud data corresponding to the target depth camera are not noise data; and
if the obstacle suspension height does not meet the preset suspension height condition, removing the first point cloud data corresponding to the target depth camera as noise data.
On the basis of the technical solution of the foregoing embodiment, optionally, denoising the first point cloud data corresponding to the target depth camera according to the first point cloud data and the second point cloud data, further includes:
in the case that no fourth point cloud data are found, directly removing the first point cloud data corresponding to the target depth camera as noise data.
On the basis of the technical solution of the foregoing embodiment, optionally, denoising the first point cloud data corresponding to the target depth camera according to the first point cloud data and the second point cloud data, further includes:
detecting whether the obstacle corresponding to each set of first point cloud data is within the detection range of a second laser radar, where the second laser radar is arranged on the target device at a second distance above the target depth camera, the second laser radar detects along a direction parallel to the target ground on which the target device operates, and the second distance is smaller than a third preset distance threshold;
if the obstacle corresponding to the first point cloud data is not within the detection range of the second laser radar, determining that the first point cloud data corresponding to the target depth camera are not noise data, and keeping them; and
if the obstacle corresponding to the first point cloud data is within the detection range of the second laser radar, determining fifth point cloud data obtained by the target device performing obstacle detection with the second laser radar in the target travelling direction, and denoising the first point cloud data corresponding to the target depth camera according to the fifth point cloud data, where the first point cloud data and the fifth point cloud data are acquired at the same detection time.
On the basis of the technical solution of the foregoing embodiment, optionally, denoising the first point cloud data corresponding to the target depth camera according to the fifth point cloud data includes:
detecting, according to the fifth point cloud data, whether the second laser radar detects an obstacle in the area where the obstacle corresponding to the first point cloud data is located;
if the second laser radar detects an obstacle, determining that the first point cloud data corresponding to the target depth camera are not noise data, and keeping them; and
if the second laser radar does not detect an obstacle, removing the first point cloud data corresponding to the target depth camera as noise data.
On the basis of the technical solution of the foregoing embodiment, optionally, detecting whether the obstacle corresponding to each first point cloud data is within the detection range of the second laser radar includes:
determining the reference obstacle point height corresponding to each set of first point cloud data, where the reference obstacle point is the highest obstacle point among the obstacle points of the obstacle described by the first point cloud data;
determining the reference height corresponding to the second laser radar, where the reference height is the lowest height the second laser radar can detect;
if the difference between the reference height and the reference obstacle point height is greater than a first calibration value, determining that the obstacle corresponding to the first point cloud data is not within the detection range of the second laser radar; and
if the difference between the reference height and the reference obstacle point height is not greater than the first calibration value, determining that the obstacle corresponding to the first point cloud data is within the detection range of the second laser radar.
On the basis of the technical solution of the foregoing embodiment, optionally, detecting, according to the fifth point cloud data, whether the second lidar detects an obstacle in an area where the first point cloud data corresponds to the obstacle includes:
determining sixth point cloud data from the fifth point cloud data, where the obstacle points described by the sixth point cloud data are located within a fourth preset distance around the reference obstacle point, and the reference obstacle point is the highest obstacle point among the obstacle points of the obstacle described by the first point cloud data;
determining the lowest obstacle point among the obstacle points described by the sixth point cloud data;
if the height of the lowest obstacle point described by the sixth point cloud data is smaller than or equal to the reference obstacle point height, determining that the second laser radar can detect the obstacle in the area where the obstacle corresponding to the first point cloud data is located; and
if the height of the lowest obstacle point described by the sixth point cloud data is greater than the reference obstacle point height, determining that the second laser radar cannot detect the obstacle in the area where the obstacle corresponding to the first point cloud data is located.
According to the technical scheme provided by the embodiments of the invention, when the target depth camera acquires first point cloud data, second point cloud data corresponding to the first laser radar are determined. The first laser radar is arranged on the target device at a first height above the target ground and at a first distance below the target depth camera, the first height being smaller than a preset height threshold and the first distance being smaller than a first preset distance threshold; the second point cloud data are laser point cloud data obtained by detecting obstacles in the target travelling direction along a direction parallel to the target ground on which the target device operates, and the first point cloud data and the second point cloud data are acquired at the same detection time. The first point cloud data corresponding to the target depth camera can then be denoised using the first point cloud data and the second point cloud data: the second point cloud data detected by the laser radar are used to detect noise points in the first point cloud data collected by the depth camera and to process those noise points, and this cross-validation avoids the false detections and unsatisfactory denoising that a single sensor is prone to. At the same time, determining camera noise points by examining the obstacle suspension distance, with cross-checks among the sensors on the target device, yields higher reliability and accuracy, reduces the amount of data processing, and improves the speed and accuracy of denoising for the depth camera.
The camera processing device based on the laser radar provided by the embodiment of the invention can execute the camera processing method based on the laser radar provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of executing the camera processing method based on the laser radar.
It should be noted that each unit and module included in the above apparatus are only divided according to the functional logic, but not limited to the above division, so long as the corresponding functions can be implemented; in addition, the specific names of the functional units are also only for distinguishing from each other, and are not used to limit the protection scope of the embodiments of the present invention.
Fig. 7 is a schematic structural diagram of an electronic device (e.g., a terminal device or server) 700 suitable for implementing embodiments of the present invention. The terminal device in the embodiments of the present invention may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (e.g., car navigation terminals), and stationary terminals such as digital TVs and desktop computers. The electronic device shown in fig. 7 is only an example and should not limit the functionality and scope of use of the embodiments of the present invention.
As shown in fig. 7, the electronic device 700 may include a processing means (e.g., a central processing unit, a graphics processing unit, etc.) 701, which may perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 702 or a program loaded from a storage means 708 into a Random Access Memory (RAM) 703. The RAM 703 also stores various programs and data required for the operation of the electronic device 700. The processing device 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
In general, the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 707 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 708 including, for example, magnetic tape, hard disk, etc.; and a communication device 709. The communication means 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. While fig. 7 shows an electronic device 700 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present invention, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present invention include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing a lidar-based camera processing method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via communication device 709, or installed from storage 708, or installed from ROM 702. When executed by the processing device 701, the computer program performs the functions defined above in the laser radar-based camera processing method of the embodiment of the present invention.
The names of messages or information interacted between the devices in the embodiments of the present invention are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The electronic device provided by the embodiment of the present invention belongs to the same inventive concept as the laser radar-based camera processing method provided by the above embodiment, and technical details not described in detail in the present embodiment can be seen in the above embodiment, and the present embodiment has the same beneficial effects as the above embodiment.
An embodiment of the present invention provides a computer storage medium having stored thereon a computer program which, when executed by a processor, implements the laser radar-based camera processing method provided by the above embodiment.
The computer readable medium of the present invention may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: determine first point cloud data corresponding to a target depth camera, wherein the target depth camera is configured on target equipment and is tilted toward the target ground on which the target equipment operates for detection, and the first point cloud data is point cloud data describing an obstacle, obtained by the target equipment performing obstacle detection with the target depth camera in the target travelling direction; determine second point cloud data corresponding to a first laser radar, wherein the first laser radar is mounted on the target equipment at a first height above the target ground and at a first distance below the target depth camera, the first height is smaller than a preset height threshold, the first distance is smaller than a first preset distance threshold, the first laser radar detects along a direction parallel to the target ground on which the target equipment operates, the second point cloud data is laser point cloud data obtained by the first laser radar performing obstacle detection in the target travelling direction of the target equipment, and the first point cloud data and the second point cloud data are obtained at the same detection time; and denoise the first point cloud data corresponding to the target depth camera according to the first point cloud data and the second point cloud data.
Computer program code for carrying out operations of the present invention may be written in one or more programming languages, including, but not limited to, object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the remote computer case, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present invention may be implemented in software or in hardware. The name of the unit does not in any way constitute a limitation of the unit itself, for example the first acquisition unit may also be described as "unit acquiring at least two internet protocol addresses".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of the present invention, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The above description is only illustrative of the preferred embodiments of the present invention and of the principles of the technology employed. It will be appreciated by persons skilled in the art that the scope of the disclosure referred to in the present invention is not limited to the specific combinations of the technical features described above, but also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions in which the above features are replaced with (but not limited to) technical features having similar functions disclosed in the present invention.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the invention. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (10)

1. A laser radar-based camera processing method, the method comprising:
determining first point cloud data corresponding to a target depth camera, wherein the target depth camera is configured on target equipment and is tilted toward the target ground on which the target equipment operates for detection, and the first point cloud data is point cloud data describing an obstacle, obtained by the target equipment performing obstacle detection with the target depth camera in the target travelling direction;
determining second point cloud data corresponding to a first laser radar, wherein the first laser radar is mounted on the target equipment at a first height above the target ground and at a first distance below the target depth camera, the first height is smaller than a preset height threshold, the first distance is smaller than a first preset distance threshold, the first laser radar detects along a direction parallel to the target ground on which the target equipment operates, the second point cloud data is laser point cloud data obtained by the first laser radar performing obstacle detection in the target travelling direction of the target equipment, and the first point cloud data and the second point cloud data are obtained at the same detection time;
and denoising the first point cloud data corresponding to the target depth camera according to the first point cloud data and the second point cloud data.
2. The method of claim 1, wherein denoising the first point cloud data corresponding to the target depth camera according to the first point cloud data and the second point cloud data comprises:
determining a convex hull contour of the obstacle, obtained by clustering the first point cloud data alone or by jointly clustering the first point cloud data and the second point cloud data, and searching the second point cloud data for fourth point cloud data, wherein the fourth point cloud data is point cloud data formed by laser points in the second point cloud data whose distance from the convex hull contour of the obstacle is less than a second preset distance threshold and which are not inside the convex hull contour of the obstacle;
if the fourth point cloud data is found, determining an obstacle suspension height corresponding to the first point cloud data according to the first point cloud data and the fourth point cloud data, wherein the obstacle suspension height is the height distance between the obstacle and the target ground;
and denoising the first point cloud data corresponding to the target depth camera according to the obstacle suspension height.
3. The method of claim 2, wherein determining the obstacle suspension height corresponding to the first point cloud data according to the first point cloud data and the fourth point cloud data comprises:
determining a target distance corresponding to each laser point in the fourth point cloud data, wherein the target distance is the shortest distance from the laser point to the obstacle points of the obstacle described by the first point cloud data;
and determining the obstacle suspension height corresponding to the first point cloud data according to the target distance corresponding to each laser point in the fourth point cloud data.
4. The method of claim 2, wherein denoising the first point cloud data corresponding to the target depth camera according to the first point cloud data and the second point cloud data, further comprises:
detecting whether the obstacle corresponding to each first point cloud data is within the detection range of a second laser radar, wherein the second laser radar is configured on the target equipment at a second distance above the target depth camera, the second laser radar detects along a direction parallel to the target ground on which the target equipment operates, and the second distance is smaller than a third preset distance threshold;
if the obstacle corresponding to the first point cloud data is not within the detection range of the second laser radar, determining that the first point cloud data corresponding to the target depth camera does not belong to noise data, and retaining the first point cloud data;
if the obstacle corresponding to the first point cloud data is within the detection range of the second laser radar, determining fifth point cloud data obtained by the target equipment performing obstacle detection with the second laser radar in the target travelling direction, and denoising the first point cloud data corresponding to the target depth camera according to the fifth point cloud data, wherein the first point cloud data and the fifth point cloud data are obtained at the same detection time.
5. The method of claim 4, wherein denoising the first point cloud data corresponding to the target depth camera according to the fifth point cloud data comprises:
detecting, according to the fifth point cloud data, whether the second laser radar detects an obstacle in the area where the obstacle corresponding to the first point cloud data is located;
if the second laser radar detects an obstacle, determining that the first point cloud data corresponding to the target depth camera does not belong to noise data, and retaining the first point cloud data;
and if the second laser radar does not detect an obstacle, removing the first point cloud data corresponding to the target depth camera as noise data.
6. The method of claim 4, wherein detecting whether the obstacle corresponding to each first point cloud data is within the detection range of the second laser radar comprises:
determining the height of a reference obstacle point corresponding to each first point cloud data, wherein the reference obstacle point is the highest obstacle point among all obstacle points of the obstacle described by the first point cloud data;
determining a reference height corresponding to the second laser radar, wherein the reference height is the lowest height that the second laser radar can detect;
if the height difference between the reference height and the reference obstacle point height is greater than a first calibration value, determining that the obstacle corresponding to the first point cloud data is not within the detection range of the second laser radar;
and if the height difference between the reference height and the reference obstacle point height is not greater than the first calibration value, determining that the obstacle corresponding to the first point cloud data is within the detection range of the second laser radar.
7. The method of claim 5, wherein detecting, according to the fifth point cloud data, whether the second laser radar detects an obstacle in the area where the obstacle corresponding to the first point cloud data is located comprises:
determining sixth point cloud data from the fifth point cloud data, wherein the obstacle points described by the sixth point cloud data are located within a fourth preset distance range around a reference obstacle point, and the reference obstacle point is the highest obstacle point among all obstacle points of the obstacle described by the first point cloud data;
determining the obstacle point with the lowest height among the obstacle points described by the sixth point cloud data;
if the height of the lowest obstacle point described by the sixth point cloud data is less than or equal to the height of the reference obstacle point, determining that the second laser radar can detect the obstacle in the area where the obstacle corresponding to the first point cloud data is located;
and if the height of the lowest obstacle point described by the sixth point cloud data is greater than the height of the reference obstacle point, determining that the second laser radar cannot detect the obstacle in the area where the obstacle corresponding to the first point cloud data is located.
8. A laser radar-based camera processing apparatus, the apparatus comprising:
a first determining module, configured to determine first point cloud data corresponding to a target depth camera, wherein the target depth camera is configured on target equipment and is tilted toward the target ground on which the target equipment operates for detection, and the first point cloud data is point cloud data describing an obstacle, obtained by the target equipment performing obstacle detection with the target depth camera in the target travelling direction;
a second determining module, configured to determine second point cloud data corresponding to a first laser radar, wherein the first laser radar is mounted on the target equipment at a first height above the target ground and at a first distance below the target depth camera, the first height is smaller than a preset height threshold, the first distance is smaller than a first preset distance threshold, the first laser radar detects along a direction parallel to the target ground on which the target equipment operates, the second point cloud data is laser point cloud data obtained by the first laser radar performing obstacle detection in the target travelling direction of the target equipment, and the first point cloud data and the second point cloud data are obtained at the same detection time;
and a processing module, configured to denoise the first point cloud data corresponding to the target depth camera according to the first point cloud data and the second point cloud data.
9. An electronic device, the electronic device comprising:
one or more processors;
storage means for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the lidar-based camera processing method of any of claims 1-7.
10. A storage medium containing computer executable instructions which, when executed by a computer processor, are for performing the lidar-based camera processing method of any of claims 1 to 7.
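The height-based checks of claims 2, 3, and 6 can likewise be sketched in Python. This is a non-authoritative illustration under simplifying assumptions: point clouds are N×3 numpy arrays; the distance to the convex hull contour in claim 2 is approximated by the horizontal distance to the nearest camera point; and all function names and thresholds (dist_threshold, calib) are editorial placeholders, not values from the claims.

import numpy as np

def find_fourth_points(first_pc, second_pc, dist_threshold=0.3):
    # Claim 2 (approximated): keep laser points close to the camera
    # cluster but outside it; dist_threshold stands in for the second
    # preset distance threshold, and the d > 0 test is a crude stand-in
    # for the "not inside the convex hull contour" condition.
    keep = []
    for p in second_pc:
        d = np.linalg.norm(first_pc[:, :2] - p[:2], axis=1).min()
        if 0.0 < d < dist_threshold:
            keep.append(p)
    return np.asarray(keep).reshape(-1, 3)

def suspension_height(first_pc, fourth_pc):
    # Claim 3: the target distance of each laser point is its shortest
    # distance to the obstacle points described by the first point cloud
    # data; aggregate (here by the minimum) into one suspension height
    # estimate.
    dists = [np.linalg.norm(first_pc - p, axis=1).min() for p in fourth_pc]
    return min(dists)

def in_second_lidar_range(first_pc, reference_height, calib=0.05):
    # Claim 6: the obstacle is within the second laser radar's detection
    # range when the gap between the radar's lowest detectable height
    # (reference_height) and the highest camera point does not exceed a
    # first calibration value (calib, illustrative).
    ref_obstacle_height = first_pc[:, 2].max()
    return (reference_height - ref_obstacle_height) <= calib

With these helpers plus the region_check sketch shown earlier, the denoising decision of claims 4 and 5 reduces to: retain the camera cluster when the low laser radar confirms a plausible suspension height, and otherwise retain it only if the second laser radar either cannot see the region or does detect an obstacle there.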
CN202311740328.2A 2023-12-15 2023-12-15 Laser radar-based camera processing method and device, electronic equipment and medium Pending CN117518189A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311740328.2A CN117518189A (en) 2023-12-15 2023-12-15 Laser radar-based camera processing method and device, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311740328.2A CN117518189A (en) 2023-12-15 2023-12-15 Laser radar-based camera processing method and device, electronic equipment and medium

Publications (1)

Publication Number Publication Date
CN117518189A true CN117518189A (en) 2024-02-06

Family

ID=89762761

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311740328.2A Pending CN117518189A (en) 2023-12-15 2023-12-15 Laser radar-based camera processing method and device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN117518189A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117934324A (en) * 2024-03-25 2024-04-26 广东电网有限责任公司中山供电局 Denoising method and device for laser point cloud data and radar scanning device
CN117934324B (en) * 2024-03-25 2024-06-11 广东电网有限责任公司中山供电局 Denoising method and device for laser point cloud data and radar scanning device
CN118752498A (en) * 2024-09-09 2024-10-11 湖南大学 Autonomous navigation method and system for a steel bar tying robot

Similar Documents

Publication Publication Date Title
CN109188438B (en) Yaw angle determination method, device, equipment and medium
CN110687549B (en) Obstacle detection method and device
CN111209978B (en) Three-dimensional visual repositioning method and device, computing equipment and storage medium
CN117518189A (en) Laser radar-based camera processing method and device, electronic equipment and medium
CN112560684B (en) Lane line detection method, lane line detection device, electronic equipment, storage medium and vehicle
CN113466822B (en) Method and device for detecting obstacles
CN114219770B (en) Ground detection method, device, electronic equipment and storage medium
WO2022141116A1 (en) Three-dimensional point cloud segmentation method and apparatus, and movable platform
CN114812539B (en) Map searching method, map using method, map searching device, map using device, robot and storage medium
CN114419601A (en) Obstacle information determination method, obstacle information determination device, electronic device, and storage medium
CN112818792A (en) Lane line detection method, lane line detection device, electronic device, and computer storage medium
CN115685249A (en) Obstacle detection method and device, electronic equipment and storage medium
CN111721305B (en) Positioning method and apparatus, autonomous vehicle, electronic device and storage medium
CN113313654A (en) Laser point cloud filtering and denoising method, system, equipment and storage medium
CN114740854A (en) Robot obstacle avoidance control method and device
WO2022099620A1 (en) Three-dimensional point cloud segmentation method and apparatus, and mobile platform
CN112528711B (en) Method and device for processing information
CN115649178A (en) Road boundary detection method and electronic equipment
CN117705125B (en) Positioning method, readable storage medium and intelligent device
CN112526477B (en) Method and device for processing information
CN112630787A (en) Positioning method, positioning device, electronic equipment and readable storage medium
CN111914784A (en) Method and device for detecting intrusion of trackside obstacle in real time and electronic equipment
CN110068834B (en) Road edge detection method and device
CN109839645B (en) Speed detection method, system, electronic device and computer readable medium
CN116299534A (en) Method, device, equipment and storage medium for determining vehicle pose

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination