
CN118746809B - Method and device for removing clutter from Doppler radar data based on optical flow information - Google Patents


Info

Publication number
CN118746809B
Authority
CN
China
Prior art keywords
image
optical flow
radar
clutter
flow information
Prior art date
Legal status
Active
Application number
CN202410751394.8A
Other languages
Chinese (zh)
Other versions
CN118746809A
Inventor
Zuo Cuihua
Liu Gai
Li Yaqin
Wu Yuhuang
Hu Jing
Current Assignee
Wuhan Polytechnic University
Original Assignee
Wuhan Polytechnic University
Priority date
Filing date
Publication date
Application filed by Wuhan Polytechnic University filed Critical Wuhan Polytechnic University
Priority to CN202410751394.8A
Publication of CN118746809A
Application granted
Publication of CN118746809B

Links

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/414Discriminating targets with respect to background clutter
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/418Theoretical aspects
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a method and a device for removing Doppler radar data clutter based on optical flow information. The method comprises: acquiring radar data and extracting detail data from the radar data; preprocessing the extracted Doppler radar images to obtain radar pictures; obtaining an LM-Flow model by improving the RAFT model, and analyzing consecutive preprocessed radar pictures with the LM-Flow model to extract the optical flow information between front and rear images; screening the obtained optical flow information with a motion threshold and generating a binary mask to separate the clutter; and removing the clutter according to the binary mask and repairing erroneously deleted cloud-layer information by bilinear interpolation, so as to obtain a radar data image from which the clutter is finally removed. By deploying a deep learning method, the invention reduces the need for manual intervention and the risk of human error, and improves the accuracy of meteorological image processing by accurately processing the optical flow information of the Doppler data.

Description

Method and device for removing Doppler radar data clutter based on optical flow information
Technical Field
The invention relates to the field of image processing, in particular to a method and a device for removing Doppler radar data clutter based on optical flow information.
Background
Accurate grid-centered estimation of precipitation is critical for applications such as flood forecasting, water resource management and environmental protection. Doppler radar, known for its high spatio-temporal resolution and wide coverage, is a widely used and reliable tool for detecting precipitation patterns, and can observe precipitation dynamics instantaneously and continuously. However, effective utilization of Doppler radar data is often hindered by ground clutter, which can lead to instability and inaccuracy in precipitation measurements, thereby severely impacting the accuracy of precipitation estimation.
To overcome the challenges presented by ground clutter and to improve the accuracy of precipitation estimation, various methods and techniques have been employed. Initially, notch filters were the preferred solution: they effectively suppress certain frequency components, but may distort the desired signal. To address the shortcomings of notch filters, the Gaussian Model Adaptive Processing (GMAP) algorithm was introduced, showing efficacy in the spectral domain. However, GMAP operates primarily in the spectral domain and is susceptible to uncertainty associated with the time-window selection, resulting in spectral-leakage problems.
In addition, these conventional ground clutter suppression methods rely on techniques such as signal processing, filtering, spectrum analysis, threshold processing, feature engineering, and the like, and have significant effects in specific usage scenarios based on existing knowledge and manually defined data processing rules. However, their efficacy may be limited when applied to different data sets and environmental scenarios.
Disclosure of Invention
In order to overcome the defects of the prior art, the embodiment of the invention provides a method for removing Doppler radar data clutter based on optical flow information, which aims to accurately identify and remove ground clutter in radar data and remarkably improve the detection precision and reliability of the radar on weather phenomena.
In order to achieve the above object, a method for removing doppler radar data clutter based on optical flow information according to an embodiment of the present invention includes:
Acquiring radar data, and extracting detail data from the radar data, wherein the detail data comprises longitude and latitude, resolution and Doppler radar images;
Preprocessing the extracted Doppler radar image to obtain a radar picture;
obtaining an LM-Flow model based on improvement of the RAFT model, and analyzing consecutive preprocessed radar pictures by using the obtained LM-Flow model to extract optical flow information between front and rear images, wherein the optical flow information represents the direction and magnitude of pixel motion;
Generating a binary mask by utilizing optical flow information obtained by screening a motion threshold value to separate ground clutter;
and removing ground clutter according to the binary mask, and repairing erroneously deleted cloud layer information by adopting a bilinear interpolation method to obtain a radar data diagram for finally removing the ground clutter.
Preferably, the preprocessing of the extracted Doppler radar image to obtain a radar picture includes:
Carrying out standardization processing on Doppler radar images;
smoothing filtering is carried out on the standardized radar image data;
Performing edge extraction on the radar image obtained by smoothing filtering to obtain weather edges and structural information in the radar image;
According to the resolution of radar data, carrying out image segmentation on the Doppler image subjected to edge extraction, filling the radar edge with 0, deleting abnormal values, and enhancing or weakening the image.
Preferably, the LM-Flow model modified based on RAFT model includes:
three layers of feature encoders, depth separable convolutions, correlation layer models, and gating loop units.
Preferably, the analyzing the successive preprocessed radar pictures by using the obtained LM-Flow model to extract optical Flow information between the front and rear images specifically includes:
the three-layer feature encoder receives an input front image and an input rear image, performs feature extraction, and extracts context information of a first image as an initial value for performing optical flow iterative update by a gating circulation unit;
the correlation layer model calculates the feature vector of the extracted features of the front image and the back image, and calculates the correlation volume according to the dot product of the feature vectors of the two continuous images;
The gating loop unit receives the initial value of the optical flow iterative update, retrieves the correlation quantity and the latent hidden state from the correlation volume, and calculates the updated optimized optical flow information and hidden state according to the following formulas:

Z_t = σ(DSC([h_{t−1}, x_t], W_z))

r_t = σ(DSC([h_{t−1}, x_t], W_r))

h̃_t = tanh(DSC([r_t ⊙ h_{t−1}, x_t], W_h))

h_t = (1 − Z_t) ⊙ h_{t−1} + Z_t ⊙ h̃_t

wherein the update gate Z_t and the reset gate r_t are calculated by a depthwise separable convolution (DSC) and a sigmoid activation function, and the candidate hidden state h̃_t is processed by a tanh activation function;
The final hidden state h_t is updated by combining the previous state with the new candidate state, and finally the optical flow information of the two pictures is output.
Preferably, the generating the binary mask to separate ground clutter by using the optical flow information obtained by the motion threshold value screening specifically includes:
determining a motion threshold according to the absolute value of the pixel motion;
classifying pixels with the motion threshold, pixels below which are classified as stationary pixels;
A binary mask based on a motion threshold is generated for the processed optical flow information, the size of the binary mask being the same as the original image, wherein the pixels of the ground stationary element are set to 0 and the pixels of the moving cloud layer are set to 1.
Preferably, the removing ground clutter according to the binary mask, and repairing the erroneously deleted cloud layer information by using a bilinear interpolation method to obtain a radar data map from which the ground clutter is finally removed specifically includes:
applying the binary mask to the original image to obtain a radar data map with the ground clutter removed;
Judging the difference of the current picture relative to the front picture and the rear picture, determining whether the current picture has a problem, and performing bilinear interpolation on the picture with the problem by using the front picture and the rear picture;
and repairing problematic pictures by bilinear interpolation repeatedly, cyclically traversing and iterating over all pictures, until the number of problematic pictures no longer increases.
On the other hand, the embodiment of the invention also provides a device for removing Doppler radar data clutter based on optical flow information, which specifically comprises:
the image acquisition module is used for acquiring radar data and extracting detail data from the radar data, wherein the detail data comprises longitude and latitude, resolution and Doppler radar images;
The preprocessing module is used for preprocessing the extracted Doppler radar image to obtain a radar picture;
the optical Flow information acquisition module is used for obtaining an LM-Flow model based on RAFT model improvement, analyzing the continuous radar pictures obtained through pretreatment by using the obtained LM-Flow model to extract optical Flow information between the front image and the rear image, wherein the optical Flow information represents the movement direction and the size of the pixels;
The clutter separation module is used for generating a binary mask to separate ground clutter by utilizing optical flow information obtained by screening a motion threshold value;
And the correction module is used for removing ground clutter according to the binary mask, and repairing the erroneously deleted cloud layer information by adopting a bilinear interpolation method to obtain a radar data diagram for finally removing the ground clutter.
Preferably, the preprocessing module specifically includes:
the standardized processing unit is used for carrying out standardized processing on the Doppler radar image;
the filtering unit is used for carrying out smoothing filtering processing on the standardized radar image data;
The edge extraction unit is used for carrying out edge extraction on the radar image obtained by the smoothing filtering treatment to obtain weather edges and structural information in the radar image;
and the image increasing and decreasing unit is used for dividing the Doppler image subjected to edge extraction according to the resolution of the radar data, filling 0 for the radar edge, deleting the abnormal value and enhancing or weakening the image.
Preferably, the LM-Flow model modified based on RAFT model includes:
three layers of feature encoders, depth separable convolutions, correlation layer models, and gating loop units.
Preferably,
The three-layer feature encoder is used for receiving the input front and back images, extracting features, extracting context information of the first picture as an initial value for optical flow iteration update by the gating loop unit;
The correlation layer model is used for carrying out feature vector calculation on the extracted features of the front image and the rear image, and calculating a correlation volume according to the dot product of the feature vectors of the two continuous images;
The gating loop unit is used for receiving the initial value of the optical flow iterative update, retrieving the correlation quantity and the latent hidden state from the correlation volume, and calculating the updated optimized optical flow information and hidden state according to the following formulas:

Z_t = σ(DSC([h_{t−1}, x_t], W_z))

r_t = σ(DSC([h_{t−1}, x_t], W_r))

h̃_t = tanh(DSC([r_t ⊙ h_{t−1}, x_t], W_h))

h_t = (1 − Z_t) ⊙ h_{t−1} + Z_t ⊙ h̃_t

wherein the update gate Z_t and the reset gate r_t are calculated by a depthwise separable convolution (DSC) and a sigmoid activation function, and the candidate hidden state h̃_t is processed by a tanh activation function;
The final hidden state h_t is updated by combining the previous state with the new candidate state, and finally the optical flow information of the two pictures is output.
According to the embodiment of the invention, the LM-Flow module based on the RAFT model is used for carrying out optical flow calculation, and, according to the obtained optical flow information, operations such as motion-threshold screening, masking and bilinear interpolation are adopted to remove clutter from the Doppler radar data. The deployment of the deep learning method reduces the need for manual intervention, lowers the risk of human error and ensures high-quality results; by accurately processing the optical flow information of the Doppler data, it not only improves the precision and efficiency of meteorological image processing, but also greatly enhances the application value and practicability of the meteorological data.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only preferred embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a method for removing Doppler radar data clutter based on deep learning to obtain optical flow information according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an overall frame and information flow of a method for removing Doppler radar data clutter based on deep learning to obtain optical flow information according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an LM-Flow optical Flow calculation model proposed in a method for removing Doppler radar data clutter based on deep learning to obtain optical Flow information according to an embodiment of the present invention;
FIG. 4 is an example of a data set used in training a deep learning optical flow estimation model according to the method for removing Doppler radar data clutter based on deep learning optical flow information according to an embodiment of the present invention;
FIG. 5 is a performance comparison chart of a deep learning optical flow information calculation model and other models provided by an embodiment of the present invention;
FIG. 6 is a graph showing the effect of different optical flow pictures on Doppler radar data clutter removal according to the embodiment of the present invention;
FIG. 7 is a graph showing performance of different deep learning optical flow models using different modules according to an embodiment of the present invention;
Fig. 8 is a radar rainfall fitting performance chart of the optical flow calculation model provided by the embodiment of the invention after ground clutter is removed from the Doppler radar data.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
The principles and features of the present invention are described below with reference to the drawings, the illustrated embodiments are provided for the purpose of illustrating the invention and are not to be construed as limiting the scope of the invention.
Referring to fig. 1, an embodiment of the present invention provides a method for removing doppler radar data clutter based on an optical flow method, the method comprising the following steps:
S1, acquiring radar data, and extracting detail data from the radar data, wherein the detail data comprises longitude and latitude, resolution and Doppler images;
s2, preprocessing the extracted Doppler image to obtain a radar picture;
The radar pictures comprise cloud-layer images and rainfall images; that is, radar data are acquired and parsed, detail data information such as region, longitude and latitude and resolution is obtained from the data, and radar-reflectivity gray-scale cloud-layer images at a time interval of 6 minutes and quantitative rainfall estimation gray-scale images at a time interval of 1 hour are obtained for a specific region from the image data.
Wherein, the pretreatment comprises the following steps:
s21, standardization processing;
According to the embodiment of the invention, through carrying out standardized processing on Doppler radar images, radar data from different areas can have consistent scales and ranges, so that data difference from different radar systems can be eliminated, and the comparability and the fusibility of the data can be improved.
This is achieved by scaling the data of each picture to a fixed range (typically [0, 1]) using the formula:

x'_j = (x_j − min(x_j)) / (max(x_j) − min(x_j))

Here, x_j is the original value, min(x_j) is the minimum value of the feature, and max(x_j) is the maximum value of the feature.
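As a minimal sketch of this normalization (assuming NumPy arrays as the image representation; the function name is illustrative):

```python
import numpy as np

def normalize_image(image: np.ndarray) -> np.ndarray:
    """Scale a radar image to the fixed range [0, 1] by min-max normalization."""
    x_min, x_max = image.min(), image.max()
    if x_max == x_min:  # guard against a constant image
        return np.zeros_like(image, dtype=np.float64)
    return (image - x_min) / (x_max - x_min)
```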
S22, performing smooth filtering processing on the standardized radar data;
The smoothing filter processing can effectively reduce noise and detail change in the data, so that the radar image is smoother and more stable. This not only helps to improve the quality and clarity of the radar image, but also reduces errors and interference in subsequent data processing steps.
In order to perform gaussian filtering processing on the normalized radar data, first, parameters of a filter including a window size and a standard deviation need to be determined. The window size is typically chosen to be odd, such as 3x3, 5x5 or 7x7, to ensure a center point. The standard deviation (sigma) determines the width of the gaussian function and thus influences the intensity of the smoothing effect.
Next, a Gaussian kernel is constructed, which is a weight matrix defined by a Gaussian function, where the center point has the highest weight, decreasing with increasing distance from the center point. This gaussian kernel is then applied to each pixel of radar data, the value of which is updated by calculating a weighted average of all points within a window covering that pixel. This process needs to be repeated for each pixel in the image to achieve smoothing of the entire image.
Finally, the result is a radar image with reduced noise and smooth detail changes, which helps to improve the quality and sharpness of the image and reduces errors and disturbances in subsequent data processing steps.
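A minimal sketch of this smoothing step, assuming OpenCV is available; the 5×5 window and σ = 1.0 below are illustrative choices, since the text leaves the exact parameters open:

```python
import cv2
import numpy as np

def gaussian_smooth(image: np.ndarray, ksize: int = 5, sigma: float = 1.0) -> np.ndarray:
    """Apply a ksize x ksize Gaussian kernel (ksize must be odd to keep a center point)."""
    return cv2.GaussianBlur(image.astype(np.float32), (ksize, ksize), sigma)
```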
S23, carrying out edge extraction on the obtained radar image to obtain weather edges and structural information in the radar image;
first a suitable edge detection algorithm is selected, and embodiments of the present invention select the Canny operator, which identifies edges by calculating gradient magnitude and direction.
This algorithm is applied to the Gaussian filtered radar image, and an appropriate threshold is set to determine significant edges.
And then, performing non-maximum suppression to refine edges, ensuring that the edges are single-pixel wide, and further optimizing the recognition of the edges through hysteresis threshold processing to obtain weather edges and structural information in radar images, such as cloud layer boundaries and storm systems, so as to provide key data support for weather analysis and prediction, thereby improving the accuracy and effectiveness of forecasting.
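A minimal sketch of the edge-extraction step, again assuming OpenCV; the hysteresis thresholds (50, 150) are illustrative, as the text only requires "an appropriate threshold":

```python
import cv2
import numpy as np

def extract_edges(smoothed: np.ndarray, low: int = 50, high: int = 150) -> np.ndarray:
    """Canny internally computes gradients, applies non-maximum suppression,
    and performs hysteresis thresholding, returning a binary edge map."""
    img8 = np.clip(smoothed * 255.0, 0, 255).astype(np.uint8)  # assumes input scaled to [0, 1]
    return cv2.Canny(img8, low, high)
```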
S24, according to the resolution of radar data, image segmentation is carried out on the Doppler image subjected to edge extraction, the radar edge is filled with 0, abnormal values are deleted, and the image is enhanced or weakened.
According to the radar resolution, the extracted Doppler image is divided into 1200×800 pictures, the radar boundary of each picture is filled with 0, abnormal values are deleted, images with too low intensity are enhanced, and images with too high intensity are weakened.
For the extracted radar images, each image is divided to 1200×800 according to the resolution of the radar system, so that image data acquired by different radar devices or at different scanning time points are standardized, ensuring the consistency and comparability of subsequent analysis.
And filling the incomplete part of the radar image boundary by using 0 so as to ensure that the edge of the image cannot be introduced into errors due to missing data when image processing and feature analysis are performed, thereby improving the integrity and processing precision of the whole data.
Outlier processing and picture enhancement recognize and delete outliers in radar images, and enhance image portions with too low intensity and weaken portions with too high intensity.
The method optimizes the visual effect and the data quality of the image, is favorable for more clearly identifying and analyzing the weather phenomenon, reduces noise and interference in data processing, improves the accuracy of analysis, enhances the application value of the data in weather analysis, and provides higher-quality and more reliable data support for weather prediction and research.
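The padding, outlier-removal and intensity-adjustment steps might look as follows; this is a sketch in which the outlier rule, the intensity test and the gamma values are assumptions, since the text does not fix them:

```python
import numpy as np

def pad_and_clean(image: np.ndarray, height: int = 800, width: int = 1200) -> np.ndarray:
    """Zero-fill the radar boundary to the target size and delete abnormal values."""
    h, w = image.shape
    padded = np.zeros((height, width), dtype=image.dtype)
    padded[:min(h, height), :min(w, width)] = image[:height, :width]
    padded[(padded < 0.0) | (padded > 1.0)] = 0.0  # values outside [0, 1] treated as outliers
    return padded

def adjust_intensity(image: np.ndarray) -> np.ndarray:
    """Enhance a too-dim image (gamma < 1) or weaken a too-bright one (gamma > 1)."""
    mean = image.mean()
    gamma = 0.5 if mean < 0.2 else (1.5 if mean > 0.8 else 1.0)
    return np.power(image, gamma)
```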
S3, obtaining an LM-Flow model based on RAFT model improvement, and analyzing continuous preprocessed radar pictures by using the obtained LM-Flow model to extract optical Flow information between the front image and the rear image;
Referring to fig. 3, an LM-Flow optical Flow estimation model schematic diagram proposed in a method for removing doppler radar data clutter based on an optical Flow method according to an embodiment of the present invention is provided.
Based on RAFT optical Flow estimation model expansion, an LM-Flow model is provided, the model is divided into different components, wherein three layers of feature encoders are used for extracting features, a correlation layer model is used for carrying out correlation calculation, and a gating circulation unit is used for realizing iterative updating of optical Flow information.
In terms of feature extraction, RAFT employs a three-layer feature encoder, each layer consisting of two residual blocks, aimed at capturing features of the two front and rear images of the input, and enhancing these features by means of a CBAM attention mechanism module. Besides feature extraction of the front picture and the rear picture, the model additionally extracts the context information of the first picture as the initial value of the optical flow iterative updating module due to the fact that the optical flow information approximates to the picture outline.
The correlation layer model is used for carrying out feature vector calculation on the continuous images, and calculating a 4D correlation volume according to the dot product of the feature vectors of the two continuous images so as to represent the correlation of each pixel between the images. Each grid in the 4D correlation volume contains the sum of the feature dot products of all pixels in the two feature maps, and the correlation volume is calculated as follows:
C_{ijkl} = g_θ(I_1)_{ij} · g_θ(I_2)_{kl}

where C represents the four-dimensional correlation volume and g_θ represents the feature extraction function applied to the input images I_1 and I_2; the dot product is taken over the feature dimension.
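As a compact sketch (assuming PyTorch and feature maps of shape (C, H, W)), this all-pairs dot product can be computed with a single einsum:

```python
import torch

def correlation_volume(f1: torch.Tensor, f2: torch.Tensor) -> torch.Tensor:
    """(C, H, W) feature maps -> (H, W, H, W) volume of per-pixel feature dot products."""
    return torch.einsum('chw,ckl->hwkl', f1, f2)
```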
A gated loop unit (GRU) is used to address the inherent time dependence of continuous data; in this model the GRU mimics the traditional least-squares optimization process. The inputs of the GRU are the optimized optical flow ΔF output by the previous iteration, the correlation quantity retrieved from the correlation volume, and the latent hidden state; the outputs are the updated optimized optical flow ΔF and the hidden state, where the initial state of ΔF is taken from the context information extracted by the feature extraction module. The update is given by:

Z_t = σ(DSC([h_{t−1}, x_t], W_z))

r_t = σ(DSC([h_{t−1}, x_t], W_r))

h̃_t = tanh(DSC([r_t ⊙ h_{t−1}, x_t], W_h))

h_t = (1 − Z_t) ⊙ h_{t−1} + Z_t ⊙ h̃_t

The update gate Z_t and reset gate r_t are calculated by a depthwise separable convolution (DSC) and a sigmoid activation function, and together regulate the continuity and variability of the hidden state over the time series. The candidate hidden state h̃_t is processed by the tanh activation function to provide a potential new state, while the final hidden state h_t is updated by merging the previous state with this new candidate state; the optical flow information of the two pictures is finally output.
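A minimal sketch of this gated update in PyTorch; the hidden/input channel sizes are illustrative, and the DSC structure (depthwise followed by pointwise convolution) follows the description later in this document rather than any code disclosed by the patent:

```python
import torch
import torch.nn as nn

class DSC(nn.Module):
    """Depthwise separable convolution: per-channel conv followed by a 1x1 pointwise conv."""
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class SepConvGRU(nn.Module):
    """Convolutional GRU whose gates use depthwise separable convolutions."""
    def __init__(self, hidden: int = 128, inp: int = 128):
        super().__init__()
        self.wz = DSC(hidden + inp, hidden)  # update-gate weights W_z
        self.wr = DSC(hidden + inp, hidden)  # reset-gate weights W_r
        self.wh = DSC(hidden + inp, hidden)  # candidate-state weights W_h

    def forward(self, h, x):
        hx = torch.cat([h, x], dim=1)                                # [h_{t-1}, x_t]
        z = torch.sigmoid(self.wz(hx))                               # update gate Z_t
        r = torch.sigmoid(self.wr(hx))                               # reset gate r_t
        h_tilde = torch.tanh(self.wh(torch.cat([r * h, x], dim=1)))  # candidate state
        return (1 - z) * h + z * h_tilde                             # new hidden state h_t
```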
The LM-Flow model provided by the embodiment of the invention is an improved optical flow calculation model based on the RAFT network. It can process complex cloud-layer dynamics in multi-channel radar reflectivity (MCR) images, effectively addresses the challenges caused by the superposition of multiple cloud layers, and optimizes the parameter scale of the model to ensure efficient performance under limited computing resources. For MCR data, rapid cloud-layer changes are captured at a time interval of 6 minutes, so the optical flow calculation process is optimized and subtle changes in cloud motion can be accurately reflected. This capability is particularly critical for understanding and predicting meteorological phenomena.
The depth separable convolution adopted by the LM-Flow model is used for reconstructing the RAFT model, so that required model parameters are effectively reduced. The method not only reduces the calculation burden of the model, but also keeps high-precision optical flow estimation, so that the model can be efficiently operated even in a standard meteorological laboratory with limited GPU resources.
The LM-Flow model also integrates CBAM (Convolutional Block Attention Module) focusing mechanisms, CBAM enhances the expressive force of the model in capturing complex cloud layer motions by focusing more key feature areas, and remarkably improves the extraction capability of cloud layer features.
S4, utilizing optical flow information obtained by screening a motion threshold value to generate a binary mask so as to separate ground clutter;
The optical flow information represents the direction and magnitude of pixel motion. A motion threshold is determined according to the absolute value of the pixel motion to represent the motion state of the pixel; pixels are classified by the motion threshold, and pixels below it are classified as stationary pixels. Too low a threshold results in incomplete ground clutter removal, while too high a threshold removes cloud information as well. The appropriate threshold can therefore be selected based on the characteristics of the particular data set: if the data set contains more cloud information and most of it should be retained, a lower threshold may be chosen, and a higher threshold otherwise. The specific threshold can be determined from the fit of the Z-I relationship after clutter removal on the data set used.
A binary mask based on a motion threshold is generated for the processed optical flow information, the binary mask having the same size as the original image, wherein the pixels of the stationary ground elements are set to 0 and the pixels of the moving cloud layer are set to 1.
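A minimal sketch of the mask generation, assuming the optical flow is an (H, W, 2) NumPy array of per-pixel (dx, dy) displacements; the threshold value itself is data-dependent, as discussed above:

```python
import numpy as np

def motion_mask(flow: np.ndarray, threshold: float) -> np.ndarray:
    """1 where the motion magnitude reaches the threshold (moving cloud),
    0 otherwise (static ground element)."""
    magnitude = np.sqrt(flow[..., 0] ** 2 + flow[..., 1] ** 2)
    return (magnitude >= threshold).astype(np.uint8)
```

Applying the mask to the original image (e.g. `image * motion_mask(flow, t)`) then retains the moving cloud pixels and zeroes the static clutter.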
According to the embodiment of the invention, the motion state of the pixel is classified by accurately setting the motion threshold value, the optical flow information obtained by screening the motion threshold value is utilized, and the binary mask generation method can effectively distinguish dynamic cloud layers from static ground clutter, so that errors caused by ground reflection are reduced, clearer cloud layer dynamic images are provided, the reliability and accuracy of meteorological data are enhanced, the complexity of the images is simplified, the ground clutter is accurately removed, and false alarms caused by mistakenly identifying the clutter as cloud layers can be reduced.
In addition, by accurately processing the optical flow information, the embodiment of the invention not only improves the accuracy and efficiency of meteorological image processing, but also greatly enhances the application value and practicability of meteorological data. The improvements provide powerful technical support for weather prediction and related research, and are helpful for improving the overall effect and response capability of weather service.
And S5, removing ground clutter according to the binary mask, and repairing the erroneously deleted cloud layer information by adopting a bilinear interpolation method to obtain a radar data diagram for finally removing the ground clutter.
According to the embodiment of the invention, the binary mask is applied to the original image to obtain a radar data map with the ground clutter removed: the cloud layer set to 1 is retained, and the ground set to 0 is removed, achieving the effect of removing the ground clutter.
However, the high pixel-similarity characteristic of radar images means that even small weather changes may result in the loss of important features in the image. This phenomenon is particularly evident when continuous radar images are processed by conventional methods alone, since standard image processing algorithms tend to have difficulty capturing small but meteorologically significant changes. Optical flow calculations relying on only two consecutive images may miss key pixels because of the high similarity between pixels, which directly affects the accuracy of tracking and predicting dynamic weather events.
At this point, bilinear interpolation is adopted to correct the optical flow information. Whether a current picture is problematic is judged from its difference with the front and rear pictures; a problematic picture is repaired by bilinear interpolation using the front and rear pictures. All pictures are traversed cyclically and iteratively and the repair is repeated until the number of problematic pictures no longer increases, at which point the iteration stops. The output radar data without ground clutter is finally obtained.
That is, in order to repair the information lost in optical flow calculation, each pixel point in an image can be reconstructed or corrected more accurately by performing bilinear interpolation on the image to be repaired in combination with the information of the preceding and following frames. The method estimates the value of an unknown point from the values of the four surrounding points, effectively improving the overall image quality and the accuracy of analysis. The iterative method makes full use of the time-series information of all pictures: iterative traversal and repeated restoration continue until the number of problematic images in the image set no longer increases, ensuring the stability and reliability of the final result. Bilinear interpolation exploits the time-series nature of the dataset to enhance the consistency of image quality.
The specific implementation of correcting the optical flow information by adopting the bilinear interpolation method is as follows:
The bilinear interpolation estimates the value of an unknown point according to the nearest four points, so that a smooth interpolation effect is realized, linear interpolation is performed in one direction, and then linear interpolation is performed in the other direction, so that a final interpolation result is obtained. The bilinear interpolation process includes two steps:
First, two points Q_11 and Q_21 are linearly interpolated in the x-direction to obtain R_1; similarly, Q_12 and Q_22 are linearly interpolated to give R_2. The interpolation formulas are:

R_1 = ((x_2 − x)/(x_2 − x_1))·Q_11 + ((x − x_1)/(x_2 − x_1))·Q_21

R_2 = ((x_2 − x)/(x_2 − x_1))·Q_12 + ((x − x_1)/(x_2 − x_1))·Q_22

Next, R_1 and R_2 are linearly interpolated in the y-direction to obtain the final interpolation result P:

P = ((y_2 − y)/(y_2 − y_1))·R_1 + ((y − y_1)/(y_2 − y_1))·R_2
performing bilinear interpolation on a series of front and back images for each image in the repair queue;
The iteration is repeated until the number of images that need to be repaired does not increase.
This iterative process ensures that the quality of the repaired image is more stable.
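The following sketch combines the two-step interpolation formula with the iterative repair loop. The suspect test (mean absolute difference against both neighbouring frames, with an assumed tolerance) and the use of a simple blend of the two neighbouring frames as the repair are illustrative assumptions; the text only requires judging the difference against the front and rear pictures and interpolating from them:

```python
import numpy as np

def bilinear_point(x, y, x1, x2, y1, y2, q11, q21, q12, q22):
    """Two-step bilinear interpolation matching the R_1, R_2, P formulas above."""
    r1 = ((x2 - x) * q11 + (x - x1) * q21) / (x2 - x1)  # interpolate in x at y1
    r2 = ((x2 - x) * q12 + (x - x1) * q22) / (x2 - x1)  # interpolate in x at y2
    return ((y2 - y) * r1 + (y - y1) * r2) / (y2 - y1)  # interpolate in y

def repair_sequence(frames: list, tol: float = 0.15) -> list:
    """Iteratively repair frames that differ too much from both neighbours."""
    frames = [f.astype(np.float64).copy() for f in frames]
    prev_count = None
    while True:
        queue = [i for i in range(1, len(frames) - 1)
                 if np.abs(frames[i] - frames[i - 1]).mean() > tol
                 and np.abs(frames[i] - frames[i + 1]).mean() > tol]
        if not queue or (prev_count is not None and len(queue) >= prev_count):
            break  # the repair queue no longer shrinks: stop iterating
        prev_count = len(queue)
        for i in queue:  # blend the neighbouring frames as the repaired picture
            frames[i] = 0.5 * (frames[i - 1] + frames[i + 1])
    return frames
```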
In a specific implementation environment, one embodiment of the present invention uses a data set from the Wuhan region of China, located at longitude 114.0075°E and latitude 30.2225°N and covering an area with a radius of 5 km, which includes the average composite reflectance (MCR) and Quantitative Precipitation Estimation (QPE) images acquired in 2021.
Wherein MCR images were acquired every 6 minutes, while QPE images were acquired every hour. The dataset contained a total of about 65,000 MCR images and 6,000 QPE images.
Wherein, the MCR value is defined as follows:
MCR=2×dBZ+66
Here, MCR represents the minimum detectable reflectivity or a comparable weather index, and dBZ represents the radar reflectivity measured in decibels.
The unit value of the precipitation amount in QPE is 0.1mm.
Since convolution operations are used in the LM-Flow optical flow model, the original images of 1200×800 pixels must be modified. The modification involves cropping and resizing each image to 288×288 pixels and then converting it to a three-channel gray-scale image. This preprocessing ensures compatibility with the subsequent optical flow calculation.
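A minimal sketch of this input adaptation, assuming OpenCV and that the three channels are obtained by replicating the gray-scale image:

```python
import cv2
import numpy as np

def to_model_input(image: np.ndarray) -> np.ndarray:
    """Resize a radar picture to 288x288 and stack it into three gray-scale channels."""
    resized = cv2.resize(image, (288, 288), interpolation=cv2.INTER_LINEAR)
    return np.repeat(resized[..., None], 3, axis=2)  # shape (288, 288, 3)
```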
As shown in fig. 2, the entire process takes two adjacent images as input, calculates optical Flow information using the LM-Flow model, and the optical Flow information is applied to the first image to remove clutter.
Referring to fig. 3, an LM-Flow optical Flow estimation model schematic diagram for a method for removing doppler radar data clutter according to an embodiment of the present invention is provided.
Complex cloud movements can be captured in a short time due to the 6 minute time interval of the multi-channel radar reflectivity (MCR) data, but MCR images are uniquely challenging for deep learning models, especially because of the superposition of multiple clouds, making it difficult to directly extract the cloud features.
Thus, accurate optical flow data is required to accurately present subtle dynamics of these cloud movements. Furthermore, limited by the limitations of GPU computing resources for deep learning in standard meteorological laboratories, efficient management of model parameter sizes is required to ensure efficient generalization and deployment.
To address these challenges, embodiments of the present invention are based on LM-Flow models developed by RAFT modeling framework.
In the feature extraction stage, convolution features are first extracted from successive images I 1 and I 2;
These features are then enhanced by CBAM an attention mechanism module, which also extracts additional context information from image I 1. This step is critical because the shape of the optical flow image is similar to the original image, effectively increasing the constraints on the image contours during the iterative optimization process.
Then, a 4D correlation volume is calculated from the dot products of the feature vectors of the two images for representing the correlation of the individual pixels between the images, each grid in the correlation volume containing the sum of the feature dot products of all pixels in the two feature maps.
The correlation volume is calculated as follows:
C_{ijkl} = g_θ(I_1)_{ij} · g_θ(I_2)_{kl}

where C represents the four-dimensional correlation volume and g_θ represents the feature extraction function applied to the input images I_1 and I_2; the dot product is taken over the feature dimension.
Again, the gating loop unit (GRU) is utilized to address the inherent time dependence of continuous data; in this model, the GRU simulates a conventional least-squares optimization process. The inputs of the GRU are the optimized optical flow ΔF output by the previous iteration, the correlation quantity retrieved from the correlation volume, and the latent hidden state; the outputs are the updated optimized optical flow ΔF and hidden state, where the initial state of ΔF is taken from the context information extracted by the feature extraction module:

Z_t = σ(DSC([h_{t−1}, x_t], W_z))

r_t = σ(DSC([h_{t−1}, x_t], W_r))

h̃_t = tanh(DSC([r_t ⊙ h_{t−1}, x_t], W_h))

h_t = (1 − Z_t) ⊙ h_{t−1} + Z_t ⊙ h̃_t

The update gate Z_t and reset gate r_t are calculated by a depthwise separable convolution (DSC) and a sigmoid activation function, and together regulate the continuity and variability of the hidden state over the time series. The candidate hidden state h̃_t is processed by the tanh activation function to provide a potential new state, while the final hidden state h_t is updated by merging the previous state with this new candidate state; the optical flow information of the two pictures is finally output.
LM-Flow is based on a RAFT model, integrates a Convolution Block Attention Module (CBAM) and Depth Separable Convolution (DSC), and reduces model parameters while maintaining high performance. DSC reduces computational complexity, enables models to perform deep learning tasks using limited computational resources, while CBAM enhances characterization by applying channel and spatial attention mechanisms in turn, and this integration enables models to compute efficiently and process input data accurately.
The channel attention is calculated as:

M_c(F) = σ(MLP(AvgPool(F)) + MLP(MaxPool(F)))

where F is the input feature map and σ represents the sigmoid function. The spatial attention is focused on a meaningful "place", and is calculated as:

M_s(F) = σ(f^{7×7}([AvgPool(F); MaxPool(F)]))

where f^{7×7} is a convolution with a 7×7 kernel applied to the concatenated average-pooled and max-pooled features.
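A sketch of the two attention formulas in PyTorch; the reduction ratio of 16 is the usual CBAM default and is assumed here rather than stated by the patent:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))

    def forward(self, f):
        b, c, _, _ = f.shape
        avg = self.mlp(f.mean(dim=(2, 3)))               # MLP(AvgPool(F))
        mx = self.mlp(f.amax(dim=(2, 3)))                # MLP(MaxPool(F))
        return torch.sigmoid(avg + mx).view(b, c, 1, 1)  # M_c(F)

class SpatialAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)  # f^{7x7}

    def forward(self, f):
        pooled = torch.cat([f.mean(dim=1, keepdim=True),          # AvgPool over channels
                            f.amax(dim=1, keepdim=True)], dim=1)  # MaxPool over channels
        return torch.sigmoid(self.conv(pooled))                   # M_s(F)
```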
Depth separable convolution, which aims to reduce the number of parameters and computational load in convolutional neural networks, is widely used in various neural networks and breaks down standard convolution operations into two simpler steps, depth convolution and point convolution.
In the depthwise convolution, each input channel is convolved independently using a separate filter. For a convolutional layer with D_in input channels and D_out output channels, the depthwise convolution uses D_in filters, each of size K×K, where K is the filter size. The output has D_in channels, each obtained by convolving an input channel with its corresponding filter. The depthwise convolution operation can be expressed as:

Ŷ_k = X_k ∗ F_k

where Ŷ_k is the output of the k-th channel, X_k is the k-th input channel, F_k is the k-th filter of size K×K, and ∗ denotes two-dimensional convolution.
The pointwise convolution, on the other hand, takes the output of the depthwise convolution as input, combines the channels resulting from the depthwise convolution, and generates the final output channels. For D_out output channels, the pointwise convolution uses D_out × D_in weights, because each output channel is a weighted sum of the input channels. The pointwise convolution operation can be expressed as:

Y_j = Σ_{k=1}^{D_in} P_{j,k} · Ŷ_k

where Y_j is the j-th output channel and P_{j,k} is a 1×1 pointwise convolution filter applied to the k-th channel.
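The parameter saving can be checked directly; this sketch (PyTorch, with illustrative channel sizes) compares a standard convolution against the depthwise-plus-pointwise factorization described above:

```python
import torch.nn as nn

def param_count(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())

d_in, d_out, k = 128, 128, 3
standard = nn.Conv2d(d_in, d_out, k, padding=1)
depthwise = nn.Conv2d(d_in, d_in, k, padding=1, groups=d_in)  # one KxK filter per channel
pointwise = nn.Conv2d(d_in, d_out, 1)                         # 1x1 channel mixing

print(param_count(standard))                            # 147,584 parameters
print(param_count(depthwise) + param_count(pointwise))  # 17,792 -> roughly an 8x reduction
```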
Integrating CBAM and DSC in the LM-Flow can take full advantage of both. Although DSC reduces computational complexity and model parameters, some expressive power may be sacrificed because each input channel is processed independently, and CBAM alleviates this problem by applying a attentive mechanism to selectively focus on important features and suppress extraneous information. This combination approach enables the model to maintain higher performance with less computing resources and is therefore well suited for deployment on limited functionality devices.
In the LM-Flow model, the input is composed of two consecutive images, and the output is a map containing optical Flow information. The map maintains the original dimensions of the image but contains two channels that convey motion information for each pixel along the X and Y axes, respectively. By extracting the optical flow data, the dynamics of the image, particularly the movement pattern of the cloud layer, can be more deeply understood.
In optical flow computation, one basic assumption is that the gray-scale intensity of a pixel remains unchanged, and the optical flow data reflects the change in pixel brightness between two consecutive images. However, because the pixel similarity of radar images is high, using only two adjacent images results in a large number of pixel losses when calculating optical flow.
To solve this problem, the embodiment of the invention adopts bilinear interpolation to correct the optical flow information, exploiting the continuity of the image frames in the data set. This markedly improves image quality: in particular, when a frame differs greatly from both the preceding and following frames, correcting it by bilinear interpolation improves the accuracy of the reconstructed cloud motion track and thereby enhances the performance and reliability of the LM-Flow results in the Z-I relationship fitting task.
That is, the embodiment of the invention applies a post-processing technique to the streamed data: images that change too much relative to the preceding and following frames are detected and added to a repair queue, and the pictures in the queue are processed by bilinear interpolation. If the number of pictures in the queue no longer decreases after a traversal, bilinear interpolation cannot repair the pictures any further, and the cyclic traversal can stop.
In addition, embodiments of the present invention use the absolute value of the pixel motion to determine an empirical threshold for classifying the motion state of the pixel, below which pixels are classified as stationary pixels.
In a real situation, too low a threshold value can cause incomplete ground clutter removal, while too high a threshold value can simultaneously remove cloud information. Thus, embodiments of the present invention select appropriate thresholds based on the characteristics of a particular dataset.
The processed optical flow information is used to generate a threshold-based binary mask. The binary mask is the same size as the original image, with the pixels representing the stationary ground elements set to 0 and the pixels representing the moving cloud set to 1. Thereby eliminating the interference of static ground objects and maintaining the integrity of cloud layers.
The improved image provides a more reliable basis for subsequent Z-I relationship fits.
The embodiment of the invention adopts a statistical method to fit a Z-I relation, substitutes the radar reflectivity Z and the estimated precipitation intensity I into an equation, and uses a least square method to estimate the values of a and b, wherein the main equation is as follows:
Z = a·I^b
where the coefficient a is a proportionality constant and b is an exponential coefficient that varies with the type of precipitation, the stage of development and the geographical location.
Based on the Z-I relation, the reflection coefficient measured by the radar is converted into the rainfall intensity by using mathematical statistics. In order to accurately fit the Z-I relationship, the embodiment of the invention adopts a mathematical statistics method, substitutes the numerical values into an equation, and estimates parameters by using a least square method. The method is characterized by simplicity and reliability, meets the model verification requirement, highlights the effectiveness of precipitation estimation, and can be used as an index of the technical efficiency of ground clutter removal.
Because the data volume is large, the embodiment of the invention also adopts the RANSAC method to eliminate the interference of abnormal values, ensuring the stability of the result by obtaining a fit consistent with the majority of the data. To ensure the accuracy of the fitting results, 90% of the data was used for fitting and the remaining 10% for verification. To fully evaluate the quality and accuracy of images processed by the different methods, two indicators were introduced: the Structural Similarity Index (SSIM) and the peak signal-to-noise ratio (PSNR).
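A minimal sketch of the Z-I fit: linearizing Z = a·I^b by taking logarithms and estimating (a, b) by least squares. The NumPy routine and the masking of non-positive values are assumptions; the RANSAC wrapping and the 90/10 split described above would be layered on top:

```python
import numpy as np

def fit_zi(Z: np.ndarray, I: np.ndarray) -> tuple:
    """Return (a, b) for Z = a * I**b via log-log least squares."""
    valid = (Z > 0) & (I > 0)             # logarithms require positive values
    logZ, logI = np.log(Z[valid]), np.log(I[valid])
    b, log_a = np.polyfit(logI, logZ, 1)  # log Z = b * log I + log a
    return float(np.exp(log_a)), float(b)
```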
Notably, there is a time difference between QPE and MCR. Thus, embodiments of the present invention performed additional experiments to investigate the effect of different time delays on the accuracy of the Z-I relationship. The results show that the fit of the Z-I relationship is best when the Quantitative Precipitation Estimation (QPE) is delayed by 5 images (equivalent to half an hour) relative to the average composite reflectance (MCR). This is because the cloud-thickness measurement is based on Doppler radar, while precipitation is measured on the ground, and clouds take a certain time to form precipitation that reaches the ground.
Referring to fig. 4, an example of the data set used in training the deep learning optical flow estimation model is provided, together with a performance comparison between the deep learning optical flow estimation model LM-Flow provided by the embodiment of the present invention and other models.
The embodiment of the invention involves training and comparing several model architectures, including a traditional filter-based clutter removal method, the Farneback dense optical flow method, FlowNet, the basic RAFT model and the LM-Flow model. To ensure consistency of evaluation, the embodiment of the invention uses the open-source FlyingChairs dataset to train the deep learning models.
Fig. 4 shows an example of this dataset. Training was performed on the same machine, with an upper limit of 120,000 iterations. An early-termination criterion was employed during training: training stopped if no improvement in validation loss was observed for 15 consecutive hours. All training runs met this criterion, so the upper limit of 120,000 iterations never needed to be reached. In addition, a learning-rate scheduler reduced the learning rate to one tenth of its previous value whenever the validation loss did not improve over four cycles. The initial learning rate was set to 0.0003, and the widely used Adam optimizer was selected for the optimization procedure.
The whole training sequence was executed on an NVIDIA RTX 4090 graphics card with 24 GB of video memory, ensuring efficient handling of the computational load. After the three deep learning models were trained, they were used to evaluate performance metrics on the test set.
Fig. 5 illustrates the computing power of LM-Flow in the optical flow domain. The embodiment of the invention evaluates parameters such as the number of iterations, computation time and parameter size, and assesses optical flow calculation performance on the FlyingChairs dataset. The FlowNet model is characterized by the broadest parameter set, the longest running time and unsatisfactory performance. In contrast, the original RAFT model sits in an intermediate position in terms of parameter size and shows superior performance. The LM-Flow model, in turn, has the fewest parameters and a shorter running time, with performance approaching the RAFT model. Compared with the much larger FlowNet, RAFT has only about one third of the parameters and better performance.
The LM-Flow model developed from RAFT introduces one significant change: traditional convolutions are replaced by depthwise separable convolutions, effectively halving the number of parameters of the RAFT model. At the same time, an efficient attention mechanism is added, which greatly improves the feature extraction capability of the model at the cost of a slight increase in the number of parameters. This strategic optimization strikes a subtle balance between parameter count and running time, so that the efficiency of the model approaches that of the original model without compromising performance.
Fig. 6 is a graph showing the effect of different optical flow pictures on removing clutter from Doppler radar data according to an embodiment of the present invention.
FIG. 7 is a performance comparison of different deep learning optical flow models using different modules, showing the impact of the different modules on model parameters and performance. In FIG. 6, the image sequence from left to right is: the original image, the filtered image with ground clutter removed, the Farneback dense optical flow result, the FlowNet optical flow estimation result, the original RAFT model result, and the LM-Flow result with ground clutter removed.
Fig. 8 is a radar rainfall fitting performance chart of the optical flow estimation model provided by the embodiment of the invention after ground clutter is removed from the Doppler radar data.
The ground clutter removal technique used in the embodiment of the invention significantly improves the accuracy of the Z-I relationship fit. The RAFT model stands out on Root Mean Square Error (RMSE) with excellent performance, while LM-Flow, with only half the parameters of the RAFT model, performs almost equivalently. The difference between the two in terms of the Structural Similarity Index (SSIM) is very small, but LM-Flow ranks highest. Interestingly, the conventional filtering method exhibits a higher peak signal-to-noise ratio (PSNR), possibly because it can retain more cloud information while effectively reducing ground clutter.
In the method and device for removing Doppler radar data clutter based on optical flow information, the traditional Farneback dense optical flow method has relatively ordinary performance, but its running time is very short, only 0.016 seconds. This speed makes it suitable for platforms with limited computational resources and for scenes with low precision requirements. Among the deep learning optical flow methods, FlowNet is very effective at removing ground clutter, but at the cost of removing some cloud-layer information, resulting in unsatisfactory performance. In contrast, the RAFT model exhibits superior performance in this area. LM-Flow keeps only half of the parameters of the original RAFT model, obtains similar results and computes more efficiently. In the post-processing of the images, bilinear interpolation is simple and efficient, which is why it was adopted; this technique compensates for the losses in the optical flow calculation process and thereby markedly improves the overall effect.
The embodiment of the invention also provides a device for removing Doppler radar data clutter based on optical flow information, which specifically comprises:
the image acquisition module is used for acquiring radar data and extracting detail data from the radar data, wherein the detail data comprises longitude and latitude, resolution and Doppler radar images;
The preprocessing module is used for preprocessing the extracted Doppler radar image to obtain a radar picture;
the optical Flow information acquisition module is used for obtaining an LM-Flow model based on RAFT model improvement, analyzing the continuous radar pictures obtained through pretreatment by using the obtained LM-Flow model to extract optical Flow information between the front image and the rear image, wherein the optical Flow information represents the movement direction and the size of the pixels;
The clutter separation module is used for generating a binary mask to separate ground clutter by utilizing optical flow information obtained by screening a motion threshold value;
And the correction module is used for removing ground clutter according to the binary mask, and repairing the erroneously deleted cloud layer information by adopting a bilinear interpolation method to obtain a radar data diagram for finally removing the ground clutter.
Wherein the preprocessing module specifically comprises:
the standardization unit, configured to standardize the Doppler radar image;
the filtering unit, configured to apply smoothing filtering to the standardized radar image data;
the edge extraction unit, configured to perform edge extraction on the smoothed radar image to obtain the weather edges and structural information in the radar image;
and the image increase-and-decrease unit, configured to segment the edge-extracted Doppler image according to the resolution of the radar data, pad the radar edges with zeros, delete outliers, and enhance or attenuate the image.
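The preprocessing units above can be sketched as a single Python function; the filter size, Canny thresholds, and padding width are illustrative assumptions rather than values disclosed in the patent.

import cv2
import numpy as np

def preprocess(radar_image: np.ndarray, pad: int = 8) -> dict:
    # 1. Standardize the image to [0, 1].
    img = radar_image.astype(np.float32)
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)

    # 2. Smoothing filter to suppress speckle before edge extraction.
    smoothed = cv2.GaussianBlur(img, (5, 5), sigmaX=1.0)

    # 3. Edge extraction: weather edges and structural information.
    edges = cv2.Canny((smoothed * 255).astype(np.uint8), 50, 150)

    # 4. Pad the radar edges with zeros and clip outliers.
    padded = np.pad(smoothed, pad, mode="constant", constant_values=0.0)
    padded = np.clip(padded, 0.0, 1.0)

    return {"standardized": img, "edges": edges, "padded": padded}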
Wherein the LM-Flow model improved from the RAFT model comprises:
a three-layer feature encoder, depthwise separable convolutions, a correlation layer model, and a gated recurrent unit.
Wherein,
The three-layer feature encoder is configured to receive the input preceding and succeeding images and perform feature extraction, and to extract the context information of the first picture as the initial value with which the gated recurrent unit performs the iterative optical flow update;
The correlation layer model is configured to compute feature vectors from the extracted features of the two images and to calculate a correlation volume from the dot product of the feature vectors of the two consecutive images;
The gated recurrent unit is configured to receive the initial value of the iterative optical flow update, retrieve the correlation values and the latent hidden state from the correlation volume, and compute the updated, refined optical flow information and hidden state, where the calculation is as follows:
$z_t = \sigma(\mathrm{DSC}([h_{t-1}, x_t], W_z))$
$r_t = \sigma(\mathrm{DSC}([h_{t-1}, x_t], W_r))$
$\tilde{h}_t = \tanh(\mathrm{DSC}([r_t \odot h_{t-1}, x_t], W_h))$
$h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t$
where the update gate $z_t$ and the reset gate $r_t$ are computed with a depthwise separable convolution (DSC) and a sigmoid activation function, and the candidate hidden state $\tilde{h}_t$ is obtained through a tanh activation function;
The final hidden state $h_t$ is updated by combining the previous state with the new candidate state, and the optical flow information of the two pictures is finally output.
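Read as a standard gated recurrent update in which the dense convolutions of a ConvGRU are replaced by depthwise separable convolutions, the formulas above can be sketched in PyTorch as follows; the channel sizes and the module names (DSC, SepConvGRU) are illustrative assumptions, not the patent's implementation.

import torch
import torch.nn as nn

class DSC(nn.Module):
    # Depthwise separable convolution: depthwise 3x3 followed by pointwise 1x1.
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class SepConvGRU(nn.Module):
    def __init__(self, hidden_ch: int = 128, input_ch: int = 192):
        super().__init__()
        cat_ch = hidden_ch + input_ch
        self.Wz = DSC(cat_ch, hidden_ch)  # update gate
        self.Wr = DSC(cat_ch, hidden_ch)  # reset gate
        self.Wh = DSC(cat_ch, hidden_ch)  # candidate hidden state

    def forward(self, h: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        hx = torch.cat([h, x], dim=1)
        z = torch.sigmoid(self.Wz(hx))                                # z_t
        r = torch.sigmoid(self.Wr(hx))                                # r_t
        h_tilde = torch.tanh(self.Wh(torch.cat([r * h, x], dim=1)))   # candidate
        return (1 - z) * h + z * h_tilde                              # h_t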
The implementation of the device for removing Doppler radar data clutter based on optical flow information corresponds to the implementation of the method described in the foregoing embodiment and is therefore not described in detail here.
The foregoing description of the preferred embodiments of the invention is not intended to limit the invention to the precise form disclosed; any modifications, equivalent substitutions, and improvements made within the spirit and scope of the invention are intended to be included within the scope of protection of the invention.

Claims (6)

1. A method for removing Doppler radar data clutter based on optical flow information, the method comprising:
acquiring radar data, and extracting detail data from the radar data, wherein the detail data comprise longitude and latitude, resolution, and the Doppler radar image;
preprocessing the extracted Doppler radar image to obtain radar pictures;
obtaining an LM-Flow model improved from the RAFT model, and analyzing consecutive preprocessed radar pictures with the obtained LM-Flow model to extract the optical flow information between the preceding and succeeding images, wherein the optical flow information represents the motion direction and magnitude of the pixels, and the LM-Flow model improved from the RAFT model comprises:
a three-layer feature encoder, depthwise separable convolutions, a correlation layer model, and a gated recurrent unit;
The analyzing of the consecutive preprocessed radar pictures with the obtained LM-Flow model to extract the optical flow information between the preceding and succeeding images specifically comprises:
the three-layer feature encoder receives the input preceding and succeeding images and performs feature extraction, and extracts the context information of the first image as the initial value with which the gated recurrent unit performs the iterative optical flow update;
the correlation layer model computes feature vectors from the extracted features of the two images and calculates a correlation volume from the dot product of the feature vectors of the two consecutive images;
the gated recurrent unit receives the initial value of the iterative optical flow update, retrieves the correlation values and the latent hidden state from the correlation volume, and computes the updated, refined optical flow information and hidden state, where the calculation is as follows:
$z_t = \sigma(\mathrm{DSC}([h_{t-1}, x_t], W_z))$
$r_t = \sigma(\mathrm{DSC}([h_{t-1}, x_t], W_r))$
$\tilde{h}_t = \tanh(\mathrm{DSC}([r_t \odot h_{t-1}, x_t], W_h))$
$h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t$
where the update gate $z_t$ and the reset gate $r_t$ are computed with a depthwise separable convolution (DSC) and a sigmoid activation function, and the candidate hidden state $\tilde{h}_t$ is obtained through a tanh activation function;
the final hidden state $h_t$ is updated by combining the previous state with the new candidate state, and the optical flow information of the two pictures is finally output;
screening the optical flow information with a motion threshold and generating a binary mask to separate the ground clutter;
and removing the ground clutter according to the binary mask, and repairing erroneously deleted cloud layer information by bilinear interpolation to obtain the radar data map with the ground clutter finally removed.
2. The method for removing Doppler radar data clutter based on optical flow information of claim 1, wherein the preprocessing of the extracted Doppler radar image to obtain a radar picture specifically comprises:
standardizing the Doppler radar image;
applying smoothing filtering to the standardized radar image data;
performing edge extraction on the smoothed radar image to obtain the weather edges and structural information in the radar image;
segmenting the edge-extracted Doppler image according to the resolution of the radar data, padding the radar edges with zeros, deleting outliers, and enhancing or attenuating the image.
3. The method for removing Doppler radar data clutter based on optical flow information of claim 1, wherein the screening of the obtained optical flow information with a motion threshold to generate a binary mask that separates the ground clutter specifically comprises:
determining a motion threshold according to the absolute value of the pixel motion;
classifying the pixels with the motion threshold, pixels below the threshold being classified as stationary pixels;
generating, for the processed optical flow information, a binary mask based on the motion threshold, the binary mask having the same size as the original image, wherein the pixels of stationary ground elements are set to 0 and the pixels of the moving cloud layer are set to 1.
4. The method for removing Doppler radar data clutter based on optical flow information of claim 3, wherein the removing of the ground clutter according to the binary mask and the repairing of the erroneously deleted cloud layer information by bilinear interpolation to obtain the radar data map with the ground clutter finally removed specifically comprises:
applying the binary mask to the original image to obtain a Doppler radar data map with the ground clutter removed;
judging the difference of the current picture relative to the preceding and succeeding pictures to determine whether the current picture is problematic, and performing bilinear interpolation on a problematic picture using the preceding and succeeding pictures;
and iterating over all the pictures in a loop, repeating the bilinear interpolation repair until no further problematic pictures arise.
5. An apparatus for removing Doppler radar data clutter based on optical flow information, the apparatus specifically comprising:
the image acquisition module, configured to acquire radar data and extract detail data from the radar data, wherein the detail data comprise longitude and latitude, resolution, and the Doppler radar image;
the preprocessing module, configured to preprocess the extracted Doppler radar image to obtain radar pictures;
the optical flow information acquisition module, configured to obtain an LM-Flow model improved from the RAFT model and to analyze consecutive preprocessed radar pictures with the obtained LM-Flow model so as to extract the optical flow information between the preceding and succeeding images, wherein the optical flow information represents the motion direction and magnitude of the pixels;
wherein the LM-Flow model improved from the RAFT model comprises:
a three-layer feature encoder, depthwise separable convolutions, a correlation layer model, and a gated recurrent unit;
the three-layer feature encoder is configured to receive the input preceding and succeeding images and perform feature extraction, and to extract the context information of the first picture as the initial value with which the gated recurrent unit performs the iterative optical flow update;
the correlation layer model is configured to compute feature vectors from the extracted features of the two images and to calculate a correlation volume from the dot product of the feature vectors of the two consecutive images;
the gated recurrent unit is configured to receive the initial value of the iterative optical flow update, retrieve the correlation values and the latent hidden state from the correlation volume, and compute the updated, refined optical flow information and hidden state, where the calculation is as follows:
$z_t = \sigma(\mathrm{DSC}([h_{t-1}, x_t], W_z))$
$r_t = \sigma(\mathrm{DSC}([h_{t-1}, x_t], W_r))$
$\tilde{h}_t = \tanh(\mathrm{DSC}([r_t \odot h_{t-1}, x_t], W_h))$
$h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t$
where the update gate $z_t$ and the reset gate $r_t$ are computed with a depthwise separable convolution (DSC) and a sigmoid activation function, and the candidate hidden state $\tilde{h}_t$ is obtained through a tanh activation function;
the final hidden state $h_t$ is updated by combining the previous state with the new candidate state, and the optical flow information of the two pictures is finally output;
the clutter separation module, configured to screen the optical flow information with a motion threshold and generate a binary mask that separates the ground clutter;
and the correction module, configured to remove the ground clutter according to the binary mask and to repair erroneously deleted cloud layer information by bilinear interpolation, obtaining the radar data map with the ground clutter finally removed.
6. The apparatus for removing Doppler radar data clutter based on optical flow information of claim 5, wherein the preprocessing module specifically comprises:
the standardization unit, configured to standardize the Doppler radar image;
the filtering unit, configured to apply smoothing filtering to the standardized radar image data;
the edge extraction unit, configured to perform edge extraction on the smoothed radar image to obtain the weather edges and structural information in the radar image;
and the image increase-and-decrease unit, configured to segment the edge-extracted Doppler image according to the resolution of the radar data, pad the radar edges with zeros, delete outliers, and enhance or attenuate the image.
CN202410751394.8A 2024-06-12 2024-06-12 Method and device for removing clutter from Doppler radar data based on optical flow information Active CN118746809B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410751394.8A CN118746809B (en) 2024-06-12 2024-06-12 Method and device for removing clutter from Doppler radar data based on optical flow information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410751394.8A CN118746809B (en) 2024-06-12 2024-06-12 Method and device for removing clutter from Doppler radar data based on optical flow information

Publications (2)

Publication Number Publication Date
CN118746809A CN118746809A (en) 2024-10-08
CN118746809B true CN118746809B (en) 2025-01-24

Family

ID=92918754

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410751394.8A Active CN118746809B (en) 2024-06-12 2024-06-12 Method and device for removing clutter from Doppler radar data based on optical flow information

Country Status (1)

Country Link
CN (1) CN118746809B (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115170400A (en) * 2022-04-06 2022-10-11 腾讯科技(深圳)有限公司 Video repair method, related device, equipment and storage medium
CN116047451A (en) * 2023-01-05 2023-05-02 浙江索思科技有限公司 Low-speed small target radar echo identification method and device based on optical flow method
CN117808689A (en) * 2023-11-02 2024-04-02 南京邮电大学 Depth complement method based on fusion of millimeter wave radar and camera

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111539879A (en) * 2020-04-15 2020-08-14 清华大学深圳国际研究生院 Video blind denoising method and device based on deep learning
CN114677412A (en) * 2022-03-18 2022-06-28 苏州大学 Method, apparatus and device for optical flow estimation

Also Published As

Publication number Publication date
CN118746809A (en) 2024-10-08

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant