CN115015869A - Learnable low-frequency broadband radar target parameter estimation method, device, and program product - Google Patents
- Publication number
- CN115015869A CN115015869A CN202210735768.8A CN202210735768A CN115015869A CN 115015869 A CN115015869 A CN 115015869A CN 202210735768 A CN202210735768 A CN 202210735768A CN 115015869 A CN115015869 A CN 115015869A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/02—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
- G01S7/41—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
- G01S7/417—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section involving the use of neural networks
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
Description
Technical Field
The present application relates to the field of radar signal processing, and in particular to a learnable low-frequency broadband radar target parameter estimation method, device, and program product.
Background
Low-frequency broadband radar signals can counter stealth and penetrate walls and soil, and therefore have important applications in both the defense and civilian fields. Radar target parameter estimation based on the geometric theory of diffraction (GTD) model is a key step in range-direction signal processing for low-frequency broadband radar.
However, existing GTD parameter estimation methods suffer from high computational complexity and cumbersome hyperparameter tuning, which makes them inefficient and limits their ability to generalize to low-frequency broadband radar data from different scenarios. In addition, the signal-to-noise ratio strongly affects their performance: in low-SNR scenarios the target saliency is low and the parameter estimation accuracy drops markedly.
Summary of the Invention
Embodiments of the present application provide a learnable low-frequency broadband radar target parameter estimation method, device, and program product, aiming to solve the problems of the prior art: low algorithm efficiency, cumbersome hyperparameter tuning, weak generalization ability, and low target saliency in low-SNR scenarios.
A first aspect of the embodiments of the present application provides a learnable low-frequency broadband radar target parameter estimation method, including:
acquiring data collected by a low-frequency broadband radar;
inputting the data collected by the low-frequency broadband radar into a pre-trained parameter estimation network to obtain the target parameter estimates output by that network, where the parameter estimation network is obtained by training a neural network built on circular-convolution network layers, using multiple training data items with random phases and noise as training samples.
Optionally, the neural network includes:
an input module, configured to convert the complex-valued observation vector of a training sample fed to the neural network into a real-valued observation vector, and to compute the initial input vector of the neural network from the real-valued observation vector and a real-valued GTD dictionary matrix;
an iterative optimization module, configured to perform multiple optimization iterations on the initial input vector of the neural network to obtain the real-valued output vector of the neural network;
an output module, configured to convert the real-valued output vector of the neural network into a complex-valued output vector and output it as the target parameter prediction.
Optionally, the iterative optimization module includes:
a learnable transform structure, which is a network layer structure based on circular convolution that iteratively optimizes the initial input vector of the neural network and outputs the real-valued output vector of the neural network, where the learnable transform structure contains a reshaping operator R, circular convolution layers CC, and rectified linear units ReLU;
where:
the reshaping operator R is used to reshape a vector x produced by the neural network during the iterations into a matrix z, which is used to perform the circular convolution operation;
the inverse reshaping operator R⁻¹ is used to reshape a matrix z produced by the neural network during the iterations back into a vector x;
the circular convolution layer CC contains the learnable network parameters w and b and maps z to h = CC_{K,P,Q}(z; w, b), where "n mod L" denotes the integer between 1 and L that is congruent to n modulo L;
the rectified linear unit ReLU is defined by [ReLU(x)]_i = max(0, x_i).
Optionally, each training data item is a pair of a complex-valued observation vector and a complex-valued ground-truth parameter vector, generated as follows:
generate a set of scattering center parameters;
compute the complex-valued ground-truth parameter vector from the scattering center parameters in that set;
compute the complex-valued observation vector from the complex-valued ground-truth parameter vector and the complex-valued GTD dictionary matrix, and combine the observation vector and the ground-truth vector into a pair that forms one training data item.
Optionally, the set of scattering center parameters is S = {(l_n, α_n, σ_n) : n = 1, …, N_s}, where:
l_n is a range cell, drawn without replacement from the integers 1 to L, so that the l_n are distinct; α_n is a frequency-dependence factor, drawn with replacement from a preset set of frequency-dependence factors; σ_n is a scattering coefficient whose magnitude |σ_n| follows a preset amplitude distribution and whose argument ∠σ_n follows a preset phase distribution.
Optionally, computing the complex-valued ground-truth parameter vector from the scattering center parameters in the set includes:
computing σ_l^(α) for l = 1, …, L and every α in the preset frequency-dependence factor set {α_1, α_2, …, α_J}, where σ_l^(α) = σ_n if the set S contains a scattering center with l_n = l and α_n = α, and σ_l^(α) = 0 otherwise;
assembling the resulting σ_l^(α) into the complex-valued ground-truth parameter vector, defined as
σ = [σ_1^(α_1), …, σ_L^(α_1), …, σ_1^(α_J), …, σ_L^(α_J)]^T ∈ C^(JL).
Computing the complex-valued observation vector from the complex-valued ground-truth parameter vector and the complex-valued GTD dictionary matrix includes:
using the complex-valued GTD dictionary matrix Φ to convert the ground-truth parameter vector into the observation vector e = Φσ.
Optionally, training the neural network includes:
in each training epoch ep, for each training data item, adding a random phase and noise (different in every epoch) to the complex-valued observation vector, and adding a random phase (different in every epoch) to the complex-valued ground-truth parameter vector, to form the training sample;
feeding the complex-valued observation vector of the training sample into the neural network, and computing the value of the loss function for that sample from the target parameter prediction output by the network and the complex-valued ground-truth vector of the sample, where the loss function includes at least a target-to-background-ratio loss;
computing the gradient of the loss function with respect to the learnable network parameters, and optimizing the learnable network parameters based on that gradient.
Optionally, the target-to-background-ratio loss is defined in terms of the target-to-background ratio TBR, where L_TBR denotes the value of the target-to-background-ratio loss, A_T is the target region, A_B is the background region, N_T is the number of elements of A_T, N_B is the number of elements of A_B, and σ̂_i are elements of the target parameter prediction.
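As an illustrative sketch of the idea behind this loss (the exact TBR formula is given in the original as an image; the ratio of mean target-region energy to mean background-region energy used below, and the function name `tbr_loss`, are assumptions of this illustration):

```python
import numpy as np

def tbr_loss(sigma_hat, target_idx):
    # Assumed form of the target-to-background ratio: mean energy over the
    # target region A_T divided by mean energy over the background region A_B
    # (the patent's exact formula is not reproduced in this text).
    mask = np.zeros(sigma_hat.size, dtype=bool)
    mask[target_idx] = True
    target_energy = np.mean(np.abs(sigma_hat[mask]) ** 2)       # (1/N_T) sum over A_T
    background_energy = np.mean(np.abs(sigma_hat[~mask]) ** 2)  # (1/N_B) sum over A_B
    return -target_energy / background_energy                   # minimizing raises TBR
```

Minimizing this quantity pushes energy out of the background region and into the target region, which matches the stated goal of raising target saliency at low SNR.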
Optionally, the loss function further includes one or both of a dissimilarity loss and a symmetry loss, where the symmetry loss is defined from intermediate variables of the iterative optimization module of the neural network and the learnable transform structure.
When the loss function contains the target-to-background-ratio loss, the dissimilarity loss, and the symmetry loss, the total loss is computed as
L_total = L_TBR + λ_1 · L_D + λ_2 · L_S,
where L_total is the total loss value, L_TBR is the value of the target-to-background-ratio loss, L_D is the value of the dissimilarity loss, L_S is the value of the symmetry loss, and λ_1, λ_2 are preset balancing coefficients.
A second aspect of the embodiments of the present application provides a parameter estimation electronic device, including a memory, a processor, and a computer program stored in the memory, the processor executing the computer program to carry out the steps of the learnable low-frequency broadband radar target parameter estimation method proposed in the embodiments of the present application.
A third aspect of the embodiments of the present application provides a computer program product, including a computer program/instructions which, when executed by a processor, carry out the steps of the learnable low-frequency broadband radar target parameter estimation method proposed in the embodiments of the present application.
Beneficial effects:
The present application provides a learnable low-frequency broadband radar target parameter estimation method, device, and program product. By training the constructed neural network on data-augmented training samples and continually optimizing its learnable parameters, a parameter estimation network for processing low-frequency broadband radar signals is obtained, with the following advantages:
(1) A network layer structure based on circular convolution is adopted, which reduces the computational complexity and improves the efficiency and accuracy of parameter estimation.
(2) Multiple training data items with random phases and noise are used as training samples; the trained parameter estimation network can process different low-frequency broadband radar signal data, which improves the accuracy and generalization ability of the parameter estimation algorithm.
(3) Random-phase and noise data augmentation is applied to the training data, and the network is trained with a loss function that includes at least a target-to-background-ratio loss, so that the trained parameter estimation network handles low-frequency broadband radar signal data well in low-SNR scenarios and effectively improves target saliency.
Brief Description of the Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings used in their description are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a learnable low-frequency broadband radar target parameter estimation method proposed by an embodiment of the present application;
Fig. 2 is a flowchart of a neural network training method for learnable low-frequency broadband radar target parameter estimation proposed by an embodiment of the present application;
Fig. 3 is a line chart of target-to-background ratio versus signal-to-noise ratio proposed by an embodiment of the present application;
Fig. 4 is a line chart of mean squared error versus signal-to-noise ratio proposed by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present application.
In the related art, GTD parameter estimation methods based on the geometric theory of diffraction (GTD) model are used. However, the existing GTD parameter estimation methods suffer from high computational complexity and cumbersome hyperparameter tuning, which makes them inefficient and limits their generalization to low-frequency broadband radar data from different scenarios. In addition, the signal-to-noise ratio strongly affects their performance: in low-SNR scenarios the target saliency is low and the parameter estimation performance drops markedly.
In view of this, an embodiment of the present application proposes a learnable low-frequency broadband radar target parameter estimation method, aiming to solve the problems of low algorithm efficiency, weak generalization ability, and low target saliency in low-SNR scenarios that exist in the prior art. The parameter estimation method of the present application is described in detail below.
Fig. 1 is a flowchart of the learnable low-frequency broadband radar target parameter estimation method shown in an embodiment of the present application. Referring to Fig. 1, the method includes the following steps:
S101. Acquire the data collected by a low-frequency broadband radar.
In a specific implementation, the following data are obtained from the data collected by the low-frequency broadband radar:
the radar observation frequencies {f_m : m = 1, …, M} with frequency spacing Δf, satisfying f_m = f_1 + (m-1)Δf; the spectrum of the radar observation signal {E(f_m) : m = 1, …, M}; and the number L of range cells in the observed scene, with L > M required.
S102. Input the data collected by the low-frequency broadband radar into the pre-trained parameter estimation network.
In a specific implementation, the radar observation frequencies, the spectrum of the radar observation signal, and the number of range cells of the observed scene are input into the pre-trained parameter estimation network DNN and assigned to the corresponding quantities in the network. The input spectrum samples {E(f_m) : m = 1, …, M} are assembled into the radar spectrum signal vector e = [E(f_1), E(f_2), …, E(f_M)]^T.
The parameter estimation network DNN is obtained by training a neural network with multiple training data items with random phases and noise as training samples. The training method of this neural network is described in steps S201-S203 of the training procedure below and is not repeated here.
S103. The parameter estimation network outputs the target parameter estimates.
In a specific implementation, the network is evaluated according to the preset algorithm of the parameter estimation network DNN and the optimized learnable network parameters Θ_opt to obtain the target parameter estimate vector σ̂, and the target parameter estimates are output.
The elements of the target parameter estimate vector σ̂ are σ̂_l^(α); their definition is the same as that of the ground-truth parameter vector below and is not repeated here. When σ̂_l^(α) ≠ 0, there is a scattering center with frequency-dependence factor α in the l-th range cell, and its estimated scattering coefficient is σ̂_l^(α); when σ̂_l^(α) = 0, there is no scattering center with frequency-dependence factor α in the l-th range cell.
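The reading of σ̂ described above can be sketched as follows; the block layout (J consecutive blocks of length L, one block per frequency-dependence factor, matching the ground-truth vector construction in S202) and the helper name `detected_centers` are assumptions of this illustration:

```python
import numpy as np

def detected_centers(sigma_hat, L, alphas, eps=1e-6):
    """List (range cell l, frequency factor alpha, coefficient) for every
    nonzero entry of the estimated parameter vector sigma_hat (length J*L)."""
    out = []
    for j, alpha in enumerate(alphas):
        block = sigma_hat[j * L:(j + 1) * L]          # entries sigma_l^(alpha)
        for l in np.flatnonzero(np.abs(block) > eps):
            out.append((int(l) + 1, alpha, block[l]))  # cells numbered 1..L
    return out
```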
Fig. 2 is a flowchart of the neural network training method for learnable low-frequency broadband radar target parameter estimation shown in an embodiment of the present application. Referring to Fig. 2, the training procedure of the neural network includes the following steps:
S201. Construct the GTD dictionary and the neural network.
In a specific implementation, the GTD dictionary is constructed first. The GTD dictionary includes a complex-valued GTD dictionary matrix and a real-valued GTD dictionary matrix.
The complex-valued GTD dictionary matrix is defined as
Φ = [Φ^(α_1), Φ^(α_2), …, Φ^(α_J)] ∈ C^(M×JL),
where the Φ^(α), α ∈ {α_1, α_2, …, α_J}, are J sub-dictionary matrices whose element in the m-th row and l-th column (m = 1, …, M, l = 1, …, L) is defined as
[Φ^(α)]_{m,l} = (j f_m / f_C)^α · exp(-j 4π f_m r_l / c),
where j is the imaginary unit, c is the speed of light, f_C = f_1 + (M-1)Δf/2 is the center frequency, and r_l = (l/L) × c/(2Δf) is the range of the l-th range cell.
The real-valued GTD dictionary matrix is defined as
Ψ = [ Re(Φ), -Im(Φ) ; Im(Φ), Re(Φ) ] ∈ R^(2M×2JL),
where Re(·) and Im(·) denote the real-part and imaginary-part operations, respectively.
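The dictionary construction above can be sketched in NumPy as follows (an illustrative sketch; the function names are hypothetical and the speed of light is approximated as 3e8 m/s):

```python
import numpy as np

def gtd_dictionary(f, f_c, delta_f, L, alphas):
    """Complex-valued GTD dictionary Phi = [Phi^(a1), ..., Phi^(aJ)] (M x J*L),
    with [Phi^(alpha)]_{m,l} = (j*f_m/f_c)**alpha * exp(-j*4*pi*f_m*r_l/c)
    and r_l = (l/L) * c / (2*delta_f)."""
    c = 3e8                                         # speed of light (m/s)
    r = (np.arange(1, L + 1) / L) * c / (2 * delta_f)
    blocks = []
    for alpha in alphas:
        dep = (1j * f[:, None] / f_c) ** alpha      # frequency-dependence term
        phase = np.exp(-1j * 4 * np.pi * np.outer(f, r) / c)  # range phase
        blocks.append(dep * phase)                  # M x L sub-dictionary
    return np.hstack(blocks)

def real_dictionary(Phi):
    """Real-valued embedding Psi = [[Re, -Im], [Im, Re]] (2M x 2JL)."""
    return np.block([[Phi.real, -Phi.imag],
                     [Phi.imag,  Phi.real]])
```

With this embedding, Ψ applied to [Re(σ); Im(σ)] reproduces [Re(Φσ); Im(Φσ)], which is exactly the real-valued formulation used by the network.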
Constructing the GTD dictionary converts the data collected by the low-frequency broadband radar in an actual scene into the subsequent mathematical model, so that the later neural network computations are handled separately from the original physical model, reducing the complexity of data processing in the embodiments of the present application.
Next, the neural network DNN is constructed. This neural network is the above parameter estimation network DNN before optimization; it contains the learnable network parameters Θ and the learnable transforms. In the embodiments of this application, the neural network DNN is built with the Python programming language and the PyTorch deep learning framework; any programming language and deep learning framework of the prior art may be used, which this application does not limit. The neural network DNN contains an input module, an iterative optimization module, and an output module.
The input module is used to convert the complex-valued observation vector of the training sample fed to the neural network into a real-valued observation vector, and to compute the initial input vector of the neural network from the real-valued observation vector and the real-valued GTD dictionary matrix.
In a specific implementation, the complex-valued observation vector of a training sample is input. The training sample is obtained by adding a random phase and noise to a training data item of the training data set; the procedure that generates the training data set is described in S202 below.
The input complex-valued observation vector e (during training e = e_{i,ep}, the complex-valued observation vector with random phase and noise added, defined below; during testing e is the input complex-valued observation vector) is converted into the corresponding real-valued observation vector
b = [Re(e)^T, Im(e)^T]^T,
and the real-valued GTD dictionary matrix is used to compute the initial value, which serves as the initial input vector of the neural network:
y^(1) = x^(0) = Ψ^T (ΨΨ^T)^(-1) b.
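The initialization above can be sketched as follows; computing x^(0) = Ψ^T(ΨΨ^T)^(-1)b via a linear solve, as below, avoids forming the explicit inverse (the function name is hypothetical):

```python
import numpy as np

def initial_estimate(Psi, b):
    # x(0) = Psi^T (Psi Psi^T)^{-1} b: the minimum-norm solution of Psi x = b
    # for a fat (underdetermined, 2M < 2JL) dictionary matrix Psi.
    u = np.linalg.solve(Psi @ Psi.T, b)   # solve the small 2M x 2M system
    return Psi.T @ u
```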
The iterative optimization module is used to perform multiple optimization iterations on the initial input vector of the neural network to obtain the real-valued output vector of the neural network.
In a specific implementation, N_p = 15 iterations are performed; for k = 1, …, N_p, the following steps are repeated:
r^(k) = y^(k) - μ^(k) Ψ^T (Ψ y^(k) - b)
x^(k) = F̃( S_{θ^(k)}( F( r^(k) ) ) )
y^(k+1) = x^(k) - ρ^(k) (x^(k) - x^(k-1))
where the quantities μ^(k), θ^(k), ρ^(k), F, and F̃ involved are defined as follows.
μ^(k), θ^(k), ρ^(k) are defined as
μ^(k) = sp(a_1 k + c_1), θ^(k) = sp(a_2 k + c_2), ρ^(k) = sp(a_3 k + c_3),
where sp(x) = ln(1 + exp(x)), and a_1, a_2, a_3, c_1, c_2, c_3 belong to the learnable network parameters Θ_1 (the learnable network parameters Θ of the neural network DNN are defined below, likewise hereafter). S_θ is a shrinkage-thresholding function with parameter θ, an element-wise function defined as
[S_θ(x)]_i = sign(x_i) · max(|x_i| - θ, 0).
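The iteration above can be sketched as follows. This is a simplified illustration, not the patent's implementation: the learnable transforms are replaced by the identity, so the shrinkage acts directly on r^(k), and the helper names and default scalar values are assumptions:

```python
import numpy as np

def sp(x):
    # softplus sp(x) = ln(1 + exp(x)); keeps mu, theta, rho positive
    return np.log1p(np.exp(x))

def soft_threshold(x, theta):
    # element-wise shrinkage [S_theta(x)]_i = sign(x_i) * max(|x_i| - theta, 0)
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def run_iterations(Psi, b, n_iter=15,
                   a=(0.0, 0.0, 0.0), c=(0.0, -4.0, -2.0)):
    # Unrolled ISTA-style iterations; a and c stand in for the learnable
    # scalars a_1..a_3, c_1..c_3 of the text.
    x_prev = x = y = Psi.T @ np.linalg.solve(Psi @ Psi.T, b)   # x(0) = y(1)
    for k in range(1, n_iter + 1):
        mu = sp(a[0] * k + c[0])
        theta = sp(a[1] * k + c[1])
        rho = sp(a[2] * k + c[2])
        r = y - mu * (Psi.T @ (Psi @ y - b))     # gradient step on the data term
        x_prev, x = x, soft_threshold(r, theta)  # shrinkage step (transform = identity)
        y = x - rho * (x - x_prev)               # extrapolation, as in the update above
    return x
```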
F and F̃ are the learnable transforms. The learnable transform structure is a network layer structure based on circular convolution that iteratively optimizes the initial input vector of the neural network and outputs the real-valued output vector of the neural network; it contains the reshaping operator R, circular convolution layers CC, and rectified linear units ReLU.
In some implementations, the learnable transform structure (with 5 convolution layers) is a composition of the reshaping operator R, circular convolution layers CC interleaved with ReLU units, and the inverse reshaping operator R^(-1).
Here:
The reshaping operator R is defined as follows: R reshapes a vector x ∈ R^(2JL) into a matrix z ∈ R^(2J×L); the corresponding inverse reshaping operator R^(-1) reshapes the matrix z ∈ R^(2J×L) back into the vector x ∈ R^(2JL); they satisfy z_{d,l} = x_{(d-1)L+l}, where d = 1, …, 2J and l = 1, …, L.
The circular convolution layer CC contains the learnable network parameters w ∈ R^(Q×P×K) and b ∈ R^Q, and maps z ∈ R^(P×L) to h ∈ R^(Q×L), where
h = CC_{K,P,Q}(z; w, b), h_{q,l} = Σ_{p=1..P} Σ_{k=1..K} w_{q,p,k} · z_{p,(l-k) mod L} + b_q,
"n mod L" denotes the integer between 1 and L that is congruent to n modulo L, and p = 1, …, P.
The rectified linear unit ReLU is an element-wise function defined as [ReLU(x)]_i = max(0, x_i).
The learnable network parameters Θ of the neural network DNN are defined as Θ = Θ_1 ∪ Θ_2, where Θ_1 = {a_1, a_2, a_3, c_1, c_2, c_3} is used in the computation of μ^(k), θ^(k), ρ^(k) above, and Θ_2 collects the weights w and biases b of the circular convolution layers, used in the learnable transforms defined above.
The number of convolution layers of the learnable transforms, the convolution kernel size K, the number of feature channels N_F, and the number of elements J of the frequency-dependence factor set are all preset in advance; this application does not limit their specific values. The embodiments of this application use 5 convolution layers, a kernel size of K = 3, N_F = 32 feature channels, and J = 5 elements in the frequency-dependence factor set.
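The circular convolution layer can be sketched as follows (an illustrative, unoptimized implementation; the 0-based index convention z[p, (l-k) mod L] and the function name are assumptions of this sketch):

```python
import numpy as np

def circ_conv_layer(z, w, b):
    """Circular-convolution layer CC_{K,P,Q}: maps z (P x L) to h (Q x L) with
    h[q, l] = sum_{p, k} w[q, p, k] * z[p, (l - k) mod L] + b[q]."""
    Q, P, K = w.shape
    _, L = z.shape
    h = np.tile(b[:, None].astype(float), (1, L))   # bias broadcast over cells
    for q in range(Q):
        for p in range(P):
            for k in range(K):
                # np.roll(z[p], k)[l] == z[p, (l - k) mod L]
                h[q] += w[q, p, k] * np.roll(z[p], k)
    return h
```

A production version would use FFTs or a deep learning framework's padded convolutions; this loop form makes the index arithmetic of the definition explicit.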
The embodiments of the present application introduce the intermediate vector y; through the computation on this intermediate vector, the number of iterations needed for the real-valued output vector is reduced, the amount of computation is decreased, and the efficiency of the algorithm is improved.
The output module is used to convert the real-valued output vector of the neural network into a complex-valued output vector and output it as the target parameter prediction.
In a specific implementation, the real-valued output vector x̂ ∈ R^(2JL) output by the iterative optimization module is converted into a complex-valued output vector according to
σ̂ = x̂_{1:JL} + j · x̂_{JL+1:2JL},
where x̂_{1:JL} and x̂_{JL+1:2JL} denote the first and last JL entries of x̂.
This complex-valued output vector is output as the target parameter prediction.
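The real-to-complex conversion above, together with the matching real-valued split used at the network input, can be sketched as (hypothetical helper names):

```python
import numpy as np

def real_to_complex(x_hat):
    # sigma_hat = x_hat[1:JL] + j * x_hat[JL+1:2JL] (1-based, as in the text):
    # the first half carries the real parts, the second half the imaginary parts
    n = x_hat.size // 2
    return x_hat[:n] + 1j * x_hat[n:]

def complex_to_real(e):
    # the inverse split used at the network input: b = [Re(e); Im(e)]
    return np.concatenate([e.real, e.imag])
```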
S202: Generate complex-valued observation vector / complex-valued parameter truth vector pairs as training data to form a training data set.
In a specific implementation, a training data set D = {(ei, σi): i = 1, …, ND} is generated, where each training datum is a complex-valued observation vector / complex-valued parameter truth vector pair (ei, σi). The number of training data ND is preset in advance; this application does not limit its specific value, and the embodiment uses ND = 50000. Each pair (ei, σi) is generated as follows:
Generate the number of scattering centers Ns, a random integer within a preset range, which this application does not limit; in this embodiment Ns is preset as a random integer from 1 to 15.
Generate the scattering center parameter set S = {(ln, αn, σn): n = 1, …, Ns}, where ln is a range cell, defined as an integer drawn from 1 to L at random without replacement, so the ln are all distinct; αn is a frequency-dependence factor drawn from the preset frequency-dependence factor set at random with replacement; and σn is a scattering coefficient whose amplitude |σn| follows a preset amplitude distribution and whose argument ∠σn follows a preset argument distribution.
This application does not limit the specific preset frequency-dependence factor set, amplitude distribution, or argument distribution. The embodiment uses the frequency-dependence factor set {-1, -1/2, 0, 1/2, 1} (so J = 5), an amplitude following the uniform distribution U(0.5, 1.5), and an argument following the uniform distribution U(0, 2π).
Compute the complex-valued parameter truth vector from the scattering center parameters in the set:
where l = 1, …, L and α belongs to the preset frequency-dependence factor set {α1, α2, …, αJ};
The resulting values are assembled into the complex-valued parameter truth vector, defined as:
Compute the complex-valued observation vector from the complex-valued parameter truth vector and the complex-valued GTD dictionary matrix: the dictionary matrix converts the truth vector into the observation vector. Each resulting observation-vector and truth-vector pair forms one set of training data, and all such pairs together constitute the training data set.
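The sampling steps above can be sketched as follows. `make_training_pair` is an illustrative name; the construction of the complex-valued GTD dictionary matrix Phi is not reproduced in this section (it is taken as given), and the α-major ordering of the (l, α) grid in the truth vector is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_training_pair(Phi, L, alphas, max_centers=15):
    """Generate one (observation, truth) pair per the sampling scheme
    above.  Phi is the complex-valued GTD dictionary matrix (assumed
    given), mapping the length-L*J truth vector to the observation."""
    J = len(alphas)
    # number of scattering centers, clamped so cells can be drawn
    # without replacement even for small L (illustrative safeguard)
    Ns = rng.integers(1, min(max_centers, L) + 1)
    cells = rng.choice(L, size=Ns, replace=False)   # distinct range cells l_n
    a_idx = rng.integers(0, J, size=Ns)             # factors drawn with replacement
    amp = rng.uniform(0.5, 1.5, size=Ns)            # |sigma_n| ~ U(0.5, 1.5)
    phase = rng.uniform(0.0, 2 * np.pi, size=Ns)    # angle(sigma_n) ~ U(0, 2*pi)
    sigma = np.zeros(L * J, dtype=complex)          # truth vector over (l, alpha) grid
    sigma[a_idx * L + cells] = amp * np.exp(1j * phase)
    e = Phi @ sigma                                 # observation via the dictionary
    return e, sigma
```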
S203: Compute the loss function on the training samples, optimize the learnable network parameters based on its value, and train the neural network to obtain the parameter estimation network DNN.
In a specific implementation, proceed as follows:
Initialize the neural network DNN and the optimizer. Among the learnable network parameters Θ = Θ1 ∪ Θ2 of the DNN, the parameters Θ1 = {a1, a2, a3, c1, c2, c3} are initialized to preset initial values, and an initialization algorithm is applied to Θ2; an optimizer is selected, and its batch size, initial learning rate, and learning rate decay are set to preset values.
The initial values of Θ1 and the optimizer's batch size, initial learning rate, and learning rate decay are all preset in advance; this application does not limit their specific values. The algorithm used to initialize Θ2 is prior art, and its specifics are likewise not limited. The embodiment initializes Θ1 to {0.5, 0.2, 1, 2, 1, 0}, initializes Θ2 with the Xavier algorithm, and uses the Adam optimizer with a batch size of 32, an initial learning rate of 0.001, and a learning rate decay of 0.8 applied every 5 training epochs.
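The Adam update itself is standard; the embodiment's stepwise schedule (initial rate 0.001, multiplied by 0.8 every 5 epochs) can be written out directly. The function below is an illustrative sketch, not part of the claims:

```python
def learning_rate(ep, base_lr=1e-3, decay=0.8, step=5):
    """Stepwise schedule used in the embodiment: multiply the
    learning rate by `decay` once every `step` training epochs."""
    return base_lr * decay ** (ep // step)
```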
Data augmentation: for each training datum (ei, σi), add a random phase φi,ep and noise ni,ep to the complex-valued observation vector ei, and add the same random phase φi,ep to the complex-valued parameter truth vector σi, yielding the training sample (ei,ep, σi,ep) according to the following formulas:
ei,ep = ei · exp(jφi,ep) + ni,ep
σi,ep = σi · exp(jφi,ep)
Here φi,ep follows a preset distribution, and the type of the noise ni,ep and the signal-to-noise ratio it satisfies are preset in advance; this application does not limit their specific values. In the embodiment, φi,ep follows the uniform distribution U(0, 2π), and ni,ep is additive white Gaussian noise at a signal-to-noise ratio of SNR = 5 dB.
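The two augmentation formulas above, with the noise scaled to a target SNR, can be sketched as follows; the function name and the complex-AWGN construction are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def augment(e, sigma, snr_db=5.0):
    """Per-epoch augmentation: one shared random phase rotates both
    the observation and the truth vector; complex white Gaussian
    noise at the given SNR is added to the observation only."""
    phi = rng.uniform(0.0, 2 * np.pi)
    e_rot = e * np.exp(1j * phi)
    sigma_rot = sigma * np.exp(1j * phi)
    # scale complex AWGN so that mean signal power / noise power = SNR
    sig_pow = np.mean(np.abs(e_rot) ** 2)
    noise_pow = sig_pow / 10 ** (snr_db / 10)
    n = np.sqrt(noise_pow / 2) * (rng.standard_normal(e.shape)
                                  + 1j * rng.standard_normal(e.shape))
    return e_rot + n, sigma_rot
```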
By augmenting with different random phases and noise realizations, each pair (ei, σi) in the original training data set is reused multiple times under different phases. The augmented data volume grows substantially, enlarging the effective training set, so the parameter estimation network obtained by training on the augmented samples is more accurate and closer to the truth. In addition, because data with added noise are used as training samples, the trained parameter estimation network handles data in low signal-to-noise ratio environments better, effectively improving target saliency.
Feed the ei,ep of each training sample (ei,ep, σi,ep) into the neural network DNN, obtain the target parameter prediction from the network's preset algorithm, and compute the value of the loss function for that training sample, where the loss function value includes at least the value of a target-to-background ratio loss function.
The target-to-background ratio (TBR) loss function is defined as:
where the left-hand side is the value of the target-to-background ratio loss function, TBR is the target-to-background ratio, AT is the target region, AB is the background region, NT and NB are the numbers of elements of AT and AB respectively, and the remaining quantities in the formula are elements of the target parameter prediction.
The loss function may further include one or more of a dissimilarity loss function and a symmetry loss function.
The dissimilarity loss function (the mean squared error, MSE), which measures the deviation of the estimate from the truth, is defined as:
where ||·||2 denotes the l2 norm.
The symmetry loss function, which measures the influence of symmetry loss during training, is defined as:
where the argument denotes the intermediate variable r(k) obtained when the input to the neural network DNN is ei,ep.
When the loss function includes the target-to-background ratio loss function, the dissimilarity loss function, and the symmetry loss function, the total loss function value is computed according to the following formula:
where the left-hand side is the total loss function value, the three terms are the values of the target-to-background ratio loss, the dissimilarity loss, and the symmetry loss, and λ1, λ2 are preset balance coefficients. Because the loss terms differ in order of magnitude, the coefficients λ1 and λ2 are set to balance the magnitudes of the terms. This application does not limit their specific values; the embodiment uses λ1 = 0.1 and λ2 = 0.001.
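Since the exact TBR and symmetry expressions appear only as images in the original, the sketch below assumes a common TBR form (the dB ratio of mean target-region power to mean background power, negated so that a higher TBR yields a lower loss) purely to illustrate how the weighted combination with λ1 = 0.1 and λ2 = 0.001 is assembled; all function names and the TBR formula itself are assumptions:

```python
import numpy as np

def tbr_loss(sigma_hat, target_idx):
    """Assumed TBR form (the patent's exact expression may differ):
    negative dB ratio of mean target-region power to mean background
    power, so minimizing the loss maximizes the TBR."""
    mask = np.zeros(sigma_hat.shape, dtype=bool)
    mask[target_idx] = True
    t = np.mean(np.abs(sigma_hat[mask]) ** 2)
    b = np.mean(np.abs(sigma_hat[~mask]) ** 2)
    return -10.0 * np.log10(t / b)

def total_loss(sigma_hat, sigma_true, target_idx, sym_term=0.0,
               lam1=0.1, lam2=0.001):
    """Weighted combination of the three loss terms; the symmetry
    term is passed in as a precomputed scalar placeholder."""
    mse = np.sum(np.abs(sigma_hat - sigma_true) ** 2)  # squared l2 norm
    return tbr_loss(sigma_hat, target_idx) + lam1 * mse + lam2 * sym_term
```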
To optimize the learnable network parameters Θ of the neural network DNN, apply the backpropagation algorithm to compute the gradient of the total loss function value with respect to Θ, then update Θ with the optimizer.
In each epoch ep, repeat the above operations of data augmentation (adding a different random phase and noise realization to each training sample (ei,ep, σi,ep) in each training epoch), network forward pass, loss computation, and learnable-parameter update; after multiple epochs, the parameter estimation network DNN is obtained. The number of epochs is preset in advance and not limited by this application; the embodiment trains for a total of 60 epochs.
FIG. 3 is a line chart of target-to-background ratio versus signal-to-noise ratio according to an embodiment of the present application, showing the TBR values of different algorithms under various low-SNR conditions. As FIG. 3 shows, the prior-art FOCUSS, SVR, and FISTA-Net algorithms all achieve lower TBR values than the algorithm of the parameter estimation method proposed in the embodiment (TEFISTA-Net in the figure) at the same SNR, indicating that the proposed method has stronger target saliency.

FIG. 4 is a line chart of mean squared error versus signal-to-noise ratio according to an embodiment of the present application, showing the MSE values of different algorithms under various low-SNR conditions. As FIG. 4 shows, the prior-art FOCUSS, SVR, and FISTA-Net algorithms all have higher MSE values than the proposed method (TEFISTA-Net in the figure) at the same SNR, indicating that the proposed method deviates less from the expected values and yields more accurate parameter estimates.

Moreover, on the performance metrics of FIGS. 3 and 4 at low SNR (e.g., SNR ≤ 10 dB), the proposed parameter estimation method outperforms the prior-art FOCUSS, SVR, and FISTA-Net algorithms. This shows that by augmenting the training data set with random phase and additive white Gaussian noise, and by introducing the target-to-background ratio loss function for low-SNR scenarios, the parameter estimation network obtained through training and optimization can process low-frequency wideband radar signal data well under low signal-to-noise ratios.
The present application provides a learnable low-frequency wideband radar target parameter estimation method. By training the constructed neural network on augmented training samples and continually optimizing its learnable parameters, a parameter estimation network for processing low-frequency wideband radar signals is obtained, with the following advantages:
(1) A network layer structure based on circular convolution is adopted, reducing the computational complexity and improving the efficiency of parameter estimation.
(2) Multiple training data with random phase and noise are used as training samples; the parameter estimation network obtained by training can process diverse low-frequency wideband radar signal data, reducing computational complexity and improving the efficiency and generalization ability of the parameter estimation algorithm.
(3) The training data used to train the neural network are augmented with random phase and noise, and the network is trained with a loss function that includes at least the target-to-background ratio loss, so the trained parameter estimation network handles low-frequency wideband radar signal data well in low-SNR scenarios, effectively improving target saliency.
An embodiment of the present application further provides an electronic device including a memory, a processor, and a computer program stored in the memory, wherein the processor executes the computer program to implement the steps of the learnable low-frequency wideband radar target parameter estimation method proposed in the embodiments of the present application.
In yet another embodiment of the present application, a computer program product is provided, including a computer program/instructions that, when executed by a processor, implement the steps of the learnable low-frequency wideband radar target parameter estimation method proposed in the embodiments of the present application.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., solid-state drives (SSDs)).
It should be noted that relational terms such as "first" and "second" herein are used only to distinguish one entity or operation from another and do not necessarily require or imply any actual relationship or order between them. Moreover, the terms "comprise", "include", and any variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device comprising a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Absent further limitation, an element qualified by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device comprising it.
The embodiments in this specification are described in a related manner; for identical or similar parts, the embodiments may refer to one another, and each embodiment focuses on its differences from the others. In particular, the system embodiments are described relatively briefly because they are substantially similar to the method embodiments; for relevant details, refer to the description of the method embodiments.
The above are only preferred embodiments of the present application and are not intended to limit its scope of protection. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application falls within its scope of protection.
Claims (11)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210735768.8A CN115015869A (en) | 2022-06-27 | 2022-06-27 | Can learn low frequency broadband radar target parameter estimation method, equipment and program product |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN115015869A true CN115015869A (en) | 2022-09-06 |
Family
ID=83076737
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202210735768.8A Pending CN115015869A (en) | 2022-06-27 | 2022-06-27 | Can learn low frequency broadband radar target parameter estimation method, equipment and program product |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN115015869A (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116430347A (en) * | 2023-06-13 | 2023-07-14 | 成都实时技术股份有限公司 | Radar data acquisition and storage method |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110275158A (en) * | 2018-03-15 | 2019-09-24 | 南京理工大学 | Parameter Estimation Method of Wideband Radar Echo Signal Based on Bayesian Compressed Sensing |
| CN111693975A (en) * | 2020-05-29 | 2020-09-22 | 电子科技大学 | MIMO radar sparse array design method based on deep neural network |
| WO2021007812A1 (en) * | 2019-07-17 | 2021-01-21 | 深圳大学 | Deep neural network hyperparameter optimization method, electronic device and storage medium |
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110275158A (en) * | 2018-03-15 | 2019-09-24 | 南京理工大学 | Parameter Estimation Method of Wideband Radar Echo Signal Based on Bayesian Compressed Sensing |
| WO2021007812A1 (en) * | 2019-07-17 | 2021-01-21 | 深圳大学 | Deep neural network hyperparameter optimization method, electronic device and storage medium |
| CN111693975A (en) * | 2020-05-29 | 2020-09-22 | 电子科技大学 | MIMO radar sparse array design method based on deep neural network |
Non-Patent Citations (2)
| Title |
|---|
| Tian Biao et al., "Multiband fusion imaging based on high-precision parameter estimation with the geometrical theory of diffraction model", Journal of Electronics & Information Technology, vol. 35, no. 7, 15 July 2013 (2013-07-15), pages 1532-1539 * |
| Chen Hang et al., "Estimation of target rotation for ISAR under sparse aperture and large rotation angle", Chinese Journal of Radio Science, vol. 34, no. 1, 31 December 2019 (2019-12-31), pages 70-75 * |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116430347A (en) * | 2023-06-13 | 2023-07-14 | 成都实时技术股份有限公司 | Radar data acquisition and storage method |
| CN116430347B (en) * | 2023-06-13 | 2023-08-22 | 成都实时技术股份有限公司 | Radar data acquisition and storage method |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| Zhang et al. | Parameter estimation of underwater impulsive noise with the Class B model | |
| Rue et al. | Fitting Gaussian Markov random fields to Gaussian fields | |
| Lin et al. | On the privacy properties of gan-generated samples | |
| KR102421349B1 (en) | Method and Apparatus for Transfer Learning Using Sample-based Regularization | |
| US11501197B2 (en) | Systems and methods for quantum computing based sample analysis | |
| CN113822444A (en) | Method, apparatus and computer-readable storage medium for model training and data processing | |
| Xu et al. | Latent semantic diffusion-based channel adaptive de-noising SemCom for future 6G systems | |
| Abdulhussain et al. | Fast and accurate computation of high‐order Tchebichef polynomials | |
| CN111915007B (en) | A Noise Reduction Method for Magnetic Resonance Spectrum Based on Neural Network | |
| CN115062658B (en) | Modulation type recognition method for overlapping radar signals based on adaptive threshold network | |
| CN111488904A (en) | Image classification method and system based on adversarial distribution training | |
| Ivanov et al. | Reducing the size of a sample sufficient for learning due to the symmetrization of correlation relationships between biometric data | |
| CN117318671A (en) | Self-adaptive filtering method based on fast Fourier transform | |
| Tan et al. | Parameters or privacy: A provable tradeoff between overparameterization and membership inference | |
| CN114255293A (en) | Rapid imaging method for solving highly nonlinear inverse scattering problem based on deep learning | |
| CN115015869A (en) | Can learn low frequency broadband radar target parameter estimation method, equipment and program product | |
| CN115719092A (en) | Model training method based on federal learning and federal learning system | |
| Zhang et al. | A sparsity preestimated adaptive matching pursuit algorithm | |
| CN113311429B (en) | 1-bit radar imaging method based on countermeasure sample | |
| JP2022537977A (en) | Apparatus and method for lattice point enumeration | |
| CN114757221A (en) | Internet of things equipment identification method and system based on RF-DNA fingerprints | |
| Zhang et al. | Compressed Sensing Reconstruction of Radar Echo Signal Based on Fractional Fourier Transform and Improved Fast Iterative Shrinkage‐Thresholding Algorithm | |
| US11907326B1 (en) | Systems and method for determining frequency coefficients of signals | |
| CN116680521A (en) | Single-channel aliasing electromagnetic signal separation method and device based on deep learning | |
| Ouyang et al. | Cryo-electron microscope image denoising based on the geodesic distance |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination |