
CN115308705B - Multi-pose ultra-narrow pulse echo generation method based on a generative adversarial network - Google Patents

Multi-pose ultra-narrow pulse echo generation method based on a generative adversarial network

Info

Publication number
CN115308705B
Authority
CN
China
Prior art keywords
narrow pulse
network
pulse echo
ultra
data set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210940267.3A
Other languages
Chinese (zh)
Other versions
CN115308705A (en)
Inventor
张亮
周强
宋益恒
王彦华
李阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN202210940267.3A priority Critical patent/CN115308705B/en
Publication of CN115308705A publication Critical patent/CN115308705A/en
Application granted granted Critical
Publication of CN115308705B publication Critical patent/CN115308705B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/417Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section involving the use of neural networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract


The present invention relates to the technical field of ultra-narrow pulse radar data generation, and in particular to a method for generating multi-pose ultra-narrow pulse echoes based on a generative adversarial network (GAN). By constructing a GAN capable of generating multi-pose ultra-narrow pulse echo data, the method generates ultra-narrow pulse echoes at a target's missing viewing angles. To a certain extent it can replace actual measurement, electromagnetic simulation and other methods for the rapid acquisition of ultra-narrow pulse echo data at unknown viewing angles, and it alleviates the attitude sensitivity of ultra-narrow pulse echoes encountered in research on radar target electromagnetic characteristics and RATR.

Description

Multi-pose ultra-narrow pulse echo generation method based on a generative adversarial network
Technical Field
The invention relates to the technical field of ultra-narrow pulse radar data generation, and in particular to a multi-pose ultra-narrow pulse echo generation method based on a generative adversarial network (Generative Adversarial Network, GAN).
Background
Ultra-narrow pulse radar refers to radar in which the width of a single processed echo pulse is much smaller than the target extent. For ultra-narrow pulse radar, the target echo contains a number of ultra-narrow pulses corresponding to different scattering points on the target. The target ultra-narrow pulse echo therefore represents the distribution of the target's scattering points along the radar line of sight and is commonly referred to as the High Resolution Range Profile (HRRP) of the target.
Because the ultra-narrow pulse echo carries rich structural and geometric information about the target, it plays an important role in studies of radar target electromagnetic characteristics, Radar Automatic Target Recognition (RATR) and related topics. Ultra-narrow pulse echoes are usually obtained through actual measurement or electromagnetic simulation. However, acquisition by actual measurement requires a large amount of manpower and material resources, while electromagnetic simulation requires a high-precision three-dimensional target model and substantial hardware and time resources, so these requirements are generally difficult to meet in practical applications. With the development of machine learning, researchers have proposed GAN-based deep generative models that can rapidly generate realistic data samples at low computational cost, and such models have been preliminarily applied to ultra-narrow pulse radar data generation.
However, ultra-narrow pulse echo data are highly attitude-sensitive: when the relative attitude between the radar and the target changes slightly, the echo shape changes greatly, which adversely affects studies of radar target electromagnetic characteristics and RATR. Completing the viewing angles of a data set by acquiring ultra-narrow pulse echoes at unknown viewing angles is an effective way to overcome this attitude sensitivity. However, existing GAN-based ultra-narrow pulse echo generation methods only fit the distribution of the input raw data and cannot generate target ultra-narrow pulse echoes at a specified viewing angle.
Disclosure of Invention
In view of these problems, the invention provides a multi-pose ultra-narrow pulse echo generation method based on a generative adversarial network. By constructing a GAN capable of generating multi-pose ultra-narrow pulse echo data, the method generates ultra-narrow pulse echoes at a target's missing viewing angles. To a certain extent it can replace actual measurement, electromagnetic simulation and other methods for the rapid acquisition of ultra-narrow pulse echo data at unknown viewing angles, and it addresses the attitude sensitivity of ultra-narrow pulse echoes encountered in research on radar target electromagnetic characteristics and RATR.
The method is realized by the following technical scheme:
A multi-pose ultra-narrow pulse echo generation method based on a generative adversarial network. The method first obtains a decoupled representation of the input ultra-narrow pulse echo with an attention-based factorization module, decomposing it into identity and pose features; then, with a decoder module based on continuous view-angle embedding, it converts the view-angle label to be generated into a new pose feature and recombines this with the previously decomposed identity feature, thereby generating an ultra-narrow pulse echo at the specified viewing angle. The method comprises the following steps:
step 1, constructing an ultra-narrow pulse echo data set, dividing it into a training data set and a data set to be complemented, and applying normalization preprocessing to each, obtaining a preprocessed training data set and a preprocessed data set to be complemented;
step 2, constructing a GAN generator network G, which comprises an attention-based factorization module F and a decoder module R based on continuous view-angle embedding;
step 3, constructing a GAN discriminator network D and the adversarial training losses;
step 4, taking the ultra-narrow pulse echo data and the view-angle labels in the preprocessed training data set of step 1 as the input of the GAN discriminator network D constructed in step 3, and optimizing the parameters of the discriminator network D with the adversarial training loss constructed in step 3;
step 5, taking the ultra-narrow pulse echo data and the view-angle labels in the preprocessed training data set of step 1 as the input of the GAN generator network G constructed in step 2, and optimizing the parameters of the generator network G with the parameter-optimized discriminator network D of step 4 and the adversarial training loss constructed in step 3;
step 6, completing the missing viewing angles of the data set to be complemented with the parameter-optimized GAN generator network G of step 5, thereby completing the multi-pose ultra-narrow pulse echo generation based on the generative adversarial network.
In step 1, the ultra-narrow pulse echo data set comprises ultra-narrow pulse echo data of C classes of targets over the viewing angle range Θ_s ~ Θ_e together with the corresponding view-angle labels, where Θ_s denotes the starting viewing angle of the range and Θ_e denotes the ending viewing angle.
In step 1, the constructed ultra-narrow pulse echo data set is divided into the training data set and the data set to be complemented as follows:
the view-angle labels of the ultra-narrow pulse echo data are examined; the N classes whose labels cover the full viewing angle range at a uniform interval Δθ are recorded as the training data set X, while the remaining C−N classes, for which echoes are only available at partial viewing angles θ_1, θ_2, ..., θ_m, are recorded as the data set to be complemented.
In step 1, amplitude-maximum normalization is adopted for the normalization preprocessing, specifically:
amplitude-maximum normalization normalizes the range profile by the maximum value of each frame of ultra-narrow pulse echo; let x be one frame of ultra-narrow pulse echo extracted from the data set, then the amplitude-normalized echo, denoted x̃, is expressed as:
x̃ = x / max(|x|)
In step 2, the attention-based factorization module comprises a feature extractor F, a feature decomposer T and a supervision constraint network C, and is used to extract the identity feature f_i of the input ultra-narrow pulse echo data x, that is:
f_i = T(F(x))
the ultra-narrow pulse echo data x is a batch sampled from the training data set X;
the feature extractor F consists of a 4-layer convolutional neural network; it takes the ultra-narrow pulse echo data x as input and outputs a mixed feature representation f_mix, that is:
f_mix = F(x)
the feature decomposer T consists of a 2-layer self-attention network and a 1-layer weighted decomposition network; its input is the mixed feature representation f_mix extracted by the feature extractor and its outputs are the view-angle feature f_a and the identity feature f_i; the self-attention network obtains a weighting factor λ for the mixed feature, and the weighted decomposition network decomposes f_mix according to λ, that is:
[f_a, f_i] = T(f_mix)
where the weighted decomposition splits f_mix into the two feature components using λ and the all-ones matrix 1 via matrix dot (element-wise) multiplication;
The supervision constraint network C consists of a fully-connected view-angle regression network C_a and a fully-connected identity classification network C_i, and imposes supervision constraints on the decomposition of f_mix;
the input of the view-angle regression network C_a is f_a and its output is the view-angle regression result C_a(f_a); a mean square error loss is used as the supervision constraint, giving the view-angle regression loss L_a:
L_a = MSE(C_a(f_a), y_a)
where y_a is the true view-angle label of the input ultra-narrow pulse echo data and MSE denotes the mean square error loss function;
the input of the identity classification network C_i is f_i and its output is the identity classification result C_i(f_i); a cross entropy loss is used as the supervision constraint, giving the identity classification loss L_i:
L_i = CE(C_i(f_i), y_i)
where y_i is the true identity label of the input ultra-narrow pulse echo data and CE denotes the cross entropy loss function;
In step 2, the decoder module R based on continuous view-angle embedding comprises a view-angle embedding representer E and a view-angle reconstruction decoder R, and reconstructs an ultra-narrow pulse echo, denoted x', from the identity feature f_i and the input view-angle y_a' to be generated, that is:
x' = R(f_i, y_a')
the view-angle embedding representer E consists of a 1-layer fully connected network and maps the view-angle label y_a' to be generated to the view-angle feature f_a', that is:
f_a' = E(y_a')
the view-angle reconstruction decoder R consists of a 4-layer deconvolutional neural network; it recombines the view-angle feature f_a' to be generated with the identity feature f_i and outputs the reconstructed ultra-narrow pulse echo, that is:
x' = R(f_i, f_a') = R(f_i, E(y_a'));
In step 3, the GAN discriminator network D is a discriminator consisting of a 4-layer convolutional neural network, used to judge whether an input ultra-narrow pulse echo is real or generated;
the adversarial training losses are constructed as follows:
(1) The generator training loss is constructed through the discriminator: the discriminator should judge the ultra-narrow pulse echo data x' generated by the generator to be a real sample, i.e., output the prediction label True; a cross entropy loss is used as the supervision constraint, giving the generator training loss L_Gadv:
L_Gadv = CE(D(x'), True)
(2) The discriminator training loss is constructed through the discriminator: when the discriminator is trained, it should judge the ultra-narrow pulse echo data x' generated by the generator to be a fake sample, i.e., output the prediction label False, and judge the ultra-narrow pulse echo data x sampled from the training data to be a real sample, i.e., output the prediction label True; a cross entropy loss is used as the supervision constraint, giving the discriminator training loss L_Dadv:
L_Dadv = CE(D(x'), False) + CE(D(x), True);
In step 4, the parameter optimization comprises fixing the parameters of the generator network G, training the discriminator, and optimizing the parameters of the discriminator network D with the discriminator training loss L_Dadv:
L_D = λ_Dadv·L_Dadv
In step 5, the parameter optimization comprises:
fixing the parameters of the discriminator network D, training the generator, and optimizing the parameters of the generator network G with the view-angle regression loss L_a, the identity classification loss L_i and the generator training loss L_Gadv; the loss function used for generator training is:
L_G = λ_a·L_a + λ_i·L_i + λ_Gadv·L_Gadv
the parameter optimization of steps 4 and 5 is repeated in a loop until the network converges.
In step 6, the missing viewing angles of the data set to be complemented are completed as follows:
the existing data of the data set to be complemented, together with full-view-angle labels uniformly spaced by Δθ over the viewing angle range Θ_s ~ Θ_e, are input into the GAN generator network G trained in step 5, and the output samples of the decoder module based on continuous view-angle embedding are saved; these samples are the view-angle completion samples.
Compared with the prior art, the method provided by the invention has the following advantages:
(1) The invention provides a target ultra-narrow pulse echo generation network framework that can generate target ultra-narrow pulse echoes at a specified viewing angle, which existing GAN-based ultra-narrow pulse echo generation methods cannot do.
(2) Existing GAN-based ultra-narrow pulse echo generation methods only fit the distribution of the input raw data, which contains a large amount of redundant information. The invention uses the attention-based factorization module to obtain a decoupled representation of the input ultra-narrow pulse echo and separates its identity feature from its pose feature, refining the input information at the feature level and improving the representation capability of the network.
(3) The invention reconstructs the viewing angle of the ultra-narrow pulse echo with a decompose-and-recombine approach: the decoder module based on continuous view-angle embedding converts the view-angle label to be generated into a new pose feature and recombines it with the identity feature decomposed from the input ultra-narrow pulse echo, thereby generating multi-pose target ultra-narrow pulse echoes.
Drawings
FIG. 1 is a block diagram of a generator network G;
FIG. 2 is a structure diagram of the discriminator network D;
FIG. 3 shows the missing-view-angle data completion results on the GTRI dataset;
FIG. 4 shows the feature evaluation results.
Detailed Description
The invention will now be described in detail by way of example with reference to the accompanying drawings.
The invention provides a GAN-based multi-view ultra-narrow pulse echo generation method. A GAN is a generative network model whose basic idea stems from the two-player zero-sum game of game theory. It usually consists of a generator and a discriminator trained by adversarial learning, with the aim of estimating the underlying distribution of the data samples and generating new samples that follow that distribution. A GAN has outstanding sample generation capability: it can learn the sample distribution of the original data set by fitting it with a deep neural network and can generate samples with the same distribution as the original data set.
For multi-pose ultra-narrow pulse echo generation, the invention designs a GAN network structure that implicitly models how ultra-narrow pulse echoes vary with pose by exploiting the distribution fitting capability of the GAN, and achieves generation at a specified viewing angle through view-angle reconstruction.
The technical scheme is as follows: ultra-narrow pulse echo data of C classes of targets are first acquired, the full-view-angle data are assigned to the training data set, the partial-view-angle data are assigned to the data set to be complemented, and amplitude-maximum normalization is applied. The designed multi-pose ultra-narrow pulse echo generation GAN is then constructed, and the training data set samples and view-angle labels are input into the network for training. Finally, the full-view-angle labels and the data in the data set to be complemented are input into the trained GAN to generate the missing view-angle data.
The method specifically comprises the following steps:
Step 1: construct the ultra-narrow pulse echo data set, divide it into a training data set and a data set to be complemented, and apply normalization preprocessing.
101. Acquire ultra-narrow pulse echo data of C classes of targets over the viewing angle range Θ_s ~ Θ_e together with the corresponding view-angle labels.
102. Examine the view-angle labels of the ultra-narrow pulse echo data, obtain the N classes of full-view-angle ultra-narrow pulse echoes with uniform interval Δθ, and record them as the training data set X.
103. Examine the view-angle labels of the remaining ultra-narrow pulse echo data, obtain the C−N classes of partial-view-angle ultra-narrow pulse echoes at viewing angles θ_1, θ_2, ..., θ_m, and record them as the data set to be complemented.
104. Apply amplitude-maximum normalization to the training data set and the data set to be complemented.
Amplitude-maximum normalization normalizes the range profile by the maximum value of each frame of ultra-narrow pulse echo. Let x be one frame of ultra-narrow pulse echo extracted from the data set; the amplitude-normalized echo, denoted x̃, can be expressed as:
x̃ = x / max(|x|)
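As an illustration only, the following is a minimal sketch of this preprocessing step in Python/NumPy; the function name and the assumed data layout (one echo frame per row) are not part of the patent.

```python
import numpy as np

def amplitude_max_normalize(echoes: np.ndarray) -> np.ndarray:
    """Amplitude-maximum normalization of ultra-narrow pulse echoes (HRRPs).

    Each row of `echoes` is treated as one frame x, which is divided by the
    maximum of its amplitude, i.e. x_tilde = x / max(|x|).
    """
    echoes = np.asarray(echoes, dtype=np.float32)
    peak = np.max(np.abs(echoes), axis=-1, keepdims=True)
    peak = np.where(peak == 0, 1.0, peak)  # guard against all-zero frames
    return echoes / peak

# Example: normalize a batch of 4 frames with 512 range cells each
batch = np.random.randn(4, 512).astype(np.float32)
normalized = amplitude_max_normalize(batch)
print(normalized.shape, float(np.abs(normalized).max()))  # (4, 512) 1.0
```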
Step 2: construct the GAN generator network G, as shown in Fig. 1, which comprises an attention-based factorization module F and a decoder module R based on continuous view-angle embedding.
201. Construct the attention-based factorization module, comprising a feature extractor F, a feature decomposer T and a supervision constraint network C, used to extract the identity feature f_i of the input ultra-narrow pulse echo data x, namely f_i = T(F(x)).
The feature extractor F consists of a 4-layer convolutional neural network; it extracts features from the input ultra-narrow pulse echo data x and outputs a mixed feature representation f_mix, that is:
f_mix = F(x)
The feature decomposer T consists of a 2-layer self-attention network and a 1-layer weighted decomposition network; its input is the mixed feature representation f_mix extracted by the feature extractor and its outputs are the view-angle feature f_a and the identity feature f_i. The self-attention network obtains a weighting factor λ for the mixed feature, and the weighted decomposition network decomposes f_mix according to λ, that is:
[f_a, f_i] = T(f_mix)
where the weighted decomposition splits f_mix into the two feature components using λ and the all-ones matrix 1 via matrix dot (element-wise) multiplication.
The supervision constraint network C consists of a fully-connected view-angle regression network C_a and a fully-connected identity classification network C_i, and imposes supervision constraints on the decomposition of f_mix. The input of the view-angle regression network is f_a and its output is the view-angle regression result C_a(f_a); a mean square error loss is used as the supervision constraint, giving the view-angle regression loss L_a:
L_a = MSE(C_a(f_a), y_a)
where y_a is the true view-angle label of the input ultra-narrow pulse echo data and MSE denotes the mean square error loss function.
The input of the identity classification network is f_i and its output is the identity classification result C_i(f_i); a cross entropy loss is used as the supervision constraint, giving the identity classification loss L_i:
L_i = CE(C_i(f_i), y_i)
where y_i is the true identity label of the input ultra-narrow pulse echo data and CE denotes the cross entropy loss function.
202. Construct the decoder module R based on continuous view-angle embedding, comprising a view-angle embedding representer E and a view-angle reconstruction decoder R; it reconstructs an ultra-narrow pulse echo, denoted x', from the identity feature f_i and the input view-angle y_a' to be generated, that is:
x' = R(f_i, y_a')
The view-angle embedding representer E consists of a 1-layer fully connected network and maps the view-angle label y_a' to be generated to the view-angle feature f_a', that is:
f_a' = E(y_a')
The view-angle reconstruction decoder R consists of a 4-layer deconvolutional neural network; it recombines the view-angle feature f_a' to be generated with the identity feature f_i and outputs the reconstructed ultra-narrow pulse echo, that is:
x' = R(f_i, f_a') = R(f_i, E(y_a'))
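For illustration, a minimal PyTorch sketch of a generator organized along these lines is given below. The echo length of 512 range cells, the feature dimension, the number of classes, all layer sizes and all class and function names are assumptions chosen for the example rather than values disclosed by the patent; in particular, splitting f_mix into f_a = λ⊙f_mix and f_i = (1−λ)⊙f_mix is one plausible reading of the weighted decomposition described above.

```python
import torch
import torch.nn as nn

ECHO_LEN = 512   # assumed number of range cells per echo
FEAT_DIM = 128   # assumed feature dimension
NUM_CLASSES = 9  # assumed number of training target classes

class FeatureExtractor(nn.Module):
    """F: 4-layer 1-D CNN mapping an echo x to a mixed feature f_mix."""
    def __init__(self):
        super().__init__()
        chans = [1, 16, 32, 64, FEAT_DIM]
        layers = []
        for cin, cout in zip(chans[:-1], chans[1:]):
            layers += [nn.Conv1d(cin, cout, kernel_size=4, stride=2, padding=1),
                       nn.BatchNorm1d(cout), nn.LeakyReLU(0.2)]
        self.net = nn.Sequential(*layers, nn.AdaptiveAvgPool1d(1))

    def forward(self, x):               # x: (B, 1, ECHO_LEN)
        return self.net(x).squeeze(-1)  # f_mix: (B, FEAT_DIM)

class FeatureDecomposer(nn.Module):
    """T: attention produces a weight λ; f_a = λ⊙f_mix, f_i = (1-λ)⊙f_mix.

    The patent describes a 2-layer self-attention network; a single layer is
    used here for brevity.
    """
    def __init__(self):
        super().__init__()
        self.attn = nn.MultiheadAttention(FEAT_DIM, num_heads=4, batch_first=True)
        self.to_lambda = nn.Sequential(nn.Linear(FEAT_DIM, FEAT_DIM), nn.Sigmoid())

    def forward(self, f_mix):
        h, _ = self.attn(f_mix.unsqueeze(1), f_mix.unsqueeze(1), f_mix.unsqueeze(1))
        lam = self.to_lambda(h.squeeze(1))           # λ in (0, 1)
        f_a = lam * f_mix                            # view-angle feature
        f_i = (1.0 - lam) * f_mix                    # identity feature
        return f_a, f_i

class SupervisionHeads(nn.Module):
    """C: view-angle regression head C_a and identity classification head C_i."""
    def __init__(self):
        super().__init__()
        self.c_a = nn.Linear(FEAT_DIM, 1)            # predicts the viewing angle
        self.c_i = nn.Linear(FEAT_DIM, NUM_CLASSES)  # predicts the target class

    def forward(self, f_a, f_i):
        return self.c_a(f_a).squeeze(-1), self.c_i(f_i)

class ViewDecoder(nn.Module):
    """E + R: embed the target view-angle label and decode an echo x'."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(1, FEAT_DIM)          # E: continuous view-angle embedding
        self.fc = nn.Linear(2 * FEAT_DIM, 64 * (ECHO_LEN // 16))
        blocks = []
        chans = [64, 32, 16, 8]
        for cin, cout in zip(chans[:-1], chans[1:]):
            blocks += [nn.ConvTranspose1d(cin, cout, 4, stride=2, padding=1),
                       nn.BatchNorm1d(cout), nn.ReLU()]
        blocks += [nn.ConvTranspose1d(chans[-1], 1, 4, stride=2, padding=1), nn.Sigmoid()]
        self.deconv = nn.Sequential(*blocks)         # R: 4-layer deconvolution

    def forward(self, f_i, y_a_new):                 # y_a_new: (B, 1) target angle
        f_a_new = self.embed(y_a_new)
        h = self.fc(torch.cat([f_i, f_a_new], dim=-1)).view(-1, 64, ECHO_LEN // 16)
        return self.deconv(h)                        # x': (B, 1, ECHO_LEN)
```

A generator G then simply chains these pieces: f_mix = FeatureExtractor()(x), (f_a, f_i) = FeatureDecomposer()(f_mix) and x' = ViewDecoder()(f_i, y_a_new), with SupervisionHeads providing C_a(f_a) and C_i(f_i) for the losses L_a and L_i.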
Step 3: construct the GAN discriminator network D and the adversarial training losses.
301. Construct a discriminator consisting of a 4-layer convolutional neural network, as shown in Fig. 2, to judge whether an input ultra-narrow pulse echo is real or generated.
302. Construct the generator training loss through the discriminator. When the generator is trained, the discriminator should judge the ultra-narrow pulse echo data x' generated by the generator to be a real sample, i.e., output the prediction label True, so a cross entropy loss is used as the supervision constraint to obtain the generator training loss L_Gadv: L_Gadv = CE(D(x'), True)
303. Construct the discriminator training loss through the discriminator. When the discriminator is trained, it should judge the ultra-narrow pulse echo data x' generated by the generator to be a fake sample, i.e., output the prediction label False, and judge the ultra-narrow pulse echo data x sampled from the real data to be a real sample, i.e., output the prediction label True, so a cross entropy loss is used as the supervision constraint to obtain the discriminator training loss L_Dadv: L_Dadv = CE(D(x'), False) + CE(D(x), True)
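Continuing the assumptions of the previous sketch (1-D echoes of 512 cells, PyTorch, hypothetical layer sizes), the discriminator and the two adversarial losses could be written as below; encoding the True/False labels as 1/0 and using binary cross entropy on logits is an implementation choice, not something the patent prescribes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    """D: 4-layer 1-D CNN that scores an echo as real (True) or generated (False)."""
    def __init__(self, echo_len: int = 512):
        super().__init__()
        chans = [1, 16, 32, 64, 128]
        layers = []
        for cin, cout in zip(chans[:-1], chans[1:]):
            layers += [nn.Conv1d(cin, cout, kernel_size=4, stride=2, padding=1),
                       nn.LeakyReLU(0.2)]
        self.features = nn.Sequential(*layers)
        self.head = nn.Linear(128 * (echo_len // 16), 1)  # single real/fake logit

    def forward(self, x):                        # x: (B, 1, echo_len)
        h = self.features(x).flatten(1)
        return self.head(h).squeeze(-1)          # logits: (B,)

def generator_adv_loss(d, x_fake):
    """L_Gadv = CE(D(x'), True): the generator wants D to call x' real."""
    logits = d(x_fake)
    return F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))

def discriminator_adv_loss(d, x_fake, x_real):
    """L_Dadv = CE(D(x'), False) + CE(D(x), True)."""
    fake_logits = d(x_fake.detach())             # do not backprop into the generator
    real_logits = d(x_real)
    return (F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
            + F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits)))
```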
Step 4: take the ultra-narrow pulse echo data and the view-angle labels in the preprocessed training data set of step 1 as the input of the GAN discriminator network D constructed in step 3, and optimize the parameters of the discriminator network D with the adversarial training loss constructed in step 3.
401. Fix the parameters of the generator network G, train the discriminator, and optimize the parameters of the discriminator network D with the discriminator training loss L_Dadv:
L_D = λ_Dadv·L_Dadv
Step 5: take the ultra-narrow pulse echo data and the view-angle labels in the preprocessed training data set of step 1 as the input of the GAN generator network G constructed in step 2, and optimize the parameters of the generator network G with the parameter-optimized discriminator network D of step 4 and the adversarial training loss constructed in step 3.
501. Fix the parameters of the discriminator network D, train the generator, and optimize the parameters of the generator network G with the view-angle regression loss L_a, the identity classification loss L_i and the generator training loss L_Gadv; the loss function used for generator training is therefore:
L_G = λ_a·L_a + λ_i·L_i + λ_Gadv·L_Gadv
502. Return to step 401 and perform the next round of training until the network converges.
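A compressed sketch of this alternating optimization is shown below, reusing the hypothetical modules and loss helpers from the earlier examples; the loss weights λ_Dadv, λ_a, λ_i, λ_Gadv, the optimizers and the data-loader format (echo, view-angle label, class label) are placeholders, and opt_g is assumed to cover the parameters of the extractor, decomposer, supervision heads and decoder.

```python
import torch
import torch.nn.functional as F

# Hypothetical loss weights; the patent does not disclose their values.
LAM_DADV, LAM_A, LAM_I, LAM_GADV = 1.0, 1.0, 1.0, 1.0

def train_epoch(extractor, decomposer, heads, decoder, disc, loader,
                opt_g, opt_d, device="cpu"):
    """One epoch of the alternating GAN training (steps 4 and 5)."""
    for x, y_angle, y_class in loader:      # echo, view-angle label, class label
        x, y_angle, y_class = x.to(device), y_angle.to(device), y_class.to(device)

        # ---- Step 4: update the discriminator with L_D = λ_Dadv * L_Dadv ----
        _, f_i = decomposer(extractor(x))
        x_fake = decoder(f_i, y_angle.unsqueeze(-1))
        opt_d.zero_grad()
        l_d = LAM_DADV * discriminator_adv_loss(disc, x_fake, x)
        l_d.backward()
        opt_d.step()

        # ---- Step 5: update the generator with L_G = λ_a*L_a + λ_i*L_i + λ_Gadv*L_Gadv ----
        opt_g.zero_grad()
        f_a, f_i = decomposer(extractor(x))
        angle_pred, class_logits = heads(f_a, f_i)
        x_fake = decoder(f_i, y_angle.unsqueeze(-1))
        l_a = F.mse_loss(angle_pred, y_angle)           # view-angle regression loss L_a
        l_i = F.cross_entropy(class_logits, y_class)    # identity classification loss L_i
        l_gadv = generator_adv_loss(disc, x_fake)       # adversarial loss L_Gadv
        l_g = LAM_A * l_a + LAM_I * l_i + LAM_GADV * l_gadv
        l_g.backward()
        opt_g.step()
```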
Step 6: use the parameter-optimized multi-pose ultra-narrow pulse echo generation GAN to complete the missing viewing angles of the non-cooperative target data, i.e., the data set to be complemented.
601. Input the existing data of the data set to be complemented and the full-view-angle labels (i.e., the view-angle labels uniformly spaced by Δθ over the viewing angle range Θ_s ~ Θ_e) into the GAN generator network G trained in step 5.
602. Save the output samples of the decoder module based on continuous view-angle embedding; these samples are the view-angle completion samples.
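As a usage illustration with the same hypothetical modules, missing-view-angle completion then amounts to sweeping the desired angle grid through the decoder; averaging the identity feature over the measured frames of a target is an assumption made for this sketch, and the 0.5°-2° range with 0.2° spacing in the example call merely mirrors the GTRI setup described below.

```python
import torch

@torch.no_grad()
def complete_views(extractor, decomposer, decoder, x_partial,
                   theta_start: float, theta_end: float, delta: float):
    """Generate echoes of one target over a full, uniformly spaced angle grid.

    x_partial: (B, 1, L) measured echoes of the target at known viewing angles;
    the identity feature is taken from them and recombined with each new angle.
    """
    _, f_i = decomposer(extractor(x_partial))
    f_i = f_i.mean(dim=0, keepdim=True)            # one identity feature for the target
    angles = torch.arange(theta_start, theta_end + 1e-6, delta).unsqueeze(-1)
    fake_echoes = decoder(f_i.expand(angles.shape[0], -1), angles)
    return angles.squeeze(-1), fake_echoes         # (K,), (K, 1, L)

# Example call: fill in 0.5°..2.0° at 0.2° spacing for one target
# angles, echoes = complete_views(extractor, decomposer, decoder, x_known, 0.5, 2.0, 0.2)
```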
Examples
The effect of the invention is described in this section using measured-data experiments. To evaluate the performance of the proposed generative adversarial network, experiments were performed on the publicly available radar dataset GTRI.
Data set and parameter settings:
The GTRI data set is a T-72 tank target ultra-narrow pulse echo data set measured by an X-band radar with a center frequency of 9.6 GHz and a synthetic bandwidth of 600 MHz. Limited by the data set, ultra-narrow pulse echoes at 10 different pitch angles are treated as 10 classes, with the class 1 data denoted as the data set to be generated and the class 2-10 data denoted as the training data set. The training data set angle interval is Δθ = 0.2°, and the viewing angle range Θ_s ~ Θ_e is 0.5°-2°. The viewing angles of the data set to be generated are 0.2°, 2° and 4°. The details of the data set are as follows:
table 1 dataset details
The training data set thus defined is input into the GAN for parameter optimization, and the trained GAN is then used to complete the missing viewing angles of the data set to be generated.
Experimental results:
Fig. 3 shows the missing-view-angle data completion results on the measured GTRI dataset. The results show that, compared with the traditional interpolation method and classical GAN methods (including DRGAN and PeaceGAN), the invention generates sample data closer to the real ultra-narrow pulse echo data at all angles.
Fig. 4 shows the evaluation of the scattering point distribution similarity of the generated samples on the GTRI data set. The KL divergence between the scattering point distributions of the generated samples and the real samples is used for the evaluation; the smaller the KL divergence, the closer the generated features are to the real ultra-narrow pulse echo features. The feature evaluation results show that, compared with the traditional interpolation method and classical GAN methods (including DRGAN and PeaceGAN), the ultra-narrow pulse echoes generated by the invention are closest to the feature distribution of the real ultra-narrow pulse echo data.
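A minimal sketch of this kind of feature evaluation is given below; treating the normalized amplitude profile of each echo as a discrete scattering-point distribution and averaging the per-pair KL divergence is an assumption made for the example, since the text does not spell out the estimator.

```python
import numpy as np

def kl_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-8) -> float:
    """KL(p || q) between two discrete distributions of equal length."""
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def mean_kl(generated: np.ndarray, real: np.ndarray) -> float:
    """Average KL divergence between paired generated and real echoes.

    Each row is an amplitude profile; smaller values mean the generated
    scattering-point distribution is closer to the real one.
    """
    scores = [kl_divergence(np.abs(g), np.abs(r)) for g, r in zip(generated, real)]
    return float(np.mean(scores))
```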
To explore the application prospects of the invention, a data enhancement comparison experiment was conducted to verify, relative to the traditional interpolation method, the improvement in recognition performance when the missing view-angle data of the data set to be generated are supplemented. The experimental setup is shown in the following table:
Table 2 Data enhancement experimental setup
In the basic experiment, the viewing angles of class 1 in the training set are set to the missing state, the viewing angles of classes 2-10 are complete, and the viewing angles of every target in the test set are complete; this verifies the target recognition accuracy and recall under angle-missing conditions. In the enhancement experiment, the missing viewing angles of class 1 in the training set are completed by the proposed method, the viewing angles of classes 2-10 are complete, and the viewing angles of every target in the test set are complete; this verifies the improvement in recognition performance when the method is used for data enhancement. Using the invention for data enhancement, the recognition improvement is as follows:
Table 3 Data enhancement experimental results
Metric      Basic experiment    Enhancement experiment
Accuracy    65.3%               85.7%
Recall      32.1%               73.2%
Therefore, using the method for data enhancement effectively improves the accuracy and recall of target recognition, and the improvement is better than that of traditional methods such as interpolation. The recognition accuracy reaches 85.7%, an improvement of 20.4 percentage points over the basic experiment, and the recall reaches 73.2%, an improvement of 41.1 percentage points over the basic experiment.
In summary, the above embodiments are only preferred embodiments of the present invention, and are not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. A multi-pose ultra-narrow pulse echo generation method based on a generative adversarial network, characterized in that the method comprises the steps of:
step 1, constructing an ultra-narrow pulse echo data set, dividing it into a training data set and a data set to be complemented, and applying normalization preprocessing to each, obtaining a preprocessed training data set and a preprocessed data set to be complemented;
step 2, constructing a GAN generator network G, which comprises an attention-based factorization module F and a decoder module R based on continuous view-angle embedding;
step 3, constructing a GAN discriminator network D and the adversarial training losses;
step 4, taking the ultra-narrow pulse echo data and the view-angle labels in the preprocessed training data set of step 1 as the input of the GAN discriminator network D constructed in step 3, and optimizing the parameters of the discriminator network D with the adversarial training loss constructed in step 3;
step 5, taking the ultra-narrow pulse echo data and the view-angle labels in the preprocessed training data set of step 1 as the input of the GAN generator network G constructed in step 2, and optimizing the parameters of the generator network G with the parameter-optimized discriminator network D of step 4 and the adversarial training loss constructed in step 3;
step 6, completing the missing viewing angles of the data set to be complemented with the parameter-optimized GAN generator network G of step 5, thereby completing the multi-pose ultra-narrow pulse echo generation based on the generative adversarial network;
in step 2, the decoder module R based on continuous view-angle embedding comprises a view-angle embedding representer E and a view-angle reconstruction decoder R, and reconstructs an ultra-narrow pulse echo, denoted x', from the identity feature f_i and the input view-angle y_a' to be generated, that is:
x' = R(f_i, y_a')
the view-angle embedding representer E consists of a 1-layer fully connected network and maps the view-angle label y_a' to be generated to the view-angle feature f_a', that is:
f_a' = E(y_a')
the view-angle reconstruction decoder R consists of a 4-layer deconvolutional neural network; it recombines the view-angle feature f_a' to be generated with the identity feature f_i and outputs the reconstructed ultra-narrow pulse echo, that is:
x' = R(f_i, f_a') = R(f_i, E(y_a')).
2. The multi-pose ultra-narrow pulse echo generation method based on a generative adversarial network according to claim 1, characterized in that:
in step 1, the ultra-narrow pulse echo data set comprises ultra-narrow pulse echo data of C classes of targets over the viewing angle range Θ_s ~ Θ_e together with the corresponding view-angle labels, where Θ_s denotes the starting viewing angle of the range and Θ_e denotes the ending viewing angle.
3. The multi-pose ultra-narrow pulse echo generation method based on a generative adversarial network according to claim 1 or 2, characterized in that:
in step 1, the constructed ultra-narrow pulse echo data set is divided into the training data set and the data set to be complemented as follows:
the view-angle labels of the ultra-narrow pulse echo data are examined; the N classes whose labels cover the full viewing angle range at a uniform interval Δθ are recorded as the training data set X, while the remaining C−N classes, for which echoes are only available at partial viewing angles θ_1, θ_2, ..., θ_m, are recorded as the data set to be complemented.
4. The multi-pose ultra-narrow pulse echo generation method based on a generative adversarial network according to claim 3, characterized in that:
in step 1, amplitude-maximum normalization is adopted for the normalization preprocessing, specifically:
amplitude-maximum normalization normalizes the range profile by the maximum value of each frame of ultra-narrow pulse echo; let x be one frame of ultra-narrow pulse echo extracted from the data set, then the amplitude-normalized echo, denoted x̃, is expressed as:
x̃ = x / max(|x|)
5. The multi-pose ultra-narrow pulse echo generation method based on a generative adversarial network according to claim 4, characterized in that:
in step 2, the attention-based factorization module comprises a feature extractor F, a feature decomposer T and a supervision constraint network C, and is used to extract the identity feature f_i of the input ultra-narrow pulse echo data x, that is:
f_i = T(F(x));
the ultra-narrow pulse echo data x is a batch sampled from the training data set X;
the feature extractor F consists of a 4-layer convolutional neural network; it takes the ultra-narrow pulse echo data x as input and outputs a mixed feature representation f_mix, that is:
f_mix = F(x)
the feature decomposer T consists of a 2-layer self-attention network and a 1-layer weighted decomposition network; its input is the mixed feature representation f_mix extracted by the feature extractor and its outputs are the view-angle feature f_a and the identity feature f_i; the self-attention network obtains a weighting factor λ for the mixed feature, and the weighted decomposition network decomposes f_mix according to λ, that is:
[f_a, f_i] = T(f_mix)
where the weighted decomposition splits f_mix into the two feature components using λ and the all-ones matrix 1 via matrix dot (element-wise) multiplication;
the supervision constraint network C consists of a fully-connected view-angle regression network C_a and a fully-connected identity classification network C_i, and imposes supervision constraints on the decomposition of f_mix;
the input of the view-angle regression network C_a is f_a and its output is the view-angle regression result C_a(f_a); a mean square error loss is used as the supervision constraint, giving the view-angle regression loss L_a:
L_a = MSE(C_a(f_a), y_a)
where y_a is the true view-angle label of the input ultra-narrow pulse echo data and MSE denotes the mean square error loss function;
the input of the identity classification network C_i is f_i and its output is the identity classification result C_i(f_i); a cross entropy loss is used as the supervision constraint, giving the identity classification loss L_i:
L_i = CE(C_i(f_i), y_i)
where y_i is the true identity label of the input ultra-narrow pulse echo data and CE denotes the cross entropy loss function.
6. The multi-pose ultra-narrow pulse echo generation method based on a generative adversarial network according to claim 1, characterized in that:
in step 3, the GAN discriminator network D is a discriminator consisting of a 4-layer convolutional neural network, used to judge whether an input ultra-narrow pulse echo is real or generated;
the adversarial training losses are constructed as follows:
(1) the generator training loss is constructed through the discriminator: the discriminator should judge the ultra-narrow pulse echo data x' generated by the generator to be a real sample, i.e., output the prediction label True; a cross entropy loss is used as the supervision constraint, giving the generator training loss L_Gadv:
L_Gadv = CE(D(x'), True)
(2) the discriminator training loss is constructed through the discriminator: when the discriminator is trained, it should judge the ultra-narrow pulse echo data x' generated by the generator to be a fake sample, i.e., output the prediction label False, and judge the ultra-narrow pulse echo data x sampled from the training data to be a real sample, i.e., output the prediction label True; a cross entropy loss is used as the supervision constraint, giving the discriminator training loss L_Dadv:
L_Dadv = CE(D(x'), False) + CE(D(x), True).
7. The multi-pose ultra-narrow pulse echo generation method based on a generative adversarial network according to claim 1, characterized in that:
in step 4, the parameter optimization comprises fixing the parameters of the generator network G, training the discriminator, and optimizing the parameters of the discriminator network D with the discriminator training loss L_Dadv:
L_D = λ_Dadv·L_Dadv.
8. The multi-pose ultra-narrow pulse echo generation method based on a generative adversarial network according to claim 1, characterized in that:
in step 5, the parameter optimization comprises:
fixing the parameters of the discriminator network D, training the generator, and optimizing the parameters of the generator network G with the view-angle regression loss L_a, the identity classification loss L_i and the generator training loss L_Gadv; the loss function used for generator training is:
L_G = λ_a·L_a + λ_i·L_i + λ_Gadv·L_Gadv
the parameter optimization of steps 4 and 5 is repeated in a loop until the network converges.
9. The multi-pose ultra-narrow pulse echo generation method based on a generative adversarial network according to claim 1, characterized in that:
in step 6, the missing viewing angles of the data set to be complemented are completed as follows:
the existing data of the data set to be complemented, together with full-view-angle labels uniformly spaced by Δθ over the viewing angle range Θ_s ~ Θ_e, are input into the GAN generator network G trained in step 5, and the output samples of the decoder module based on continuous view-angle embedding are saved; these samples are the view-angle completion samples.
CN202210940267.3A 2022-08-05 2022-08-05 Multi-pose extremely narrow pulse echo generation method based on generation countermeasure network Active CN115308705B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210940267.3A CN115308705B (en) 2022-08-05 2022-08-05 Multi-pose extremely narrow pulse echo generation method based on generation countermeasure network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210940267.3A CN115308705B (en) 2022-08-05 2022-08-05 Multi-pose extremely narrow pulse echo generation method based on generation countermeasure network

Publications (2)

Publication Number Publication Date
CN115308705A CN115308705A (en) 2022-11-08
CN115308705B true CN115308705B (en) 2024-12-03

Family

ID=83860198

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210940267.3A Active CN115308705B (en) 2022-08-05 2022-08-05 Multi-pose extremely narrow pulse echo generation method based on generation countermeasure network

Country Status (1)

Country Link
CN (1) CN115308705B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117934869B (en) * 2024-03-22 2024-06-18 中铁大桥局集团有限公司 A target detection method, system, computing device and medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111077523A (en) * 2019-12-13 2020-04-28 南京航空航天大学 An Inverse Synthetic Aperture Radar Imaging Method Based on Generative Adversarial Networks
CN111476294A (en) * 2020-04-07 2020-07-31 南昌航空大学 Zero sample image identification method and system based on generation countermeasure network

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111767803B (en) * 2020-06-08 2022-02-08 北京理工大学 Identification method for anti-target attitude sensitivity of synthetic extremely-narrow pulse radar
CN112784930B (en) * 2021-03-17 2022-03-04 西安电子科技大学 HRRP recognition database sample expansion method based on CACGAN
GB2609708B (en) * 2021-05-25 2023-10-25 Samsung Electronics Co Ltd Method and apparatus for video recognition
CN113987674B (en) * 2021-10-25 2025-09-09 杭州电子科技大学 Radar HRRP continuous learning method based on generation countermeasure network
CN114428234B (en) * 2021-12-23 2025-08-01 西安电子科技大学 Radar high-resolution range profile noise reduction recognition method based on GAN and self-attention

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111077523A (en) * 2019-12-13 2020-04-28 南京航空航天大学 An Inverse Synthetic Aperture Radar Imaging Method Based on Generative Adversarial Networks
CN111476294A (en) * 2020-04-07 2020-07-31 南昌航空大学 Zero sample image identification method and system based on generation countermeasure network

Also Published As

Publication number Publication date
CN115308705A (en) 2022-11-08


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Zhang Liang

Inventor after: Zhou Qiang

Inventor after: Zhang Xin

Inventor after: Song Yiheng

Inventor after: Wang Yanhua

Inventor after: Li Yang

Inventor before: Zhang Liang

Inventor before: Zhou Qiang

Inventor before: Song Yiheng

Inventor before: Wang Yanhua

Inventor before: Li Yang

CB03 Change of inventor or designer information
点击 这是indexloc提供的php浏览器服务,不要输入任何密码和下载