CN114048837A - A Deep Neural Network Model Reinforcement Method Based on Distributed Brain-like Graph - Google Patents
- Publication number
- CN114048837A (application CN202111234229.8A)
- Authority
- CN
- China
- Prior art keywords
- graph
- model
- brain
- neurons
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a deep neural network model reinforcement method based on a distributed brain-like graph. A critical path is applied to the distributed brain-like graph to generate a backbone brain-like graph, so that the propagation process and node behavior along the path can be constrained more effectively, for example by constructing a new gradient loss function that weakens the propagation of noise. On the backbone brain-like graph guided by the critical path, graph network metrics such as the characteristic path length and the participation coefficient then guide the growth of a new, robust brain-like graph structure, which is reconstructed back into the model, improving the model's robustness and hardening the model. By using a brain-like graph, the method exhibits a closer connection with biological neural networks, and since it operates only on a few key neurons of the neural network, it largely preserves the network's integrity.
Description
Technical Field
The invention relates to the fields of distributed machine learning and artificial intelligence security, and in particular to a method for reinforcing a deep neural network model based on a distributed brain-like graph.
Background
With the substantial improvement of software performance and hardware computing power in modern society, artificial intelligence has been widely applied in computer vision, natural language processing, complex network analysis, and other fields with good results. However, Christian Szegedy et al. showed that adding an imperceptible, misleading perturbation to an original sample image produces a new sample that causes a model to give a wrong output with high confidence. Such newly generated samples are called adversarial examples, and they pose a potential security threat to deep learning systems such as face recognition, automatic verification, and autonomous driving systems.
In the past few years, a large number of defense methods have been proposed to improve the robustness of models against adversarial examples and avoid potential dangers in real-world applications. These methods can be roughly classified into adversarial training, input transformation, model architecture transformation, and adversarial example detection. However, most of them target the pixel space of the input image; few analyze the impact of adversarial perturbations by studying the inter-layer structure of the model. This is because, although it is widely believed that the performance of a neural network depends on its architecture, there is no systematic understanding of the relationship between a neural network's accuracy and its underlying graph structure. In recent work, on the one hand, You et al. proposed a new way to represent neural networks as graphs, called relational graphs, which focus on the exchange of information rather than only on directed data flow. However, this method can only build an originally robust model; once the model is attacked by adversarial examples, it is hard to explain and defend it at a fine-grained level, and such relational graphs still do not depart from traditional image networks and lack a deep connection with biological neural networks. On the other hand, Laura et al. recently simulated the network structure of the human brain using network topology and modular-organization computational methods, drew brain network maps, and computed metrics including the characteristic path length and the participation coefficient to guide research; this form of guidance, however, lacks an interpretable theoretical basis for the robust performance of neural networks.
Meanwhile, Li et al. recently proposed a gradient-based influence propagation strategy to obtain critical attack neurons, further constructing the critical attack path of a neural network on its computational graph and constraining the propagation process and node behavior along that path to weaken the propagation of noise and improve the model's robustness. For example, in a social network, false information can pose a huge social threat by spreading rapidly among nodes. Nodes with higher information capacity are more critical than others, more likely to transmit false information, and more likely to be included in critical paths. To effectively suppress the spread of false information, an immunization strategy is generally adopted, i.e., finding and blocking critical paths in the graph to reduce the spread of false information and improve the security of the social network. However, existing graph network representations lack generality and are disconnected from biology and neuroscience.
In view of the above problems, the present invention proposes a method that applies the critical path to a distributed brain-like graph to generate a backbone brain-like graph, and on the backbone brain-like graph uses instructive graph network metrics to grow a new robust brain-like graph structure that is reconstructed back into the model, thereby improving the model's robustness.
Summary of the Invention
To further deepen the connection between deep neural networks and biological neural networks, and to provide a fine-grained explanation for the brain-like graph representation of neural networks, the present invention proposes a deep neural network model reinforcement method based on a distributed brain-like graph. A critical path is applied to the distributed brain-like graph to generate a backbone brain-like graph, on which graph network metrics such as the characteristic path length and the participation coefficient guide the growth of a new robust brain-like graph structure that is reconstructed back into the model, improving the model's robustness and hardening the model.
To achieve the above object, the present invention provides the following technical solution: a deep neural network model reinforcement method based on a distributed brain-like graph, comprising the following steps:
(1) Select sample data from the target model dataset;
(2) Construct a target model from the sample data selected in step (1), train the target model, and finally save the trained target model;
(3) Define the neural network, then define the computational graph of a single neural network, input the target model trained in step (2), and construct the original distributed brain-like graph;
(4) Compute the influence between neurons in every two adjacent layers of the neural network, select the key neurons, and map the key neurons onto the distributed brain-like graph constructed in step (3) to obtain a distributed brain-like graph structure guided by the critical path;
(5) Define graph network metrics, compute them for the original distributed brain-like graph from step (3) and for the critical-path-guided distributed brain-like graph structure from step (4), generate a new brain-like graph structure, and reconstruct a new target model.
Further, step (1) is specifically: the target model dataset includes n pieces of sample data divided into a classes; d% of the samples of each class are extracted as the training set D_train of the target model, where n, a, and d are natural numbers.
Further, step (2) is specifically:
(2.1) Construct a target model from the sample data selected in step (1). The target model adopts a distributed structure: three identical sub-models m1, m2, m3 are set for the three RGB features of the image, together with an output model m_out that finally normalizes the feature matrix;
(2.2) Set uniform hyperparameters for all the models set in step (2.1) and train them on the training set D_train from step (1). Specifically, the number of training epochs, the batch size, the optimizer, the learning rate, and the loss function are set by the user: the optimizer is stochastic gradient descent, the learning rate follows a cosine schedule with initial value 0.1, and the loss function Loss_c adds a regularization term with coefficient λ to the cross-entropy function:

Loss_c = -Σ_i p(x_i)·log q(x_i) + λ‖θ‖²

where p(·) denotes the true label of a sample, q(·) the predicted probability of the model, x_i an input sample, θ the model parameters, and λ the regularization coefficient;
(2.3) Repeat the training until the accuracy of the target model converges, then save the trained target model.
Further, step (3) is specifically:
(3.1) Define the neural network: define a graph G = (V, E), where V = {v1, ..., vn} is the node set, E ⊆ V × V is the edge set, and each node v has a node feature vector W_v;
(3.2) Define the computational graph of a single model: using the forward-propagation algorithm, define the graph node set V = {v1, ..., vn} as all neurons and the edge set E as the connections between pairs of neurons in adjacent layers that have a propagation relationship; the weight of an edge is set to the component of the feature-vector matrix of the corresponding node during propagation from one layer to the next, described by the formula:

W_v = [w_i1, w_i2, ..., w_ij]

where for each component w_ij, i is the index (position) of the neuron in the preceding layer to which the weight connects, and j is the index (position) of the neuron in the following layer to which the weight connects;
(3.3) Construct the distributed brain-like graph: input the target model trained in step (2), first compute the feature vector of every neuron node of each model, draw the computational graph of each model according to the definition in step (3.2), and finally connect the computational graphs of all the drawn sub-models to the computational graph of the output model, using the set weights as connecting edges, to generate an original distributed brain-like graph G_ori.
Further, step (4) is specifically:
(4.1) Compute the influence between neurons in two adjacent layers: for a single element z in the output F_l^j of the j-th neuron of layer l, let I(z, F_{l-1}^i) denote the influence value of the i-th neuron F_{l-1}^i of layer l-1 on it:

I(z, F_{l-1}^i) = Σ_{a ∈ A(F_{l-1}^i)} |∂z/∂a|

where the subscript l denotes the l-th layer, l = 1, 2, ..., L, L is the total number of layers of the neural network, A(F_{l-1}^i) is the set of elements of the i-th neuron of layer l-1, and the function A(·) extracts the element at the specified position.
Let e_ij denote the influence value between neurons i and j, i.e., the sum of the influence values of the i-th neuron F_{l-1}^i of layer l-1 over all elements of the output F_l^j of the j-th neuron of layer l:

e_ij = Σ_{z ∈ F_l^j} I(z, F_{l-1}^i)
(4.2) Select key neurons: given a sample x, first compute the loss gradient measuring the contribution of the i-th neuron of the last convolutional layer L to the model decision:

g_i(x) = ∂Loss/∂F_L^i

where F_L^i denotes the output of the i-th neuron in layer L;
Then sort the loss gradients of all neurons in descending order and select the top k neurons as the key neurons; let R_L(x) denote the key neurons selected in the last layer L:

R_L(x) = top_k({ g_i(x) | F_L^i ∈ F_L })

where the function top_k(·) selects the top k items and F_L is the neuron set of the last convolutional layer L. Then, based on the influence of layer l-1 on layer l, let R_{l-1}(x) denote the key neurons selected in each earlier layer:

R_{l-1}(x) = top_k({ Σ_{j ∈ R_l(x)} e_ij | F_{l-1}^i ∈ F_{l-1} })

Finally, let R(x) denote the key neurons of sample x across the different layers:

R(x) = {R_1(x), R_2(x), ..., R_L(x)}
(4.3) Restrict the loss gradient: obtain a loss term by restricting the gradients at the key neurons:

Loss_g = ‖∂Loss_c/∂R(x)‖²

Then add this loss term to the cross-entropy loss to obtain the final loss function:

Loss = Loss_c + δ·Loss_g

where Loss_c is the cross-entropy loss function from step (2) and δ is a hyperparameter used to balance these loss terms;
(4.4) Map the critical path: map the key neurons obtained in step (4.2) onto the distributed brain-like graph drawn in step (3.3), and remove the nodes of non-key neurons and their edges from the brain-like graph to obtain a distributed brain-like graph structure G_path guided by the critical path.
Further, step (5) is specifically:
(5.1) Define graph network metrics: the graph network metrics include the characteristic path length and the participation coefficient; the characteristic path length is the average shortest path length of the network and measures efficiency; the participation coefficient measures the distribution of a node's connections across the communities of the network;
(5.2) Grow a new brain-like graph structure and reconstruct the model:
Compute the graph network metrics defined in step (5.1) for the original distributed brain-like graph G_ori obtained in step (3) and for the critical-path-guided distributed brain-like graph structure G_path obtained in step (4), and observe the trends of the metrics; the characteristic path length guides the direction in which the brain-like graph grows, and the participation coefficient guides the weight ratio assigned to each sub-model. The newly generated brain-like graph structure is reconstructed back into the target model, yielding sub-models m1′, m2′, m3′ and the output model m_out′.
The technical idea of the present invention is as follows: in the deep neural network model reinforcement method based on a distributed brain-like graph provided by the present invention, the critical path is applied to the distributed brain-like graph to generate a backbone brain-like graph, so that the propagation process and node behavior along the path can be constrained more effectively, for example by constructing a new gradient loss function that weakens the propagation of noise. On the backbone brain-like graph guided by the critical path, graph network metrics such as the characteristic path length and the participation coefficient then guide the growth of a new robust brain-like graph structure that is reconstructed back into the model, improving its robustness. Finally, three state-of-the-art adversarial attack methods are used to generate adversarial examples and attack the model to verify the improvement in its robust performance.
The beneficial effects of the present invention are mainly as follows: 1) compared with the traditional practice of selecting key neurons on computational graphs or on the recently proposed relational graphs, the brain-like graph exhibits a closer connection with biological neural networks; 2) the critical-path method provides a fine-grained explanation for the brain-like graph representation of neural networks, and because improving robustness via the critical path operates only on a few key neurons, the integrity of the neural network is largely preserved; 3) the distributed structure breaks with the network construction in which all image features are directly flattened into a matrix with equal weight ratios, and realizes a principled weight-ratio setting guided by the critical path and the graph network metrics.
Description of Drawings
To explain the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings required for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow chart of the method of the present invention;
Fig. 2 is a schematic diagram of constructing a brain-like graph from a model in the present invention;
Fig. 3 is a schematic diagram of selecting key neurons in the relational graph in the present invention.
Detailed Description
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention and do not limit its scope of protection.
Referring to Figs. 1-3, the present invention provides a method for reinforcing a deep neural network model based on a distributed brain-like graph, comprising the following steps:
(1) Construct the target model dataset, specifically:
The target model dataset includes n pieces of sample data divided into a classes; d% of the samples of each class are extracted as the training set D_train of the target model, and the remaining samples of each class are used as the test set D_test, where n, a, and d are natural numbers.
In the embodiment of the present invention, the CIFAR-10 dataset is used to construct the brain-like graph and verify robustness. The CIFAR-10 dataset contains 60,000 RGB color images of size 32*32, divided into 10 classes of 6,000 samples each, with 50,000 training samples and 10,000 test samples. The present invention takes all 50,000 training samples of CIFAR-10 as the training set D_train of the target model and all 10,000 test samples as the test set D_test.
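For illustration, a minimal sketch of this data setup, assuming PyTorch/torchvision (the patent does not name an implementation framework; all identifiers below are this sketch's own):

```python
import torch
from torchvision import datasets, transforms

# CIFAR-10: 60,000 32*32 RGB images in 10 classes; 50,000 train / 10,000 test.
transform = transforms.ToTensor()  # flattening to a 3072-dim vector happens inside the model

train_set = datasets.CIFAR10(root="./data", train=True, download=True, transform=transform)   # D_train
test_set = datasets.CIFAR10(root="./data", train=False, download=True, transform=transform)   # D_test

train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=128, shuffle=False)
```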
(2) Construct and train the target model, which includes the following sub-steps:
(2.1) Construct a target model from the sample data selected in step (1). The target model adopts a distributed structure: three identical sub-models m1, m2, m3 are set for the three RGB features of the image, together with an output model m_out that finally normalizes the feature matrix.
In the embodiment of the present invention, three 5-layer MLPs with 512 hidden units are used as the target sub-model structure on the CIFAR-10 dataset, and one 2-layer MLP with 32 hidden units is used as the target output model structure. The input of an MLP is the 3072-dimensional flattened vector of a CIFAR-10 image (32*32*3) and the output is a 10-dimensional prediction; each MLP layer has a ReLU activation function and a BatchNorm regularization layer.
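A hedged sketch of these architectures (the exact layer ordering, the split of the input across the three sub-models, and the fused input dimension of m_out are not fully specified in the patent and are assumptions here):

```python
import torch.nn as nn

def mlp_block(in_dim, out_dim):
    # one MLP layer followed by ReLU and BatchNorm, per the embodiment
    return nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU(), nn.BatchNorm1d(out_dim))

class SubModel(nn.Module):
    """5-layer MLP with 512 hidden units (m1, m2, m3)."""
    def __init__(self, in_dim=3072, hidden=512, n_classes=10):
        super().__init__()
        layers = [mlp_block(in_dim, hidden)]
        layers += [mlp_block(hidden, hidden) for _ in range(3)]
        layers.append(nn.Linear(hidden, n_classes))  # 5 linear layers in total
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x.flatten(1))  # flatten the 32*32*3 image to 3072 dims

class OutputModel(nn.Module):
    """2-layer MLP with 32 hidden units (m_out); in_dim=30 assumes the three
    10-dim sub-model outputs are concatenated before normalization."""
    def __init__(self, in_dim=30, hidden=32, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(mlp_block(in_dim, hidden), nn.Linear(hidden, n_classes))

    def forward(self, x):
        return self.net(x)
```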
(2.2) Set uniform hyperparameters for all target models set in step (2.1) and train them on the training set D_train from step (1): the number of training epochs, the batch size, the optimizer, the learning rate, and the loss function are set by the user, with stochastic gradient descent as the optimizer, a cosine learning rate schedule with initial value 0.1, and a loss function Loss_c that adds a regularization term with coefficient λ to the cross-entropy function:

Loss_c = -Σ_i p(x_i)·log q(x_i) + λ‖θ‖²

where p(·) denotes the true label of a sample, q(·) the predicted probability of the model, x_i an input sample, θ the model parameters, and λ the regularization coefficient.
In the embodiment of the present invention, uniform hyperparameters are set: 200 training epochs, batch size 128, stochastic gradient descent (SGD), a cosine learning rate schedule with initial value 0.1, and the regularization parameter λ added to the cross-entropy function, giving the loss function Loss_c above.
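A minimal training-loop sketch under these hyperparameters; for brevity it trains a single sub-model, and the weight_decay value standing in for λ is an assumption:

```python
import torch

model = SubModel()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=5e-4)  # weight_decay plays the role of lambda
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=200)
criterion = torch.nn.CrossEntropyLoss()

for epoch in range(200):
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)  # Loss_c; the lambda*||theta||^2 term enters via weight_decay
        loss.backward()
        optimizer.step()
    scheduler.step()
```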
(2.3) Repeat the training until the accuracy of the target model has essentially converged and no longer improves; then save the trained target model.
(3) Define the neural network, then define the computational graph of a single neural network, input the target model trained in step (2), and construct the distributed brain-like graph, which includes the following sub-steps:
(3.1) Define the neural network: define a graph G = (V, E), where V = {v1, ..., vn} is the node set, E ⊆ V × V is the edge set, and each node v has a node feature vector W_v.
(3.2) Define the computational graph of a single model:
Using the forward-propagation algorithm, define the graph node set V = {v1, ..., vn} as all neurons and the edge set E as the connections between pairs of neurons in adjacent layers that have a propagation relationship; the weight of an edge is set to the component of the feature-vector matrix of the corresponding node during propagation from one layer to the next. Taking a fully connected network as an example, this is described by the formula:

W_v = [w_i1, w_i2, ..., w_ij]

where for each component w_ij, i is the index (position) of the neuron in the preceding layer to which the weight connects, and j is the index (position) of the neuron in the following layer to which the weight connects. In a fully connected network, every neuron of the preceding layer has an edge to every neuron of the following layer, i.e., from the 1st to the j-th.
(3.3) Construct the distributed brain-like graph:
Load the target model trained in step (2), first compute the feature vector of every neuron node of each model, draw the computational graph of each model according to the definition in step (3.2), and finally connect the computational graphs of all the drawn sub-models to the computational graph of the output model, using the set weights as connecting edges, to generate an original distributed brain-like graph G_ori.
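A sketch of this construction, assuming networkx and the trained PyTorch models m1, m2, m3, m_out from step (2) (the node naming scheme and the graph library are this sketch's choices):

```python
import networkx as nx
import torch.nn as nn

def model_to_graph(model, prefix):
    # nodes are neurons ("prefix/layer/index"); edges carry the trained weight
    # component w_ij between neuron i of layer l and neuron j of layer l+1
    g = nx.Graph()
    linear_layers = [m for m in model.modules() if isinstance(m, nn.Linear)]
    for l, layer in enumerate(linear_layers):
        w = layer.weight.detach()  # shape: (out_features, in_features)
        for i in range(w.shape[1]):
            for j in range(w.shape[0]):
                g.add_edge(f"{prefix}/{l}/{i}", f"{prefix}/{l + 1}/{j}", weight=float(w[j, i]))
    return g

# G_ori: the three sub-model graphs joined with the output-model graph; the
# fusion edges between sub-model outputs and m_out follow the set weights
parts = [(m1, "m1"), (m2, "m2"), (m3, "m3"), (m_out, "out")]
g_ori = nx.compose_all([model_to_graph(m, p) for m, p in parts])
```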
(4) Constrain the critical path, which includes the following sub-steps:
(4.1) Compute the influence between neurons in two adjacent layers:
For any model, constructing the critical attack path requires extracting the key attack neurons in each layer and connecting them. The first step is therefore to compute, via back-propagation, the influence of the neurons of the preceding layer on the neurons of the following layer.
Specifically, the influence of one neuron is the sum of the absolute values of the gradients of the elements at each of its positions with respect to the other neuron. For a single element z in the output F_l^j of the j-th neuron of layer l, let I(z, F_{l-1}^i) denote the influence value of the i-th neuron F_{l-1}^i of layer l-1 on it:

I(z, F_{l-1}^i) = Σ_{a ∈ A(F_{l-1}^i)} |∂z/∂a|

where the subscript l denotes the l-th layer, l = 1, 2, ..., L, L is the total number of layers of the neural network, A(F_{l-1}^i) is the set of elements of the i-th neuron of layer l-1, and the function A(·) extracts the element at the specified position.
Let e_ij denote the influence value between neurons i and j, i.e., the sum of the influence values of the i-th neuron F_{l-1}^i of layer l-1 over all elements of the output F_l^j of the j-th neuron of layer l:

e_ij = Σ_{z ∈ F_l^j} I(z, F_{l-1}^i)
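A sketch of this computation with PyTorch autograd; for an MLP, each neuron's per-sample output is a single element, so the element set A(·) ranges over the batch here (an assumption of the sketch):

```python
import torch

def influence(f_prev, f_next, i, j):
    """e_ij: influence of neuron i in layer l-1 on neuron j in layer l.
    f_prev: layer l-1 activations with requires_grad; f_next: layer l activations."""
    e_ij = 0.0
    for z in f_next[:, j]:  # every element z of neuron j's output
        grad = torch.autograd.grad(z, f_prev, retain_graph=True)[0]
        e_ij += grad[:, i].abs().sum().item()  # |dz/da| over the elements of neuron i
    return e_ij
```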
(4.2) Select key neurons:
Given a sample x, first derive the loss gradient measuring the contribution of the i-th neuron of the last convolutional layer L to the model decision:

g_i(x) = ∂Loss/∂F_L^i

where F_L^i denotes the output of the i-th neuron in layer L.
Then sort the loss gradients of all neurons in descending order and select the top k neurons as the key neurons; let R_L(x) denote the key neurons selected in the last layer L:

R_L(x) = top_k({ g_i(x) | F_L^i ∈ F_L })

where the function top_k(·) selects the top k items and F_L is the neuron set of the last convolutional layer L. Then, based on the influence of layer l-1 on layer l, the key neurons of the earlier layers are obtained recursively with the formula of step (4.1); let R_{l-1}(x) denote the key neurons selected in each layer:

R_{l-1}(x) = top_k({ Σ_{j ∈ R_l(x)} e_ij | F_{l-1}^i ∈ F_{l-1} })

Finally, let R(x) denote the key neurons of sample x across the different layers:

R(x) = {R_1(x), R_2(x), ..., R_L(x)}
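A sketch of the last-layer selection; model.features and model.head are hypothetical handles splitting the network at layer L, not the API of any real library:

```python
import torch
import torch.nn.functional as F

def key_neurons_last_layer(model, x, y, k=10):
    feats = model.features(x)              # hypothetical: activations F_L of the last layer
    feats.retain_grad()
    loss = F.cross_entropy(model.head(feats), y)
    loss.backward()
    scores = feats.grad.abs().sum(dim=0)   # one loss-gradient score per neuron
    return torch.topk(scores, k).indices   # R_L(x): the k most decision-critical neurons
```

Earlier layers would then be selected by ranking Σ_j e_ij over the already-selected neurons j of the next layer, using the influence function sketched above.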
(4.3) Restrict the loss gradient:
In the face of adversarial attacks, an intuitive way to constrain the critical path is to restrict the loss gradient so as to reduce the influence exerted by these neurons. A loss term is obtained directly by restricting the gradients at the key neurons:

Loss_g = ‖∂Loss_c/∂R(x)‖²

Then add this loss term to the cross-entropy loss to obtain the final loss function:
Loss = Loss_c + δ·Loss_g
where Loss_c denotes the cross-entropy loss function from step (2) and δ is a hyperparameter used to balance these loss terms.
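A sketch of this combined loss, assuming the key-neuron activations R(x) are available as tensors inside the forward graph (the δ value is an assumption):

```python
import torch

def total_loss(loss_c, key_activations, delta=0.1):
    # Loss_g: squared norm of the gradients of Loss_c at the key neurons R(x);
    # create_graph=True keeps the penalty differentiable for the backward pass
    grads = torch.autograd.grad(loss_c, key_activations, create_graph=True)
    loss_g = sum(g.pow(2).sum() for g in grads)
    return loss_c + delta * loss_g  # Loss = Loss_c + delta * Loss_g
```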
(4.4) Map the critical path:
Map the key neurons obtained in step (4.2) onto the distributed brain-like graph drawn in step (3.3), and remove the nodes of non-key neurons and their edges from the brain-like graph to obtain a distributed brain-like graph structure G_path guided by the critical path.
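A sketch of this pruning step on the networkx graph built earlier; key_layers is a hypothetical per-layer list of selected neuron indices for one sub-model:

```python
# R(x) mapped to node names of the earlier construction sketch (assumed layout)
key_nodes = {f"m1/{l}/{i}" for l, layer_ids in enumerate(key_layers) for i in layer_ids}

# G_path: induced subgraph keeping only critical-path nodes and their mutual edges
g_path = g_ori.subgraph(key_nodes).copy()
```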
(5) Reconstruct the model under the guidance of graph network metrics, which includes the following sub-steps:
(5.1) Define several graph network metrics:
① Characteristic path length: the characteristic path length is a measure of efficiency, defined as the average shortest path length of the network. The distance matrix used to compute shortest paths must be a connection-length matrix, usually obtained by mapping weights to lengths. Here the most commonly used weighted path length is taken as the computation standard:

WPL = Σ w_ij·l

where w_ij is the edge weight defined in step (3.2) and l is the layer subscript from step (4.1), l = 1, 2, ..., L, with L the total number of layers of the neural network; l is generally set to the layer of the neuron with index i in w_ij.
② Participation coefficient: the participation coefficient measures the distribution of a node's connections across the communities of the network. When the participation coefficient is 0, the node's connections are entirely confined to its own block; the closer the participation coefficient is to 1, the more evenly the node's connections are distributed among the blocks. Mathematically, the participation coefficient P_i of node i is:

P_i = 1 - Σ_{c=1}^{C} (S_ic/S_i)²

where S_ic is the sum of the connection weights from node i to the nodes in block c, S_i is the strength of node i, and C is the total number of blocks. Here every sub-model is set as one block.
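Sketches of both metrics on the networkx graphs above; layer_of and block_of parse the "prefix/layer/index" node names assumed in the construction sketch:

```python
def layer_of(node):
    return int(node.split("/")[1])

def block_of(node):
    return node.split("/")[0]  # one block per sub-model: "m1", "m2", "m3", "out"

def weighted_path_length(g):
    # WPL = sum of w_ij * l, with l the layer of the source neuron i
    return sum(abs(d["weight"]) * min(layer_of(u), layer_of(v))
               for u, v, d in g.edges(data=True))

def participation_coefficient(g):
    # P_i = 1 - sum_c (S_ic / S_i)^2
    p = {}
    for n in g.nodes:
        s_i = sum(abs(d["weight"]) for _, _, d in g.edges(n, data=True))
        if s_i == 0:
            p[n] = 0.0
            continue
        s_ic = {}
        for _, m, d in g.edges(n, data=True):
            s_ic[block_of(m)] = s_ic.get(block_of(m), 0.0) + abs(d["weight"])
        p[n] = 1.0 - sum((s / s_i) ** 2 for s in s_ic.values())
    return p
```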
(5.2) Grow a new brain-like graph structure and reconstruct the model
Compute the two metrics of step (5.1) for the original distributed brain-like graph G_ori obtained in step (3) and for the critical-path-guided distributed brain-like graph structure G_path obtained in step (4), and observe the trends of the metrics; the characteristic path length guides the direction in which the brain-like graph grows, and the participation coefficient guides the weight ratio assigned to each sub-model. Finally, the grown new brain-like graph structure is generated and converted back into a model, i.e., reconstructed back into the target model, yielding a robust target model with m1′, m2′, m3′ and m_out′.
(6) Perform adversarial attacks on the target model to generate adversarial examples, and attack the model to verify the improvement in its robustness:
The embodiment of the present invention adopts several adversarial attack methods, including the FGSM, CW, and PGD attacks. For each attack, 1,000 images are randomly selected from each dataset to generate adversarial examples. The three attacks use different parameters: for the FGSM attack, ε = 2; for the CW attack, the L2-norm version is used with initial value c = 0.01, confidence k = 0, and 200 iterations; for the PGD attack, ε = 2, step size α = ε/10, and 20 iterations.
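For illustration, FGSM written out by hand under these parameters (ε = 2 is read as 2/255 on images scaled to [0, 1], which is an assumption; CW and PGD would typically come from an attack library):

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=2 / 255):
    # one-step attack: perturb the input along the sign of the loss gradient
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```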
Model robustness evaluation metric: under adversarial attack, accuracy is commonly used as the evaluation metric for robust performance.
Accuracy: for a given test dataset, accuracy is the ratio of the number of samples correctly classified by the classifier to the total number of samples:

Accuracy = (TP + TN)/(TP + FP + FN + TN)

where TP denotes positive samples judged positive, FP negative samples judged positive, FN positive samples judged negative, and TN negative samples judged negative; the higher the accuracy under attack, the better the robust performance. Experiments show that, under the three attacks, the robust target model on the CIFAR-10 dataset improves accuracy by 42.3% on average over the original target model.
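A sketch of this evaluation: the fraction of correctly classified (adversarial) samples, which equals (TP + TN)/(TP + FP + FN + TN) in the confusion-matrix notation above:

```python
import torch

@torch.no_grad()
def accuracy(model, loader):
    correct = total = 0
    for x, y in loader:
        pred = model(x).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total
```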
The specific embodiments described above explain the technical solutions and beneficial effects of the present invention in detail. It should be understood that the above is only the most preferred embodiment of the present invention and is not intended to limit it; any modifications, additions, and equivalent substitutions made within the scope of the principles of the present invention shall be included within its scope of protection.
Claims (6)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111234229.8A CN114048837A (en) | 2021-10-22 | 2021-10-22 | A Deep Neural Network Model Reinforcement Method Based on Distributed Brain-like Graph |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111234229.8A CN114048837A (en) | 2021-10-22 | 2021-10-22 | A Deep Neural Network Model Reinforcement Method Based on Distributed Brain-like Graph |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN114048837A true CN114048837A (en) | 2022-02-15 |
Family
ID=80206082
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202111234229.8A Pending CN114048837A (en) | 2021-10-22 | 2021-10-22 | A Deep Neural Network Model Reinforcement Method Based on Distributed Brain-like Graph |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN114048837A (en) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115169540A (en) * | 2022-08-04 | 2022-10-11 | 浙江工业大学 | Defect tracing method of deep learning computing framework based on distributed brain-like graph |
| CN117764120A (en) * | 2024-02-22 | 2024-03-26 | 天津普智芯网络测控技术有限公司 | Picture identification architecture capable of reducing single event fault influence |
Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107180236A (en) * | 2017-06-02 | 2017-09-19 | 北京工业大学 | A multi-modal emotion recognition method based on a brain-like model |
| US20190095806A1 (en) * | 2017-09-28 | 2019-03-28 | Siemens Aktiengesellschaft | SGCNN: Structural Graph Convolutional Neural Network |
| CN111714118A (en) * | 2020-06-08 | 2020-09-29 | 北京航天自动控制研究所 | Brain cognition model fusion method based on ensemble learning |
| CN112183716A (en) * | 2020-08-28 | 2021-01-05 | 北京航空航天大学 | Method and device for determining critical attack path in neural network |
| CN112183717A (en) * | 2020-08-28 | 2021-01-05 | 北京航空航天大学 | Neural network training method and device based on critical path |
| CN113128892A (en) * | 2021-04-28 | 2021-07-16 | 中国水利水电科学研究院 | Chained disaster risk assessment method and device based on complex network topological relation |
| CN113157935A (en) * | 2021-03-16 | 2021-07-23 | 中国科学技术大学 | Graph neural network model and method for entity alignment based on relationship context |
| CN113255895A (en) * | 2021-06-07 | 2021-08-13 | 之江实验室 | Graph neural network representation learning-based structure graph alignment method and multi-graph joint data mining method |
-
2021
- 2021-10-22 CN CN202111234229.8A patent/CN114048837A/en active Pending
Patent Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107180236A (en) * | 2017-06-02 | 2017-09-19 | 北京工业大学 | A multi-modal emotion recognition method based on a brain-like model |
| US20190095806A1 (en) * | 2017-09-28 | 2019-03-28 | Siemens Aktiengesellschaft | SGCNN: Structural Graph Convolutional Neural Network |
| CN111714118A (en) * | 2020-06-08 | 2020-09-29 | 北京航天自动控制研究所 | Brain cognition model fusion method based on ensemble learning |
| CN112183716A (en) * | 2020-08-28 | 2021-01-05 | 北京航空航天大学 | Method and device for determining critical attack path in neural network |
| CN112183717A (en) * | 2020-08-28 | 2021-01-05 | 北京航空航天大学 | Neural network training method and device based on critical path |
| CN113157935A (en) * | 2021-03-16 | 2021-07-23 | 中国科学技术大学 | Graph neural network model and method for entity alignment based on relationship context |
| CN113128892A (en) * | 2021-04-28 | 2021-07-16 | 中国水利水电科学研究院 | Chained disaster risk assessment method and device based on complex network topological relation |
| CN113255895A (en) * | 2021-06-07 | 2021-08-13 | 之江实验室 | Graph neural network representation learning-based structure graph alignment method and multi-graph joint data mining method |
Non-Patent Citations (1)
| Title |
|---|
| Li Hui: "Fuzzy Cognitive Hypergraphs and Multi-relational Data Mining", 31 July 2017, Modern Education Press, p. 121 * |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115169540A (en) * | 2022-08-04 | 2022-10-11 | 浙江工业大学 | Defect tracing method of deep learning computing framework based on distributed brain-like graph |
| CN117764120A (en) * | 2024-02-22 | 2024-03-26 | 天津普智芯网络测控技术有限公司 | Picture identification architecture capable of reducing single event fault influence |
| CN117764120B (en) * | 2024-02-22 | 2024-07-16 | 天津普智芯网络测控技术有限公司 | Picture identification method capable of reducing single event fault influence |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN108229444B (en) | Pedestrian re-identification method based on integral and local depth feature fusion | |
| CN110048827B (en) | Class template attack method based on deep learning convolutional neural network | |
| CN110490320B (en) | Deep neural network structure optimization method based on fusion of prediction mechanism and genetic algorithm | |
| CN112465120A (en) | Fast attention neural network architecture searching method based on evolution method | |
| CN113408743A (en) | Federal model generation method and device, electronic equipment and storage medium | |
| CN107832789B (en) | Feature weighting K nearest neighbor fault diagnosis method based on average influence value data transformation | |
| CN116391193B (en) | Method and apparatus for energy-based latent variable model based neural networks | |
| CN109753571A (en) | A low-dimensional space embedding method of scene graph based on quadratic topic space projection | |
| CN113723238B (en) | A face lightweight network model construction method and face recognition method | |
| CN108594793A (en) | A kind of improved RBF flight control systems fault diagnosis network training method | |
| CN113935489A (en) | Variational quantum model TFQ-VQA based on quantum neural network and two-stage optimization method thereof | |
| CN114511737A (en) | Training method of image recognition domain generalization model | |
| CN118114734A (en) | Convolutional neural network optimization method and system based on sparse regularization theory | |
| CN114048837A (en) | A Deep Neural Network Model Reinforcement Method Based on Distributed Brain-like Graph | |
| Wang et al. | Deep learning and its adversarial robustness: A brief introduction | |
| CN117421667A (en) | Attention-CNN-LSTM industrial process fault diagnosis method based on improved gray wolf algorithm optimization | |
| CN112052933A (en) | Security testing method and repair method of deep learning model based on particle swarm optimization | |
| CN112766496A (en) | Deep learning model security guarantee compression method and device based on reinforcement learning | |
| CN120067950A (en) | Dynamic graph anomaly detection method based on hypergraph contrast learning | |
| CN112836729A (en) | An image classification model construction method and image classification method | |
| CN108985382B (en) | Confrontation sample detection method based on key data path representation | |
| CN118504333B (en) | A cable-stayed bridge damage identification method and system based on multi-wavelet packet energy and IHPO-SVM fusion | |
| CN114925802A (en) | Integrated transfer learning method and system based on depth feature mapping | |
| CN115392434A (en) | Depth model reinforcement method based on graph structure variation test | |
| CN113283537B (en) | Deep model privacy protection method and device based on parameter sharing for member inference attacks |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |