CN117077765A - Electroencephalogram signal identity recognition method based on personalized federated incremental learning - Google Patents
- Publication number: CN117077765A (application CN202310644445.2A)
- Authority: CN (China)
- Prior art keywords: learning, client, task, incremental, federated
- Prior art date: 2023-06-01
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- A61B5/24: Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
- A61B5/369: Electroencephalography [EEG]
- A61B5/372: Analysis of electroencephalograms
- A61B5/117: Identification of persons
- G06F18/213: Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/2415: Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
- G06N3/047: Probabilistic or stochastic networks
- G06N3/048: Activation functions
- G06N3/084: Backpropagation, e.g. using gradient descent
- G06N3/096: Transfer learning
- G06N3/098: Distributed learning, e.g. federated learning
- G06N3/0985: Hyperparameter optimisation; Meta-learning; Learning-to-learn
- G06F2218/08: Feature extraction (aspects of pattern recognition specially adapted for signal processing)
- G06F2218/12: Classification; Matching (aspects of pattern recognition specially adapted for signal processing)
Landscapes
- Engineering & Computer Science; Physics & Mathematics; Theoretical Computer Science; Life Sciences & Earth Sciences; Health & Medical Sciences; Data Mining & Analysis; Artificial Intelligence; Evolutionary Computation; General Engineering & Computer Science; General Physics & Mathematics; Biophysics; General Health & Medical Sciences; Molecular Biology; Biomedical Technology; Computing Systems; Software Systems; Mathematical Physics; Computational Linguistics; Computer Vision & Pattern Recognition; Evolutionary Biology; Bioinformatics & Computational Biology; Bioinformatics & Cheminformatics; Probability & Statistics with Applications; Pathology; Heart & Thoracic Surgery; Medical Informatics; Surgery; Animal Behavior & Ethology; Public Health; Veterinary Medicine; Psychiatry; Psychology; Image Analysis; Character Discrimination
Abstract
The present invention provides an EEG signal identity recognition method based on personalized federated incremental learning, comprising the following steps. First, a server-client federated learning architecture is adopted, and each client initializes a private local model, a learning rate network, and a replay sample pool. Then, at each task increment, each client applies an incremental meta-learning method to continually learn from its local, continuously arriving task stream. Next, a personalized federated incremental learning strategy is designed: model parameters are shared globally while the learning rate network is retained locally, enabling learning over the heterogeneous data of different incremental tasks on different clients. Finally, the central server uses the global model trained during communication to identify EEG signal data from different institutions. The proposed personalized federated incremental learning method achieves forward transfer of knowledge between clients during federated communication, producing a universal identity recognition template applicable to each client.
Description
Technical Field
The present invention relates to the field of federated incremental learning for electroencephalogram (EEG) signal identity recognition, and more specifically to an EEG signal identity recognition method based on personalized federated incremental learning that provides privacy protection and continual learning in distributed task-incremental scenarios.
Background Art
With the surging demand for highly reliable identity authentication and the rapid development of artificial intelligence, biometric recognition (including face recognition, speaker recognition, iris recognition, and EEG-based recognition) has made significant progress and found wide application in recent years. As people continuously generate data, they pay ever more attention to privacy and security, especially when an identity recognition system involves personal private data. Owing to their universality, permanence, collectability, and uniqueness, EEG signals have become a biometric trait with a comparatively high security factor, yet they still face limitations.

On the one hand, deep neural networks enable fast and accurate recognition of complex biometric encodings such as EEG, achieving higher recognition accuracy and better generalization on large-scale datasets. However, with the spread of hardware devices and the growth of online data streams, the traditional approach of training deep models offline is no longer effective: real-world authentication systems register new users' EEG features every day, and an offline-trained model cannot accommodate newly registered identities, which hinders efficient and reliable authentication. Traditional training centralizes the full dataset and yields the offline model with the best recognition and generalization performance, but as data volume grows this puts enormous pressure on storage and computing memory and is quite time-consuming. Meanwhile, owing to privacy concerns or storage constraints, the data of old tasks usually cannot be fully accessed; a conventional neural network then trains only on samples of the new task, biasing the model toward new data and degrading recognition of old data, a phenomenon known as catastrophic forgetting. Facing a continuously growing stream of biometric data, a reliable identity authentication model must therefore demonstrate continual learning: the ability to learn successive tasks without forgetting how to recognize previously trained ones.

On the other hand, real-world data comes from many distributed edge devices. Centralized training over the pooled data can improve model accuracy and robustness, but it places high demands on the storage and computing capacity of cloud devices and may expose edge devices to privacy risks and data leakage. Conversely, having each distributed institution train only on its own data protects the privacy of each edge device but greatly reduces the accuracy and generalization of each distributed model.
In real scenarios, both data increments and privacy must be considered. Task data arrives continuously at each distributed authentication device, which complicates EEG-based identity recognition. On the one hand, whenever a new task stream arrives, each client would need to retrain a new model on all training samples to cover all tasks, wasting memory and time; especially when the edge devices are lightweight, the continuous task stream increases the storage pressure on each device. On the other hand, when data across clients is non-independent and non-identically distributed, training only on the latest task data biases the model gradient toward the new task and causes catastrophic forgetting of past tasks. In summary, designing an EEG signal identity recognition method suited to distributed task-incremental scenarios offers practical guidance for biometric authentication systems in real settings.
Summary of the Invention
To solve the above problems, the present invention provides an EEG signal identity recognition method based on personalized federated incremental learning. A privacy-preserving personalized federated incremental learning framework is constructed that realizes personalized incremental learning by globally sharing meta-parameters while locally retaining learning rates; each client performs local updates with an adaptive incremental meta-learning method based on sample replay and task distillation, achieving adaptive learning of different tasks while avoiding forgetting of past tasks.
The workflow of the EEG signal identity recognition method based on personalized federated incremental learning provided by the present invention includes the following steps:
The EEG signal identity recognition method based on personalized federated incremental learning according to claim 1, characterized in that, in step S101: a server-client federated learning architecture is adopted; the motor imagery EEG signals collected for each incremental task are evenly distributed across all K clients (federated communication involves K clients in total); and each client k (1 ≤ k ≤ K) initializes a private local model $\Theta^k$, a learning rate network $\alpha^k$, and a replay sample pool $\mathcal{M}^k$.
First, the adopted server-client architecture is a federated learning framework with the server at the center and clients as distributed nodes; each client trains only on its local private data samples, shares no data with other clients or the server, but shares model parameters with the server.
Then, according to the data distribution scenario, the collected motor imagery EEG signals are evenly distributed across all clients. For EEG data with M classes and n samples per class: under the IID (independent and identically distributed) setting, the training samples of every class are spread evenly over all clients, so each client handles the same classes but non-overlapping samples, i.e., each client k handles M classes with $\frac{n}{K}$ samples per class; under the non-IID setting, the classes themselves are spread evenly over all clients and the classes handled by different clients are disjoint, i.e., each client k handles $\frac{M}{K}$ classes with n samples per class.
Finally, each client k is initialized with the model parameters $\Theta^k$ of its local meta-learning model, the model parameters $\alpha^k$ of its local learning rate network, and a replay sample pool $\mathcal{M}^k$ of fixed capacity.
The EEG signal identity recognition method based on personalized federated incremental learning according to claim 1, characterized in that, in step S102: for the k-th client of step S101 (1 ≤ k ≤ K, federated communication involving K clients in total), its private task stream $\{\mathcal{T}^k_1, \dots, \mathcal{T}^k_C\}$ is set; at the c-th task communication (1 ≤ c ≤ C, the federated process comprising C task communications in total), the sample data of the current incremental task $\mathcal{T}^k_c$ is input, and incremental learning is performed based on the local model and the current incremental task;
First, let p and i denote the numbers of outer-loop and inner-loop iterations, respectively, and let $\theta^k_{p,i}$ denote the model parameters obtained after i inner-loop training steps in the p-th outer loop of the k-th client;
Then, in the inner loop of incremental meta-learning, meta-learning is combined with nearest-class-mean sample replay. Based on the data of the current incremental task $\mathcal{T}^k_c$, a learnable learning rate network is used to meta-learn the multi-task few-shot scenario; at the c-th task communication (2 ≤ c ≤ C), the local replay sample pool of each client k is sampled to obtain old-task samples $m_{c-1}$, on which a distillation loss is computed; the current incremental task is sampled to obtain new-task samples $b = \{(X_c, Y_c)\},\ 2 \le c \le C$, and a classification loss is computed on the new samples b together with the old samples $m_{c-1}$. The objective of the inner meta-training is therefore:
$$\min_{\Theta}\; l_{meta}\big(f_{\Theta}(X_{1:c}),\, Y_{1:c}\big) = l_{CE}\big(f_{\Theta}(X_{1:c}),\, Y_{1:c}\big) + \lambda\, l_{KD}\big(f_{\Theta}(X_{1:c-1}),\, Y_{1:c-1}\big)$$

where $X_{1:c}$ and $Y_{1:c}$ denote the training data and corresponding labels handled by this client over the first c task communications, $X_c$ and $Y_c$ denote the training data and labels of the c-th task communication, λ is the relative weight of the distillation loss against the classification loss, $l_{meta}$ is the loss function of the meta-task, $f_{\Theta}(X_{1:c})$ is the output prediction for input $X_{1:c}$ under model parameters Θ, and $l_{CE}$ and $l_{KD}$ are the classification and distillation loss functions, respectively; at the same time, the replay sample pool is updated according to the nearest-class-mean sample replay rule;
Finally, in the outer loop of incremental meta-learning, the learning rate network and the meta-model parameters are updated by gradient descent on the meta-loss, finding learning rates and update directions suited to different tasks. For the next outer loop p+1 of client k, the meta-parameters $\Theta^k$ are updated by the gradient of the meta-loss built from the classification and distillation losses:

$$\Theta^k_{p+1} = \Theta^k_p - \beta\, \nabla_{\Theta^k_p}\, l_{meta}$$
where β is the learning rate parameter for the meta-model parameter update. For the next outer loop p+1 of the k-th client, the gradient of the meta-loss is computed with respect to the learning rate network $\alpha^k_p$ of the p-th outer loop, and the learning rate network is updated adaptively:

$$\alpha^k_{p+1} = \alpha^k_p - \alpha_{hyperlr}\, \nabla_{\alpha^k_p}\, l_{meta}$$
where $\alpha_{hyperlr}$ is the learning rate parameter for the learning rate network update, and $\alpha^k_p$ is the learnable learning rate network of the k-th client at the p-th outer loop; this network has the same architecture as the meta-model $\Theta^k$.
The EEG signal identity recognition method based on personalized federated incremental learning according to claim 1, characterized in that, in step S103: the personalized federated incremental learning framework consists of a plurality of clients as described in step S102 and one central server; after completing the local update for the current task, each client retains its learning rate network and replay sample pool locally and shares the parameters of its meta-model globally, thereby communicating with the server while preserving its local, personalized learning direction;
First, the central server aggregates the local meta-model parameters $\Theta^1, \dots, \Theta^K$ from all clients, where the amounts of data processed by the clients in the current round are $d_1, \dots, d_k, \dots, d_K$; then, a federated weighted average of the aggregated model parameters is taken according to data volume, yielding the global model parameters

$$\Theta = \sum_{k=1}^{K} \frac{d_k}{\sum_{j=1}^{K} d_j}\, \Theta^k$$

Finally, the central server distributes the global model parameters Θ to all clients.
The EEG signal identity recognition method based on personalized federated incremental learning according to claim 1, characterized in that, in step S104: the personalized federated incremental learning method designed in S103 is trained iteratively for R communication rounds to ensure model convergence; the method is then used to perform distributed incremental learning on the EEG data samples to be identified and to determine the user label corresponding to each sample; concretely, the central server recognizes an input test EEG signal x based on the trained global parameters Θ, and the resulting label y is the predicted identity.
The beneficial effects of the present invention are as follows. The EEG signal identity recognition method based on personalized federated incremental learning constructs a federated learning framework on a server-client architecture in which each client performs local incremental learning with an incremental meta-learning method. A personalized federated communication strategy is then designed that shares model parameters globally while retaining learning rates locally: each client communicates only its meta-learning parameters with the server and never its private learning rate network. This strengthens the protection of each client's local learning method, enables personalized learning of different optimizer learning rates on different clients, and, since the learning rate network stays local, also reduces the communication cost of transmission.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the structure of the EEG signal identity recognition method based on personalized federated incremental learning according to an embodiment of the present invention.
FIG. 2 is a schematic diagram of the architecture of the client-side incremental learning method based on adaptive meta-learning according to an embodiment of the present invention.
FIG. 3 is a schematic diagram of the computation of the distillation loss and the classification loss in the meta-task according to an embodiment of the present invention.
FIG. 4 is a schematic diagram of the update process of the inner and outer loops of the incremental meta-learning method according to an embodiment of the present invention.
FIG. 5 is an overall framework diagram of EEG signal identity recognition with personalized federated incremental learning according to an embodiment of the present invention.
Detailed Description
The present invention is described in detail below with reference to the accompanying drawings and specific embodiments. The method of the present invention comprises four parts:
Part 1: constructing the federated learning architecture
Part 2: client-side local updates based on incremental meta-learning
Part 3: personalized server-client communication
Part 4: EEG signal identity recognition at the server
Based on these four parts, the EEG signal identity recognition method based on personalized federated incremental learning according to an embodiment of the present invention, as shown in FIG. 1, includes the following steps:
S101: A server-client federated learning architecture is adopted; the motor imagery EEG signals collected for each incremental task are evenly distributed across all K clients (federated communication involves K clients in total); each client k (1 ≤ k ≤ K) initializes a private local model $\Theta^k$, a learning rate network $\alpha^k$, and a replay sample pool $\mathcal{M}^k$.
First, the adopted server-client architecture is a federated learning framework with the server at the center and clients as distributed nodes; each client trains only on its local private data samples, shares no data with other clients or the server, but shares model parameters with the server.
Then, according to the data distribution scenario, the collected motor imagery EEG signals are evenly distributed across all clients. For EEG data with M classes and n samples per class: under the IID (independent and identically distributed) setting, the training samples of every class are spread evenly over all clients, so each client handles the same classes but non-overlapping samples, i.e., each client k handles M classes with $\frac{n}{K}$ samples per class; under the non-IID setting, the classes themselves are spread evenly over all clients and the classes handled by different clients are disjoint, i.e., each client k handles $\frac{M}{K}$ classes with n samples per class.
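The two partitioning schemes can be made concrete with a short sketch. This is a minimal illustration under our own assumptions (the function names and the use of NumPy index arrays are not taken from the patent):

```python
import numpy as np

def partition_iid(labels, num_clients, seed=0):
    """IID: every client sees all classes, but disjoint samples of each class."""
    rng = np.random.default_rng(seed)
    shards = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        idx = rng.permutation(np.where(labels == cls)[0])
        for k, part in enumerate(np.array_split(idx, num_clients)):
            shards[k].extend(part.tolist())  # n/K samples of this class per client
    return shards

def partition_non_iid(labels, num_clients, seed=0):
    """Non-IID: the classes themselves are split, so clients hold disjoint class sets."""
    rng = np.random.default_rng(seed)
    classes = rng.permutation(np.unique(labels))
    shards = []
    for class_group in np.array_split(classes, num_clients):  # M/K classes per client
        shards.append(np.where(np.isin(labels, class_group))[0].tolist())
    return shards
```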
Finally, each client k is initialized with the model parameters $\Theta^k$ of its local meta-learning model, the model parameters $\alpha^k$ of its local learning rate network, and a replay sample pool $\mathcal{M}^k$ of fixed capacity.
S102: At each task increment, every client initialized in step S101 performs incremental learning based on its local model and its current private incremental task. This is realized in three steps: 1) combining the inner loop of meta-learning with nearest-class-mean sample replay; 2) meta-task distillation; 3) a meta-learning outer loop based on a learnable learning rate network.
FIG. 2 shows the architecture of the adaptive meta-learning based incremental learning method local to each client.
In the first step, the inner loop of meta-learning is combined with nearest-class-mean sample replay. For the k-th client (1 ≤ k ≤ K, federated communication involving K clients in total) with private task stream $\{\mathcal{T}^k_1, \dots, \mathcal{T}^k_C\}$, the private task handled by client k at the c-th task communication (1 ≤ c ≤ C, the federated process comprising C task communications) is $\mathcal{T}^k_c$. At the c-th task communication, each client k trains its local incremental model on the samples of its private task $\mathcal{T}^k_c$, where the task is sampled from the private dataset and contains $|\mathcal{C}^k_c|$ classes in total. Assuming the sample space contains N class labels overall, $\sum_{c=1}^{C} |\mathcal{C}^k_c| \le N$; in the private dataset, X denotes the sample data and Y the sample labels. Note that for a given client k, the task sample classes handled in different task communication rounds do not overlap, i.e., $\mathcal{C}^k_i \cap \mathcal{C}^k_j = \varnothing$ when i ≠ j; for different clients p and q, the task sample classes handled in the same communication round may overlap, i.e., $\mathcal{C}^p_c \cap \mathcal{C}^q_c$ may be non-empty when p ≠ q.
The incremental meta-learning procedure is divided into inner training and outer training. At time c, the method performs inner training on the new-task batch b sampled from the current new task $\mathcal{T}^k_c$, and outer training on the mixture of b and the old-task samples $m_{c-1}$ sampled from the sample pool. The method can thus learn the new data distribution while keeping the gradients of new and old tasks aligned.
First, in the inner loop of incremental meta-learning, a learnable learning rate network is used to adapt quickly to the multi-task few-shot scenario based on the data of the current new task; p and i denote the numbers of outer-loop and inner-loop iterations, respectively, and $\theta^k_{p,i}$ denotes the model parameters obtained after i inner-loop training steps in the p-th outer loop of the k-th client. Each inner step descends the classification loss on the new-task batch with the learnable per-parameter learning rates:

$$\theta^k_{p,i+1} = \theta^k_{p,i} - \alpha^k_p \odot \nabla_{\theta^k_{p,i}}\, l_{CE}\big(f_{\theta^k_{p,i}}(X_c),\, Y_c\big)$$
Then, let $b_m = m_{c-1} \cup b$ be the mixture of the old-task samples $m_{c-1}$ drawn from the sample pool and the new-task samples b drawn from the current task. Outer meta-training on the model parameters obtained from inner training and on the mixed data $b_m$ yields the local meta-parameters $\Theta_c$. At time c, the objective of formula (1) can therefore be rewritten as:

$$\min_{\Theta_c}\; l_{meta}\big(f_{\Theta_c}(X_{1:c}),\, Y_{1:c}\big) \quad (4)$$
where $\{X_{1:c}, Y_{1:c}\}$ are the sample data of the 1 to c tasks processed so far, obtained by sampling from $b_m$.
The nearest-class-mean sample replay method uses a fixed-capacity replay pool that stores the samples of old tasks closest to their class means, to be drawn on during training of new tasks; the pool used in the c-th communication round is rebuilt by nearest-class-mean sampling after that round. The method comprises two main stages: sample sampling and pool updating. The sampling stage constructs a size-limited replay batch m (the procedure is shown in Algorithm 1); the pool-update stage rebuilds the fixed-capacity replay pool for time c (the procedure is shown in Algorithm 2).
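Algorithms 1 and 2 are not reproduced in this text; the helper below is therefore only a sketch of their apparent intent, assuming PyTorch tensors for features and labels and that the pool capacity is split evenly over the classes seen so far:

```python
import torch

@torch.no_grad()
def update_replay_pool(features, samples, labels, capacity):
    """Rebuild a fixed-capacity pool, keeping per class the samples nearest the class mean.

    features: (N, d) embeddings from the layer before the classifier
    samples, labels: the corresponding raw EEG windows and identity labels
    """
    classes = labels.unique()
    per_class = capacity // len(classes)        # split capacity over seen classes
    keep = []
    for cls in classes:
        idx = (labels == cls).nonzero(as_tuple=True)[0]
        feats = features[idx]
        mean = feats.mean(dim=0, keepdim=True)  # class prototype
        dist = (feats - mean).norm(dim=1)       # distance of each sample to the mean
        keep.append(idx[dist.argsort()[:per_class]])
    keep = torch.cat(keep)
    return samples[keep], labels[keep]
```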
In the second step, meta-task distillation: as introduced above, the meta-task batch $b_m$ is the mixture of the current task batch b and the replay samples $m_{c-1}$ drawn from the sample pool. What the present invention stores is not the network parameters $\Theta_c$ of each task but, as soft labels, the prototypes of the nearest-class-mean samples in the replay pool (the feature embeddings extracted before the fully connected layer for the corresponding task). For each old task τ, the soft labels in the replay pool are generated by the classifier $\Theta_\tau$ obtained right after training task τ, which guarantees that $\Theta_\tau$ learns the sample distribution of task τ most accurately.
FIG. 3 illustrates how the distillation loss on old samples and the classification loss on mixed new and old samples are computed within the meta-task. When computing the distillation loss, let the number of class labels of the replay samples be |m|. For sample data x in the replay batch (with label y), let the outputs of the previous old classifier and the current new classifier be $\hat{O}^{|m|}(x) = [\hat{o}_1(x), \dots, \hat{o}_{|m|}(x)]$ and $O^{|m|}(x) = [o_1(x), \dots, o_{|m|}(x)]$, respectively. The distillation loss can then be expressed as:

$$l_{KD}(x) = -\sum_{k=1}^{|m|} \hat{\pi}_k(x)\, \log \pi_k(x), \qquad \pi_k(x) = \frac{e^{\,o_k(x)/T}}{\sum_{j=1}^{|m|} e^{\,o_j(x)/T}}, \quad \hat{\pi}_k(x) = \frac{e^{\,\hat{o}_k(x)/T}}{\sum_{j=1}^{|m|} e^{\,\hat{o}_j(x)/T}}$$
where T denotes the temperature scale. When computing the classification loss, let the number of class labels of the new-task samples be |n|. For the mixed data x of the |m|-class replay samples in $m_{c-1}$ and the |n|-class samples of the current task b (with label y), let the output of the current new classifier be $O^{|m|+|n|}(x) = [o_1(x), \dots, o_{|m|}(x), o_{|m|+1}(x), \dots, o_{|m|+|n|}(x)]$. The cross-entropy classification loss can then be expressed as:

$$l_{CE}(x) = -\sum_{k=1}^{|m|+|n|} \mathbb{1}[y = k]\, \log p_k(x)$$
where $\mathbb{1}[y = k]$ takes the value 1 if the true class y of the sample is k and 0 otherwise, and $p_k(x)$ is the probability the classifier outputs for the k-th class (e.g., the softmax value of the logits).
Formula (4) can therefore be rewritten as:

$$\min_{\Theta}\; l_{meta} = l_{CE}(b_m) + \lambda\, l_{KD}(m_{c-1}) \quad (6)$$
where $l_{CE}$ is the cross-entropy loss on $b_m$ used for correct classification and $l_{KD}$ is the distillation loss on $m_{c-1}$ used for network regularization. The meta-parameters of the next outer loop p+1 are updated by the gradient of the meta-loss built from the classification and distillation losses:

$$\Theta^k_{p+1} = \Theta^k_p - \beta\, \nabla_{\Theta^k_p}\, l_{meta}$$
where β is the learning rate parameter for the meta-model parameter update.
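The two losses and their combination in formula (6) can be written compactly as below. This is a minimal sketch, assuming PyTorch and that the stored soft labels are the old classifier's logits over the first |m| classes; the default temperature value is our choice, not the patent's:

```python
import torch.nn.functional as F

def distillation_loss(new_logits, stored_old_logits, T=2.0):
    """l_KD: match the new model's temperature-softened outputs on the old
    classes to the soft labels stored by the old classifier."""
    log_p = F.log_softmax(new_logits / T, dim=1)
    q = F.softmax(stored_old_logits / T, dim=1)
    return -(q * log_p).sum(dim=1).mean()

def meta_loss(mixed_logits, mixed_targets, replay_logits, stored_old_logits,
              lam=1.0, T=2.0):
    """l_meta = l_CE(b_m) + lambda * l_KD(m_{c-1}), as in formula (6).

    mixed_logits, mixed_targets: new classifier outputs and labels on b_m
    replay_logits: new classifier outputs on the replay samples, restricted
                   to the first |m| (old) classes
    """
    ce = F.cross_entropy(mixed_logits, mixed_targets)  # l_CE over |m|+|n| classes
    kd = distillation_loss(replay_logits, stored_old_logits, T)
    return ce + lam * kd
```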
In the third step, the meta-learning outer loop is driven by the learnable learning rate network: at the (p+1)-th outer loop of the k-th client, the gradient of the meta-loss is computed with respect to the learning rate network $\alpha^k_p$ of the p-th outer loop, and the learning rate network is updated adaptively:

$$\alpha^k_{p+1} = \alpha^k_p - \alpha_{hyperlr}\, \nabla_{\alpha^k_p}\, l_{meta}$$
where $\alpha^k_p$ is the learnable learning rate network of the p-th outer loop; its architecture is identical to that of the meta-model $\Theta^k$.
FIG. 4 depicts the update processes of $\theta^k_{p,i}$ and $\alpha^k_p$: both are updated by gradient descent, where c is the index of the task being processed and i and p are the inner-loop and outer-loop iteration counts. The learnable learning rate network aims to adaptively adjust the learning rate and direction of model updates while reducing the dependence on learning rate initialization.
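One outer iteration of this procedure can be sketched as follows. This is a schematic of the update rules above, under the assumption that the learning rate network is represented as a list of tensors shaped like the meta-parameters (created with requires_grad=True) and that the loss callables run a functional forward pass with the given weights; the non-negativity clamp on the rates is our own choice:

```python
import torch

def inner_outer_step(params, lr_net, inner_loss_fn, outer_loss_fn,
                     new_batch, mixed_batch, beta=1e-3, alpha_hyperlr=1e-3,
                     inner_steps=5):
    """One outer iteration p of client k.

    params: list of meta-parameter tensors Theta (requires_grad=True)
    lr_net: list of tensors alpha, same shapes as params (requires_grad=True)
    inner_loss_fn(weights, batch): l_CE on the new-task batch b
    outer_loss_fn(weights, batch): l_CE + lambda * l_KD on the mixed batch b_m
    """
    fast = [p.clone() for p in params]                   # theta_{p,0} = Theta_p
    for _ in range(inner_steps):                         # inner loop over i
        g = torch.autograd.grad(inner_loss_fn(fast, new_batch), fast,
                                create_graph=True)
        # per-parameter learnable step sizes; relu keeps them non-negative
        fast = [w - torch.relu(a) * gi for w, a, gi in zip(fast, lr_net, g)]
    meta = outer_loss_fn(fast, mixed_batch)              # meta-loss on b_m
    g_theta = torch.autograd.grad(meta, params, retain_graph=True)
    g_alpha = torch.autograd.grad(meta, lr_net)
    with torch.no_grad():
        for p, g in zip(params, g_theta):
            p -= beta * g                                # Theta_{p+1}
        for a, g in zip(lr_net, g_alpha):
            a -= alpha_hyperlr * g                       # alpha_{p+1}
    return float(meta)
```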
S103: Each client updated in step S102 keeps its learning rate network and replay sample pool local and shares only its local meta-learning model parameters with the server; the central server obtains the global model by aggregating the shared model parameters of the clients updated in step S102 and then distributes the federated-averaged global model to all clients.
As shown in FIG. 5, each client k locally keeps its private replay sample pool $\mathcal{M}^k$ and its learnable learning rate network parameters $\alpha^k$, and communicates only its meta-learning model parameters $\Theta_k$ with the central server; the central server performs a weighted aggregation of the meta-learning model parameters uploaded by the participating clients and distributes the result to every client as the global model parameters. Unlike traditional federated systems, which either share the optimizer learning rate during communication or reinitialize it at every round, in the framework constructed here each client k initializes its learning rate network only at the first task stream; during subsequent federated communication, the network is retained and kept training as new task streams arrive, realizing local personalized learning. This lets each client learn its local data distribution in a personalized way, achieves forward knowledge transfer between clients during federated communication, and reduces communication cost. Algorithm 4 describes the concrete procedure of the framework: the designed personalization makes the meta-learning model parameters globally shared among clients while each client locally retains its adaptive learning rate network parameters.
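A communication round of this strategy can be sketched as follows; a minimal sketch, assuming each client uploads its meta-parameters as a state dict keyed by layer name (the learning rate networks and replay pools never leave the clients):

```python
import torch

def federated_round(server_state, client_states, client_sizes):
    """Weighted FedAvg over the shared meta-parameters Theta only.

    client_states: list of state dicts {name: tensor} uploaded by clients
    client_sizes: d_1..d_K, the data volume each client processed this round
    """
    total = float(sum(client_sizes))
    new_state = {}
    for name in server_state:
        new_state[name] = sum(
            (d / total) * state[name]
            for state, d in zip(client_states, client_sizes))
    return new_state  # distributed back to every client

# learning rate networks and replay pools stay local: only new_state is sent
```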
S104: The server performs EEG signal identity recognition based on the federated-averaged global model obtained in step S103. The personalized federated incremental learning method is used to perform distributed incremental learning on the EEG data samples to be identified and to determine the user label corresponding to each sample; concretely, the central server recognizes an input test EEG signal x based on the trained global parameters Θ, and the resulting label y is the predicted identity.
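Identity prediction at the server then reduces to a forward pass through the trained global model; a sketch, assuming the raw EEG window has already been preprocessed into a tensor:

```python
import torch

@torch.no_grad()
def identify(global_model, eeg_window):
    """Return the predicted identity label y for one test EEG sample x."""
    global_model.eval()
    logits = global_model(eeg_window.unsqueeze(0))  # add batch dimension
    return int(logits.argmax(dim=1))
```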
Experimental Design
The training experiments of the present invention use the large-scale standard EEG Motor Movement/Imagery Dataset. In this embodiment, the dataset contains more than 1,500 one- to two-minute EEG recordings from 109 healthy subjects, sampled at 160 Hz; each subject performed different motor/imagery tasks while 64-channel EEG was recorded with the BCI2000 system. Each subject completed 14 experimental runs: two 1-minute baseline runs (the first with eyes open, the second with eyes closed), and three 2-minute runs of each of the following four tasks:
Task 1: A target appears on the left or right side of the screen. The subject opens and closes the corresponding fist until the target disappears, then relaxes.
Task 2: A target appears on the left or right side of the screen. The subject imagines opening and closing the corresponding fist until the target disappears, then relaxes.
Task 3: A target appears at the top or bottom of the screen. The subject opens and closes both fists (if the target is at the top) or both feet (if the target is at the bottom) until the target disappears, then relaxes.
Task 4: A target appears at the top or bottom of the screen. The subject imagines opening and closing both fists (if the target is at the top) or both feet (if the target is at the bottom) until the target disappears, then relaxes.
For brevity, the eyes-open state is denoted EO (Eye Open), the eyes-closed state EC (Eye Close), the physical movement state PHY (Physical), and the imagined movement state IMA (Image).
The present invention trains the identity recognition model on inter-task data: the EO and EC resting-state data are used for training and testing. In the federated incremental scenario, the number of clients is set to 5 and 10, the number of communication rounds to 1, 2, 5, 10, and 20, and the local replay sample pool size of each client to 109. Experiments are designed under both the IID and non-IID settings:
IID setting, i.e., independent and identically distributed. The training samples of all classes are spread evenly over all clients, and every client handles the same classes. Specifically, each client processes, in a different order, task sequences with the same class labels as the other clients, while the samples of each class differ between clients. In this setting, the dataset is split into 11 tasks composed of disjoint groups of 10 incremental classes (the last task contains 9 classes, so 10 tasks × 10 classes + 9 = 109); at each task increment, the samples of all of that task's incremental classes are evenly distributed to all clients.
Non-IID setting, i.e., non-independent and non-identically distributed. The classes are spread evenly over all clients, and the classes handled by different clients are disjoint. Specifically, each client processes task sequences containing different class labels, and no two clients share a class. In this setting, with 5 clients in the federated framework, the 109 classes are divided into 9 tasks of 5-class increments and 16 tasks of 4-class increments (9 × 5 + 16 × 4 = 109); with 10 clients, the 109 classes are divided into 11 tasks of 3-class increments and 19 tasks of 4-class increments (11 × 3 + 19 × 4 = 109).
Experimental Results
Tables 1 and 2 report, for the IID and non-IID settings respectively, the server-side global performance and client-side local performance of the present invention with 5 and 10 clients and 1, 2, 5, 10, and 20 communication rounds. From the perspective of data distribution, the proposed method is effective for federated incremental scenarios under both the IID and non-IID settings; in particular, performance under the non-IID setting improves as the number of communication rounds grows, showing that when data is non-IID, the proposed method lets each client learn more transferred knowledge from the other clients over more communication rounds. From the perspective of the number of clients and communication rounds, the present invention remains effective under different parameter settings, and the multi-scenario experiments verify its generalization and robustness.
Table 1. IID setting: federated incremental performance with different numbers of clients and communication rounds, evaluated by the server's global performance (%) and the clients' local performance (%). Each experiment averages 3 random seeds.
Table 2. Non-IID setting: federated incremental performance with different numbers of clients and communication rounds, evaluated by the server's global performance (%) and the clients' local performance (%). Each experiment averages 3 random seeds.
Claims (5)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310644445.2A CN117077765B (en) | 2023-06-01 | 2023-06-01 | A method for EEG signal identity recognition based on personalized federated incremental learning |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310644445.2A CN117077765B (en) | 2023-06-01 | 2023-06-01 | A method for EEG signal identity recognition based on personalized federated incremental learning |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN117077765A (en) | 2023-11-17 |
| CN117077765B CN117077765B (en) | 2025-09-23 |
Family
ID=88706732
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202310644445.2A Active CN117077765B (en) | 2023-06-01 | 2023-06-01 | A method for EEG signal identity recognition based on personalized federated incremental learning |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN117077765B (en) |
- 2023-06-01: CN application CN202310644445.2A filed; granted as patent CN117077765B (status: active)
Patent Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11204991B1 (en) * | 2015-10-29 | 2021-12-21 | Omnivu, Inc. | Identity verification system and method for gathering, identifying, authenticating, registering, monitoring, tracking, analyzing, storing, and commercially distributing dynamic markers and personal data via electronic means |
| WO2020229684A1 (en) * | 2019-05-16 | 2020-11-19 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Concepts for federated learning, client classification and training data similarity measurement |
| US20210035017A1 (en) * | 2019-07-31 | 2021-02-04 | BioSymetrics, Inc. | Methods, systems, and frameworks for data analytics using machine learning |
| CN114048780A (en) * | 2021-11-15 | 2022-02-15 | 中国科学院深圳先进技术研究院 | Electroencephalogram classification model training method and device based on federal learning |
| CN114564743A (en) * | 2022-02-18 | 2022-05-31 | 华中科技大学 | Privacy protection transfer learning method applied to motor imagery brain-computer interface system |
| CN114580663A (en) * | 2022-03-01 | 2022-06-03 | 浙江大学 | Data non-independent same-distribution scene-oriented federal learning method and system |
| CN115759297A (en) * | 2022-11-28 | 2023-03-07 | 国网山东省电力公司电力科学研究院 | A federated learning method, device, medium and computer equipment |
Non-Patent Citations (2)
| Title |
|---|
| LI Douzhe; QIAO Xiaoyan; DONG Youer: "Real-time extraction of P300 features based on a parametric model and the FastICA algorithm", Journal of Test and Measurement Technology, No. 06, 15 November 2009 (2009-11-15) * |
| LU Songfeng et al.: "Incremental federated learning algorithm with cloud-edge-end collaboration", Journal of Huazhong University of Science and Technology (Natural Science Edition), 21 October 2022 (2022-10-21), pages 2-9 * |
Cited By (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN117750320A (en) * | 2023-12-20 | 2024-03-22 | 南华大学 | Wifi personnel identity recognition method based on federal learning and class increment learning |
| CN117591888A (en) * | 2024-01-17 | 2024-02-23 | 北京交通大学 | Cluster autonomous learning fault diagnosis method for key train components |
| CN117591888B (en) * | 2024-01-17 | 2024-04-12 | 北京交通大学 | Cluster autonomous learning fault diagnosis method for key train components |
| CN119152684A (en) * | 2024-11-08 | 2024-12-17 | 西南财经大学 | Cross-domain traffic flow prediction method based on continuous learning |
| CN119598312A (en) * | 2024-11-14 | 2025-03-11 | 中国人民解放军国防科技大学 | Heterogeneous data-oriented federal learning modulation identification method and apparatus |
| CN119848818A (en) * | 2024-12-06 | 2025-04-18 | 中国地质大学(武汉) | EEG signal-based identity recognition method, device and equipment |
| CN119760579A (en) * | 2025-03-07 | 2025-04-04 | 浙江大学 | An EEG decoding method based on unsupervised individual continuous learning |
Also Published As
| Publication number | Publication date |
|---|---|
| CN117077765B (en) | 2025-09-23 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN117077765A | Electroencephalogram signal identity recognition method based on personalized federated incremental learning | |
| Wang et al. | Industrial cyber-physical systems-based cloud IoT edge for federated heterogeneous distillation | |
| Zhao et al. | Privacy-preserving collaborative deep learning with unreliable participants | |
| Tan et al. | Towards personalized federated learning | |
| Liu et al. | Keep your data locally: Federated-learning-based data privacy preservation in edge computing | |
| Lu et al. | Auction-based cluster federated learning in mobile edge computing systems | |
| Aggarwal et al. | Fedface: Collaborative learning of face recognition model | |
| CN111259738B (en) | Face recognition model construction method, face recognition method and related device | |
| Zhang et al. | Cross-subject EEG-based emotion recognition with deep domain confusion | |
| Xu et al. | Mimic embedding via adaptive aggregation: Learning generalizable person re-identification | |
| CN111985650A (en) | Activity recognition model and system considering both universality and individuation | |
| WO2023020214A1 (en) | Retrieval model training method and apparatus, retrieval method and apparatus, device and medium | |
| Zhang et al. | Instance Transfer Subject‐Dependent Strategy for Motor Imagery Signal Classification Using Deep Convolutional Neural Networks | |
| CN114564743B (en) | Privacy protection migration learning method applied to motor imagery brain-computer interface system | |
| Rehman et al. | Federated self-supervised learning for video understanding | |
| CN110210540A (en) | Across social media method for identifying ID and system based on attention mechanism | |
| CN113902131A (en) | An Update Method for Node Models Resisting Discrimination Propagation in Federated Learning | |
| CN115481755A (en) | Personalized federal learning method based on self-adaptive local aggregation | |
| Li et al. | Adaptive dropout method based on biological principles | |
| CN117521785A (en) | Privacy protection federal learning method and system for data-driven cognitive computation | |
| CN117422151A (en) | Federal learning method and device based on reasoning similarity and soft clustering | |
| Liu et al. | Collaborating domain-shared and target-specific feature clustering for cross-domain 3d action recognition | |
| US11358061B2 (en) | Computer program for performing drawing-based security authentication | |
| Liu et al. | Specific emitter identification at different time based on multi-domain migration | |
| Sun et al. | A Deep Learning Method for Intelligent Analysis of Sports Training Postures |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |