CN118333129A - Identification model training method, nonlinear system identification method and system - Google Patents
- Publication number
- CN118333129A (application number CN202410734169.3A)
- Authority
- CN
- China
- Prior art keywords
- model
- neural network
- output
- data
- identification
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/27—Regression, e.g. linear or logistic regression
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computational Linguistics (AREA)
- Mathematical Physics (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- General Health & Medical Sciences (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computing Systems (AREA)
- Evolutionary Biology (AREA)
- Software Systems (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
- Feedback Control In General (AREA)
Abstract
The present invention belongs to the technical field of nonlinear system identification and provides an identification model training method, a nonlinear system identification method and a system. A plurality of different neural network models extract features from the input data and output data, yielding different intermediate features; parameters are optimized with the alternating direction method of multipliers; a stochastic gradient descent algorithm based on a sliding data window yields the parameter change of each layer of the neural network models, and the parameters of every layer are updated. Extracting the input features with several different models outperforms the original single model and fits the nonlinear system better. In addition, the sliding-data-window stochastic gradient descent algorithm raises data utilization and improves the convergence speed and accuracy of the algorithm, while optimizing the parameters along only one model's direction at a time strengthens training, giving good results on nonlinear system identification tasks.
Description
Technical Field
The present invention belongs to the technical field of nonlinear system identification, and in particular relates to an identification model training method, a nonlinear system identification method and a system.
Background Art
In the automation of industrial production processes, it is often necessary to measure and control the liquid level of certain equipment and vessels and to balance the material flowing into and out of a vessel, so that materials are properly matched at each stage of production. The cascaded water tank system is a common industrial liquid-level system consisting of two tanks, a reservoir and a pump: the pump lifts water from the reservoir into the upper tank, the water flows through an opening from the upper tank into the lower tank, and finally drains back into the reservoir.
System identification is a classic topic widely applied in automatic control; it is the basic theory for building a mathematical model of a target system and plays an irreplaceable role in modern engineering. The systems to be identified today are generally nonlinear, such as the cascaded water tank system. The main goal of system identification is to infer an actual automatic system, or to build a dynamic model from observed data, in order to predict future data accurately. Multi-innovation theory is a branch of system identification; its basic idea is to extend the innovation length and make full use of the useful information in the data, i.e. to reintroduce past data repeatedly during training. Deep learning, a very active field in recent years, adds more complex and deeper networks on top of earlier neural networks and can fit most nonlinear problems; deep networks have been applied with great success in many fields, such as computer vision, speech recognition and natural language processing. Optimization methods based on stochastic gradients are of central practical importance in many areas of science and engineering, and in deep learning stochastic gradient descent has proven in countless applications to be an effective method for finding parameters.
The inventors have found that, when identifying nonlinear systems such as cascaded water tank systems, adding more complex and deeper networks on top of a neural network can fit most nonlinear problems; however, current neural-network-based identification models still face many intractable nonlinear problems: the fitting quality is poor, data utilization during training needs improvement, convergence speed and accuracy are low, and the training and identification results are unsatisfactory.
Summary of the Invention
To solve the above problems, the present invention proposes an identification model training method, a nonlinear system identification method and a system. The present invention extracts the input features with a variety of different models, which outperforms the original single model and fits nonlinear systems better; the stochastic gradient descent algorithm based on a sliding data window raises data utilization and improves the convergence speed and accuracy of the algorithm; and a parameter optimization strategy based on the principle of the alternating direction method of multipliers strengthens model training.
To achieve the above objects, the present invention is implemented through the following technical solutions:
In a first aspect, the present invention provides an identification model training method, comprising:
obtaining input data and output data of a nonlinear system;
feeding the input data and output data into preset multiple different multi-layer neural network models for training to obtain an identification model; during training, a plurality of different neural network models extract features from the input and output data, yielding different intermediate features; parameters are optimized with the alternating direction method of multipliers: in each round of parameter optimization, the parameters of the other neural network models are held fixed and only one model's parameters are adjusted, updating all neural network models in turn;
according to a pre-constructed loss function, comparing the network output with the true output of the nonlinear system to obtain the error; using a stochastic gradient descent algorithm based on a sliding data window, obtaining the parameter change of each layer of the neural network models and updating the parameters of every layer.
In a second aspect, the present invention provides a nonlinear system identification method, comprising:
obtaining input data and output data of a nonlinear system;
obtaining an identification result from the input data and the output data together with a preset identification model;
wherein the identification model is obtained by training a plurality of different multi-layer neural network models; during training, the plurality of different neural network models extract features from the input and output data, yielding different intermediate features; parameters are optimized with the alternating direction method of multipliers: in each round of parameter optimization, the parameters of the other neural network models are held fixed and only one model's parameters are adjusted, updating all neural network models in turn; according to a pre-constructed loss function, the network output is compared with the true output of the nonlinear system to obtain the error; a stochastic gradient descent algorithm based on a sliding data window yields the parameter change of each layer of the neural network models, and the parameters of every layer are updated.
Further, the nonlinear system is a cascaded water tank system characterized by a nonlinear autoregressive model; the cascaded water tank system comprises two tanks of different heights and a pump, the output data being the water level of the lower tank and the input data being the pump voltage.
Further, the different intermediate features are merged as the input of a feature processing network; the feature processing network applies multiple layers of nonlinear transformations to the merged intermediate features to obtain the final model output.
Further, the loss function of the h-th feature extraction network in each round of parameter iteration is as follows:

$$J = \frac{1}{n}\sum_{i=1}^{n}\big(\hat{y}_i - y_i\big)^2;$$

$$J_h^{\text{own}} = \frac{1}{n}\sum_{i=1}^{n}\big(\hat{y}(x_{h,i}) - y_i\big)^2;$$

$$J_h = \beta\,J + (1-\beta)\,J_h^{\text{own}};$$

where β is a hyperparameter that controls how strongly the model is influenced by the overall network output; x_{h,i} denotes the intermediate feature obtained by processing the input u_i; y_i denotes the corresponding true system output, with i the sample index; ŷ(x_{h,i}) denotes the model output under the action of x_{h,i}; and n denotes the number of training data.
Further, the loss function is reconstructed as follows:

$$J(p,t) = \frac{1}{pn}\sum_{j=0}^{p-1}\sum_{i=1}^{n}\big(\hat{y}_i(t-j) - y_i(t-j)\big)^2;$$

where W and b denote the network's learnable weight and bias parameters, on which the outputs ŷ depend; p denotes the innovation length and j the window index;

the error between the model output and the true output is:

$$E(p,t) = \big[\hat{Y}(t)-Y(t),\ \hat{Y}(t-1)-Y(t-1),\ \dots,\ \hat{Y}(t-p+1)-Y(t-p+1)\big]^{T}.$$
In a third aspect, the present invention further provides a nonlinear system identification system, comprising:
a data acquisition module configured to obtain input data and output data of a nonlinear system;
an identification module configured to obtain an identification result from the input data and the output data together with a preset identification model;
wherein the identification model is obtained by training a plurality of different multi-layer neural network models; during training, the plurality of different neural network models extract features from the input and output data, yielding different intermediate features; parameters are optimized with the alternating direction method of multipliers: in each round of parameter optimization, the parameters of the other neural network models are held fixed and only one model's parameters are adjusted, updating all neural network models in turn; according to a pre-constructed loss function, the network output is compared with the true output of the nonlinear system to obtain the error; a stochastic gradient descent algorithm based on a sliding data window yields the parameter change of each layer of the neural network models, and the parameters of every layer are updated.
In a fourth aspect, the present invention further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of the nonlinear system identification method of the second aspect.
In a fifth aspect, the present invention further provides an electronic device comprising a memory, a processor and a computer program stored in the memory and runnable on the processor; when the processor executes the program, the steps of the nonlinear system identification method of the second aspect are implemented.
In a sixth aspect, the present invention further provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the nonlinear system identification method of the second aspect.
Compared with the prior art, the present invention has the following beneficial effects:
During model training, the present invention uses a plurality of different neural network models to extract features from the input and output data, yielding different intermediate features; parameters are optimized with the alternating direction method of multipliers, holding the other models' parameters fixed in each round and adjusting only one model's parameters, updating all models in turn; according to a pre-constructed loss function, the network output is compared with the true output of the nonlinear system to obtain the error; a stochastic gradient descent algorithm based on a sliding data window yields the parameter change of each layer and updates every layer's parameters. Extracting the input features with several different models yields more diversified information, outperforms the original single model and fits the nonlinear system better. Moreover, the sliding-data-window stochastic gradient descent algorithm reintroduces past information during parameter training, raising data utilization and improving the convergence speed and accuracy of the algorithm. A parameter optimization strategy based on the principle of the alternating direction method of multipliers is also proposed: model parameters are optimized along one model's direction at a time rather than training all model parameters simultaneously as before, which strengthens training and gives good results on nonlinear system identification tasks.
Brief Description of the Drawings
The accompanying drawings, which form a part of this embodiment, provide a further understanding of it; the schematic embodiments and their descriptions explain this embodiment and do not unduly limit it.
FIG. 1 is the overall flow of model parameter optimization in Embodiment 1 of the present invention;
FIG. 2 shows the variation of the MLP model training error with 500 samples in Embodiment 1 of the present invention;
FIG. 3 shows the variation of the MLP model test error with 500 samples in Embodiment 1 of the present invention;
FIG. 4 shows the variation of the test error of the TCN model in Embodiment 1 of the present invention;
FIG. 5 shows the variation of the test error of the RNN model in Embodiment 1 of the present invention;
FIG. 6 is a box plot of the error distributions of the new model and the single models with 1000 samples in Embodiment 1 of the present invention;
FIG. 7 is a box plot of the error distributions of the new model and the single models with 8000 samples in Embodiment 1 of the present invention.
Detailed Description of the Embodiments
The present invention is further described below with reference to the accompanying drawings and embodiments.
It should be noted that the following detailed description is exemplary and intended to provide further explanation of the present application. Unless otherwise specified, all technical and scientific terms used herein have the same meaning as commonly understood by those of ordinary skill in the art to which the present application belongs.
Embodiment 1:
This embodiment provides an identification model training method, comprising:
obtaining input data and output data of a nonlinear system;
feeding the input data and output data into preset multiple different multi-layer neural network models for training to obtain an identification model; during training, a plurality of different neural network models extract features from the input and output data, yielding different intermediate features; parameters are optimized with the alternating direction method of multipliers: in each round of parameter optimization, the parameters of the other neural network models are held fixed and only one model's parameters are adjusted, updating all neural network models in turn;
according to a pre-constructed loss function, comparing the network output with the true output of the nonlinear system to obtain the error; using a stochastic gradient descent algorithm based on a sliding data window, obtaining the parameter change of each layer of the neural network models and updating the parameters of every layer.
Extracting the input features with several different models yields more diversified information, outperforms the original single model and fits the nonlinear system better. Moreover, the sliding-data-window stochastic gradient descent algorithm reintroduces past information during parameter training, raising data utilization and improving the convergence speed and accuracy of the algorithm. A parameter optimization strategy based on the principle of the alternating direction method of multipliers is also proposed: parameters are optimized along one model's direction at a time rather than training all model parameters simultaneously as before, which strengthens training and gives good results on nonlinear system identification tasks.
Based on the identification model training method, this embodiment also provides a nonlinear system identification method, as shown in FIG. 1; the specific steps include:
S1. Data acquisition and preprocessing for the nonlinear system.
Optionally, the nonlinear system is a cascaded water tank system, a liquid-level control system comprising two tanks whose input and output are the pump voltage and the water level of the lower tank, respectively. A general nonlinear model, the nonlinear autoregressive model with exogenous inputs (NARX), can characterize the cascaded water tank system:

$$y(k) = g\big(y(k-1),\dots,y(k-l_y),\,u(k-1),\dots,u(k-l_u)\big)$$

where g(·) denotes the mapping of the nonlinear autoregressive model; y(k) is the system output, i.e. the water level of the lower tank, with k a positive integer indexing the sampling instants; u(k) is the external input of the model, i.e. the pump voltage; l_u and l_y denote the maximum input and output lags. Taking N inputs u(1), u(2), ..., u(k), ..., u(N) and feeding them to the model in order yields the N corresponding outputs y(1), y(2), ..., y(k), ..., y(N), giving a dataset (U, Y) of N samples, where U contains the N inputs and Y the corresponding system outputs. The dataset is split into a training set and a test set, and each is normalized and standardized.
S2. Combine multiple networks into an identification model to fit the collected training data, i.e. use neural networks as the mapping g(·) of the NARX model of step S1 to model the relationship between pump voltage and lower-tank water level. The identification model is built as follows:
S2.1. Select m suitable neural network models f for feature extraction; these may be convolutional networks (CNN), long short-term memory networks (LSTM) and the like. The input passes through the different networks, each extracting its own features. The output of a network with L layers can be expressed as

$$x_h = f_h(u) = \sigma\big(W_L\,\sigma(W_{L-1}\cdots\sigma(W_1 u + b_1)\cdots + b_{L-1}) + b_L\big)$$

where x_h denotes the output of the feature extraction network; u denotes the network input; W and b are the network's learnable weight and bias parameters; L denotes the number of layers; and σ(·) denotes a nonlinear transformation such as ReLU or a related variant.
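A minimal PyTorch sketch of such an L-layer branch follows; the class name, layer widths and the fixed choice of ReLU are assumptions for illustration:

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """One branch f_h: L affine layers W_l(.) + b_l, each followed by ReLU,
    i.e. x_h = sigma(W_L ... sigma(W_1 u + b_1) ... + b_L)."""
    def __init__(self, in_dim, hidden_dim, feat_dim, num_layers=3):
        super().__init__()
        dims = [in_dim] + [hidden_dim] * (num_layers - 1) + [feat_dim]
        layers = []
        for l in range(num_layers):
            layers += [nn.Linear(dims[l], dims[l + 1]), nn.ReLU()]
        self.net = nn.Sequential(*layers)

    def forward(self, u):
        return self.net(u)          # the intermediate feature x_h
```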
S2.2. Merge the different intermediate features x_1, ..., x_m to form the input of the feature processing network F_d.
S2.3. The feature processing network applies multiple layers of nonlinear transformations to the combined abstract features to obtain the final model output ŷ; the whole model can be expressed as

$$\hat{y} = F_d\big([x_1, x_2, \dots, x_m]\big) = F_d\big([f_1(u), f_2(u), \dots, f_m(u)]\big)$$

where F_d is the feature processing network, which processes the intermediate features to obtain the final output.
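Steps S2.2 and S2.3 together amount to the sketch below, which concatenates the branch features and feeds them to a small head network standing in for F_d; the head's depth and width are illustrative assumptions:

```python
class CombinedModel(nn.Module):
    """m parallel extractors f_1..f_m plus a feature-processing head F_d
    mapping the concatenated features to the prediction y_hat."""
    def __init__(self, extractors, feat_dim, out_dim=1):
        super().__init__()
        self.extractors = nn.ModuleList(extractors)
        self.head = nn.Sequential(                 # F_d: nonlinear transform
            nn.Linear(feat_dim * len(extractors), 64),
            nn.ReLU(),
            nn.Linear(64, out_dim))

    def forward(self, u):
        feats = [f(u) for f in self.extractors]    # x_1, ..., x_m
        return self.head(torch.cat(feats, dim=-1)) # y_hat = F_d([x_1..x_m])
```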
S3. Choose the loss function J according to the nature of the input-output data, and improve it using multi-innovation theory and the alternating direction method of multipliers (ADMM) to raise the model's fit on real data; in other words, choose an evaluation function that tests whether the proposed network effectively captures the prescribed input-output relationship. The specific steps are as follows:
S3.1. System identification usually involves regression prediction, and its data are usually positive, so the loss function J for evaluating the model is the mean square error (MSE):

$$J = \frac{1}{N}\sum_{k=1}^{N}\big(\hat{y}(k) - y(k)\big)^2$$

where ŷ(k) is the network model output under the k-th input u(k) of the dataset; y(k) denotes the corresponding true system output; and N denotes the total number of samples.
S3.2. Model parameters are optimized iteratively in batches: in each round of parameter optimization, n training samples are drawn at random, with network inputs u_i drawn randomly from the dataset U and processed by the model to give outputs ŷ_i, where ŷ_i denotes the model output under the action of u_i. The loss function in each round of parameter iteration can therefore be written as

$$J = \frac{1}{n}\sum_{i=1}^{n}\big(\hat{y}_i - y_i\big)^2$$
S3.3. Following the parameter optimization idea of ADMM, the networks' parameters are optimized separately during iteration: in each round of parameter optimization, the parameters of the other models are held fixed and only one model's parameters are adjusted, updating all models in turn, which strengthens the coherence of the overall network.
S3.4. Each feature extraction model f is given its own loss function

$$J_h^{\text{own}} = \frac{1}{n}\sum_{i=1}^{n}\big(\hat{y}(x_{h,i}) - y_i\big)^2$$

where x_{h,i} is the intermediate feature obtained by processing the input u_i; this loss suppresses excessive freedom in the parameter space and makes the model more stable. Under these conditions, the loss function of the h-th feature extraction network in each round of parameter iteration becomes

$$J_h = \beta\,J + (1-\beta)\,J_h^{\text{own}}$$

where β is a configurable hyperparameter in (0, 1) that controls how strongly the model is influenced by the overall network output; the loss function of the output network F_d keeps the original form.
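The composite branch loss can be sketched as follows. Routing only branch h's feature through the head while zeroing the others is one plausible reading of the per-model term ŷ(x_{h,i}); the patent does not spell this detail out, so treat it as an assumption:

```python
def branch_loss(model, h, u, y, beta):
    """J_h = beta * J_net + (1 - beta) * J_h_own for branch h (sketch)."""
    y_hat = model(u).squeeze(-1)
    j_net = torch.mean((y_hat - y) ** 2)           # overall-network MSE
    feats = []
    for i, f in enumerate(model.extractors):
        x = f(u)
        # assumption: the "own" term keeps only branch h's feature
        feats.append(x if i == h else torch.zeros_like(x))
    y_h = model.head(torch.cat(feats, dim=-1)).squeeze(-1)
    j_own = torch.mean((y_h - y) ** 2)             # branch h's own MSE
    return beta * j_net + (1.0 - beta) * j_own
```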
S4. Combining multi-innovation theory with the original batch stochastic gradient algorithm, the data used in each training round is expanded to obtain a gradient descent algorithm based on a moving data window, which is applied to model training to optimize the whole network structure and obtain the best model, i.e. the network parameters that reproduce the cascaded water tank system.
S4.1. Split the dataset obtained in step S1 into a training set and a test set, and use the training samples to fit the parameters.
S4.2. Initialize the network parameters randomly: the weight W and bias b of each layer are drawn from a Gaussian distribution with mean 0 and variance 0.25.
S4.3. Find the best hyperparameters by random search: specifically, set the batch size, the optimizer learning rate and the β of step S3.4 each within a reasonable interval, run partial training on randomly drawn combinations of the three values, and select the best-performing combination as the parameters for what follows, as sketched below.
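A minimal random-search sketch over the three hyperparameters; the sampling ranges, trial count and the callback `train_fn` are illustrative assumptions:

```python
import random

def random_search(train_fn, trials=20):
    """Sample (batch size, learning rate, beta), run a short training via
    train_fn(cfg) -> validation MSE, and keep the best configuration."""
    best_cfg, best_mse = None, float("inf")
    for _ in range(trials):
        cfg = {"batch": random.choice([16, 32, 64, 128]),
               "lr": 10 ** random.uniform(-4, -1),
               "beta": random.uniform(0.0, 1.0)}
        mse = train_fn(cfg)
        if mse < best_mse:
            best_cfg, best_mse = cfg, mse
    return best_cfg
```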
The input and output data are then expanded by taking a batch as one innovation of length p, bringing the data used in the previous rounds into the current training and enlarging the data used in each round. On this basis, the loss function of step S3 is reconstructed as

$$J(p,t) = \frac{1}{pn}\sum_{j=0}^{p-1}\sum_{i=1}^{n}\big(\hat{y}_i(t-j) - y_i(t-j)\big)^2$$

where n denotes the number of training data, i.e. the batch size, and p denotes the innovation length.
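A sliding-window loss of this form can be kept in a small buffer of the last p mini-batches, as in the sketch below; the class name and buffer mechanics are illustrative assumptions:

```python
from collections import deque
import torch

class SlidingWindowLoss:
    """Average the MSE over the last p mini-batches, i.e. over p*n samples,
    implementing the window-expanded loss J(p, t)."""
    def __init__(self, p):
        self.window = deque(maxlen=p)              # (inputs, targets) pairs

    def __call__(self, model, u_batch, y_batch):
        self.window.append((u_batch, y_batch))
        losses = [torch.mean((model(u).squeeze(-1) - y) ** 2)
                  for u, y in self.window]
        return torch.stack(losses).mean()
```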
S4.4. Feed the training data into the network to obtain the network output ŷ; under the reconstructed loss function, compare the network output with the true system output to obtain the new error E. Using stochastic gradient descent, obtain the change of each layer's parameters and, via backpropagation, the optimization path of each layer, updating each layer's parameters. At the t-th iteration the parameter update for w can be expressed as

$$w(t) = w(t-1) - \frac{\eta_t}{pn}\left(\frac{\partial \hat{Y}(p,t)}{\partial w}\right)^{\!T} E(p,t)$$

where η_t denotes the step size at the t-th iteration; n denotes the amount of training data; p denotes the innovation length; Ŷ(p,t) denotes the model outputs for the pn inputs involved in the computation; and E(p,t) is the error between the model output and the true output which, since the sliding-window data are included, is expressed as

$$E(p,t) = \big[\hat{Y}(t)-Y(t),\ \hat{Y}(t-1)-Y(t-1),\ \dots,\ \hat{Y}(t-p+1)-Y(t-p+1)\big]^{T}$$
S4.5. All training data are fed in and training is repeated for the specified number of rounds. In each round of parameter training, following the ADMM principle, the loss function is re-formed and the parameters of each model are updated in turn, which can be expressed as

$$w_h(t) = w_h(t-1) - \frac{\eta_t}{pn}\left(\frac{\partial \hat{Y}_h(p,t)}{\partial w_h}\right)^{\!T} E_h(p,t),\qquad h = 1,\dots,m$$

with the parameters w_d of the feature processing network F_d updated analogously, where w_h denotes the parameters of the h-th model and E_h(p,t) denotes the error between the h-th model output and the true output at time t:

$$E_h(p,t) = \big[\hat{Y}_h(t)-Y(t),\ \dots,\ \hat{Y}_h(t-p+1)-Y(t-p+1)\big]^{T}$$

The training data undergo the specified number of optimization passes, and the best-performing model parameters, i.e. the model whose mapping is closest to the original system's, are saved. The test data are fed into the saved model to obtain its predictions, and the MSE measures the gap between the model's predicted output and the system's true output. If this error meets the requirement, the identification of the system is complete; if not, training continues until it does, thereby approximating the cascaded water tank system and achieving control of the tank water level through the pump voltage.
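Putting steps S3.3 through S4.5 together, the loop below alternates one-branch-at-a-time updates with an update of the head, under the sliding-window loss. It is a minimal sketch assuming the classes defined above; for brevity the branch updates reuse only the current batch rather than the full window, and all hyperparameter values are illustrative:

```python
def train(model, loader, p=3, beta=0.7, lr=1e-2, epochs=10):
    """ADMM-style coordinate training: freeze all but one branch, update it,
    repeat for every branch, then update the head F_d on the window loss."""
    win = SlidingWindowLoss(p)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for u, y in loader:
            for h in range(len(model.extractors)):
                # freeze every parameter except those of branch h
                for name, prm in model.named_parameters():
                    prm.requires_grad_(name.startswith(f"extractors.{h}."))
                opt.zero_grad(set_to_none=True)
                branch_loss(model, h, u, y, beta).backward()
                opt.step()
            # finally update the feature-processing head on the window loss
            for name, prm in model.named_parameters():
                prm.requires_grad_(name.startswith("head."))
            opt.zero_grad(set_to_none=True)
            win(model, u, y).backward()
            opt.step()
    return model
```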
To verify the effectiveness of the network model and the optimization algorithm of this embodiment, the identification model is applied to a system identification task. Optionally, the NARX system to be fitted maps the previous inputs to the next output, of the form y[k] = g(u[k-1], ..., u[k-5]), where u[k] is drawn from a randomly generated standard Gaussian distribution. This embodiment uses u[k] as input to predict the output y[k] with a look-back of 5, i.e. the previous 5 input data serve as model input to predict the output at the next instant, the 6th datum. The model used in this experiment consists of a multilayer perceptron (MLP) and a temporal convolutional network (TCN), the TCN being a variant of the convolutional neural network suited to sequence signals. These networks are fitted to the above nonlinear function; the dataset is generated by the system at random, i.e. the system output is obtained from randomly generated inputs, the generated training samples are used to optimize the model parameters, the fit is then tested on the test dataset, and the mean square error judges the model's performance. The experiment first tests the effect of the stochastic gradient algorithm based on the sliding data window by varying the innovation length p, where p = 1 denotes the original gradient descent algorithm. With 500 samples, the MLP model is trained; the changes in training loss and test loss are shown in Figures 2 and 3, and the optimal test errors are listed in Table 1:
Table 1 Test results of the improved gradient descent algorithm
It can be seen that, for different sample sizes, the optimization algorithm based on the sliding data window performs better, and tests on other models show the same result. Figures 4 and 5 show the test errors of the recurrent neural network (RNN) and the TCN: as the innovation length increases, convergence is clearly faster and the test results correspondingly better.
To test the performance of the proposed new model, the input-output data of the above system are collected and, under identical conditions, the single-model MLP, the single-model TCN and the proposed model are each fitted and their optimal test errors compared. The optimizer is Adam, the learning rate 0.01 and the number of training runs 10, taking the best result; for the same running time, the test errors of the new network model are shown in Table 2:
Table 2 Modeling test results of different network models
The model of this embodiment clearly performs better; even where it falls slightly short for some sample sizes, it still converges faster than the other single models. In addition, box plots of the test errors were obtained from the data distributions, shown in Figures 6 and 7; the results of this embodiment are clearly better than those of the original single models.
Embodiment 2:
This embodiment provides a nonlinear system identification method, comprising:
obtaining input data and output data of a nonlinear system;
obtaining an identification result from the input data and the output data together with a preset identification model;
wherein the identification model is obtained by training a plurality of different multi-layer neural network models; during training, the plurality of different neural network models extract features from the input and output data, yielding different intermediate features; parameters are optimized with the alternating direction method of multipliers: in each round of parameter optimization, the parameters of the other neural network models are held fixed and only one model's parameters are adjusted, updating all neural network models in turn; according to a pre-constructed loss function, the network output is compared with the true output of the nonlinear system to obtain the error; a stochastic gradient descent algorithm based on a sliding data window yields the parameter change of each layer of the neural network models, and the parameters of every layer are updated.
The identification model training method used in this method is the same as that of Embodiment 1 and is not repeated here.
Embodiment 3:
This embodiment provides a nonlinear system identification system, comprising:
a data acquisition module configured to obtain input data and output data of a nonlinear system;
an identification module configured to obtain an identification result from the input data and the output data together with a preset identification model;
wherein the identification model is obtained by training a plurality of different multi-layer neural network models; during training, the plurality of different neural network models extract features from the input and output data, yielding different intermediate features; parameters are optimized with the alternating direction method of multipliers: in each round of parameter optimization, the parameters of the other neural network models are held fixed and only one model's parameters are adjusted, updating all neural network models in turn; according to a pre-constructed loss function, the network output is compared with the true output of the nonlinear system to obtain the error; a stochastic gradient descent algorithm based on a sliding data window yields the parameter change of each layer of the neural network models, and the parameters of every layer are updated.
The working method of the system is the same as the nonlinear system identification method of Embodiment 1 and is not repeated here.
Embodiment 4:
This embodiment provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of the nonlinear system identification method of Embodiment 1.
Embodiment 5:
This embodiment provides an electronic device comprising a memory, a processor and a computer program stored in the memory and runnable on the processor; when the processor executes the program, the steps of the nonlinear system identification method of Embodiment 1 are implemented.
Embodiment 6:
This embodiment provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the nonlinear system identification method of Embodiment 1.
The above is only a preferred embodiment and does not limit this embodiment; those skilled in the art may make various modifications and variations to it. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of this embodiment shall fall within its scope of protection.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410734169.3A CN118333129B (en) | 2024-06-07 | 2024-06-07 | Identification model training method, nonlinear system identification method and nonlinear system identification system |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410734169.3A CN118333129B (en) | 2024-06-07 | 2024-06-07 | Identification model training method, nonlinear system identification method and nonlinear system identification system |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN118333129A true CN118333129A (en) | 2024-07-12 |
| CN118333129B CN118333129B (en) | 2024-09-06 |
Family
ID=91779966
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202410734169.3A Active CN118333129B (en) | 2024-06-07 | 2024-06-07 | Identification model training method, nonlinear system identification method and nonlinear system identification system |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN118333129B (en) |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN118643820A (en) * | 2024-08-13 | 2024-09-13 | 温州市数安港管理服务中心 | A digital project duplication detection algorithm based on deep learning of multi-innovation theory |
| CN119480103A (en) * | 2024-11-11 | 2025-02-18 | 北京大学人民医院 | A trauma level assessment method, device and program product |
| CN119918383A (en) * | 2024-11-26 | 2025-05-02 | 中国铁建重工集团股份有限公司 | A method, device, equipment and storage medium for calculating the torque of a tunnel boring machine cutter head |
Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111427266A (en) * | 2020-03-20 | 2020-07-17 | 北华航天工业学院 | Nonlinear system identification method aiming at disturbance |
| WO2020161624A1 (en) * | 2019-02-04 | 2020-08-13 | Inesc Tec - Instituto De Engenharia De Sistemas E Computadores, Tecnologia E Ciência | Method and device for controlling a wastewatertank pumping system |
| WO2021007812A1 (en) * | 2019-07-17 | 2021-01-21 | 深圳大学 | Deep neural network hyperparameter optimization method, electronic device and storage medium |
| CN115618724A (en) * | 2022-10-09 | 2023-01-17 | 电子科技大学长三角研究院(湖州) | Thermal nonlinear system identification method, system, medium, equipment and terminal |
| CN115796244A (en) * | 2022-12-20 | 2023-03-14 | 广东石油化工学院 | A Parameter Identification Method Based on CFF for Ultra-Nonlinear Input-Output System |
| CN115859830A (en) * | 2022-12-28 | 2023-03-28 | 浙江大学 | Air conditioner load power cluster identification method and device and medium |
| CN116415177A (en) * | 2023-03-02 | 2023-07-11 | 广东工业大学 | A Classifier Parameter Identification Method Based on Extreme Learning Machine |
| US20240127586A1 (en) * | 2021-02-04 | 2024-04-18 | Deepmind Technologies Limited | Neural networks with adaptive gradient clipping |
Patent Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2020161624A1 (en) * | 2019-02-04 | 2020-08-13 | Inesc Tec - Instituto De Engenharia De Sistemas E Computadores, Tecnologia E Ciência | Method and device for controlling a wastewatertank pumping system |
| WO2021007812A1 (en) * | 2019-07-17 | 2021-01-21 | 深圳大学 | Deep neural network hyperparameter optimization method, electronic device and storage medium |
| CN111427266A (en) * | 2020-03-20 | 2020-07-17 | 北华航天工业学院 | Nonlinear system identification method aiming at disturbance |
| US20240127586A1 (en) * | 2021-02-04 | 2024-04-18 | Deepmind Technologies Limited | Neural networks with adaptive gradient clipping |
| CN115618724A (en) * | 2022-10-09 | 2023-01-17 | 电子科技大学长三角研究院(湖州) | Thermal nonlinear system identification method, system, medium, equipment and terminal |
| CN115796244A (en) * | 2022-12-20 | 2023-03-14 | 广东石油化工学院 | A Parameter Identification Method Based on CFF for Ultra-Nonlinear Input-Output System |
| CN115859830A (en) * | 2022-12-28 | 2023-03-28 | 浙江大学 | Air conditioner load power cluster identification method and device and medium |
| CN116415177A (en) * | 2023-03-02 | 2023-07-11 | 广东工业大学 | A Classifier Parameter Identification Method Based on Extreme Learning Machine |
Non-Patent Citations (2)
| Title |
|---|
| 丁锋;: "辅助模型辨识方法(1):自回归输出误差系统", 南京信息工程大学学报(自然科学版), no. 01, 31 December 2016 (2016-12-31) * |
| 原康康;卫志农;段方维;刘芮彤;徐伟;严明辉;: "基于多新息最小二乘算法的电力线路参数辨识", 电力工程技术, no. 04, 28 July 2020 (2020-07-28) * |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN118643820A (en) * | 2024-08-13 | 2024-09-13 | 温州市数安港管理服务中心 | A digital project duplication detection algorithm based on deep learning of multi-innovation theory |
| CN118643820B (en) * | 2024-08-13 | 2024-11-22 | 温州市数安港管理服务中心 | Digital project duplication checking method based on deep learning of multiple innovation theory |
| CN119480103A (en) * | 2024-11-11 | 2025-02-18 | 北京大学人民医院 | A trauma level assessment method, device and program product |
| CN119918383A (en) * | 2024-11-26 | 2025-05-02 | 中国铁建重工集团股份有限公司 | A method, device, equipment and storage medium for calculating the torque of a tunnel boring machine cutter head |
Also Published As
| Publication number | Publication date |
|---|---|
| CN118333129B (en) | 2024-09-06 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN118333129B (en) | Identification model training method, nonlinear system identification method and nonlinear system identification system | |
| CN112577747B (en) | Rolling bearing fault diagnosis method based on space pooling network | |
| CN111680446B (en) | Rolling bearing residual life prediction method based on improved multi-granularity cascade forest | |
| CN111913803B (en) | Service load fine granularity prediction method based on AKX hybrid model | |
| CN110232203B (en) | Knowledge distillation to optimize RNN short-term power outage prediction method, storage medium and equipment | |
| CN108764540B (en) | Water supply network pressure prediction method based on parallel LSTM series DNN | |
| CN115099519B (en) | Oil well yield prediction method based on multi-machine learning model fusion | |
| CN109002686A (en) | A kind of more trade mark chemical process soft-measuring modeling methods automatically generating sample | |
| CN103559537B (en) | Based on the template matching method of error back propagation in a kind of out of order data stream | |
| CN109767036A (en) | Support vector machine fault prediction method based on adaptive ant lion optimization | |
| CN103793887B (en) | Short-term electric load on-line prediction method based on self-adaptive enhancement algorithm | |
| CN117592593A (en) | Short-term power load prediction method based on improved quadratic modal decomposition and WOA optimization BILSTM-intent | |
| CN113807005B (en) | Bearing residual life prediction method based on improved FPA-DBN | |
| CN112733997A (en) | Hydrological time series prediction optimization method based on WOA-LSTM-MC | |
| CN116646929A (en) | A short-term wind power forecasting method based on PSO-CNN-BILSTM | |
| CN106022471A (en) | Real-time prediction method of ship roll based on wavelet neural network model based on particle swarm optimization algorithm | |
| CN118467992A (en) | A short-term power load forecasting method, system and storage medium based on meta-heuristic algorithm optimization | |
| Mousavi et al. | Applying q (λ)-learning in deep reinforcement learning to play atari games | |
| CN108537377A (en) | A kind of room rate prediction technique for searching plain index based on network | |
| CN118036809A (en) | Fault current prediction method and medium based on snow melting optimized recurrent neural network | |
| CN114818124B (en) | Virtual-real fusion grid rudder model parameter optimization method based on DPPO | |
| CN115618725A (en) | A Machine Learning-Based Multivariate Load Forecasting Method for Integrated Energy Systems | |
| CN110533109A (en) | A kind of storage spraying production monitoring data and characteristic analysis method and its device | |
| CN118784462A (en) | A method, device and equipment for automatic network configuration based on deep reinforcement learning | |
| CN116796073B (en) | Graph contrast learning session recommendation method based on feature enhancement |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||