
CN112508239A - Energy storage output prediction method based on VAE-CGAN - Google Patents

Energy storage output prediction method based on VAE-CGAN

Info

Publication number
CN112508239A
Authority
CN
China
Prior art keywords
data
sequence
discriminator
generator
energy storage
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011315545.3A
Other languages
Chinese (zh)
Inventor
饶宇飞
李朝晖
于琳琳
滕卫军
孙鑫
谷青发
杨海晶
徐鹏煜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Electric Power Research Institute of State Grid Henan Electric Power Co Ltd
State Grid Corp of China SGCC
Original Assignee
Electric Power Research Institute of State Grid Henan Electric Power Co Ltd
State Grid Corp of China SGCC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Electric Power Research Institute of State Grid Henan Electric Power Co Ltd, State Grid Corp of China SGCC filed Critical Electric Power Research Institute of State Grid Henan Electric Power Co Ltd
Priority to CN202011315545.3A priority Critical patent/CN112508239A/en
Publication of CN112508239A publication Critical patent/CN112508239A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/044 - Recurrent networks, e.g. Hopfield networks
    • G06N3/045 - Combinations of networks
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06Q10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/067 - Enterprise or organisation modelling
    • G06Q50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06 - Energy or water supply

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Marketing (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Tourism & Hospitality (AREA)
  • General Health & Medical Sciences (AREA)
  • General Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Development Economics (AREA)
  • Molecular Biology (AREA)
  • Game Theory and Decision Science (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Public Health (AREA)
  • Water Supply & Treatment (AREA)
  • Primary Health Care (AREA)
  • Educational Administration (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

An energy storage output prediction method based on VAE-CGAN comprises the following steps: collecting historical output data of an energy storage power station; obtaining characteristic information of the historical output data by training a VAE model; inputting the characteristic information into a generator and a discriminator, setting noise information, and inputting the historical output data of the energy storage power station into the discriminator; the generator generates sample data and inputs it into the discriminator, which judges the source of the generated sample data from the condition information and the real sample data; if the source can be judged, the parameters are modified and the generation and discrimination process is repeated until the discriminator can no longer determine the source of the sample data, yielding a strong generator with which the output of the energy storage device is predicted. The model constructs a continuous condition input space; compared with the prior art, the invention can generate data not only from condition information of historical scenarios but also, accurately, from unknown condition information, which benefits accurate modeling of energy storage output in uncertain scenarios.

Description

Energy storage output prediction method based on VAE-CGAN
Technical field:
The invention belongs to the technical field of power distribution network energy storage output modeling, and particularly relates to a VAE-CGAN-based energy storage output prediction method.
Background art:
In recent years, new energy power generation has developed continuously, and forms such as solar, wind and tidal power bring new challenges to the power grid. Solar and tidal generation are governed by conditions such as weather and the tidal cycle, so their output is intermittent; wind power varies with the wind resource across seasons, so its output is uncertain. As the share of new energy generation grows, the instability of new energy generation affects the power system ever more strongly, and the modeling of energy storage output is affected in turn.
The energy storage device has a "peak clipping and valley filling" effect: it can cover load during peak periods and reduce the peak-valley difference. A grid enterprise can thereby earn more peak-load revenue while regulating peaks and relieving supply pressure. The output of the energy storage device is influenced by factors such as new energy generation, weather and season; meanwhile, although the electrical load is basically periodic, it still fluctuates continuously, which affects the determination of the energy storage output. The significance of energy storage output modeling is that electric energy can be dispatched in a more reasonable operation mode, saving cost for the grid enterprise, improving profit, and raising the utilization rate of new energy.
For the reasons above, determining the energy storage output must face uncertain external conditions. To cope with unstable factors in new energy generation and load, the method establishes a continuous condition input space with a VAE (variational auto-encoder) model and feeds it into a CGAN (conditional generative adversarial network) model to build the energy storage output model. VAE-CGAN-based uncertainty modeling of energy storage output can therefore not only generate condition information for historical scenarios, but also, when facing abnormal external conditions, train their characteristic information with the VAE model and perform modeling, so that the energy storage output is modeled accurately and comprehensively.
Summary of the invention:
The invention aims to determine the energy storage output under different loads and generation levels when new energy generation is unstable, thereby enhancing the peak clipping and valley filling effect, reducing waste such as wind curtailment, and improving economy.
The invention specifically adopts the following technical scheme:
a VAE-CGAN-based energy storage output prediction method is characterized by comprising the following steps:
step 1: collecting historical operation data of the energy storage device, wherein the historical operation data comprises battery voltage, battery ampere hours, discharge multiplying power, battery operation voltage, traditional energy power generation ratio and new energy power generation ratio;
step 2: directly inputting the historical operating data acquired in the step 1 into a VAE model for training to generate data characteristic information, namely generating new data containing information of the historical operating data;
and step 3: constructing a generator and a discriminator based on the CGAN, wherein the generator is used for generating simulation sample data, and the discriminator is used for discriminating the sources of the simulation sample data and real sample data;
and 4, step 4: inputting the characteristic information obtained in the step 2 as condition information into the generator and the discriminator, and performing countermeasure training on the generator and the discriminator, namely, the discriminator receives the simulation sample data and the real sample data acquired in the step 1 and judges the sources of the two data;
and 5: updating the parameters of the generator and the discriminator, namely optimizing the generator and the discriminator;
step 6: adding 1 to the iteration times, returning to the step 5 until the discriminator cannot distinguish the sample data source, and outputting a generator;
and 7: and (4) acquiring real-time operation data of the energy storage device, inputting the real-time operation data into the generator output in the step (6), and predicting the energy storage output condition of the energy storage device.
The invention further adopts the following preferred technical scheme:
the step 2 comprises the following steps:
step 201: constructing an encoder by adopting a layer of LSTM neural network; a decoder is constructed by adopting a layer of LSTM neural network and a layer of full connection layer;
step 202: the encoder receives a first high-dimensional data sequence X ═ X of historical operating data1,x2,…xn]Then, mapping a high-dimensional data sequence X of the historical operating data into a mean vector mu with the length of m and a standard deviation vector sigma with the length of m;
step 203: the encoder calculates a mean vector mu, a standard deviation vector sigma and a parameter sequence delta [ delta 1, delta 2, … delta m ═ according to the mean vector mu and the standard deviation vector sigma]And calculating a hidden variable sequence Z ═ Z by the following formula1,z2,…zm];
zi=μi+δi·exp(σi)
Wherein z isiFor the ith value in the hidden variable sequence Z,μiIs the ith value, σ, in the mean vector μiIs the ith value in the standard deviation vector σ, δ i is a parameter obtained by random sampling in a sample set subject to bernoulli distribution, and subject to a standard normal distribution, i.e., δ i to N (0,1), i ═ 1,2, …, m; step 204: inputting the hidden variable sequence Z obtained by calculation in the step 203 into a decoder, and restoring the hidden variable sequence Z into a second high-dimensional data sequence X ', wherein the second high-dimensional data sequence X' is a characteristic information sequence.
In step 2, the following objective function is adopted for training:
L_1 = ∫ q(z|x) log P(x|z) dz
L_2 = KL(q(z|x) ‖ P(z))
where L_1 is the self-encoding reconstruction error and L_2 is the KL divergence; P(x|z) is a prior distribution over the hidden variables Z and represents the decoder in the VAE, and q(z|x) is the posterior distribution of a value z in the hidden variable sequence derived from any value x in the real data sequence and represents the encoder in the VAE;
training ends and the characteristic information is obtained when the self-encoding reconstruction error takes its maximum value and the KL divergence takes its minimum value.
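A sketch of this training criterion follows; the patent states the reconstruction term only in integral form, so estimating it by mean squared error is an assumption:

    import torch
    import torch.nn.functional as F

    def vae_loss(x, x_recon, mu, sigma):
        # L1: self-encoding reconstruction error, estimated here by MSE (assumption).
        recon = F.mse_loss(x_recon, x, reduction="sum")
        # L2: KL(q(z|x) || P(z)) for q = N(mu, exp(sigma)^2) and P(z) = N(0, 1);
        # it is zero exactly when mu = 0 and sigma = 0.
        kl = 0.5 * torch.sum(mu ** 2 + torch.exp(2 * sigma) - 2 * sigma - 1)
        # Maximizing L1 and minimizing L2 corresponds to minimizing recon + kl here.
        return recon + kl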
In step 2, the battery voltage, battery ampere-hours and discharge rate are trained with the same VAE model;
the proportion of conventional-energy generation and the proportion of new-energy generation are each trained with an independent VAE model.
In step 4, random noise information z_noise is set and input into the generator together with the condition information; the generator uses the nonlinear mapping capability of the neural network to map the given noise information z_noise and condition information Y into simulation sample data G(z_noise|y).
In step 4, the generator generates the simulation sample through the following steps (see the sketch after these steps):
Step 401: when the input random noise sequence Z_noise has length n and the condition information sequence Y has length m, the generator builds a fully connected layer such that the neuron counts of the fully connected layers corresponding to Z_noise and Y equal the lengths of Z_noise and Y respectively;
Step 402: correcting the data of the fully connected layer so that its mean is approximately 0 and its variance approximately 1, i.e., approaching the standard normal distribution N(0,1);
Step 403: splicing the corrected random noise sequence Z_noise with the corrected condition information sequence Y into a spliced sequence of length (n+m)/10;
Step 404: multiplying the (n+m)/10 neurons in the spliced sequence of length (n+m)/10 obtained in the previous layer by the same occurrence probability p = 0.5;
Step 405: as a result of step 404, half of the neurons are temporarily deleted in the current generation pass, and the undeleted neurons are output from the generator as the simulation sample data.
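A sketch of steps 401-405 follows. The patent does not spell out how the spliced sequence is reduced to length (n+m)/10, so the linear projection below is an assumption:

    import torch
    import torch.nn as nn

    class Generator(nn.Module):
        def __init__(self, n=200, m=1000):
            super().__init__()
            self.fc_noise = nn.Linear(n, n)        # step 401: FC layer sized to Z_noise
            self.fc_cond = nn.Linear(m, m)         # step 401: FC layer sized to Y
            self.bn_noise = nn.BatchNorm1d(n)      # step 402: correct towards N(0, 1)
            self.bn_cond = nn.BatchNorm1d(m)
            self.project = nn.Linear(n + m, (n + m) // 10)  # step 403: spliced sequence
            self.dropout = nn.Dropout(p=0.5)       # steps 404-405: drop half the neurons

        def forward(self, z_noise, y):
            z = self.bn_noise(self.fc_noise(z_noise))
            c = self.bn_cond(self.fc_cond(y))
            spliced = self.project(torch.cat([z, c], dim=1))
            return self.dropout(spliced)           # surviving neurons form G(z_noise|y)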
In step 4, the discriminator discriminates the source of the input data through the following steps (see the sketch after these steps):
Step 406: the discriminator receives the condition information Y, the real sample data sequence X and the simulation sample data sequence generated by the generator;
Step 407: the discriminator generates a first hidden layer for each of the condition information sequence Y, the real sample data sequence X and the simulation sample data sequence, each hidden layer having i neurons; a first weight matrix W_maxout1 is established, and the values of the neurons in the first hidden layer are calculated by the following formula:
t'_i = w_i1 × t_1 + w_i2 × t_2 + … + w_in × t_n
where w_il is the l-th element of the i-th row of the weight matrix W_maxout1, t'_i is the value of the i-th neuron in the first hidden layer, and t_i is the i-th data of the input sequence;
Step 408: dividing the neurons in the first hidden layer of the condition information sequence Y, the first hidden layer of the real sample data sequence X and the first hidden layer of the simulation sample data sequence into s groups each, and selecting the neuron with the largest value from each group as the output of the first hidden layer, generating a condition information neural network layer, a real sample data neural network layer and a simulation sample data neural network layer, each containing s neurons;
Step 409: splicing the condition information neural network layer, the real sample data neural network layer and the simulation sample data neural network layer to obtain a first neural network layer having 3s neurons;
Step 410: setting the neuron deletion probability to 0.5 and processing the first neural network layer of step 409 to obtain a second neural network layer of random length;
Step 411: establishing a second weight matrix W_maxout2 and mapping the second neural network layer by the following formula to obtain a second hidden layer; the neuron with the maximum value in the second hidden layer is selected and mapped into the interval (0,1), and the mapped data is taken as the output of the discriminator:
q'_i = w'_i1 × q_1 + w'_i2 × q_2 + … + w'_in × q_n
where w'_jl is the l-th element of the j-th row of the weight matrix W_maxout2 and q_j is the j-th data in the second neural network layer.
In step 411, when the output of the discriminator is 0.5, the discriminator cannot determine the source of the received data sample; otherwise, the discrimination result is that the received data sample originates from the generator.
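A sketch of the maxout discriminator of steps 406-411 follows; the group width per_group is an assumption (the patent fixes only the group count s and the dropout rate):

    import torch
    import torch.nn as nn

    class Maxout(nn.Module):
        # Steps 407-408: a bias-free linear map (W_maxout1), then the neurons are
        # split into s groups and each group's maximum is kept.
        def __init__(self, in_len, s=5, per_group=4):
            super().__init__()
            self.linear = nn.Linear(in_len, s * per_group, bias=False)
            self.s, self.per_group = s, per_group

        def forward(self, t):
            h = self.linear(t)                     # t'_i = sum_l w_il * t_l
            return h.view(-1, self.s, self.per_group).max(dim=2).values

    class Discriminator(nn.Module):
        # Steps 406-411: one maxout branch per input, splice to 3s neurons,
        # dropout with p = 0.5, second weight matrix, max neuron, sigmoid.
        def __init__(self, m_cond, n_real, n_fake, s=5):
            super().__init__()
            self.branch_y = Maxout(m_cond, s)
            self.branch_x = Maxout(n_real, s)
            self.branch_g = Maxout(n_fake, s)
            self.dropout = nn.Dropout(p=0.5)                # step 410
            self.w2 = nn.Linear(3 * s, 3 * s, bias=False)   # W_maxout2, step 411

        def forward(self, y, x, g):
            layer1 = torch.cat([self.branch_y(y), self.branch_x(x), self.branch_g(g)], dim=1)
            hidden2 = self.w2(self.dropout(layer1))
            top = hidden2.max(dim=1).values                 # largest neuron of hidden layer 2
            return torch.sigmoid(top)                       # mapped into (0, 1)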
In step 5, the constraint of the CGAN model is:
min_G max_D V(D, G) = E_{x~P_data(x)}[log D(x|y)] + E_{z~P_z(z)}[log(1 − D(G(z_noise|y)))]
where G(z_noise|y) is the sample data generated by the generator from the condition information Y and the noise information z_noise; D(G(z_noise|y)) is the output of the discriminator; E_{x~P_data(x)}[log D(x|y)] is the mathematical expectation of log D(x|y), replaced in practice by the mean over the generated sample data and the real sample data; z is a random number and P_z(z) is the probability distribution function of the random number z; E_{z~P_z(z)}[·] is the mathematical expectation when z obeys the probability distribution P_z(z); P_data(x) is the probability distribution of the input real sample data; and V(D, G) denotes the game function between the generator and the discriminator.
In step 5, the generator is optimized by the following formula:
min_G ( E_{x~P_data(x|y)}[log( P_data(x|y) / (P_data(x|y) + P_G(x|y)) )] + E_{x~P_G(x|y)}[log( P_G(x|y) / (P_data(x|y) + P_G(x|y)) )] )
where G is the generator constraint, E[·] is the mathematical expectation, P_data(x|y) is the probability that x comes from the real sample data under the condition constraint of y, and P_G(x|y) is the probability that x comes from the sample data generated by the generator under the condition constraint of y;
the generator is considered optimized when the value of the formula is at its minimum.
In step 5, the discriminator is optimized by the following formula:
max_D V(D, G) = E_{x~P_data(x)}[log D(x|y)] + E_{z~P_z(z)}[log(1 − D(G(z_noise|y)))]
where D is the discriminator constraint;
the discriminator is considered optimized when the formula takes its maximum value.
When the output of the discriminator is 0.5, the game between the discriminator and the generator is considered balanced; training ends and the trained generator is output.
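The alternating optimization implied by these two criteria can be sketched as follows; for simplicity a conventional per-sample conditional discriminator D(x, y) is assumed here (the three-branch discriminator of steps 406-411 can be adapted accordingly), and the optimizer choice is also an assumption:

    import torch
    import torch.nn.functional as F

    def train_step(G, D, x_real, y, n_noise, opt_d, opt_g):
        # One alternation of the minimax game; labels: 1 = real, 0 = generated.
        batch = x_real.size(0)

        # Discriminator step: maximize E[log D(x|y)] + E[log(1 - D(G(z|y)))].
        x_fake = G(torch.randn(batch, n_noise), y).detach()
        loss_d = F.binary_cross_entropy(D(x_real, y), torch.ones(batch)) + \
                 F.binary_cross_entropy(D(x_fake, y), torch.zeros(batch))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # Generator step: push D(G(z|y)) towards 1, i.e. minimize the generator criterion.
        x_fake = G(torch.randn(batch, n_noise), y)
        loss_g = F.binary_cross_entropy(D(x_fake, y), torch.ones(batch))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
        return loss_d.item(), loss_g.item()

    # Training alternates these steps until the discriminator output settles
    # near 0.5, at which point the trained generator is kept for prediction.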
The invention has the following beneficial effects:
Predicting the energy storage output with the VAE-CGAN model not only models condition information from historical scenarios but also copes with unknown condition information: the condition information hidden in the input sequence is trained through the VAE model, a CGAN model with a continuous condition input space is constructed, and an optimal generator is obtained through adversarial training, which guarantees the accuracy of the energy storage output prediction. The optimal output of the stored energy under large changes in operating conditions can therefore be determined more comprehensively and accurately.
Drawings
FIG. 1 is a flow chart of the VAE-CGAN-based energy storage output prediction method
FIG. 2 VAE training procedure
FIG. 3 CGAN Game model
FIG. 4 is a block diagram of a CGAN neural network
Detailed Description
The following detailed description of the embodiments of the invention is provided in connection with the accompanying drawings.
As shown in fig. 1, the energy storage output prediction method based on VAE-CGAN of the present invention specifically includes the following steps:
Step 1: data such as battery voltage, battery ampere-hours, discharge rate C, and the generation shares of conventional energy and new energy are collected as input sequences of the VAE model. Depending on the system actually used by the energy storage battery, different input sequences are collected as VAE inputs for feature extraction: strongly correlated feature sequences such as battery voltage, battery ampere-hours and discharge rate C form a multivariate time series and are analyzed with the same VAE, while uncorrelated data, such as the generation shares of conventional and new energy, are trained with individual VAEs (a sketch of this grouping follows). Here the VAE model input sequence is denoted X = [x_1, x_2, …, x_n].
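The grouping can be sketched as follows; all values below are synthetic placeholders standing in for the station's historical records:

    import numpy as np

    T = 1000                                  # number of historical samples (assumed)
    rng = np.random.default_rng(0)

    # Strongly correlated battery quantities form one multivariate time series
    # and share a single VAE.
    voltage = rng.normal(3.7, 0.1, T)         # battery voltage
    amp_hours = rng.normal(100.0, 5.0, T)     # battery ampere-hours
    c_rate = rng.normal(1.0, 0.2, T)          # discharge rate C
    battery_seq = np.stack([voltage, amp_hours, c_rate], axis=1)  # input to one VAE

    # Uncorrelated series each get their own VAE.
    conventional_share = rng.uniform(0.5, 0.9, T)   # conventional-energy generation share
    renewable_share = 1.0 - conventional_share      # new-energy generation share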
Step 2: the various types of data collected in step 1 are trained with the VAE model to obtain the characteristic information sequence.
Specifically, as shown in fig. 2, step 2 includes the following steps:
Step 201: an encoder is constructed with one layer of LSTM (long short-term memory) neural network; a decoder is constructed with one layer of LSTM neural network and one fully connected layer.
Step 202: the encoder receives a first high-dimensional data sequence X ═ X of historical operating data1,x2,…xn]Then, mapping the high-dimensional data sequence X of the historical operating data into a mean vector mu with the length of m and a standard deviation vector sigma with the length of m.
Step 203: the encoder calculates the mean vector mu, the standard deviation vector sigma and the parameter sequence delta [ delta 1, delta 2, … delta m ═ m]And calculating a hidden variable sequence Z ═ Z by the following formula1,z2,…zm]。
zi=μii·exp(σi) (1)
Wherein z isiIs the ith value, mu, in the hidden variable sequence ZiIs the ith value, σ, in the mean vector μiFor the ith value in the standard deviation vector σ, δ i is a parameter obtained by random sampling in a sample set that obeys a bernoulli distribution, and obeys a standard normal distribution, i.e., δ i to N (0,1), i ═ 1,2, …, m.
Step 204: inputting the hidden variable sequence Z obtained by calculation in step 203 into a decoder, restoring the hidden variable sequence Z into a second high-dimensional data sequence X ', wherein X' is [ X1 ', X2', … xn '], that is, the hidden variable sequence Z is identical in dimension to the VAE model input sequence, and finally reconstructing the high-dimensional data feature X' by the full connection layer to obtain a generated m-dimensional feature information sequence Y.
Specifically, the total constraint L in the VAE model training process is calculated and expressed by the following formula;
L=max loss=∫q(z|x)log P(x|z)dz-KL(q(z|x)||P(z)) (2)
The first part of the objective function is the self-encoding reconstruction error, also called the variational lower bound; taking its maximum means that when X is the encoder input and m hidden variables z_i are extracted by encoding, the decoder can finally recover X from Z with maximum probability.
The self-encoding reconstruction error L_1 is calculated by the following equation:
L_1 = ∫ q(z|x) log P(x|z) dz    (3)
in the continuous optimization of the encoder, the self-encoding reconstruction error is continuously increased by changing parameters in the LSTM neural network.
The second part is KL divergence, and when the second part takes the minimum value, σi=0,μiWhen z is 0, z can be obtained from the formula (1)iFollowing a standard normal distribution, q (z | x) ═ p (z), KL divergence is zero. The KL divergence is calculated by the following formula:
Figure BDA0002791264930000072
p (x | Z) is an a priori distribution of hidden variable Z representing the decoder in the VAE, q (Z | x) is a Z a posteriori distribution derived from x representing the encoder in the VAE, it is desirable to fit these two distributions as closely as possible in the VAE model, so P (Z) in the VAE model is a Gaussian distribution with mean 0 and variance 1 when the two fit, KL divergence is 0, when we adjust that q (Z | x) and P (Z | x) are completely consistent, KL divergence vanishes to 0, and Lb and logP (x) are completely consistent. Therefore, no matter what the value of logP (x), we can always make Lb equal to logP (x) by adjusting, and because Lb is the lower bound of logP (x), solving Maximum logP (x) is equivalent to solving Maximum Lb, even if self-coding reconstruction error is Maximum, an optimal encoder model is obtained, and at this time, the trained VAE model can generalize the data characteristics of input data and generate a characteristic sequence as a condition information sequence used in the CGAN model.
Step 3: a generator and a discriminator are constructed based on the CGAN, wherein the generator is used for generating simulation sample data and the discriminator is used for discriminating the sources of the simulation sample data and real sample data.
Step 4: the characteristic information obtained in step 2 is input as condition information into the generator and the discriminator, and adversarial training is performed on them, i.e., the discriminator receives the simulation sample data and the real sample data acquired in step 1 and judges the sources of the two kinds of data.
Also, random noise information z_noise is set and input into the generator together with the condition information; the generator uses the nonlinear mapping capability of the neural network to map the given noise information z_noise and condition information Y into simulation sample data G(z_noise|y), which is then passed to the discriminator.
In particular, as shown in FIGS. 3-4, the generation of the simulation sample by the generator comprises the following steps:
Step 401: when the input random noise sequence Z_noise has length n and the condition information sequence Y has length m, the generator builds a fully connected layer such that the neuron counts of the fully connected layers corresponding to Z_noise and Y equal the lengths of Z_noise and Y respectively; the fully connected layer aggregates and classifies the features of the input sequence without changing its length.
Step 402: the data of the fully connected layer is corrected such that its mean value is approximately 0 and its variance is approximately 1, i.e., towards a standard normal distribution N (0, 1). Preferably, in the present invention, the data of the full-link layer is corrected using the Batch-Normalization algorithm.
Step 403: the corrected random noise sequence Z_noise is spliced with the corrected condition information sequence Y into a spliced sequence of length (n+m)/10. In one embodiment of the invention, the input random noise sequence Z_noise has length 200 and the condition information sequence Y has length 1000, so a spliced sequence of length 120 is obtained in this step.
Step 404: the neurons in the spliced sequence obtained in step 403 are multiplied by the same occurrence probability p = 0.5. Preferably, the spliced sequence is processed with the dropout algorithm in this step.
Step 405: as a result of step 404, about half of the neurons are temporarily deleted in this generation pass. Thus only part of the input sequence takes part in the generation, and back-propagation adjusts only the participating neurons, which avoids overfitting of the generator. The remaining undeleted neurons are output from the generator and input to the discriminator as generated sample data.
Specifically, as shown in fig. 3-4, the step of identifying the source of the data sample by the discriminator specifically comprises the following steps:
step 406: the discriminator receives the condition information Y, the real sample data sequence X and the simulation sample data sequence generated by the generator;
Step 407: the discriminator generates a first hidden layer for each of the condition information sequence Y, the real sample data sequence X and the simulation sample data sequence, each hidden layer having i neurons. Preferably, the maxout algorithm is used to map each data sequence; specifically, a weight matrix W_maxout1 is set and the hidden layer neuron values are calculated by the following formula:
t'_i = w_i1 × t_1 + w_i2 × t_2 + … + w_in × t_n    (5)
where w_il is the l-th element of the i-th row of the weight matrix W_maxout1, t'_i is the value of the i-th neuron in the first hidden layer, and t_i is the i-th data of the input sequence.
Step 408: dividing neurons in a first hidden layer of a condition information sequence Y, a first hidden layer of a real sample data sequence X and a first hidden layer of a simulation sample data sequence into s groups, selecting the neuron with the largest value from each group of the groups as the output of the first hidden layer, and generating a condition information neural network layer, a real sample data neural network layer and a simulation sample data neural network layer, wherein the neural network layer contains s neurons. Preferably, in the present invention, the neurons in the first hidden layer are divided into 5 groups, thereby obtaining a neural network layer containing 5 neurons.
Step 409: and splicing the generation condition information neural network layer, the real sample data neural network layer and the simulation sample data neural network layer to obtain a first neural network layer, wherein the first neural network layer is provided with 3s neurons. Preferably, a neural network layer containing 15 neurons is obtained in one embodiment of the present invention.
Step 410: setting the neuron deletion probability to be 0.5, and processing the first neural network layer in the step 409 to obtain a second neural network layer with a random length. Specifically, in one embodiment of the present invention, the first neural network layer is processed using the dropout algorithm, and by setting the neuron deletion probability p to 0.5, that is, in one optimization process of the generator model, each of the 15 neurons has a 50% probability of being temporarily deleted until a part of the neurons is restored and deleted again with the probability p in the next optimization process. A generated neural network layer of random length is obtained.
Step 411: and mapping the second neural network layer by the following formula to obtain a second hidden layer, selecting the neuron with the maximum value in the second hidden layer 2 to map into a (0,1) interval, and taking the mapped data as the output result of the discriminator. That is, when the output result of the discriminator is 0.5, the discriminator cannot judge the source of the received data sample; otherwise, the discrimination results in that the received data sample originates from the generator.
q’i=w’i1×q1+w’i2×q2+....+w’in×qn (6)
W’j1Is a weight matrix Wmaxout2Line j 1 st element, q'1For the 1 st data of the input sequence, qjIs the jth data in the second hidden layer.
Preferably, in step 411, the maxout algorithm is used to map the second neural network layer; and mapping the neurons of the second hidden layer by adopting a sigmoid function.
Step 5: the parameters of the generator and the discriminator are updated, i.e., the generator and the discriminator are optimized. Specifically, after the generator completes one generation pass, the neural network computes a loss function on the generated data and back-propagates it to modify the generator's parameters. When the generator has adjusted its parameters, the discriminator judges the generated sample sequence; when the discriminator's output is 0.5, it cannot tell that the data come from the generator. When the output is not equal to 0.5, the generator continues the above generation and parameter adjustment process until the discriminator cannot distinguish the source of the sample sequence generated by the generator. At that point the discriminator judges the generator's samples to come from real data with probability 50% and from the generator with probability 50%, i.e., the game between the two has reached equilibrium, and the obtained generator parameters can be considered an optimal generator: a generator model for energy storage output prediction has been established.
The constraint of the CGAN model is:
min_G max_D V(D, G) = E_{x~P_data(x)}[log D(x|y)] + E_{z~P_z(z)}[log(1 − D(G(z_noise|y)))]
where G(z_noise|y) is the sample data generated by the generator from the condition information Y and the noise information z_noise; D(G(z_noise|y)) is the output of the discriminator; E_{x~P_data(x)}[log D(x|y)] is the mathematical expectation of log D(x|y), replaced in practice by the mean over the generated sample data and the real sample data; z is a random number and P_z(z) is the probability distribution function of the random number z; E_{z~P_z(z)}[·] is the mathematical expectation when z obeys the probability distribution P_z(z); P_data(x) is the probability distribution of the input real sample data; and V(D, G) denotes the game function between the generator and the discriminator.
In step 5, the generator and the discriminator are optimized by the following two equations respectively.
The generator constraint is equation (7):
min_G ( E_{x~P_data(x|y)}[log( P_data(x|y) / (P_data(x|y) + P_G(x|y)) )] + E_{x~P_G(x|y)}[log( P_G(x|y) / (P_data(x|y) + P_G(x|y)) )] )    (7)
The discriminator constraint is equation (8):
max_D V(D, G) = E_{x~P_data(x)}[log D(x|y)] + E_{z~P_z(z)}[log(1 − D(G(z_noise|y)))]    (8)
According to their respective constraints, the generator and the discriminator in the CGAN model continuously adjust their model parameters so that the generator's constraint function is minimized and the discriminator's is maximized; the generator parameters, including the mapping function of the fully connected layer in the generator, the values on the neurons and the number of neurons, are optimized through back propagation of the neural network.
Step 6: the iteration count is increased by 1 and the process returns to step 5 until the discriminator cannot distinguish the source of the sample data. That is, when the output of the discriminator is 0.5, the game between the discriminator and the generator is considered balanced and training ends, yielding a strong generator, which is output as the output prediction model of the energy storage device.
Step 7: real-time operating data of the energy storage device are collected and input into the generator output in step 6 to predict the energy storage output of the energy storage device.
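Once training has converged, prediction reduces to a forward pass of the trained generator. A usage sketch follows; it assumes the VAE and Generator classes from the earlier sketches are in scope and trained, with shapes following the embodiment (n = 200, m = 1000), and the real-time data below is a placeholder:

    import torch

    vae = VAE(n=200, m=1000)                  # trained VAE from step 2
    G = Generator(n=200, m=1000)              # trained generator from step 6
    vae.eval(); G.eval()

    realtime_seq = torch.randn(1, 200, 1)     # placeholder real-time operating data

    with torch.no_grad():
        mu, sigma = vae.encode(realtime_seq)
        y = mu + torch.randn_like(sigma) * torch.exp(sigma)  # condition information
        z = torch.randn(1, 200)                              # random noise sequence
        predicted_output = G(z, y)            # predicted energy storage output sequence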
Predicting the energy storage output with the VAE-CGAN model not only models condition information from historical scenarios but also copes with unknown condition information: the condition information hidden in the input sequence is trained through the VAE model, the continuous condition input space is fed into the CGAN model, and an optimal generator is obtained through adversarial training, guaranteeing the accuracy of the energy storage output prediction. The optimal output of the stored energy in operating environments with strong uncertainty can therefore be determined more comprehensively and accurately.
While the best mode for carrying out the invention has been described in detail and illustrated in the accompanying drawings, it is to be understood that the same is by way of illustration and example only and is not to be taken by way of limitation, the scope of the invention should be determined by the appended claims and any changes or modifications which fall within the true spirit and scope of the invention should be construed as broadly described herein.

Claims (12)

1. An energy storage output prediction method based on VAE-CGAN, characterized in that the method comprises the following steps:
Step 1: collecting historical operating data of the energy storage device, the operating data comprising battery voltage, battery ampere-hours, discharge rate, battery operating voltage, the proportion of conventional-energy generation and the proportion of new-energy generation;
Step 2: inputting the historical operating data collected in step 1 directly into a VAE model for training to generate data characteristic information, i.e., generating new data containing the information of the historical operating data;
Step 3: constructing a generator and a discriminator based on CGAN, wherein the generator is used for generating simulation sample data and the discriminator is used for discriminating the sources of the simulation sample data and real sample data;
Step 4: inputting the characteristic information obtained in step 2 as condition information into the generator and the discriminator, and performing adversarial training on the generator and the discriminator, i.e., the discriminator receives the simulation sample data and the real sample data collected in step 1 and judges the sources of the two kinds of data;
Step 5: updating the parameters of the generator and the discriminator, i.e., optimizing the generator and the discriminator;
Step 6: adding 1 to the iteration count and returning to step 5 until the discriminator cannot distinguish the source of the sample data, then outputting the trained generator;
Step 7: collecting real-time operating data of the energy storage device, inputting it into the generator output in step 6, and predicting the energy storage output of the energy storage device.

2. The energy storage output prediction method based on VAE-CGAN according to claim 1, characterized in that step 2 comprises the following steps:
Step 201: constructing an encoder with one layer of LSTM neural network; constructing a decoder with one layer of LSTM neural network and one fully connected layer;
Step 202: after the encoder receives the first high-dimensional data sequence X = [x_1, x_2, …, x_n] of the historical operating data, mapping the high-dimensional data sequence X into a mean vector μ of length m and a standard deviation vector σ of length m;
Step 203: computing, from the mean vector μ, the standard deviation vector σ and the parameter sequence δ = [δ_1, δ_2, …, δ_m], the hidden variable sequence Z = [z_1, z_2, …, z_m] by the following formula:
z_i = μ_i + δ_i · exp(σ_i)
where z_i is the i-th value in the hidden variable sequence Z, μ_i is the i-th value in the mean vector μ, σ_i is the i-th value in the standard deviation vector σ, and δ_i is a parameter obtained by random sampling from a sample set subject to a Bernoulli distribution and follows a standard normal distribution, i.e., δ_i ~ N(0,1), i = 1, 2, …, m;
Step 204: inputting the hidden variable sequence Z computed in step 203 into the decoder and restoring it to a second high-dimensional data sequence X', the second high-dimensional data sequence X' being the characteristic information sequence.

3. The energy storage output prediction method based on VAE-CGAN according to claim 2, characterized in that in step 2 the following objective function is adopted for training:
L_1 = ∫ q(z|x) log P(x|z) dz
L_2 = KL(q(z|x) ‖ P(z))
where L_1 is the self-encoding reconstruction error, L_2 is the KL divergence, P(x|z) is a prior distribution over the hidden variables Z representing the decoder in the VAE, and q(z|x) is the posterior distribution of a value z in the hidden variable sequence derived from any value x in the real data sequence, representing the encoder in the VAE;
training ends and the characteristic information is obtained when the self-encoding reconstruction error takes its maximum value and the KL divergence takes its minimum value.

4. The energy storage output prediction method based on VAE-CGAN according to any one of claims 1-3, characterized in that in step 2 the battery voltage, battery ampere-hours and discharge rate are trained with the same VAE model, and the proportion of conventional-energy generation and the proportion of new-energy generation are each trained with an independent VAE model.

5. The energy storage output prediction method based on VAE-CGAN according to any one of claims 1-3, characterized in that in step 4 random noise information z_noise is set and input into the generator together with the condition information, and the generator uses the nonlinear mapping capability of the neural network to map the given noise information z_noise and condition information Y into simulation sample data G(z_noise|y).

6. The energy storage output prediction method based on VAE-CGAN according to claim 5, characterized in that in step 4 the generator generates the simulation sample through the following steps:
Step 401: when the input random noise sequence Z_noise has length n and the condition information sequence Y has length m, the generator builds a fully connected layer such that the neuron counts of the fully connected layers corresponding to Z_noise and Y equal the lengths of Z_noise and Y respectively;
Step 402: correcting the data of the fully connected layer so that its mean is approximately 0 and its variance approximately 1, i.e., approaching the standard normal distribution N(0,1);
Step 403: splicing the corrected random noise sequence Z_noise with the corrected condition information sequence Y into a spliced sequence of length (n+m)/10;
Step 404: multiplying the (n+m)/10 neurons in the spliced sequence of length (n+m)/10 obtained in the previous layer by the same occurrence probability p = 0.5;
Step 405: in step 404, half of the neurons are temporarily deleted in the current generation pass, and the undeleted neurons are output from the generator as the simulation sample data.

7. The energy storage output prediction method based on VAE-CGAN according to any one of claims 1-3, characterized in that in step 4 the discriminator discriminates the source of the input data through the following steps:
Step 406: the discriminator receives the condition information Y, the real sample data sequence X and the simulation sample data sequence generated by the generator;
Step 407: the discriminator generates a first hidden layer for each of the condition information sequence Y, the real sample data sequence X and the simulation sample data sequence, each hidden layer having i neurons; a first weight matrix W_maxout1 is established, and the values of the neurons in the first hidden layer are calculated by the following formula:
t'_i = w_i1 × t_1 + w_i2 × t_2 + … + w_in × t_n
where w_il is the l-th element of the i-th row of the weight matrix W_maxout1, t'_i is the value of the i-th neuron in the first hidden layer, and t_i is the i-th data of the input sequence;
Step 408: dividing the neurons in the first hidden layer of the condition information sequence Y, the first hidden layer of the real sample data sequence X and the first hidden layer of the simulation sample data sequence into s groups each, and selecting the neuron with the largest value from each group as the output of the first hidden layer, generating a condition information neural network layer, a real sample data neural network layer and a simulation sample data neural network layer, each containing s neurons;
Step 409: splicing the condition information neural network layer, the real sample data neural network layer and the simulation sample data neural network layer to obtain a first neural network layer having 3s neurons;
Step 410: setting the neuron deletion probability to 0.5 and processing the first neural network layer of step 409 to obtain a second neural network layer of random length;
Step 411: establishing a second weight matrix W_maxout2 and mapping the second neural network layer by the following formula to obtain a second hidden layer; selecting the neuron with the maximum value in the second hidden layer, mapping it into the interval (0,1), and taking the mapped data as the output of the discriminator:
q'_i = w'_i1 × q_1 + w'_i2 × q_2 + … + w'_in × q_n
where w'_jl is the l-th element of the j-th row of the weight matrix W_maxout2 and q_j is the j-th data in the second neural network layer.

8. The energy storage output prediction method based on VAE-CGAN according to claim 7, characterized in that in step 411, when the output of the discriminator is 0.5, the discrimination result is that the discriminator cannot determine the source of the received data sample; otherwise, the discrimination result is that the received data sample originates from the generator.

9. The energy storage output prediction method based on VAE-CGAN according to any one of claims 1-3, characterized in that in step 5 the constraint of the CGAN model is:
min_G max_D V(D, G) = E_{x~P_data(x)}[log D(x|y)] + E_{z~P_z(z)}[log(1 − D(G(z_noise|y)))]
where G(z_noise|y) is the sample data generated by the generator from the condition information Y and the noise information z_noise; D(G(z_noise|y)) is the output of the discriminator; E_{x~P_data(x)}[log D(x|y)] is the mathematical expectation of log D(x|y), replaced in practice by the mean over the generated sample data and the real sample data; z is a random number and P_z(z) is the probability distribution function of the random number z; E_{z~P_z(z)}[·] is the mathematical expectation when z obeys the probability distribution P_z(z); P_data(x) is the probability distribution of the input real sample data; and V(D, G) denotes the game function between the generator and the discriminator.

10. The energy storage output prediction method based on VAE-CGAN according to claim 9, characterized in that in step 5 the generator is optimized by the following formula:
min_G ( E_{x~P_data(x|y)}[log( P_data(x|y) / (P_data(x|y) + P_G(x|y)) )] + E_{x~P_G(x|y)}[log( P_G(x|y) / (P_data(x|y) + P_G(x|y)) )] )
where G is the generator constraint, E[·] is the mathematical expectation, P_data(x|y) is the probability that x comes from the real sample data under the condition constraint of y, and P_G(x|y) is the probability that x comes from the sample data generated by the generator under the condition constraint of y;
the generator is considered optimized when the value of the formula is at its minimum.

11. The energy storage output prediction method based on VAE-CGAN according to claim 9, characterized in that in step 5 the discriminator is optimized by the following formula:
max_D V(D, G) = E_{x~P_data(x)}[log D(x|y)] + E_{z~P_z(z)}[log(1 − D(G(z_noise|y)))]
where D is the discriminator constraint;
the discriminator is considered optimized when the formula takes its maximum value.

12. The energy storage output prediction method based on VAE-CGAN according to any one of claims 1-3, characterized in that in step 6, when the output of the discriminator is 0.5, the game between the discriminator and the generator is considered balanced and training ends.
CN202011315545.3A 2020-11-22 2020-11-22 Energy storage output prediction method based on VAE-CGAN Pending CN112508239A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011315545.3A CN112508239A (en) 2020-11-22 2020-11-22 Energy storage output prediction method based on VAE-CGAN


Publications (1)

Publication Number Publication Date
CN112508239A true CN112508239A (en) 2021-03-16

Family

ID=74959347

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011315545.3A Pending CN112508239A (en) 2020-11-22 2020-11-22 Energy storage output prediction method based on VAE-CGAN

Country Status (1)

Country Link
CN (1) CN112508239A (en)



Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180082172A1 (en) * 2015-03-12 2018-03-22 William Marsh Rice University Automated Compilation of Probabilistic Task Description into Executable Neural Network Specification
US20200074269A1 (en) * 2018-09-05 2020-03-05 Sartorius Stedim Data Analytics Ab Computer-implemented method, computer program product and system for data analysis
CN109886970A (en) * 2019-01-18 2019-06-14 南京航空航天大学 Detection and segmentation method of target objects in terahertz images and computer storage medium
CN109859310A (en) * 2019-01-22 2019-06-07 武汉纺织大学 A kind of model and its method for building up can be used for generating MR image
CN110245380A (en) * 2019-05-10 2019-09-17 西安理工大学 Soft Instrument Training and Sample Supplementation Methods
GB201911689D0 (en) * 2019-08-15 2019-10-02 Facesoft Ltd Facial image processing
CN111037365A (en) * 2019-12-26 2020-04-21 大连理工大学 Tool Condition Monitoring Dataset Enhancement Method Based on Generative Adversarial Networks
CN111191835A (en) * 2019-12-27 2020-05-22 国网辽宁省电力有限公司阜新供电公司 IES incomplete data load prediction method and system based on C-GAN transfer learning
CN111275115A (en) * 2020-01-20 2020-06-12 星汉智能科技股份有限公司 A Generative Adversarial Network-Based Adversarial Attack Sample Generation Method
CN111598805A (en) * 2020-05-13 2020-08-28 华中科技大学 Confrontation sample defense method and system based on VAE-GAN

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Ian J. Goodfellow et al.: "Generative Adversarial Nets", Advances in Neural Information Processing Systems *
Prabhat et al.: "Comparative Analysis of Deep Convolutional Generative Adversarial Network and Conditional Generative Adversarial Network using Hand Written Digits", 2020 4th International Conference on Intelligent Computing and Control Systems (ICICCS) *
Yize Chen et al.: "Model-Free Renewable Scenario Generation Using Generative Adversarial Networks", IEEE Transactions on Power Systems *
Zhang Wenqiang et al.: "Photovoltaic uncertainty modeling method based on VAE-CGAN", Power System Technology *
Yang Ying et al.: "Research on the VAE_LSTM algorithm in time series prediction models", Journal of Hunan University of Science and Technology *
Liang Junjie et al.: "A survey of generative adversarial networks (GAN)", Journal of Frontiers of Computer Science and Technology *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112950409A (en) * 2021-04-19 2021-06-11 工数科技(广州)有限公司 Production scheduling optimization method of gas and steam energy comprehensive utilization system

Similar Documents

Publication Publication Date Title
CN110059844B (en) Energy storage device control method based on ensemble empirical mode decomposition and LSTM
CN112330487B (en) A short-term power prediction method for photovoltaic power generation
CN114091615A (en) A method and system for electric energy metering data completion based on generative adversarial network
CN112884236B (en) A short-term load forecasting method and system based on VDM decomposition and LSTM improvement
CN111144644B (en) Short-term wind speed prediction method based on variation variance Gaussian process regression
CN110648017A (en) A Short-Term Shock Load Prediction Method Based on Two-layer Decomposition Technology
CN112834927A (en) Method, system, device and medium for predicting remaining life of lithium battery
CN113705086A (en) Ultra-short-term wind power prediction method based on Elman error correction
CN113919594A (en) A Demand Response Potential Assessment Method Based on Deep Forest
CN113627655B (en) A distribution network pre-disaster fault scenario simulation and prediction method and device
CN112288140A (en) Keras-based short-term power load prediction method, storage medium and equipment
CN117335425A (en) A power flow calculation method based on GA-BP neural network
CN114971090A (en) Electric heating load prediction method, system, equipment and medium
CN113435595A (en) Two-stage optimization method for extreme learning machine network parameters based on natural evolution strategy
CN109146131A (en) A kind of wind-power electricity generation prediction technique a few days ago
CN113033898A (en) Electrical load prediction method and system based on K-means clustering and BI-LSTM neural network
CN115759343A (en) E-LSTM-based user electric quantity prediction method and device
CN112508239A (en) Energy storage output prediction method based on VAE-CGAN
Arshad et al. Wind power prediction using genetic programming based ensemble of artificial neural networks (GPeANN)
CN119209491A (en) Photovoltaic power generation load forecasting method, system and medium based on fuzzy calculation
CN112232570A (en) Forward active total electric quantity prediction method and device and readable storage medium
CN112132328A (en) An ultra-short-term local emotion reconstruction neural network prediction method for photovoltaic output power
CN118040678A (en) A short-term offshore wind power combination forecasting method
CN110910164A (en) Product sales forecasting method, system, computer device and storage medium
Mustafa et al. An application of genetic algorithm and least squares support vector machine for tracing the transmission loss in deregulated power system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination