CN110400010A - Prediction method, apparatus, electronic device and computer-readable storage medium - Google Patents
Prediction method, apparatus, electronic device and computer-readable storage medium
- Publication number
- CN110400010A CN110400010A CN201910627110.3A CN201910627110A CN110400010A CN 110400010 A CN110400010 A CN 110400010A CN 201910627110 A CN201910627110 A CN 201910627110A CN 110400010 A CN110400010 A CN 110400010A
- Authority
- CN
- China
- Prior art keywords
- sequence
- time
- value
- training
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/049—Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/40—Business processes related to the transportation industry
Abstract
The embodiments of the present application provide a prediction method, an apparatus, an electronic device, and a computer-readable storage medium. The method comprises: processing a stationarized historical time series according to a time series prediction model to obtain a fitted time series and a primary predicted value; calculating the difference between the historical time series and the fitted time series to obtain a first residual sequence; processing the first residual sequence with a memory network model to obtain a secondary predicted value; and determining a final predicted value according to the primary predicted value and the secondary predicted value. The embodiments first use the time series prediction model to obtain the primary predicted value and the residual sequence, then use the memory network model to predict the residual sequence and obtain the secondary predicted value, and finally determine the final predicted value from the two. This optimizes the prediction accuracy of the time series and, compared with the prior art, improves the prediction accuracy of time series such as the network bandwidth utilization.
Description
Technical Field
The present application relates to the field of prediction or optimization technologies, and in particular, to a prediction method, an apparatus, an electronic device, and a computer-readable storage medium.
Background
In recent years, as technology has advanced and people pursue a more intelligent and convenient life, more and more services and applications have moved onto the network, and the traffic carried by network links has grown accordingly.
When the traffic carried by a network link reaches or exceeds the bandwidth provisioned for the user, network congestion or packet loss is likely, affecting the user's working efficiency and experience. There is therefore a need to predict a user's network bandwidth usage. Typically, the user's network bandwidth usage over the past year or longer is analyzed in order to predict the bandwidth usage over the next quarter or half year; if the predicted usage reaches or exceeds a threshold, the terminal device can prompt the user to expand capacity in time.
In the prior art, the network bandwidth usage of a user is usually predicted by regression analysis, and the resulting prediction accuracy is relatively low.
Disclosure of Invention
In view of the above, embodiments of the present application provide a prediction method, an apparatus, an electronic device, and a computer-readable storage medium, so as to solve the problem in the prior art that the accuracy of network bandwidth usage prediction is relatively low.
In a first aspect, an embodiment of the present application provides a prediction method, the method comprising: processing a stationarized historical time series according to a time series prediction model to obtain a fitted time series and a primary predicted value, the fitted time series being a time series formed by the fitted values at the times of at least part of the values of the historical time series; calculating the difference between the historical time series and the fitted time series to obtain a first residual sequence; processing the first residual sequence with a memory network model to obtain a secondary predicted value; and determining a final predicted value according to the primary predicted value and the secondary predicted value.
This approach optimizes the prediction accuracy of the time series and, compared with the prior art, improves the prediction accuracy of time series such as the network bandwidth utilization.
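By way of illustration only, the overall flow can be sketched in Python. Here statsmodels' ARIMA stands in for the time series prediction model, and gru_predict is a placeholder for any trained memory network model; none of these names come from the embodiment itself.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def hybrid_forecast(history, order, gru_predict):
    """history: 1-D series already made stationary (pass order=(p, 0, q) then);
    gru_predict: any trained residual model returning the next residual."""
    history = np.asarray(history, dtype=float)
    result = ARIMA(history, order=order).fit()
    fitted = result.fittedvalues             # fitting time series
    primary = result.forecast(steps=1)[0]    # primary predicted value
    residuals = history - fitted             # first residual sequence
    secondary = gru_predict(residuals)       # secondary predicted value
    return primary + secondary               # final predicted value, before
                                             # any inverse difference operation
```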
In one possible design, processing the stationarized historical time series according to the time series prediction model to obtain a fitted time series and a primary predicted value includes: processing the stationarized historical time series according to an autoregressive integrated moving average (ARIMA) model to obtain the fitted time series and primary predicted value corresponding to the historical time series, the ARIMA model being trained on a training time series. Processing the first residual sequence with a memory network model to obtain a secondary predicted value includes: processing the first residual sequence according to a gated recurrent unit (GRU) model to obtain the secondary predicted value, the GRU model being trained on a preset residual sequence that corresponds to a fitted sequence obtained by processing the training time series with the ARIMA model.
In the embodiment of the application, the trained ARIMA model and the trained GRU model are used in cooperation: the ARIMA model first yields the primary predicted value and the residual sequence, and the GRU model then predicts the residual sequence to yield the secondary predicted value. This optimizes the prediction accuracy of the time series and, compared with the prior art, improves the prediction accuracy of time series such as the network bandwidth utilization.
In one possible design, before the stationarized historical time series is processed according to the autoregressive integrated moving average ARIMA model to obtain the fitted time series and primary predicted value corresponding to the historical time series, the method further includes: performing d difference operations on the original time series to obtain the stationarized historical time series, where d is a parameter of the ARIMA model whose value is the number of difference operations required to make the training time series stationary, d being a positive integer.
In this implementation, the purpose of the difference operations on the original time series is to make it stationary, i.e., to obtain the stationarized historical time series. A stationary time series is more predictable, so predicting it further improves the prediction accuracy of the network bandwidth utilization.
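As a minimal illustration (numpy usage is an assumption; the embodiment does not prescribe an implementation), the d difference operations can be written as:

```python
import numpy as np

def difference(series, d):
    """Apply d difference operations; each pass shortens the series by one."""
    x = np.asarray(series, dtype=float)
    for _ in range(d):
        x = np.diff(x)
    return x
```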
In one possible design, determining the final predicted value according to the primary predicted value and the secondary predicted value includes: summing the primary predicted value and the secondary predicted value to obtain a summed predicted value; and performing an inverse difference operation on the summed predicted value to obtain the final predicted value.
That is, the primary and secondary predicted values are first summed, and the final predicted value is then obtained by applying the inverse difference operation to the summed predicted value.
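A sketch of the inverse difference step for the d = 1 case (an illustrative assumption; for larger d the step is repeated with the last value of each intermediate differenced series):

```python
def inverse_difference(summed_prediction, last_value):
    """Map a one-step prediction made in the once-differenced domain back
    to the original scale by adding the last known original value."""
    return last_value + summed_prediction
```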
In one possible design, the training process of the ARIMA model includes: determining the parameters d, p and q of the ARIMA model from a training time series serving as a training sample, where p is the number of autoregressive terms and q is the number of moving average terms; performing d difference operations on the training time series to obtain a training stationary sequence; substituting the training stationary sequence into the ARIMA model to obtain an expression of a first predicted time series; removing the interference terms from the expression of the first predicted time series to obtain a function composed of the terms carrying parameters to be estimated; determining an expression of the difference between the training stationary sequence and the function; and determining the solution of the parameters to be estimated in the function when the expression of the difference satisfies a preset first constraint condition, and obtaining the ARIMA model from the solution.
In this implementation, the parameters d, p and q of the ARIMA model are determined; d difference operations are performed on the training time series according to d to obtain the training stationary sequence; an expression of the first predicted time series is obtained according to p and q, comprising a constant term and unknown terms carrying parameters to be estimated; all the unknown terms carrying parameters to be estimated form a function; the expression of the difference between the training stationary sequence and the function is computed; and, with that expression satisfying the first constraint condition, the parameters to be estimated are solved, yielding the complete expression of the ARIMA model for the first predicted time series.
In one possible design, the training process of the GRU model includes: calculating the difference between the training stationary sequence, obtained from the training time series by d difference operations, and the predicted time series, obtained by processing the training time series with the ARIMA model, to obtain a second residual sequence; processing the second residual sequence according to an initial GRU model to obtain a second prediction sequence; calculating a loss value between the second prediction sequence and the second residual sequence; and, if the loss value does not satisfy a preset second constraint condition, adjusting the weight and bias parameters of the initial GRU model until the loss value of the adjusted GRU model satisfies the second constraint condition, thereby obtaining the GRU model.
In this implementation, the difference between the training stationary sequence and the predicted time series output by the ARIMA model is calculated first, yielding the second residual sequence; the second residual sequence is input to the GRU model to obtain the second prediction sequence; the loss value between the two is calculated; and if the loss value does not satisfy the second constraint condition, the weight and bias parameters of the GRU model are adjusted until it does. Training the GRU model on the output of the trained ARIMA model lets the trained GRU model cooperate better with the ARIMA model, yielding more accurate predictions.
In one possible design, calculating the loss value between the second prediction sequence and the second residual sequence includes: calculating an error average value from the plurality of training predicted values in the second prediction sequence and the plurality of residual values in the second residual sequence, the error average value being the loss value.
In the above implementation, the loss value may specifically be the error average value; it may also be a value obtained by other calculation methods, and the specific calculation of the loss value should not be construed as limiting the application.
In one possible design, if the loss value does not satisfy the preset second constraint condition, adjusting the weight and bias parameters of the initial GRU model until the loss value of the adjusted GRU model satisfies the second constraint condition, thereby obtaining the GRU model, includes: if the error average value is outside the preset range, adjusting, by gradient descent, the x component W_xz of the update gate weights, the h component W_hz of the update gate weights, the x component W_xr of the reset gate weights, the h component W_hr of the reset gate weights, the r component W_rh of the implicit candidate state weights and the x component W_xh of the implicit candidate state weights in the GRU model, and re-executing the processing of the second residual sequence according to the GRU model to obtain a second prediction sequence; and, once the error average value falls within the preset range, determining the current values of W_xz, W_hz, W_xr, W_hr, W_rh and W_xh as the final parameters of the GRU model.
In this implementation, if the error average value is outside the preset range, W_xz, W_hz, W_xr, W_hr, W_rh and W_xh in the GRU model are adjusted first, and the adjusted GRU model then returns to the step of inputting the second residual sequence to obtain a second prediction sequence. Once the error average value falls within the preset range, the current values of W_xz, W_hz, W_xr, W_hr, W_rh and W_xh are taken as the final parameter values of the GRU model, completing its training.
In one possible design, calculating the error average value from the plurality of training predicted values in the second prediction sequence and the plurality of residual values in the second residual sequence includes: for each of the N training predicted values in the second prediction sequence, calculating the square of the difference between that training predicted value and the residual value at the same time in the second residual sequence; and calculating the average of the N squared differences to obtain the error average value, where N is the number of samples of the second prediction sequence.
Specifically, the error average value E is calculated using the formula E = (1/N) · Σ_{j=1}^{N} (pre_j − x_j)², where N is the number of samples of the second prediction sequence, pre_j is the training predicted value at time j, and x_j is the residual value at time j in the second residual sequence.
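The formula transcribes directly into code; a sketch assuming numpy:

```python
import numpy as np

def error_average(pre, x):
    """E = (1/N) * sum over j of (pre_j - x_j)^2, for N aligned samples."""
    pre, x = np.asarray(pre, dtype=float), np.asarray(x, dtype=float)
    return np.mean((pre - x) ** 2)
```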
In one possible design, processing the second residual sequence according to the initial GRU model to obtain a second prediction sequence includes: calculating the update gate output f_tn at time t_n from the residual value x_tn at time t_n, the implicit state h_{tn-1} at time t_{n-1}, the update gate weight parameters and the update gate bias b_f; calculating the reset gate output r_tn at time t_n from x_tn, h_{tn-1}, the reset gate weight parameters and the reset gate bias b_r; calculating the implicit candidate state h̃_tn at time t_n from r_tn, h_{tn-1}, x_tn, the implicit candidate state weight parameters and the implicit candidate state bias b_h; calculating the implicit state h_tn at time t_n from h_{tn-1}, h̃_tn and f_tn; and calculating the training predicted value pre_{tn+1} for time t_{n+1} from h_tn, the weight W_pre of the fully connected neural network layer and its bias b_pre.
Specifically, each residual value is substituted into:

f_tn = σ(W_xz·x_tn + W_hz·h_{tn-1} + b_f)
r_tn = σ(W_xr·x_tn + W_hr·h_{tn-1} + b_r)
h̃_tn = tanh(W_rh·(r_tn ⊙ h_{tn-1}) + W_xh·x_tn + b_h)
h_tn = (1 − f_tn) ⊙ h_{tn-1} + f_tn ⊙ h̃_tn
pre_{tn+1} = W_pre·h_tn + b_pre

giving, for each residual value x_tn, the training predicted value pre_{tn+1} of the next time, where f_tn is the update gate output at time t_n, W_xz is the x component of the update gate weights, x_tn is the residual value at time t_n, W_hz is the h component of the update gate weights, h_{tn-1} is the implicit state at time t_{n-1}, and b_f is the update gate bias; r_tn is the reset gate output at time t_n, W_xr is the x component of the reset gate weights, W_hr is the h component of the reset gate weights, and b_r is the reset gate bias; h̃_tn is the implicit candidate state at time t_n, W_rh is the r component of the implicit candidate state weights, W_xh is the x component of the implicit candidate state weights, and b_h is the implicit candidate state bias; h_tn is the implicit state at time t_n, pre_{tn+1} is the training predicted value at time t_{n+1}, W_pre is the weight of the fully connected neural network layer, b_pre is its bias, and ⊙ denotes matrix element-wise multiplication.
In this implementation, the training predicted values are obtained through the above set of formulas and then arranged in time order to obtain the second prediction sequence.
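The set of formulas can be sketched as a single GRU step as follows. The weight shapes and the exact way the update gate mixes the old state with the candidate state are assumptions consistent with standard GRUs, since the text only names the quantities involved:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x, h_prev, P):
    """One GRU step. x: residual value at t_n, shape (1,);
    h_prev: implicit state at t_{n-1}, shape (H,); P: parameter dict."""
    f = sigmoid(P["W_xz"] @ x + P["W_hz"] @ h_prev + P["b_f"])   # update gate
    r = sigmoid(P["W_xr"] @ x + P["W_hr"] @ h_prev + P["b_r"])   # reset gate
    h_cand = np.tanh(P["W_rh"] @ (r * h_prev) + P["W_xh"] @ x + P["b_h"])
    h = (1.0 - f) * h_prev + f * h_cand      # implicit state at t_n
    pre = P["W_pre"] @ h + P["b_pre"]        # training prediction for t_{n+1}
    return h, pre
```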
In a second aspect, an embodiment of the present application provides a prediction apparatus, comprising: a primary predicted value obtaining module, configured to process a stationarized historical time series according to a time series prediction model to obtain a fitted time series and a primary predicted value, the fitted time series being a time series formed by the fitted values at the times of at least part of the values of the historical time series; a first residual sequence module, configured to calculate the difference between the historical time series and the fitted time series to obtain a first residual sequence; a secondary predicted value obtaining module, configured to process the first residual sequence with a memory network model to obtain a secondary predicted value; and a final predicted value calculating module, configured to determine a final predicted value according to the primary predicted value and the secondary predicted value.
In one possible design, the primary predicted value obtaining module is specifically configured to process the stationarized historical time series according to an autoregressive integrated moving average (ARIMA) model to obtain the fitted time series and primary predicted value corresponding to the historical time series, the ARIMA model being trained on a training time series; and the secondary predicted value obtaining module is specifically configured to process the first residual sequence according to a gated recurrent unit (GRU) model to obtain the secondary predicted value, the GRU model being trained on a preset residual sequence that corresponds to a fitted sequence obtained by processing the training time series with the ARIMA model.
In a possible design, the apparatus further includes a difference operation module, configured to perform d difference operations on the original time series to obtain the stationarized historical time series, where d is a parameter of the ARIMA model whose value is the number of difference operations performed to make the training time series stationary, d being a positive integer.
In one possible design, the final predicted value calculation module further includes: a summation submodule, configured to sum the primary predicted value and the secondary predicted value to obtain a summed predicted value; and an inverse difference operation submodule, configured to perform an inverse difference operation on the summed predicted value to obtain the final predicted value.
In one possible design, the apparatus further includes: a parameter determining module, configured to determine the parameters d, p and q of the ARIMA model from a training time series serving as a training sample, where p is the number of autoregressive terms and q is the number of moving average terms; a training stationary sequence module, configured to perform d difference operations on the training time series to obtain a training stationary sequence; a prediction expression obtaining module, configured to substitute the training stationary sequence into the ARIMA model to obtain an expression of a first predicted time series; a function composition module, configured to remove the interference terms from the expression of the first predicted time series to obtain a function composed of the terms carrying parameters to be estimated; a difference expression determination module, configured to determine an expression of the difference between the training stationary sequence and the function; and an ARIMA model obtaining module, configured to determine the solution of the parameters to be estimated in the function when the expression of the difference satisfies a preset first constraint condition, and to obtain the ARIMA model from the solution.
In one possible design, the apparatus further includes: a second residual sequence module, configured to calculate the difference between the training stationary sequence, obtained from the training time series by d difference operations, and the predicted time series, obtained by processing the training time series with the ARIMA model, to obtain a second residual sequence; a second prediction sequence module, configured to process the second residual sequence according to the initial GRU model to obtain a second prediction sequence; a loss value calculation module, configured to calculate a loss value between the second prediction sequence and the second residual sequence; and a GRU model obtaining module, configured to adjust the weight and bias parameters of the initial GRU model if the loss value does not satisfy a preset second constraint condition, until the loss value of the adjusted GRU model satisfies the second constraint condition, thereby obtaining the GRU model.
In one possible design, the loss value calculation module is specifically configured to calculate an error average value according to a plurality of training prediction values in the second prediction sequence and a plurality of residual values in the second residual sequence, where the error average value is the loss value.
In one possible design, the GRU model obtaining module is specifically configured to: if the error average value is outside the preset range, use gradient descent to adjust the x component W_xz of the update gate weights, the h component W_hz of the update gate weights, the x component W_xr of the reset gate weights, the h component W_hr of the reset gate weights, the r component W_rh of the implicit candidate state weights and the x component W_xh of the implicit candidate state weights in the GRU model, and re-execute the processing of the second residual sequence according to the GRU model to obtain a second prediction sequence; and, once the error average value falls within the preset range, determine the current values of W_xz, W_hz, W_xr, W_hr, W_rh and W_xh as the final parameters of the GRU model.
In one possible design, the loss value calculation module is specifically configured to calculate, for each of the N training prediction values in the second prediction sequence, a square of a difference between the training prediction value and a residual value in the second residual sequence at the same time; and calculating the average of the squares of the N differences to obtain the error average value, wherein N is the number of samples of the second prediction sequence.
In one possible design, the second prediction sequence module is configured to: calculate the update gate output f_tn at time t_n from the residual value x_tn at time t_n, the implicit state h_{tn-1} at time t_{n-1}, the update gate weight parameters and the update gate bias b_f; calculate the reset gate output r_tn at time t_n from x_tn, h_{tn-1}, the reset gate weight parameters and the reset gate bias b_r; calculate the implicit candidate state h̃_tn at time t_n from r_tn, h_{tn-1}, x_tn, the implicit candidate state weight parameters and the implicit candidate state bias b_h; calculate the implicit state h_tn at time t_n from h_{tn-1}, h̃_tn and f_tn; and calculate the training predicted value pre_{tn+1} for time t_{n+1} from h_tn, the weight W_pre of the fully connected neural network layer and its bias b_pre.
In a third aspect, the present application provides an electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the method of the first aspect or any of the alternative implementations of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method of the first aspect or any of the alternative implementations of the first aspect.
In a fifth aspect, the present application provides a computer program product which, when run on a computer, causes the computer to perform the method of the first aspect or any possible implementation manner of the first aspect.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
For a clearer explanation of the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 illustrates a system flow diagram for prediction;
FIG. 2 is a flow chart illustrating a prediction method provided by an embodiment of the present application;
FIG. 3 is a flow chart diagram illustrating one embodiment of a prediction method provided by an embodiment of the present application;
FIG. 4 shows a flow diagram of a training process for an ARIMA model;
FIG. 5 shows a flow diagram of a training process for a GRU model;
FIG. 6 is a schematic block diagram of a prediction apparatus provided in the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a GRU model;
fig. 9 shows a comparison between predicting a time series with the prediction method provided by the embodiments of the present application and predicting it with the ARIMA model alone.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Before describing the specific embodiments of the present application, a brief description will be given of an application scenario of the present application.
In the prior art, the network bandwidth utilization of the user needs to be predicted; generally, the user's network bandwidth usage over the past year or more is analyzed and processed, and the bandwidth usage over the next quarter or half year is then predicted.
After obtaining the user's network bandwidth usage for the past year or longer, the prior art often processes it only by regression analysis, for example predicting the network bandwidth usage with an Autoregressive Integrated Moving Average (ARIMA) model alone, which is a time series prediction model; the resulting prediction accuracy is relatively low.
In order to solve the above problems, embodiments of the present application provide a method for predicting unknown values in a time series by combining a time series prediction model and a memory network model. It should be understood that the method is not limited to predicting network bandwidth utilization; it may also be applied to other predictions, for example urban traffic flow or website visit volume. For convenience of description, in the embodiments of the present application the time series prediction model is an ARIMA model, the memory network model is a Gated Recurrent Unit (GRU) model, and prediction of the network bandwidth utilization is taken as the example.
Referring to fig. 1, fig. 1 shows a flowchart of the entire prediction system according to an embodiment of the present application, which can be divided into data acquisition, data preprocessing, model training, prediction with the trained models, judging whether the models meet the standard, model optimization, result feedback, and model application. If the models meet the standard, they can be applied directly; if not, model optimization and result feedback are carried out.
The data acquisition process may be as follows: a network traffic collector collects the data traffic of multiple links of each of multiple switches through the Simple Network Management Protocol (SNMP); the data traffic may be collected once per minute.
After data acquisition, a step of data pre-processing may be performed, the data pre-processing comprising the steps of:
Statistical analysis of each of the links shows that some of them have low utilization, so the links can be screened: they are sorted by utilization from high to low and the top 70% are retained. It should be understood that 70% is only an example; other values, such as the top 80% or the top 60%, may also be used.
White noise detection can then be carried out on each of the screened links, and white noise sequences are removed. A white noise sequence is a purely random sequence with zero mean and constant variance; having no pattern, it is of little value for model training.
After the white noise sequences are removed, the traffic data of a link may still contain abnormal points whose values are clearly higher than the values at the previous or next moment; such abnormal points can be smoothed by replacing them with the average of the values at the previous and next moments.
If traffic data is lost, whether for several consecutive days or for scattered days, it can be completed by an averaging method, a median method, a time series prediction method, or the like. For example, for the 90 days of data in a quarter with 10 days lost (perhaps 10 consecutive days, or intermittent losses totalling 10 days), the median of the remaining 80 days of data can be computed and used as the value for each of the 10 lost days. After the daily data traffic over the preset time period is obtained, it may be normalized to obtain the bandwidth usage rate; optionally, the bandwidth usage rate is calculated by dividing the data traffic by the total network bandwidth.
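A minimal sketch of the median completion and normalization steps (pandas usage and the function shape are assumptions for illustration):

```python
import pandas as pd

def preprocess(daily_traffic, total_bandwidth):
    """daily_traffic: one value per day, NaN where data was lost."""
    s = pd.Series(daily_traffic, dtype=float)
    s = s.fillna(s.median())        # complete the lost days with the median
    return s / total_bandwidth      # normalize to a bandwidth usage rate
```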
Since the maximum value of the network bandwidth usage rate for each day in the preset time period is needed, the maximum of the half-hour average usage rates within each day may be taken as that day's maximum usage rate, and the obtained daily maxima may be stored in a database.
After the data preprocessing, the preprocessed data may be divided, one part serving as training samples for the models and the other as test samples. For example, 80% of the preprocessed data may be used as training samples and 20% as test samples. The 80/20 split is only an example; other splits, such as 85/15 or 75/25, are also possible.
The training processes of the ARIMA model and the GRU model are described in detail below. After training, the trained ARIMA model and GRU model can be used for prediction, and whether they meet the standard can be judged with the test samples. If they meet the standard, they can be put into application; if not, they can be optimized, for example by retraining the ARIMA model and GRU model on more training samples, and the optimized models are then tested with the test samples until they meet the standard.
Referring to fig. 4, fig. 4 shows the training process of the ARIMA model in the present application. It may be performed by a computing device, i.e., a device with data processing capability such as a server or a terminal device (for example, a smart mobile device or a computer). The training process of the ARIMA model includes the following steps:
Step S210, the parameters d, p and q of the ARIMA model are determined according to the training time series serving as a training sample.
A time series is a sequence formed by arranging the values of the same statistical indicator in the time order of their occurrence. In this embodiment, the value of the statistical indicator may be the daily maximum of the user's network bandwidth utilization over a preset time period. The preset time period may be a longer span, such as a quarter or half a year. The training time series may be the daily maxima of network bandwidth usage over a certain preset time period, arranged in time order.
The ARIMA model has three parameters, d, p and q. The value of d is the number of difference operations performed to make the time series stationary, i.e., the series becomes stationary after d difference operations; d is 0 or a positive integer, and d = 0 means the series is stationary without any difference operation. p is the number of autoregressive terms of the ARIMA model, and q is the number of moving average terms.
The calculation of the parameter d is explained below:
if the training time series are a1, a2, a3, a4, a5, a6 and a7, when the parameter d of the ARIMA model is calculated, stability check can be performed on the training time series (a1, a2, a3, a4, a5, a6 and a7) to determine whether the training time series is stable or not.
If the training time series is stationary, no difference operation is needed to make it stationary, i.e., the number of difference operations required is 0, and d takes the value 0.
If the training time series is not stationary, a first difference operation is performed on it, giving (a2-a1), (a3-a2), (a4-a3), (a5-a4), (a6-a5) and (a7-a6). Let b1 = a2-a1, b2 = a3-a2, b3 = a4-a3, b4 = a5-a4, b5 = a6-a5, b6 = a7-a6; one difference operation on the training time series thus yields b1, b2, b3, b4, b5, b6. A stationarity test is then performed on (b1, b2, b3, b4, b5, b6) to judge whether it is stationary.
If (b1, b2, b3, b4, b5, b6) is stationary, the training time series becomes stationary after one difference operation, i.e., the number of difference operations required is 1, and d takes the value 1.
If (b1, b2, b3, b4, b5, b6) is not stationary, a second difference operation is performed on it, giving (b2-b1), (b3-b2), (b4-b3), (b5-b4) and (b6-b5). Let c_t1 = b2-b1, c_t2 = b3-b2, c_t3 = b4-b3, c_t4 = b5-b4, c_t5 = b6-b5; two difference operations on the training time series thus yield c_t1, c_t2, c_t3, c_t4, c_t5. A stationarity test is then performed on (c_t1, c_t2, c_t3, c_t4, c_t5) to judge whether it is stationary.
If (c_t1, c_t2, c_t3, c_t4, c_t5) is stationary, the training time series becomes stationary after two difference operations, i.e., the number of difference operations required is 2, and d takes the value 2.
……
The stationarity test may be performed by means of a unit root test, which verifies whether a unit root exists in the time series under test: if a unit root exists, the series is non-stationary; if not, it is stationary. It will be appreciated that, instead of the unit root test, stationarity may also be judged from a timing diagram; for example, if the plot of the time series shows a clear trend or periodicity, the series is non-stationary.
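A sketch of determining d by repeated unit root testing, assuming statsmodels' adfuller and a conventional p-value threshold of 0.05 (the embodiment does not fix a criterion):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def find_d(series, max_d=5, alpha=0.05):
    """Return the number of difference operations needed for stationarity."""
    x = np.asarray(series, dtype=float)
    for d in range(max_d + 1):
        if adfuller(x)[1] < alpha:   # element 1 of the result is the p-value
            return d
        x = np.diff(x)               # one more difference operation
    raise ValueError("series not stationary within max_d differences")
```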
The calculation process of the parameters p, q is explained below:
p may be determined by the Partial Autocorrelation Function (PACF), and q by the Autocorrelation Function (ACF): p is the order at which the partial autocorrelation coefficients of the training time series cut off, and q is the order at which its autocorrelation coefficients cut off. It is understood that p and q can also be determined in other ways; for example, both may be determined by the Akaike information criterion (AIC) or by the Bayesian information criterion (BIC). The specific manner of determining p and q should not be construed as limiting the application.
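A sketch of reading p and q off the PACF and ACF, assuming statsmodels and a simple cut-off rule based on the approximate 95% confidence band (the embodiment leaves the cut-off judgment open):

```python
import numpy as np
from statsmodels.tsa.stattools import acf, pacf

def cutoff_order(coeffs, threshold):
    """Largest lag whose coefficient still exceeds the threshold."""
    big = [k for k in range(1, len(coeffs)) if abs(coeffs[k]) > threshold]
    return max(big) if big else 0

def find_p_q(stationary_series, nlags=20):
    threshold = 1.96 / np.sqrt(len(stationary_series))  # approx. 95% band
    p = cutoff_order(pacf(stationary_series, nlags=nlags), threshold)
    q = cutoff_order(acf(stationary_series, nlags=nlags), threshold)
    return p, q
```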
And step S220, performing d difference operations on the training time series to obtain a training stationary sequence.
Suppose d = 2; then, continuing the above example, c_t1, c_t2, c_t3, c_t4, c_t5 is the training stationary sequence obtained from the training time series a1, a2, a3, a4, a5, a6, a7 after two difference operations.
And step S230, substituting the training stationary sequence into an ARIMA model to obtain an expression of the first prediction time sequence.
Based on p and q, an expression of the first predicted time series for the training stationary sequence is determined; it has the form

x_t' = φ_0 + φ_1·x_{t-1} + φ_2·x_{t-2} + … + φ_p·x_{t-p} + ε_t − θ_1·ε_{t-1} − θ_2·ε_{t-2} − … − θ_q·ε_{t-q}

where x_t' is the predicted value at time t of the training stationary sequence, φ_0 is a constant term, φ_i (i = 1, 2, …, p) is the weight of the actual value x_{t-i}, x_{t-i} (i = 1, 2, …, p) is the actual value at time (t−i) of the training stationary sequence, θ_i (i = 1, 2, …, q) is the weight of the random interference ε_{t-i}, and ε_{t-i} (i = 1, 2, …, q) is the random interference at time (t−i). The random interference is calculated as follows: the average of all values in the training stationary sequence is computed, and the difference between the value at a given time and that average gives the random interference at that time.
For convenience of description, the training process of the ARIMA model is described below taking p = 2 and q = 2 as an example; the expression of the first predicted time series is then:

x_t' = φ_0 + φ_1·x_{t-1} + φ_2·x_{t-2} + ε_t − θ_1·ε_{t-1} − θ_2·ε_{t-2}
Step S240, the interference terms are removed from the expression of the first predicted time series to obtain a function composed of the terms carrying parameters to be estimated.
The interference terms are the constant term φ_0 and ε_t in the expression x_t' = φ_0 + φ_1·x_{t-1} + φ_2·x_{t-2} + ε_t − θ_1·ε_{t-1} − θ_2·ε_{t-2}. After φ_0 and ε_t are removed, the remaining terms carrying parameters to be estimated form the function F_t(β) = φ_1·x_{t-1} + φ_2·x_{t-2} − θ_1·ε_{t-1} − θ_2·ε_{t-2}, where β denotes all the parameters to be estimated, φ_i and θ_i.
Step S250, determining an expression of a difference between the training stationary sequence and the function.
Subtracting the function from the actual value of the training stationary sequence at time t gives the expression of the residual term of the training stationary sequence: ξ_t = x_t − F_t(β); this is the expression of the above difference, where x_t is the actual value of the training stationary sequence at time t (for example, if t = t1, then x_t = c_t1).
And step S260, determining the solution of the parameters to be estimated in the function when the expression of the difference value meets a preset first constraint condition, and obtaining the ARIMA model according to the solution.
After the expression of the residual term of the training stationary sequence is obtained, the residual sum of squares is formed:

Q(β) = Σ_t ξ_t² = Σ_t (x_t − F_t(β))²

The first constraint condition may refer to forming this residual sum of squares and then using an iterative algorithm to find its minimum. The values of the parameters to be estimated φ_1, φ_2, θ_1 and θ_2 in the expression x_t' = φ_0 + φ_1·x_{t-1} + φ_2·x_{t-2} + ε_t − θ_1·ε_{t-1} − θ_2·ε_{t-2} at the minimum of the residual sum of squares can then be obtained, giving the determined expression of the first predicted time series of the ARIMA model.

Alternatively, gradient descent may be used to solve for the φ_i and θ_i that minimize Q(β); for i = 1 this yields the parameters to be estimated φ_1 and θ_1, and for i = 2 the parameters φ_2 and θ_2.
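A sketch of this estimation step for p = q = 2, with scipy's minimize standing in for the iterative algorithm (an assumption); eps holds the precomputed random interference values, i.e., each value's deviation from the series mean:

```python
import numpy as np
from scipy.optimize import minimize

def fit_arma_2_2(x, eps):
    """x: training stationary sequence; eps: random interference per time."""
    x, eps = np.asarray(x, float), np.asarray(eps, float)
    def Q(beta):                         # residual sum of squares Q(beta)
        phi1, phi2, theta1, theta2 = beta
        F = (phi1 * x[1:-1] + phi2 * x[:-2]
             - theta1 * eps[1:-1] - theta2 * eps[:-2])
        return np.sum((x[2:] - F) ** 2)
    return minimize(Q, np.zeros(4)).x    # estimates of phi1, phi2, theta1, theta2
```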
To summarize: the parameters d, p and q of the ARIMA model are first determined; d difference operations are performed on the training time series according to d, giving the training stationary sequence; the expression of the first predicted time series is then obtained according to p and q, comprising a constant term and unknown terms carrying parameters to be estimated; all the unknown terms carrying parameters to be estimated form a function; the expression of the difference between the training stationary sequence and the function is computed; and, with that expression satisfying the first constraint condition, the parameters to be estimated are solved, yielding the complete expression of the ARIMA model for the first predicted time series.
Referring to fig. 5, fig. 5 shows a training process of a GRU model in the present application, where the GRU model is a memory network model, it is understood that the training process may also be performed by a computing device, and the training process of the GRU model includes the following steps:
step S310, calculating the difference between the training stable sequence of the training time sequence after d times of differential operation and the prediction time sequence of the training time sequence after ARIMA model processing, and obtaining a second residual sequence.
After the expression x_t' = φ_0 + φ_1·x_{t-1} + φ_2·x_{t-2} + ε_t − θ_1·ε_{t-1} − θ_2·ε_{t-2} of the ARIMA model is determined, the training stationary sequence c_t1, c_t2, c_t3, c_t4, c_t5 is substituted into it to obtain the predicted time series c_t3', c_t4', c_t5', where

c_t3' = φ_0 + φ_1·c_t2 + φ_2·c_t1 + ε_t3 − θ_1·ε_t2 − θ_2·ε_t1;
c_t4' = φ_0 + φ_1·c_t3 + φ_2·c_t2 + ε_t4 − θ_1·ε_t3 − θ_2·ε_t2;
c_t5' = φ_0 + φ_1·c_t4 + φ_2·c_t3 + ε_t5 − θ_1·ε_t4 − θ_2·ε_t3.

The differences between the training stationary sequence c_t3, c_t4, c_t5 and the predicted time series c_t3', c_t4', c_t5' are then calculated:

e_t0 = c_t3 − c_t3';  e_t1 = c_t4 − c_t4';  e_t2 = c_t5 − c_t5'.

The second residual sequence is e_t0, e_t1, e_t2.
Step S320, the second residual sequence is processed according to the initial GRU model to obtain a second prediction sequence.
For the residual value at each time in the second residual sequence:

the update gate output f_tn at time t_n may be calculated from the residual value x_tn at time t_n, the implicit state h_{tn-1} at time t_{n-1}, the update gate weight parameters and the update gate bias b_f;

the reset gate output r_tn at time t_n is calculated from x_tn, h_{tn-1}, the reset gate weight parameters and the reset gate bias b_r;

the implicit candidate state h̃_tn at time t_n is calculated from r_tn, h_{tn-1}, x_tn, the implicit candidate state weight parameters and the implicit candidate state bias b_h;

the implicit state h_tn at time t_n is calculated from h_{tn-1}, h̃_tn and f_tn;

and the training predicted value pre_{tn+1} for time t_{n+1} is calculated from h_tn, the weight W_pre of the fully connected neural network layer and its bias b_pre.

Specifically, each residual value in the second residual sequence is substituted into the following formulas of the GRU model:

f_tn = σ(W_xz·x_tn + W_hz·h_{tn-1} + b_f)
r_tn = σ(W_xr·x_tn + W_hr·h_{tn-1} + b_r)
h̃_tn = tanh(W_rh·(r_tn ⊙ h_{tn-1}) + W_xh·x_tn + b_h)
h_tn = (1 − f_tn) ⊙ h_{tn-1} + f_tn ⊙ h̃_tn
pre_{tn+1} = W_pre·h_tn + b_pre

which yields, for each residual value x_tn, the training predicted value pre_{tn+1} for the next time. Referring to fig. 8, the GRU model comprises a reset gate and an update gate, which together control the memory state of the GRU model.

Here f_tn is the update gate output at time t_n and σ is the sigmoid function; W_xz is the x component of the update gate weights, x_tn is the residual value at time t_n, W_hz is the h component of the update gate weights, h_{tn-1} is the implicit state at time t_{n-1}, and b_f is the update gate bias. r_tn is the reset gate output at time t_n, W_xr and W_hr are the x and h components of the reset gate weights, and b_r is the reset gate bias. tanh is the hyperbolic tangent function, h̃_tn is the implicit candidate state at time t_n, W_rh and W_xh are the r and x components of the implicit candidate state weights, and b_h is the implicit candidate state bias. h_tn is the implicit state at time t_n, pre_{tn+1} is the training predicted value at time t_{n+1}, W_pre is the weight of the fully connected neural network layer, b_pre is its bias, and ⊙ denotes matrix element-wise multiplication.

For e_t0, the above formulas yield the training predicted value pre_t1; for e_t1, they yield the training predicted value pre_t2. Arranging the training predicted values in time order gives the second prediction sequence pre_t1, pre_t2.
Step S330, calculating a loss value between the second prediction sequence and the second residual sequence.
An error average value, which serves as the loss value, may be calculated from the training predicted values in the second prediction sequence and the residual values in the second residual sequence: for each of the N training predicted values in the second prediction sequence, the square of the difference between that predicted value and the residual value at the same time in the second residual sequence is calculated, and the average of the N squared differences gives the error average value, where N is the number of samples of the second prediction sequence.
Specifically, the error average value E is calculated using the formula

E = (1/N) · Σ_{j=1}^{N} (pre_j − x_j)²

where N is the number of samples of the second prediction sequence (2 in the above example), pre_j is the training predicted value at time j, and x_j is the residual value at time j in the second residual sequence.
Step S340, if the loss value does not meet the preset second constraint condition, adjusting the weight parameter and the bias parameter of the initial GRU model until the loss value corresponding to the adjusted GRU model meets the second constraint condition, and obtaining the GRU model.
The second constraint condition is to judge whether the error average value is within a preset range. If it is outside the range, gradient descent is used to adjust W_xz, W_hz, W_xr, W_hr, W_rh and W_xh of the current GRU model, and the process returns to step S320; once the error average value falls within the preset range, the current values of W_xz, W_hz, W_xr, W_hr, W_rh and W_xh are determined as the final parameters of the GRU model.
The difference between the training stationary sequence and the predicted time series output by the ARIMA model is thus calculated first, yielding the second residual sequence; the second residual sequence is input to the GRU model to obtain the second prediction sequence; the loss value between the two is calculated; and if the loss value does not satisfy the second constraint condition, the weight and bias parameters of the GRU model are adjusted until it does. Because the output of the trained ARIMA model serves as the input for training the GRU model, the trained GRU model cooperates better with the ARIMA model, yielding more accurate predictions.
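For orientation, a training loop of this shape can be sketched with a standard GRU implementation. torch.nn.GRU stands in for the cell equations above, and the layer size, learning rate, epoch count and stopping threshold are all illustrative assumptions:

```python
import torch
import torch.nn as nn

class ResidualGRU(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.pre = nn.Linear(hidden, 1)   # fully connected prediction layer

    def forward(self, seq):               # seq: (batch, time, 1)
        out, _ = self.gru(seq)
        return self.pre(out)              # prediction for the next step

def train(model, residuals, epochs=200, lr=1e-2, tol=1e-4):
    x = torch.tensor(residuals[:-1], dtype=torch.float32).view(1, -1, 1)
    y = torch.tensor(residuals[1:], dtype=torch.float32).view(1, -1, 1)
    opt = torch.optim.SGD(model.parameters(), lr=lr)   # gradient descent
    loss_fn = nn.MSELoss()                # the error average value E
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        if loss.item() < tol:             # second constraint condition met
            break
        loss.backward()
        opt.step()
    return model
```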
Referring to fig. 2, fig. 2 shows a prediction method provided by an embodiment of the present application, which includes the following steps:
And step S10, processing the stationarized historical time sequence according to the time series prediction model to obtain a fitting time sequence and a primary predicted value.
And step S20, calculating the difference between the historical time sequence and the fitting time sequence to obtain a first residual sequence.
And step S30, processing the first residual sequence by using a memory network model to obtain a secondary predicted value.
And step S40, determining a final predicted value according to the primary predicted value and the secondary predicted value.
The time series prediction model is a model capable of predicting a time series; for example, it may be an ARIMA model. A stationarized historical time series is one that is stationary: if the original time series is not stationary, it can be stationarized by preprocessing, namely by performing d difference operations on the original time series, to obtain the stationarized historical time series.
The original time series may be the daily maxima of the network bandwidth usage rate over a certain time period, arranged in time order. For example, given a user's network bandwidth utilization for 5 consecutive days, the task is to predict the utilization for days 6 to 8; the 5 days of utilization form the original time series. The historical time series is the stationary series obtained by performing d difference operations on the original time series; if the original series is already stationary, d = 0, i.e., no difference operation is needed. The fitted time series is a time series formed by the fitted values at the times of at least part of the values of the historical time series. The memory network model may be a GRU model.
According to the prediction method provided by the embodiment of the application, the time series prediction model and the memory network model are used in cooperation: the time series prediction model is first used to obtain the primary predicted value and the residual sequence, the memory network model is then used to predict the residual sequence and obtain the secondary predicted value, and the final predicted value is determined according to the primary predicted value and the secondary predicted value.
Next, referring to fig. 3, the prediction method provided in the embodiment of the present application is described by taking the time series prediction model as an ARIMA model and the memory network model as a GRU model as an example. The method includes the following steps, which may be executed by a computing device:
Step S110: processing the stabilized historical time sequence according to an ARIMA model to obtain a fitting time sequence and a primary predicted value corresponding to the historical time sequence.
The smoothed historical time series may be obtained by d differential operations on the original time series.
The expression of the trained ARIMA model is:

$$x_t' = \phi_0 + \phi_1 x_{t-1} + \phi_2 x_{t-2} + \dots + \phi_p x_{t-p} + \varepsilon_t - \theta_1 \varepsilon_{t-1} - \theta_2 \varepsilon_{t-2} - \dots - \theta_q \varepsilon_{t-q}$$

As can be seen from this expression, based on the values $x_{t-p}, \dots, x_{t-1}$ of the historical time sequence from time $(t-p)$ to time $(t-1)$, and the random interference terms $\varepsilon_{t-q}, \dots, \varepsilon_{t-1}$ of the historical time sequence from time $(t-q)$ to time $(t-1)$, the predicted value $x_t'$ at time $t$ can be obtained.
Optionally, the random interference is calculated as follows: the average of all values in the historical time sequence is computed, and the random interference at a given moment is the difference between the value of the historical time sequence at that moment and the average.
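In sketch form (a minimal illustration; the numpy dependency and the array values are assumptions, not part of the patent):

```python
import numpy as np

z = np.array([3.1, 3.4, 3.3, 3.6, 3.5])  # historical time sequence
eps = z - z.mean()                        # random interference at each moment
```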
If the historical time sequence has a known value at time $t$, the predicted value $x_t'$ belongs to the fitting time sequence; if the historical time sequence has no known value at time $t$, the predicted value $x_t'$ belongs to the primary predicted value.
For convenience of description, the prediction method provided in the embodiments of the present application is further described below by taking d = 0, p = 2 and q = 2 as an example:
Since d = 0, the original time sequence is stabilized by performing 0 difference operations; in this case the historical time sequence is identical to the original time sequence. Substituting the original time sequence (i.e., the historical time sequence) $z_{t1}, z_{t2}, z_{t3}, z_{t4}, z_{t5}$ into the formula

$$x_t' = \phi_0 + \phi_1 x_{t-1} + \phi_2 x_{t-2} + \varepsilon_t - \theta_1 \varepsilon_{t-1} - \theta_2 \varepsilon_{t-2}$$

yields:

$$z_{t3}' = \phi_0 + \phi_1 z_{t2} + \phi_2 z_{t1} + \varepsilon_{t3} - \theta_1 \varepsilon_{t2} - \theta_2 \varepsilon_{t1};$$
$$z_{t4}' = \phi_0 + \phi_1 z_{t3} + \phi_2 z_{t2} + \varepsilon_{t4} - \theta_1 \varepsilon_{t3} - \theta_2 \varepsilon_{t2};$$
$$\dots$$
$$z_{t6}' = \phi_0 + \phi_1 z_{t5} + \phi_2 z_{t4} + \varepsilon_{t6} - \theta_1 \varepsilon_{t5} - \theta_2 \varepsilon_{t4};$$
$$z_{t7}' = \phi_0 + \phi_1 z_{t6}' + \phi_2 z_{t5} + \varepsilon_{t7} - \theta_1 \varepsilon_{t6} - \theta_2 \varepsilon_{t5};$$
$$z_{t8}' = \phi_0 + \phi_1 z_{t7}' + \phi_2 z_{t6}' + \varepsilon_{t8} - \theta_1 \varepsilon_{t7} - \theta_2 \varepsilon_{t6}.$$

Since the historical time sequence has known values $z_{t3}, z_{t4}, z_{t5}$ at times t3, t4 and t5, the values $z_{t3}', z_{t4}', z_{t5}'$ obtained from the above formulas belong to the fitting time sequence; since the historical time sequence has no known values at times t6, t7 and t8, the values $z_{t6}', z_{t7}', z_{t8}'$ belong to the primary predicted value.
When predicting the primary predicted value at a specific time, if the actual value corresponding to that time exists in the historical time sequence, the actual value in the historical time sequence is used for prediction; if the actual value corresponding to that time does not exist in the historical time sequence, the previously predicted primary predicted value is used instead. For example, predicting $z_{t6}'$ requires $z_{t4}$ and $z_{t5}$, both of which exist in the historical time sequence and can therefore be used directly. Predicting $z_{t7}'$ requires $z_{t5}$ and $z_{t6}$; only $z_{t5}$ exists in the historical time sequence, so the previously predicted $z_{t6}'$ is used together with $z_{t5}$ to predict $z_{t7}'$.
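To make the recursion concrete, the following is a minimal Python sketch of this fit-then-forecast loop for p = 2 and q = 2. It is an illustration only: the coefficient values, the estimation of the random interference as deviation from the mean (as described above, with unknown future interference taken as 0), and all names are assumptions rather than the patent's reference implementation.

```python
import numpy as np

def arma_22_forecast(z, phi0, phi, theta, n_ahead):
    """Recursive ARMA(2,2)-style prediction, following the z_t3'..z_t8' example.

    z:       historical (already stationary) series, e.g. [z_t1..z_t5]
    phi:     (phi1, phi2) autoregressive coefficients
    theta:   (theta1, theta2) moving-average coefficients
    n_ahead: number of future points to predict
    """
    z = list(map(float, z))
    eps = [v - np.mean(z) for v in z]      # random interference: value minus series mean
    vals, errs = z[:], eps[:]
    fitted, predicted = [], []
    for t in range(2, len(z) + n_ahead):
        # interference at future moments is unknown, so it is taken as 0 here
        e_t = errs[t] if t < len(errs) else 0.0
        x = (phi0 + phi[0] * vals[t - 1] + phi[1] * vals[t - 2]
             + e_t - theta[0] * errs[t - 1] - theta[1] * errs[t - 2])
        if t < len(z):
            fitted.append(x)               # known value exists: x belongs to the fitting sequence
        else:
            predicted.append(x)            # no known value: x is a primary predicted value
            vals.append(x)                 # feed the prediction back, as with z_t6' -> z_t7'
            errs.append(0.0)
    return fitted, predicted

fitted, primary = arma_22_forecast([3.1, 3.4, 3.3, 3.6, 3.5],
                                   phi0=0.1, phi=(0.6, 0.2), theta=(0.3, 0.1), n_ahead=3)
```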
In one embodiment, if d is not 0, e.g., d = 1, one difference operation is performed on the original time sequence $z_{t1}, z_{t2}, z_{t3}, z_{t4}, z_{t5}$ to obtain the historical time sequence $(z_{t2}-z_{t1}), (z_{t3}-z_{t2}), (z_{t4}-z_{t3}), (z_{t5}-z_{t4})$, which is then substituted into the expression of the ARIMA model; the calculation after substitution corresponds to the content above and is not repeated here.
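As an illustration of the d-th order differencing step (the numpy call and all names are assumptions of this sketch, not part of the patent):

```python
import numpy as np

original = np.array([3.1, 3.4, 3.3, 3.6, 3.5])  # z_t1..z_t5

d = 1
historical = np.diff(original, n=d)  # (z_t2-z_t1), ..., (z_t5-z_t4): the stabilized sequence
```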
Step S120, calculating the difference between the historical time sequence and the fitting time sequence to obtain a first residual sequence.
The sequence obtained after the d difference operations is the historical time sequence. The value of the historical time sequence at a given moment and the value of the fitting time sequence at the same moment are obtained and subtracted, and the resulting differences, arranged in chronological order, form the first residual sequence.
Continuing with the example where d = 0, p = 2 and q = 2: subtracting the fitted values $z_{t3}', z_{t4}', z_{t5}'$ of the fitting time sequence from the corresponding values $z_{t3}, z_{t4}, z_{t5}$ of the historical time sequence $z_{t1}, z_{t2}, z_{t3}, z_{t4}, z_{t5}$ gives the first residual sequence $z_{t3}'', z_{t4}'', z_{t5}''$, where $z_{t3}'' = z_{t3} - z_{t3}'$, $z_{t4}'' = z_{t4} - z_{t4}'$ and $z_{t5}'' = z_{t5} - z_{t5}'$.
Step S130: processing the first residual sequence according to the GRU model to obtain a secondary predicted value.
For ease of description, the first residual sequence is re-encoded: $z_{t3}'', z_{t4}'', z_{t5}''$ is rewritten as $v_{t3}, v_{t4}, v_{t5}$. Each residual value is then substituted into the GRU recurrence (detailed in the training process below), which, from the residual value $v_{tn}$ at time $t_n$ and the implicit state $h_{tn-1}$ at time $t_{n-1}$, computes the implicit state $h_{tn}$ and, through the fully-connected layer, the secondary predicted value $\hat v_{tn+1}$ for the following time.

Substituting $v_{t3}$ yields the implicit state $h_{t3}$ and the prediction $\hat v_{t4}$, where the initial implicit state $h_{t2}$ is a random value or 0.

Substituting $v_{t4}$ yields $h_{t4}$ and $\hat v_{t5}$; substituting $v_{t5}$ yields $h_{t5}$ and $\hat v_{t6}$.

When predicting $\hat v_{t7}$, the previously predicted $\hat v_{t6}$ is substituted into the recurrence; $\hat v_{t8}$ is predicted in the same way from $\hat v_{t7}$, so the process is not repeated here.

In the process of calculating the secondary predicted value, when the secondary predicted value at a specific moment is predicted, if the real value corresponding to that moment exists in the first residual sequence, the real value in the first residual sequence is used for prediction; if it does not exist, the previously predicted secondary predicted value is used. For example, predicting $\hat v_{t6}$ requires $v_{t5}$, which exists in the first residual sequence and can therefore be used directly; predicting $\hat v_{t7}$ requires $v_{t6}$, which does not exist in the first residual sequence, so the previously predicted $\hat v_{t6}$ is used to predict $\hat v_{t7}$.

Through this iterative operation, the secondary predicted values $\hat v_{t6}, \hat v_{t7}, \hat v_{t8}$ are obtained.
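To make the recurrence concrete, here is a minimal single-unit GRU sketch in Python, unrolled over the residual sequence and fed back on its own predictions for future steps. This is an illustration under assumptions: the weight values are placeholders, the helper names are invented, and the state-combination line uses the standard GRU form; a trained model would use the parameters learned by the gradient descent procedure described below.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(v, h_prev, p):
    """One GRU step: update gate, reset gate, candidate state, new implicit state."""
    z = sigmoid(p["Wxz"] * v + p["Whz"] * h_prev + p["bf"])             # update gate output
    r = sigmoid(p["Wxr"] * v + p["Whr"] * h_prev + p["br"])             # reset gate output
    h_cand = np.tanh(p["Wxh"] * v + p["Wrh"] * (r * h_prev) + p["bh"])  # implicit candidate state
    h = (1.0 - z) * h_prev + z * h_cand                                 # implicit state (standard combination)
    return h

def gru_forecast(residuals, p, n_ahead):
    h = 0.0                                   # initial implicit state: a random value or 0
    for v in residuals:                       # v_t3, v_t4, v_t5: real residual values
        h = gru_step(v, h, p)
    preds = []
    v = p["Wpre"] * h + p["bpre"]             # fully-connected layer gives the first prediction
    for _ in range(n_ahead):
        preds.append(v)
        h = gru_step(v, h, p)                 # feed the prediction back for the next step
        v = p["Wpre"] * h + p["bpre"]
    return preds

params = dict(Wxz=0.5, Whz=0.4, bf=0.0, Wxr=0.5, Whr=0.4, br=0.0,
              Wxh=0.8, Wrh=0.6, bh=0.0, Wpre=1.0, bpre=0.0)
secondary = gru_forecast([0.02, -0.01, 0.03], params, n_ahead=3)
```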
Step S140: determining a final predicted value according to the primary predicted value and the secondary predicted value.
The primary predicted value and the secondary predicted value are summed to obtain the final predicted value.
The summation operation may be an operation of directly adding the primary predicted value and the secondary predicted value; in one embodiment, the primary predicted value and the weight corresponding to the primary predicted value are multiplied to obtain a product, the secondary predicted value and the weight corresponding to the secondary predicted value are multiplied to obtain another product, and then the two products are added.
In the embodiment of the present application, since d = 0, the sum of the primary predicted value and the secondary predicted value at the same time can be calculated and used directly as the final predicted value. For example, the final predicted values of the original time sequence $z_{t1}, z_{t2}, z_{t3}, z_{t4}, z_{t5}$ are $z_{t6}''', z_{t7}''', z_{t8}'''$.
In one embodiment, if d is not 0, after the sum prediction value of the historical time series is calculated, the sum prediction value is subjected to inverse difference operation, so as to obtain a final prediction value.
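A minimal sketch of this combination step (array names and values are illustrative assumptions):

```python
import numpy as np

primary = np.array([3.55, 3.58, 3.60])     # z_t6', z_t7', z_t8' from the ARIMA model
secondary = np.array([0.02, -0.01, 0.01])  # predicted residuals from the GRU model

summed = primary + secondary               # d = 0: this already is the final predicted value
# a weighted variant would instead use w1 * primary + w2 * secondary

# If d = 1, the predictions live in the differenced domain and must be
# inverse-differenced, i.e. cumulatively added back onto the last known value.
last_known = 3.5                           # z_t5 of the original sequence
final = last_known + np.cumsum(summed)
```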
In the implementation process, a fitting time sequence and a primary predicted value corresponding to the original time sequence are obtained according to the trained ARIMA model. And calculating the difference between the historical time sequence obtained by the d-time difference operation of the original time sequence and the fitting time sequence to obtain a first residual sequence formed by arranging the differences according to the time sequence. The method comprises the steps of inputting a first residual sequence into a GRU model to obtain a secondary predicted value, and then determining a final predicted value of an original time sequence according to the primary predicted value and the secondary predicted value.
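Putting the pieces together, a hypothetical end-to-end run of the scheme for the d = 0 example, reusing the sketch functions defined above (all coefficient values are placeholders):

```python
import numpy as np

historical = [3.1, 3.4, 3.3, 3.6, 3.5]                    # z_t1..z_t5, d = 0

fitted, primary = arma_22_forecast(historical, phi0=0.1,
                                   phi=(0.6, 0.2), theta=(0.3, 0.1), n_ahead=3)
residuals = np.array(historical[2:]) - np.array(fitted)   # first residual sequence z_t3''..z_t5''
secondary = gru_forecast(list(residuals), params, n_ahead=3)
final = np.array(primary) + np.array(secondary)           # d = 0: the sum is the final value
```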
Referring to fig. 6, fig. 6 shows a schematic structural block diagram of a prediction apparatus provided in the present application, it should be understood that the apparatus 400 corresponds to the method embodiments of fig. 3 to 5, and is capable of performing the steps related to the method embodiments, and the specific functions of the apparatus 400 may be referred to the description above, and a detailed description is appropriately omitted herein to avoid redundancy. The device 400 includes at least one software functional module that can be stored in a memory in the form of software or firmware (firmware) or solidified in an Operating System (OS) of the device 400. Specifically, the apparatus 400 includes:
A primary predicted value obtaining module 410, configured to process the stabilized historical time sequence according to a time series prediction model to obtain a fitting time sequence and a primary predicted value, where the fitting time sequence is a time sequence formed by the fitted values at the moments of at least a part of the values of the historical time sequence.
A first residual sequence module 420, configured to calculate the difference between the historical time sequence and the fitting time sequence to obtain a first residual sequence.
A secondary predicted value obtaining module 430, configured to process the first residual sequence by using a memory network model to obtain a secondary predicted value.
A final predicted value calculating module 440, configured to determine a final predicted value according to the primary predicted value and the secondary predicted value.
The primary predicted value obtaining module 410 is specifically configured to process the smoothed historical time sequence according to an autoregressive integrated moving average ARIMA model, and obtain a fitting time sequence corresponding to the historical time sequence and a primary predicted value, where the ARIMA model is obtained by training according to a training time sequence.
The secondary predicted value obtaining module 430 is specifically configured to process the first residual sequence according to a gated recurrent unit GRU model to obtain a secondary predicted value, where the GRU model is obtained by training according to a preset residual sequence, and the preset residual sequence corresponds to a fitting sequence obtained by processing the training time sequence with the ARIMA model.
The final predicted value calculation module 440 includes: an addition submodule, configured to sum the primary predicted value and the secondary predicted value to obtain an added predicted value; and an inverse difference operation submodule, configured to perform an inverse difference operation on the added predicted value to obtain the final predicted value.
The device further comprises:
A difference operation module, configured to perform d difference operations on the original time sequence to obtain the stabilized historical time sequence, where d is a parameter of the ARIMA model whose value is the number of difference operations performed to stabilize the training time sequence, and d is a positive integer.
A parameter determining module, configured to determine the parameters d, p and q of the ARIMA model according to the training time sequence serving as the training sample, where p is the number of autoregressive terms and q is the number of moving average terms.
A training stable sequence module, configured to perform d difference operations on the training time sequence to obtain a training stable sequence.
A prediction expression obtaining module, configured to substitute the training stable sequence into the ARIMA model to obtain an expression of a first prediction time sequence.
A function composition module, configured to remove the interference terms from the expression of the first prediction time sequence to obtain a function composed of the data quantities with the parameters to be estimated.
A difference expression determination module, configured to determine an expression for the difference between the training stable sequence and the function.
An ARIMA model obtaining module, configured to determine the solution of the parameters to be estimated in the function when the expression of the difference satisfies a preset first constraint condition, and to obtain the ARIMA model according to the solution.
A second residual sequence module, configured to calculate the difference between the training stable sequence obtained from the training time sequence by d difference operations and the prediction time sequence output for the training time sequence by the ARIMA model, to obtain a second residual sequence.
A second prediction sequence module, configured to process the second residual sequence according to the initial GRU model to obtain a second prediction sequence.
A loss value calculation module, configured to calculate a loss value between the second prediction sequence and the second residual sequence.
The loss value calculation module is specifically configured to calculate an error average value according to the plurality of training prediction values in the second prediction sequence and the plurality of residual values in the second residual sequence, where the error average value is the loss value.
The loss value calculation module is specifically configured to calculate, for each of the N training predicted values in the second prediction sequence, a square of a difference between the training predicted value and a residual value in the second residual sequence at the same time; the computing device calculates an average of the squares of the N differences to obtain the error average, where N is the number of samples of the second prediction sequence.
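This loss is the mean squared error; in sketch form (array names and values assumed):

```python
import numpy as np

pred = np.array([0.021, -0.012, 0.028])    # training predicted values in the second prediction sequence
resid = np.array([0.020, -0.010, 0.030])   # residual values in the second residual sequence, same times

loss = np.mean((pred - resid) ** 2)        # average of the N squared differences
```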
A GRU model obtaining module, configured to adjust the weight parameters and bias parameters of the initial GRU model if the loss value does not satisfy a preset second constraint condition, until the loss value corresponding to the adjusted GRU model satisfies the second constraint condition, thereby obtaining the GRU model.
The GRU model obtaining module is specifically configured to: if the error average value is outside the preset value range, adjust, by gradient descent, the x component $W_{xz}$ of the update gate weight, the h component $W_{hz}$ of the update gate weight, the x component $W_{xr}$ of the reset gate weight, the h component $W_{hr}$ of the reset gate weight, the r component $W_{rh}$ of the implicit candidate state weight and the x component $W_{xh}$ of the implicit candidate state weight in the GRU model, and re-execute the processing of the second residual sequence according to the initial GRU model to obtain a second prediction sequence; and when the error average value falls within the preset value range, determine the current $W_{xz}$, $W_{hz}$, $W_{xr}$, $W_{hr}$, $W_{rh}$ and $W_{xh}$ as the final parameters of the GRU model.

The second prediction sequence module is specifically configured to: based on the residual value $v_{tn}$ at time $t_n$, the implicit state $h_{tn-1}$ at time $t_{n-1}$, the update gate weight parameters and the update gate bias $b_f$, calculate the update gate output $z_{tn}$ at time $t_n$; based on $v_{tn}$, $h_{tn-1}$, the reset gate weight parameters and the reset gate bias $b_r$, calculate the reset gate output $r_{tn}$ at time $t_n$; based on the reset gate output $r_{tn}$, $h_{tn-1}$, $v_{tn}$, the implicit candidate state weight parameters and the implicit candidate state bias $b_h$, calculate the implicit candidate state $\tilde h_{tn}$ at time $t_n$; based on $h_{tn-1}$, $\tilde h_{tn}$ and the update gate output $z_{tn}$, calculate the implicit state $h_{tn}$ at time $t_n$; and based on $h_{tn}$, the weight $W_{pre}$ of the fully-connected neural network layer and the bias $b_{pre}$ of the fully-connected neural network layer, calculate the training predicted value $\hat v_{tn+1}$ at time $t_{n+1}$.
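For orientation, here is a compact sketch of this training loop using PyTorch's built-in GRU and SGD optimizer. The library choice is an assumption of the sketch rather than the patent's reference implementation: the patent describes adjusting the individual gate weights $W_{xz}, W_{hz}, W_{xr}, W_{hr}, W_{rh}, W_{xh}$ by gradient descent, which an optimizer performs over all parameters at once. The tensor shapes, the stopping threshold and all names are illustrative.

```python
import torch
import torch.nn as nn

class GRUPredictor(nn.Module):
    def __init__(self, hidden=8):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 1)      # fully-connected layer: W_pre, b_pre

    def forward(self, x):                   # x: (batch, time, 1) residual values
        out, _ = self.gru(x)
        return self.fc(out)                 # predicted residual for the next time step

model = GRUPredictor()
opt = torch.optim.SGD(model.parameters(), lr=0.05)   # gradient descent over gate weights/biases
loss_fn = nn.MSELoss()                               # the error average value described above

resid = torch.randn(1, 10, 1)                        # stand-in for the second residual sequence
inputs, targets = resid[:, :-1], resid[:, 1:]        # predict each residual from its predecessors

for step in range(500):
    opt.zero_grad()
    loss = loss_fn(model(inputs), targets)           # loss between prediction and residual
    if loss.item() < 1e-3:                           # second constraint condition (threshold assumed)
        break
    loss.backward()
    opt.step()
```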
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus described above may refer to the corresponding process in the foregoing method, and will not be described in too much detail herein.
By using the prediction method, apparatus, electronic device and computer-readable storage medium provided by the embodiments of the present application, the bandwidth utilization of a user in the next quarter or year can be predicted from the bandwidth utilization of the previous year. The network bandwidth can then be planned correctly to meet the user's service requirements for the next quarter or year, and bandwidth bottlenecks and wasted bandwidth can be identified for timely adjustment, improving response speed and reducing operation cost.
Because the method predicts time series and the bandwidth utilization rate is one such series, the prediction method provided by the embodiment of the present application can also predict time series other than the network bandwidth utilization rate. For example, by changing the training samples to the traffic flow of a city in a preset time period, the ARIMA model and the GRU model can be trained with the prediction method provided by the embodiment of the present application to predict the city's traffic flow.
Fig. 7 is a block diagram of the structure of an apparatus 500 in an embodiment of the present application. The apparatus 500 may include a processor 510, a communication interface 520, a memory 530 and at least one communication bus 540, where the communication bus 540 is used to realize direct connection communication among these components. The communication interface 520 of the device in the embodiment of the present application is used for signaling or data communication with other node devices. The processor 510 may be an integrated circuit chip with signal processing capability. The processor 510 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP) and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The various methods, steps and logic blocks disclosed in the embodiments of the present application may be implemented or performed by it. The general-purpose processor may be a microprocessor, or the processor 510 may be any conventional processor or the like.
The memory 530 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The memory 530 stores computer-readable instructions that, when executed by the processor 510, enable the apparatus 500 to perform the steps involved in the method embodiments of figs. 3 to 5 described above.
The apparatus 500 may further include a memory controller, an input-output unit, an audio unit, and a display unit.
The memory 530, the memory controller, the processor 510, the peripheral interface, the input/output unit, the audio unit and the display unit are electrically connected to each other, directly or indirectly, to realize data transmission or interaction. For example, these elements may be electrically coupled to each other via one or more communication buses 540. The processor 510 is used to execute the executable modules stored in the memory 530, such as the software functional modules or computer programs included in the apparatus 500.
The input and output unit is used for providing input data for a user to realize the interaction of the user and the server (or the local terminal). The input/output unit may be, but is not limited to, a mouse, a keyboard, and the like.
The audio unit provides an audio interface to the user, which may include one or more microphones, one or more speakers, and audio circuitry.
The display unit provides an interactive interface (e.g., a user interface) between the electronic device and a user, or is used for displaying image data for the user's reference. In this embodiment, the display unit may be a liquid crystal display or a touch display. In the case of a touch display, it may be a capacitive or resistive touch screen supporting single-point and multi-point touch operations, meaning that the touch display can sense touch operations generated simultaneously at one or more positions on it and pass the sensed touch operations to the processor for calculation and processing. The display unit may display the results obtained by the processor 510 executing the steps shown in figs. 3 to 5.
It will be appreciated that the configuration shown in fig. 7 is merely illustrative and that the apparatus 500 may include more or fewer components than shown in fig. 7 or have a different configuration than shown in fig. 7. The components shown in fig. 7 may be implemented in hardware, software, or a combination thereof.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method of the method embodiments.
The present application also provides a computer program product which, when run on a computer, causes the computer to perform the method of the method embodiments.
Referring to fig. 9, fig. 9 is a graph comparing the prediction method provided by an embodiment of the present application with prediction of the time sequence using only an ARIMA model. The straight line represents the true values of the time sequence of bandwidth usage of an enterprise within one month, the line with diamond points is the fitted sequence obtained using only the ARIMA model, and the line with square points is the fitted sequence obtained with the prediction method provided by the embodiment of the present application. As shown in fig. 9, the average relative error of the fitted sequence obtained with the prediction method provided by the embodiment of the present application is 0.21089, while that of the fitted sequence obtained using only the ARIMA model is 0.27608. Therefore, planning bandwidth based on the prediction method provided by the embodiment of the present application predicts the enterprise's bandwidth utilization in the next stage more accurately.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the system described above may refer to the corresponding process in the foregoing method, and will not be described in too much detail herein.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes. It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (13)
1. A prediction method applied to a computing device, the method comprising:
processing the stabilized historical time sequence according to a time sequence prediction model to obtain a fitting time sequence and a primary predicted value, wherein the fitting time sequence is a time sequence formed by fitting values at the moment of at least one part of numerical values of the historical time sequence;
calculating the difference between the historical time sequence and the fitting time sequence to obtain a first residual sequence;
processing the first residual sequence by using a memory network model to obtain a secondary predicted value;
and determining a final predicted value according to the primary predicted value and the secondary predicted value.
2. The method according to claim 1, characterized in that it comprises:
the processing the smoothed historical time series according to the time series prediction model to obtain a fitting time series and a primary predicted value comprises the following steps:
processing the stabilized historical time sequence according to an autoregressive integrated moving average ARIMA model to obtain a fitting time sequence and a primary predicted value corresponding to the historical time sequence, wherein the ARIMA model is obtained by training according to a training time sequence;
processing the first residual sequence by using a memory network model to obtain a secondary predicted value, wherein the processing comprises:
and processing the first residual sequence according to a gated cycle unit GRU model to obtain a secondary predicted value, wherein the GRU model is obtained by training according to a preset residual sequence, and the preset residual sequence corresponds to a fitting sequence obtained by processing the training time sequence by the ARIMA model.
3. The method as claimed in claim 2, wherein before processing the smoothed historical time series according to an autoregressive integrated moving average ARIMA model to obtain a fitted time series corresponding to the historical time series and a primary predicted value, the method further comprises:
d times of differential operation processing is carried out on the original time sequence to obtain a stabilized historical time sequence, d is a parameter in the ARIMA model, the value is the number of times of differential operation carried out to stabilize the training time sequence, and d is a positive integer.
4. The method of claim 3, wherein determining a final predicted value based on the primary predicted value and the secondary predicted value comprises:
performing summation operation on the primary predicted value and the secondary predicted value to obtain a summation predicted value;
and carrying out inverse difference operation on the addition predicted value to obtain the final predicted value.
5. The method of claim 2, wherein the training process of the ARIMA model comprises:
determining parameters d, p and q of the ARIMA model according to a training time sequence serving as a training sample, wherein p is the number of autoregressive terms, and q is the number of moving average terms;
d times of differential operation is carried out on the training time sequence to obtain a training stable sequence;
substituting the training stationary sequence into the ARIMA model to obtain an expression of a first prediction time sequence;
removing interference items in the expression of the first prediction time sequence to obtain a function consisting of data quantity with parameters to be estimated;
determining an expression for a difference of the training stationary sequence and the function;
and determining the solution of the parameter to be estimated in the function when the expression of the difference value meets a preset first constraint condition, and obtaining the ARIMA model according to the solution.
6. The method of any of claims 2 to 5, wherein the training process of the GRU model comprises:
calculating the difference between a training stable sequence of the training time sequence after d times of differential operation and a predicted time sequence of the training time sequence after the ARIMA model processing to obtain a second residual sequence, wherein d is a parameter in the ARIMA model, the value is the number of differential operation performed to stabilize the training time sequence, and d is a positive integer;
processing the second residual sequence according to the initial GRU model to obtain a second prediction sequence;
calculating a loss value between the second prediction sequence and the second residual sequence;
and if the loss value does not accord with a preset second constraint condition, adjusting the weight parameter and the bias parameter of the initial GRU model until the loss value corresponding to the adjusted GRU model accords with the second constraint condition, and obtaining the GRU model.
7. The method of claim 6, wherein calculating a loss value between the second prediction sequence and the second residual sequence comprises:
and calculating an error average value according to the plurality of training predicted values in the second prediction sequence and the plurality of residual values in the second residual sequence, wherein the error average value is the loss value.
8. The method of claim 7, wherein the adjusting the weight parameter and the bias parameter of the initial GRU model if the loss value does not meet a preset second constraint condition until the loss value corresponding to the adjusted GRU model meets the second constraint condition to obtain the GRU model comprises:
if the error average value is outside the preset value range, adjusting, by a gradient descent method, the x component $W_{xz}$ of the update gate weight, the h component $W_{hz}$ of the update gate weight, the x component $W_{xr}$ of the reset gate weight, the h component $W_{hr}$ of the reset gate weight, the r component $W_{rh}$ of the implicit candidate state weight and the x component $W_{xh}$ of the implicit candidate state weight in the GRU model, and executing the processing of the second residual sequence according to the initial GRU model to obtain a second prediction sequence; and when the error average value is within the preset value range, determining the $W_{xz}$, $W_{hz}$, $W_{xr}$, $W_{hr}$, $W_{rh}$ and $W_{xh}$ in the GRU model as the final parameters of the GRU model.
9. The method of claim 7, wherein calculating an average error value from the plurality of training predictors in the second prediction sequence and the plurality of residual values in the second residual sequence comprises:
for each of the N training predicted values in the second prediction sequence, calculating a square of a difference between the training predicted value and a residual value in the second residual sequence at the same time;
and calculating the average of the squares of the N differences to obtain the error average value, wherein N is the number of samples of the second prediction sequence.
10. The method of claim 6, wherein processing the second residual sequence according to an initial GRU model to obtain a second predicted sequence comprises:
based on the residual value $v_{tn}$ at time $t_n$, the implicit state $h_{tn-1}$ at time $t_{n-1}$, the update gate weight parameters and the update gate bias $b_f$, calculating the update gate output $z_{tn}$ at time $t_n$;
based on the residual value $v_{tn}$ at time $t_n$, the implicit state $h_{tn-1}$ at time $t_{n-1}$, the reset gate weight parameters and the reset gate bias $b_r$, calculating the reset gate output $r_{tn}$ at time $t_n$;
based on the reset gate output $r_{tn}$, the implicit state $h_{tn-1}$ at time $t_{n-1}$, the residual value $v_{tn}$ at time $t_n$, the implicit candidate state weight parameters and the implicit candidate state bias $b_h$, calculating the implicit candidate state $\tilde h_{tn}$ at time $t_n$;
based on the implicit state $h_{tn-1}$ at time $t_{n-1}$, the implicit candidate state $\tilde h_{tn}$ at time $t_n$ and the update gate output $z_{tn}$ at time $t_n$, calculating the implicit state $h_{tn}$ at time $t_n$;
based on the implicit state $h_{tn}$ at time $t_n$, the weight $W_{pre}$ of the fully-connected neural network layer and the bias $b_{pre}$ of the fully-connected neural network layer, calculating the training predicted value $\hat v_{tn+1}$ at time $t_{n+1}$.
11. A prediction apparatus, characterized in that the apparatus comprises:
the primary predicted value obtaining module is used for processing the stabilized historical time sequence according to a time sequence prediction model to obtain a fitting time sequence and a primary predicted value, wherein the fitting time sequence is a time sequence formed by fitting values of the time at which at least one part of numerical values of the historical time sequence are located;
the first residual sequence module is used for calculating the difference between the historical time sequence and the fitting time sequence to obtain a first residual sequence;
the secondary predicted value obtaining module is used for processing the first residual sequence by utilizing a memory network model to obtain a secondary predicted value;
and the final predicted value calculating module is used for determining a final predicted value according to the primary predicted value and the secondary predicted value.
12. An electronic device, comprising: a processor, a storage medium and a bus, the storage medium storing machine-readable instructions executable by the processor, the processor and the storage medium communicating via the bus when the electronic device is operating, the processor executing the machine-readable instructions to perform the steps of the method according to any one of claims 1-10.
13. A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, is adapted to carry out the steps of the method according to any one of claims 1 to 10.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910627110.3A CN110400010A (en) | 2019-07-11 | 2019-07-11 | Prediction technique, device, electronic equipment and computer readable storage medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN110400010A true CN110400010A (en) | 2019-11-01 |
Family
ID=68325348
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201910627110.3A Pending CN110400010A (en) | 2019-07-11 | 2019-07-11 | Prediction technique, device, electronic equipment and computer readable storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN110400010A (en) |
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5884037A (en) * | 1996-10-21 | 1999-03-16 | International Business Machines Corporation | System for allocation of network resources using an autoregressive integrated moving average method |
| CN109684310A (en) * | 2018-11-22 | 2019-04-26 | 安徽继远软件有限公司 | A kind of information system performance Situation Awareness method based on big data analysis |
Non-Patent Citations (1)
| Title |
|---|
| 刘洋 (Liu Yang), "基于GRU神经网络的时间序列预测研究" (Research on time series prediction based on GRU neural networks), 《中国优秀硕士学位论文全文数据库 信息科技辑》 (China Masters' Theses Full-Text Database, Information Science and Technology) * |
Cited By (19)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2021057245A1 (en) * | 2019-09-23 | 2021-04-01 | 北京达佳互联信息技术有限公司 | Bandwidth prediction method and apparatus, electronic device and storage medium |
| US11374825B2 (en) | 2019-09-23 | 2022-06-28 | Beijing Daijia Internet Information Technology Co., Ltd. | Method and apparatus for predicting bandwidth |
| CN112101400A (en) * | 2019-12-19 | 2020-12-18 | 国网江西省电力有限公司电力科学研究院 | Industrial control system abnormality detection method, equipment and server, storage medium |
| CN111476265A (en) * | 2020-02-26 | 2020-07-31 | 珠海格力电器股份有限公司 | Induction door control method and device, terminal and computer readable medium |
| CN111414999A (en) * | 2020-04-27 | 2020-07-14 | 新智数字科技有限公司 | Method and device for monitoring running state of equipment |
| CN111414999B (en) * | 2020-04-27 | 2023-08-22 | 新奥新智科技有限公司 | Method and device for monitoring running state of equipment |
| CN111898826A (en) * | 2020-07-31 | 2020-11-06 | 北京文思海辉金信软件有限公司 | Resource consumption prediction method, apparatus, electronic device, and readable storage device |
| CN112115416A (en) * | 2020-08-06 | 2020-12-22 | 深圳市水务科技有限公司 | Predictive maintenance method, apparatus, and storage medium |
| CN111931054A (en) * | 2020-08-14 | 2020-11-13 | 中国科学院深圳先进技术研究院 | A sequence recommendation method and system based on improved residual structure |
| CN111931054B (en) * | 2020-08-14 | 2024-01-05 | 中国科学院深圳先进技术研究院 | Sequence recommendation method and system based on improved residual error structure |
| CN114492906A (en) * | 2020-11-12 | 2022-05-13 | 中国电信股份有限公司 | Training method for time series data prediction model, time series data prediction method and device |
| CN112508283A (en) * | 2020-12-12 | 2021-03-16 | 广东电力信息科技有限公司 | Method and device for constructing time series model |
| CN112799913A (en) * | 2021-01-28 | 2021-05-14 | 中国工商银行股份有限公司 | Container operation abnormity detection method and device |
| CN112799913B (en) * | 2021-01-28 | 2024-07-02 | 中国工商银行股份有限公司 | Method and device for detecting abnormal operation of container |
| CN113190429A (en) * | 2021-06-03 | 2021-07-30 | 河北师范大学 | Server performance prediction method and device and terminal equipment |
| CN113743738A (en) * | 2021-08-11 | 2021-12-03 | 湖北省食品质量安全监督检验研究院 | Method and device for predicting food safety risk grade interval |
| CN113901294A (en) * | 2021-09-13 | 2022-01-07 | 联想(北京)有限公司 | An information processing method and device |
| CN115482837A (en) * | 2022-07-25 | 2022-12-16 | 科睿纳(河北)医疗科技有限公司 | Emotion classification method based on artificial intelligence |
| CN115482837B (en) * | 2022-07-25 | 2023-04-28 | 科睿纳(河北)医疗科技有限公司 | Emotion classification method based on artificial intelligence |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN110400010A (en) | Prediction technique, device, electronic equipment and computer readable storage medium | |
| CN110865929B (en) | Abnormality detection early warning method and system | |
| US11250449B1 (en) | Methods for self-adaptive time series forecasting, and related systems and apparatus | |
| US20170032398A1 (en) | Method and apparatus for judging age brackets of users | |
| CN113377568A (en) | Abnormity detection method and device, electronic equipment and storage medium | |
| CN113656691B (en) | Data prediction method, device and storage medium | |
| Van den Brakel et al. | Dealing with small sample sizes, rotation group bias and discontinuities in a rotating panel design | |
| WO2020220437A1 (en) | Method for virtual machine software aging prediction based on adaboost-elman | |
| US11636377B1 (en) | Artificial intelligence system incorporating automatic model updates based on change point detection using time series decomposing and clustering | |
| CN112562863A (en) | Epidemic disease monitoring and early warning method and device and electronic equipment | |
| CN110096335B (en) | A prediction method of business concurrency for different types of virtual machines | |
| CN112463964A (en) | Text classification and model training method, device, equipment and storage medium | |
| CN114186129A (en) | Package recommendation method and device, electronic equipment and computer readable medium | |
| Hudecová et al. | Tests for structural changes in time series of counts | |
| CN112749899B (en) | Order dispatching method, device and storage medium | |
| CN115422028A (en) | Credibility evaluation method and device for label portrait system, electronic equipment and medium | |
| CN111612357A (en) | Method and device for matching merchants for riders, storage medium and electronic equipment | |
| CN115168159A (en) | Abnormality detection method, abnormality detection device, electronic apparatus, and storage medium | |
| CN111026626A (en) | CPU consumption estimation and estimation model training method and device | |
| CN110009161A (en) | Water supply forecast method and device | |
| CN117472300B (en) | Order dispatch method and system for distributed 3D printing manufacturing platform | |
| CN117593000A (en) | Method, device, equipment and storage medium for determining credit limit | |
| CN117593090A (en) | Graph structure prediction model training method and related device based on multi-task learning | |
| CN110008974A (en) | Behavioral data prediction technique, device, electronic equipment and computer storage medium | |
| CN113420906B (en) | Traffic prediction method, device and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20191101 |