CN114399119A - MMP prediction method and device based on conditional convolution generative adversarial network
- Publication number: CN114399119A (application number CN202210055932.0A)
- Authority: CN (China)
- Prior art keywords: convolution, MMP, data, generator, training
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion)
Classifications
- G06Q10/04 — Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
- G06F18/214 — Pattern recognition: generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06N3/045 — Neural networks: combinations of networks
- G06N3/08 — Neural networks: learning methods
- G06Q50/02 — ICT specially adapted for agriculture, fishing, forestry or mining
Description
Technical Field
The present invention relates to the technical field of oil reservoir development, and in particular to an MMP prediction method and device based on a conditional convolutional generative adversarial network.
Background
CO2 miscible flooding is the most widely used CO2-EOR displacement method in low-permeability reservoirs and achieves the highest recovery. When CO2 is injected into a reservoir to displace oil, the gas, oil and water phases interact in the rock formation, producing inter-phase component transfer, phase transitions and other complex phase behavior. The basic mechanism of miscible flooding is that the displacing agent (the injected CO2) and the displaced agent (the crude oil) form a stable miscible-zone front under reservoir conditions. This front is a single phase, and its movement effectively pushes the crude oil forward until it reaches the production well. Because of miscibility, the oil-gas interface disappears and the interfacial tension in the porous medium drops to zero, so the microscopic displacement efficiency can theoretically reach 100%.
The minimum miscibility pressure (MMP) between CO2 and reservoir crude oil is one of the key parameters of the CO2 flooding process and is the boundary between CO2 miscible and immiscible flooding. Accurately determining the MMP between CO2 and crude oil is therefore very important for improving CO2 miscible displacement efficiency, reducing operating costs and generating social and economic benefits.
In the prior art, the MMP is usually determined by experimental measurement. Although such measurement is accurate, it is complicated to perform, time-consuming and expensive. The prior art therefore lacks a more efficient way of determining the minimum miscibility pressure (MMP) between CO2 and reservoir crude oil.
Summary of the Invention
In order to solve at least one of the technical problems in the above background, the present invention proposes an MMP prediction method and device based on a conditional convolutional generative adversarial network.
To achieve the above object, according to one aspect of the present invention, an MMP prediction method based on a conditional convolutional generative adversarial network is provided, the method comprising:
obtaining MMP influencing factor data of a target reservoir;
inputting the MMP influencing factor data into a pre-trained convolutional generator to obtain the MMP prediction value of the target reservoir output by the pre-trained convolutional generator, wherein the convolutional generator is built from a convolutional neural network and contains no random noise input, the pre-trained convolutional generator is obtained by training the convolutional generator over multiple iterations on a training sample set, and each training sample in the training sample set contains the MMP value of a reservoir and the MMP influencing factor data of that reservoir.
Optionally, the MMP prediction method based on a conditional convolutional generative adversarial network further comprises:
obtaining the training sample set;
performing H1 iterations of training on the training sample set to obtain the pre-trained convolutional generator, wherein each iteration is divided into multiple batches; for each batch, H2 training samples are first selected from the training sample set, the network weights of a convolutional discriminator are then trained on the selected samples, and finally the network weights of the convolutional generator are trained on the selected samples within a combined model consisting of the convolutional discriminator and the convolutional generator; the convolutional discriminator is built from a combination of a convolutional neural network and a fully connected neural network, and H1 and H2 are both positive integers.
Optionally, each training sample consists of first data and second data, the first data being the MMP influencing factor data of a reservoir and the second data being the MMP value of that reservoir;
training the network weights of the convolutional discriminator on the selected training samples specifically comprises:
for each selected training sample, combining the MMP prediction value output by the convolutional generator for the first data of the sample with that first data to obtain combined data, setting the label of the combined data to 0, and smoothing the label of the combined data;
setting the label of each selected training sample to 1 and smoothing the labels of the training samples;
inputting the label-smoothed combined data and the label-smoothed training samples into the convolutional discriminator, and training the network weights of the convolutional discriminator.
Optionally, training the network weights of the convolutional generator on the selected training samples within the combined model consisting of the convolutional discriminator and the convolutional generator specifically comprises:
for each selected training sample, inputting the first data of the sample into the convolutional generator to obtain the MMP prediction value output by the convolutional generator for that sample;
for each selected training sample, combining the first data of the sample with the corresponding MMP prediction value to obtain combined data, setting the label of the combined data to 1, and smoothing the label of the combined data;
inputting the label-smoothed combined data into the convolutional discriminator to obtain the probability, output by the convolutional discriminator, that the combined data are real data.
Optionally, performing H1 iterations of training on the training sample set to obtain the pre-trained convolutional generator comprises:
optimizing the number of training iterations H1, the number of training samples per batch H2, the hyperparameters of the convolutional generator and the hyperparameters of the convolutional discriminator using a hyperparameter optimization method to obtain an optimal parameter combination, and then performing the iterative training with the optimal parameter combination to obtain the pre-trained convolutional generator.
Optionally, the inputs of the convolutional discriminator are MMP influencing factor data and an MMP value, the MMP value including the MMP prediction value output by the convolutional generator, and the output of the convolutional discriminator is the probability that the input data are real data. The network structure of the convolutional discriminator specifically comprises a convolutional neural network layer, a concatenation layer and a fully connected neural network layer, wherein the convolutional neural network layer preprocesses the MMP influencing factor data, the concatenation layer concatenates the preprocessed data output by the convolutional neural network layer with the MMP value, and the fully connected neural network layer processes the concatenated data output by the concatenation layer and outputs the probability that the data are real data.
Optionally, the hyperparameters of the convolutional generator specifically include: the number of convolutional layers, the number of convolution kernels per convolutional layer, the kernel size, and the initial learning rate of the optimizer in the convolutional generator;
the hyperparameters of the convolutional discriminator specifically include: the number of convolutional layers, the number of convolution kernels per convolutional layer, the kernel size, the number of fully connected layers, the number of neurons per fully connected layer, the dropout rate of each fully connected layer, and the initial learning rate of the optimizer in the convolutional discriminator.
To achieve the above object, according to another aspect of the present invention, an MMP prediction device based on a conditional convolutional generative adversarial network is provided, the device comprising:
a data acquisition unit, configured to obtain the MMP influencing factor data of a target reservoir;
a prediction unit, configured to input the MMP influencing factor data into a pre-trained convolutional generator and obtain the MMP prediction value of the target reservoir output by the pre-trained convolutional generator, wherein the convolutional generator is built from a convolutional neural network and contains no random noise input, the pre-trained convolutional generator is obtained by training the convolutional generator over multiple iterations on a training sample set, and each training sample in the training sample set contains the MMP value of a reservoir and the MMP influencing factor data of that reservoir.
To achieve the above object, according to another aspect of the present invention, a computer device is also provided, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the above MMP prediction method based on a conditional convolutional generative adversarial network when executing the computer program.
To achieve the above object, according to another aspect of the present invention, a computer program product is also provided, comprising a computer program/instructions which, when executed by a processor, implement the steps of the above MMP prediction method based on a conditional convolutional generative adversarial network.
The beneficial effects of the present invention are as follows:
The present invention combines a conditional generative adversarial network with the prediction of the minimum miscibility pressure (MMP) between CO2 and reservoir crude oil, builds the generator of the conditional generative adversarial network on a convolutional neural network, and trains this convolutional generator as the MMP prediction model, thereby achieving accurate and efficient prediction of reservoir MMP.
Brief Description of the Drawings
In order to describe the technical solutions of the embodiments of the present invention or of the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art can obtain other drawings from them without creative effort. In the drawings:
Fig. 1 is a first flowchart of an MMP prediction method based on a conditional convolutional generative adversarial network according to an embodiment of the present invention;
Fig. 2 is a second flowchart of the MMP prediction method based on a conditional convolutional generative adversarial network according to an embodiment of the present invention;
Fig. 3 is a training flowchart of the convolutional discriminator according to an embodiment of the present invention;
Fig. 4 is a training flowchart of the convolutional generator according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of the training sample set according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of the network structure of the convolutional generator according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of the network structure of the convolutional discriminator according to an embodiment of the present invention;
Fig. 8 is a schematic diagram of the combined model according to an embodiment of the present invention;
Fig. 9 shows how the MMP predicted by a conditional fully connected generative adversarial network model changes with temperature;
Fig. 10 shows how the MMP predicted by the conditional convolutional generative adversarial network model changes with temperature;
Fig. 11 shows the relationship between the MMP predicted by the conditional convolutional generative adversarial network model and the mole fraction of N2 in the CO2 stream;
Fig. 12 shows the relationship between the MMP predicted by the conditional convolutional generative adversarial network model and the mole fraction of H2S in the CO2 stream;
Fig. 13 is a first structural block diagram of an MMP prediction device based on a conditional convolutional generative adversarial network according to an embodiment of the present invention;
Fig. 14 is a second structural block diagram of the MMP prediction device based on a conditional convolutional generative adversarial network according to an embodiment of the present invention;
Fig. 15 is a schematic diagram of a computer device according to an embodiment of the present invention.
Detailed Description of the Embodiments
To make the solutions of the present invention better understood by those skilled in the art, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Those skilled in the art will appreciate that embodiments of the present invention may be provided as a method, a system or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM and optical storage) containing computer-usable program code.
It should be noted that the terms "comprising" and "having" in the description and claims of the present invention and in the above drawings, as well as any variations thereof, are intended to cover non-exclusive inclusion. For example, a process, method, system, product or device comprising a series of steps or units is not necessarily limited to those steps or units expressly listed, and may include other steps or units not expressly listed or inherent to such a process, method, product or device.
It should be noted that, provided there is no conflict, the embodiments of the present invention and the features in the embodiments may be combined with one another. The present invention is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
It should be noted that the MMP in the present invention refers to the minimum miscibility pressure between CO2 and crude oil.
Generative adversarial networks (GANs) are a class of deep learning models and one of the most promising approaches to unsupervised learning on complex distributions in recent years. A GAN produces good outputs through the adversarial interplay of (at least) two modules in its framework: a generative model (the generator) and a discriminative model (the discriminator).
A generative adversarial network is an unsupervised machine learning method, so it is usually applied to data augmentation: when only a few training samples are available for machine learning, a GAN can be used to generate additional data samples for the machine to learn from. Because GANs appeared relatively recently, only a small number of petroleum researchers have started to use them for data augmentation in the past few years, and opinions on the quality of the data generated in this way are mixed.
The original generative adversarial network simply takes a random vector as input and produces a generated object, but there is no control over what kind of object is generated. Researchers therefore proposed the conditional generative adversarial network, which adds constraints to the original GAN by introducing a conditional variable y into both the generative model and the discriminative model. This additional information can guide data generation. In principle, y can be any meaningful information, such as class labels, which turns the unsupervised GAN into a supervised method.
The emergence of conditional generative adversarial networks turned GANs from unsupervised into supervised learning, which means the method can be used for parameter prediction in the petroleum industry and has great application prospects there. When a conditional GAN is used to predict MMP, an MMP prediction model based on a conditional fully connected generative adversarial network can achieve high prediction accuracy. However, because of the random noise fed into its generator, when the relationship between MMP and its influencing factors is examined, the MMP varies with factors such as N2 and H2S in a way that differs from the behavior observed in physical experiments, which is unfavorable for practical application of the MMP prediction model.
Fig. 9 shows that in the conditional fully connected network, the random noise of the generator causes the predicted MMP to rise unstably as the temperature increases. This is inconsistent with the actual physical behavior, is unfavorable for practical application of the MMP prediction model, and needs to be improved.
To overcome the shortcomings of predicting MMP with a conditional fully connected neural network in the prior art, the embodiments of the present invention provide a scheme for predicting the minimum miscibility pressure (MMP) between CO2 and crude oil based on an improved conditional convolutional generative adversarial network.
Fig. 1 is a first flowchart of the MMP prediction method based on a conditional convolutional generative adversarial network according to an embodiment of the present invention. As shown in Fig. 1, in one embodiment of the present invention, the MMP prediction method of the present invention comprises step S101 and step S102.
Step S101: obtain the MMP influencing factor data of the target reservoir.
In one embodiment of the present invention, the MMP influencing factor data specifically include: reservoir temperature (TR), the mole fraction of volatile components in the crude oil (Xvol), the mole fraction of C2-C4 components in the crude oil (XC2-4), the mole fraction of C5-C6 components in the crude oil (XC5-6), the molecular weight of the C7+ components in the crude oil (MWC7+), and the mole fractions of CO2 and four impurities in the injected gas (namely yCO2, yC1, yN2, yH2S and yHC).
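For illustration only, these ten influencing factors can be collected into a single feature vector, as in the following Python/NumPy sketch. The numerical values are hypothetical and are not taken from the data set of Fig. 5; the patent itself does not prescribe any particular data layout.

```python
import numpy as np

feature_names = ["T_R", "X_vol", "X_C2-4", "X_C5-6", "MW_C7+",
                 "y_CO2", "y_C1", "y_N2", "y_H2S", "y_HC"]

# Hypothetical values for one reservoir, for illustration only
sample = np.array([
    71.1,   # reservoir temperature T_R
    5.3,    # mole fraction of volatile components in the crude oil
    12.4,   # mole fraction of C2-C4 components
    8.7,    # mole fraction of C5-C6 components
    221.0,  # molecular weight of the C7+ fraction
    0.90,   # mole fraction of CO2 in the injected gas
    0.04,   # mole fraction of C1 impurity
    0.02,   # mole fraction of N2 impurity
    0.02,   # mole fraction of H2S impurity
    0.02,   # mole fraction of other hydrocarbon impurities
])

print(dict(zip(feature_names, sample)))
```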
Step S102: input the MMP influencing factor data into a pre-trained convolutional generator to obtain the MMP prediction value of the target reservoir output by the pre-trained convolutional generator, wherein the convolutional generator is built from a convolutional neural network and contains no random noise input; the pre-trained convolutional generator is obtained by training the convolutional generator over multiple iterations on a training sample set, and each training sample in the training sample set contains the MMP value of a reservoir and the MMP influencing factor data of that reservoir.
By improving the generator model, the present invention abandons the prior-art approach of building the generator from a fully connected neural network and instead builds it from a convolutional neural network, forming a convolutional generator. The input of the convolutional generator of the present invention contains no random noise, which makes the MMP prediction more accurate and ensures that, when the relationship between MMP and its influencing factors is examined, the MMP does not vary with factors such as N2 and H2S in a way that contradicts physical experiments, thereby facilitating practical application of the MMP prediction model.
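A minimal usage sketch of this prediction step is given below. It assumes a TensorFlow/Keras implementation, that the best generator was saved to disk during training, and that the training-set min-max statistics are available; none of these implementation details are specified in the patent, so the function and argument names are illustrative.

```python
import numpy as np
from tensorflow import keras


def predict_mmp(generator_path, factors, col_min, col_max):
    """factors: the ten MMP influencing factors of a new reservoir, in the same
    order and units as the training data; col_min / col_max: the training-set
    minima and maxima used for min-max normalization."""
    generator = keras.models.load_model(generator_path)
    x = (np.asarray(factors, dtype="float32") - col_min) / (col_max - col_min)
    x = x.reshape(1, -1, 1)            # (batch, n_factors, 1) for the convolutional layers
    mmp = float(generator.predict(x, verbose=0)[0, 0])
    # if the MMP target was itself normalized during training, rescale it here
    # with the corresponding training-set minimum and maximum
    return mmp
```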
Fig. 2 is a second flowchart of the MMP prediction method based on a conditional convolutional generative adversarial network according to an embodiment of the present invention. As shown in Fig. 2, the pre-trained convolutional generator in step S102 above is obtained by training in step S201 and step S202.
Step S201: obtain the training sample set.
In one embodiment of the present invention, a certain number of MMP values of existing reservoirs and the corresponding MMP influencing factor data are collected and divided into a training sample set, a validation sample set and a test sample set according to a certain ratio.
Fig. 5 shows the MMP values of 105 reservoirs and the corresponding MMP influencing factor data collected in one embodiment of the present invention. Specifically, all the data are divided into a training sample set, a validation sample set and a test sample set in a 6:2:2 ratio, so the training sample set contains 63 groups of data, the validation sample set contains 21 groups of data, and the test sample set contains 21 groups of data.
In one embodiment of the present invention, after the training, validation and test sample sets are obtained, min-max normalization is first applied to the training sample set, and the data in the validation and test sample sets are then processed in the same way using the maximum and minimum values of the training sample set.
In one embodiment of the present invention, the min-max normalization formula may be: x_norm = (x - x_min) / (x_max - x_min), where x_min and x_max are the minimum and maximum values of the corresponding feature in the training sample set.
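A sketch of this normalization step is shown below (Python/NumPy; the helper names are illustrative, and the small epsilon guarding against division by zero is an added assumption).

```python
import numpy as np


def fit_min_max(train_data: np.ndarray):
    """Column-wise minima and maxima, computed on the training sample set only."""
    return train_data.min(axis=0), train_data.max(axis=0)


def min_max_scale(data: np.ndarray, col_min: np.ndarray, col_max: np.ndarray):
    """x_norm = (x - x_min) / (x_max - x_min), applied column by column."""
    return (data - col_min) / (col_max - col_min + 1e-12)   # epsilon guards against /0


# The validation and test sets are scaled with the training-set statistics:
# col_min, col_max = fit_min_max(train_data)
# train_n = min_max_scale(train_data, col_min, col_max)
# valid_n = min_max_scale(valid_data, col_min, col_max)
# test_n  = min_max_scale(test_data,  col_min, col_max)
```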
Step S202: perform H1 iterations of training on the training sample set to obtain the pre-trained convolutional generator, wherein each iteration is divided into multiple batches; for each batch, H2 training samples are first selected from the training sample set, the network weights of the convolutional discriminator are then trained on the selected samples, and finally the network weights of the convolutional generator are trained on the selected samples within the combined model consisting of the convolutional discriminator and the convolutional generator; the convolutional discriminator is built from a combination of a convolutional neural network and a fully connected neural network, and H1 and H2 are both positive integers.
In the present invention, each training iteration is divided into multiple batches. For each batch, H2 training samples are first selected from the training sample set, and the samples selected for different batches within the same iteration do not overlap. If fewer than H2 samples remain in the training sample set for a batch, all remaining samples are used for that batch. Once every sample in the training sample set has been used once, one iteration of training of the conditional generative adversarial network is complete; this is also called one training epoch. The performance of the convolutional generator and the convolutional discriminator gradually improves as the number of training iterations (H1) increases.
In the present invention, H1 iterations of training are performed according to the above procedure. After a sufficient number of iterations, the MMP prediction values produced by the convolutional generator under the corresponding conditions become very close to the real data, thereby realizing the MMP prediction function. In one embodiment of the present invention, the prediction error of the convolutional generator on the validation set is monitored after every iteration, the convolutional generator obtained after each iteration is saved separately, and the convolutional generator with the smallest validation-set error over the whole iterative training process is finally selected as the MMP prediction model.
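The outer training loop described above could be organised as in the following sketch. The Keras-style `predict`/weight methods, the use of MAPE as the validation error, and keeping only the best weights in memory instead of saving every epoch to disk are assumptions made for brevity; `train_d_step` and `train_g_step` are the batch-level routines sketched further below.

```python
import numpy as np


def train_cgan(generator, discriminator, combined,
               train_X, train_y, valid_X, valid_y,
               train_d_step, train_g_step,
               epochs=482, batch_size=45):
    """H1 = epochs iterations; each iteration is split into batches of H2 = batch_size
    samples (the last batch of an epoch may be smaller).  After every epoch the
    generator's error on the validation set is measured and the best generator
    seen so far is kept."""
    n = len(train_X)
    best_err, best_weights = np.inf, None
    for epoch in range(epochs):
        order = np.random.permutation(n)        # samples within one epoch do not repeat
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            train_d_step(generator, discriminator, train_X[idx], train_y[idx])
            train_g_step(combined, train_X[idx])
        pred = generator.predict(valid_X, verbose=0).ravel()
        err = float(np.mean(np.abs((pred - valid_y) / valid_y)) * 100)   # MAPE, %
        if err < best_err:
            best_err, best_weights = err, generator.get_weights()
    generator.set_weights(best_weights)         # restore the best epoch's generator
    return generator, best_err
```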
In the present invention, the network weights of the convolutional discriminator are first trained on the training samples, and then the network weights of the convolutional generator are trained on the training samples within a combined model consisting of the convolutional discriminator and the convolutional generator. Fig. 8 is a schematic diagram of the combined model according to an embodiment of the present invention. As shown in Fig. 8, when the network weights of the convolutional generator are trained within the combined model, the network weights of the convolutional discriminator do not change, while the network weights of the convolutional generator change as the data are trained.
In one embodiment of the present invention, each training sample consists of first data and second data, the first data being the MMP influencing factor data of a reservoir and the second data being the MMP value of that reservoir.
Fig. 3 is a training flowchart of the convolutional discriminator according to an embodiment of the present invention. As shown in Fig. 3, in one embodiment of the present invention, training the network weights of the convolutional discriminator on the selected training samples in step S202 above specifically comprises steps S301 to S303.
Step S301: for each selected training sample, combine the MMP prediction value output by the convolutional generator for the first data of the sample with that first data to obtain combined data, set the label of the combined data to 0, and smooth the label of the combined data.
Step S302: set the label of each selected training sample to 1 and smooth the labels of the training samples.
In the present invention, the labels can be smoothed in many ways. In one specific embodiment of the present invention, the label smoothing is performed by replacing label 1 with a random number within a first preset range (preferably 0.8 to 1.0) and replacing label 0 with a random number within a second preset range (preferably 0 to 0.2).
Step S303: input the label-smoothed combined data and the label-smoothed training samples into the convolutional discriminator, and train the network weights of the convolutional discriminator.
In one embodiment of the present invention, at the start of an iteration, the first batch of training samples is selected from the training sample set, the number of training samples in the batch being H2. Using the established convolutional generator, the first data of the H2 training samples in the batch are used as the conditional input of the convolutional generator, which outputs the MMP prediction values generated for the corresponding conditions. The first data and the MMP prediction values output by the convolutional generator are then combined to obtain combined data, whose label is set to 0; the label is smoothed and the data are fed into the convolutional discriminator. At the same time, the combination of the first data and the corresponding real MMP values, i.e. the training data, is labelled 1; this label is smoothed and the data are also fed into the convolutional discriminator, so that the discriminator learns for the first time to distinguish real from fake data. Following the above procedure, the convolutional discriminator is trained over multiple batches and H1 iterations, after which it can identify real data fairly accurately.
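One possible implementation of a single discriminator batch, including the label smoothing described above, is sketched below. It assumes a Keras two-input discriminator (condition X and MMP value) and a generator whose input shape matches `X_batch`; the function names are illustrative.

```python
import numpy as np


def smooth_labels(value, size, rng):
    """Label smoothing: 1 -> uniform(0.8, 1.0), 0 -> uniform(0.0, 0.2)."""
    return rng.uniform(0.8, 1.0, size) if value == 1 else rng.uniform(0.0, 0.2, size)


def train_d_step(generator, discriminator, X_batch, y_batch, rng=None):
    """One discriminator batch: fake pairs (X, G(X)) carry smoothed label 0,
    real pairs (X, y) carry smoothed label 1."""
    rng = rng or np.random.default_rng()
    fake_mmp = generator.predict(X_batch, verbose=0)   # MMP predictions for the same conditions
    real_mmp = np.asarray(y_batch, dtype="float32").reshape(-1, 1)
    d_loss_fake = discriminator.train_on_batch([X_batch, fake_mmp],
                                               smooth_labels(0, len(X_batch), rng))
    d_loss_real = discriminator.train_on_batch([X_batch, real_mmp],
                                               smooth_labels(1, len(X_batch), rng))
    return d_loss_fake, d_loss_real
```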
Fig. 4 is a training flowchart of the convolutional generator according to an embodiment of the present invention. As shown in Fig. 4, in one embodiment of the present invention, training the network weights of the convolutional generator on the selected training samples within the combined model consisting of the convolutional discriminator and the convolutional generator in step S202 above specifically comprises steps S401 to S403.
Step S401: for each selected training sample, input the first data of the sample into the convolutional generator to obtain the MMP prediction value output by the convolutional generator for that sample.
Step S402: for each selected training sample, combine the first data of the sample with the corresponding MMP prediction value to obtain combined data, set the label of the combined data to 1, and smooth the label of the combined data.
Step S403: input the label-smoothed combined data into the convolutional discriminator to obtain the probability, output by the convolutional discriminator, that the combined data are real data.
In the present invention, after the convolutional discriminator has been trained, its weights are kept fixed, and the network weights of the convolutional generator are then trained within the combined model consisting of the convolutional discriminator and the convolutional generator. Specifically, the conditional input of the convolutional generator (i.e. the first data of the training samples) is combined with the MMP prediction value output by the convolutional generator, the label is set to 1 and smoothed, and the data are fed into the combined model. In this way the convolutional generator is trained on its own without affecting the convolutional discriminator, improving the generator's ability to produce accurate data.
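A sketch of the combined model and of one generator batch is given below, again assuming TensorFlow/Keras. Freezing the discriminator by setting `trainable = False` before building the combined model is the usual Keras idiom for this step; the discriminator should already have been compiled for its own training beforehand, and the learning rate default is taken from the optimised values reported later.

```python
import numpy as np
from tensorflow import keras


def build_combined(generator, discriminator, learning_rate=2.19e-4):
    """Stack G and D; D is frozen inside the combined model so that only the
    generator's weights are updated when the combined model is trained."""
    discriminator.trainable = False                   # freeze D inside the combined model
    condition = keras.Input(shape=generator.input_shape[1:])
    mmp_pred = generator(condition)
    validity = discriminator([condition, mmp_pred])   # D sees the pair (X, G(X))
    combined = keras.Model(condition, validity)
    combined.compile(optimizer=keras.optimizers.Adam(learning_rate),
                     loss="binary_crossentropy")
    return combined


def train_g_step(combined, X_batch, rng=None):
    """One generator batch: the (X, G(X)) pairs are labelled 1 (smoothed) so that
    the frozen discriminator pushes G towards realistic MMP values."""
    rng = rng or np.random.default_rng()
    return combined.train_on_batch(X_batch, rng.uniform(0.8, 1.0, len(X_batch)))
```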
Fig. 6 is a schematic diagram of the network structure of the convolutional generator according to an embodiment of the present invention. As shown in Fig. 6, the present invention uses a convolutional neural network (CNN) to build the convolutional generator of the conditional convolutional generative adversarial network and removes the random noise input from the generator, giving the improved convolutional generator model. The convolutional generator accepts only one input, the MMP influencing factor data, and its output is the MMP prediction value.
As shown in Fig. 6, in one embodiment of the present invention, the convolutional generator consists of multiple convolutional neural network layers.
In one embodiment of the present invention, before the MMP influencing factor data are input into the convolutional layers of the generator, the one-dimensional MMP influencing factor data also need to be converted into the two-dimensional matrix form that the convolutional layers can accept.
In one embodiment of the present invention, the hyperparameters of the convolutional generator specifically include: the number of convolutional layers A1, the number of convolution kernels per convolutional layer B1, the kernel size C1, and the initial learning rate G1 of the optimizer in the convolutional generator.
In one embodiment of the present invention, the initial learning rate of the optimizer in the convolutional generator is specifically the initial learning rate of the Adam optimizer in the convolutional generator.
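A possible Keras sketch of the generator, using the optimised hyperparameters reported later in this description (one convolutional layer with 26 kernels of size 2, ReLU activation, Adam with an initial learning rate of 0.0002190), is shown below. Treating the reshaped factor vector as a length-10 sequence for a 1-D convolution and closing the network with a Flatten + Dense(1) head are assumptions, since the patent only states that the generator is built from convolutional layers.

```python
from tensorflow import keras
from tensorflow.keras import layers

N_FACTORS = 10  # T_R, X_vol, X_C2-4, X_C5-6, MW_C7+, y_CO2, y_C1, y_N2, y_H2S, y_HC


def build_generator(n_factors=N_FACTORS, filters=26, kernel_size=2,
                    learning_rate=2.19e-4):
    """Convolutional generator without a random-noise input: the reshaped factor
    vector is the only input, and the output is the predicted MMP."""
    condition = keras.Input(shape=(n_factors, 1), name="mmp_factors")
    x = layers.Conv1D(filters, kernel_size, activation="relu")(condition)
    x = layers.Flatten()(x)
    mmp = layers.Dense(1, name="mmp_prediction")(x)   # assumed regression head
    model = keras.Model(condition, mmp, name="conv_generator")
    # compiled here only so the validation error can be monitored directly;
    # during adversarial training the generator is updated through the combined model
    model.compile(optimizer=keras.optimizers.Adam(learning_rate), loss="mae")
    return model
```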
Fig. 7 is a schematic diagram of the network structure of the convolutional discriminator according to an embodiment of the present invention. As shown in Fig. 7, the inputs of the convolutional discriminator are MMP influencing factor data and an MMP value, the MMP value including the MMP prediction value output by the convolutional generator, and the output of the convolutional discriminator is the probability that the input data are real data. The network structure of the convolutional discriminator specifically comprises a convolutional neural network layer, a concatenation layer and a fully connected neural network layer, wherein the convolutional neural network layer preprocesses the MMP influencing factor data, the concatenation layer concatenates the preprocessed data output by the convolutional neural network layer with the MMP value, and the fully connected neural network layer processes the concatenated data output by the concatenation layer and outputs the probability that the data are real data.
As shown in Fig. 7, the convolutional discriminator accepts two inputs: the first is the condition X, i.e. the normalized MMP influencing factor data, which is preprocessed; the second is either the real MMP value Y corresponding to the condition X or the MMP prediction value Y' generated by the convolutional generator under the condition X.
As shown in Fig. 7, in one embodiment of the present invention, the convolutional discriminator is constructed as follows. Convolutional neural network layers are first set up to preprocess the condition X, i.e. the normalized MMP influencing factor data. The data preprocessed by the convolutional layers are then concatenated with the input MMP value (the real MMP value Y corresponding to condition X, or the MMP prediction value Y' generated by the convolutional generator under condition X). Finally, fully connected neural network layers are set up to process the concatenated data; the last fully connected layer has a single neuron with a sigmoid activation function and outputs the probability that the current input data are real. If the output probability is greater than 0.5, the data are judged to be real; otherwise they are fake, i.e. data generated by the convolutional generator.
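A corresponding Keras sketch of the discriminator, using the optimised hyperparameters reported later (two convolutional layers with 88 kernels of size 4, three fully connected layers with 91 neurons and a dropout rate of 0.2021 in the first two, a single sigmoid neuron in the last, Adam with an initial learning rate of 0.0009755), is given below; the exact layer ordering around the concatenation is an assumption.

```python
from tensorflow import keras
from tensorflow.keras import layers


def build_discriminator(n_factors=10, filters=88, kernel_size=4,
                        dense_units=91, dropout_rate=0.2021,
                        learning_rate=9.755e-4):
    """Two-input convolutional discriminator: the condition X is preprocessed by
    convolutional layers, concatenated with an MMP value (real Y or generated Y'),
    and mapped by fully connected layers to the probability that the pair is real."""
    condition = keras.Input(shape=(n_factors, 1), name="mmp_factors")
    mmp_value = keras.Input(shape=(1,), name="mmp_value")

    x = layers.Conv1D(filters, kernel_size, activation="relu")(condition)
    x = layers.Conv1D(filters, kernel_size, activation="relu")(x)
    x = layers.Flatten()(x)

    h = layers.Concatenate()([x, mmp_value])
    for _ in range(2):                                  # first two fully connected layers
        h = layers.Dense(dense_units, activation="relu")(h)
        h = layers.Dropout(dropout_rate)(h)
    validity = layers.Dense(1, activation="sigmoid")(h)  # P(input pair is real); > 0.5 means real

    model = keras.Model([condition, mmp_value], validity, name="conv_discriminator")
    model.compile(optimizer=keras.optimizers.Adam(learning_rate),
                  loss="binary_crossentropy")
    return model
```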
In one embodiment of the present invention, the hyperparameters of the convolutional discriminator specifically include: the number of convolutional layers A2, the number of convolution kernels per convolutional layer B2, the kernel size C2, the number of fully connected layers D2, the number of neurons per fully connected layer E2, the dropout rate of each fully connected layer F2, and the initial learning rate G2 of the optimizer in the convolutional discriminator.
In one embodiment of the present invention, the initial learning rate of the optimizer in the convolutional discriminator is specifically the initial learning rate of the Adam optimizer in the convolutional discriminator.
In one embodiment of the present invention, during the iterative training in step S202 above, a hyperparameter optimization method is also used to optimize the number of training iterations H1, the number of training samples per batch H2, the hyperparameters of the convolutional generator and the hyperparameters of the convolutional discriminator, so as to obtain an optimal parameter combination. Iterative training is then performed with the optimal parameter combination to obtain the pre-trained convolutional generator, i.e. the MMP prediction model.
In one specific embodiment of the present invention, a Bayesian hyperparameter optimization method is used to optimize the hyperparameters of the convolutional generator and the convolutional discriminator, the number of training samples per batch in each iteration (H2) and the number of training iterations (H1), searching for the parameter combination that gives the model the best prediction performance on the validation set; that combination is taken as the optimal parameter combination.
During the Bayesian optimization, several parameter combinations are first tried at random; the number of these trial evaluations can be set manually and is set to 10 here. The actual Bayesian optimization is then carried out: after the trial evaluations, each Bayesian optimization step refers to the results of the previous evaluations, i.e. the model's performance on the validation set, to choose the hyperparameters and the number of training iterations to use in the next evaluation. The number of true Bayesian optimization steps is set to 40 here.
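The patent does not name a Bayesian optimization library; the sketch below uses scikit-optimize's gp_minimize as one possible choice, with 10 random initial points followed by 40 model-guided evaluations (50 calls in total) to match the counts given above. The search-space bounds and the objective body are illustrative placeholders.

```python
from skopt import gp_minimize
from skopt.space import Integer, Real

# Illustrative search space covering part of the hyperparameters listed above
space = [
    Integer(1, 3,     name="g_conv_layers"),    # A1
    Integer(8, 128,   name="g_filters"),        # B1
    Integer(2, 5,     name="g_kernel_size"),    # C1
    Real(1e-4, 1e-2,  prior="log-uniform", name="g_learning_rate"),  # G1
    Integer(100, 600, name="epochs"),           # H1
    Integer(16, 64,   name="batch_size"),       # H2
]


def objective(params):
    """Build the networks with the proposed hyperparameters, run the iterative
    training, and return the generator's validation-set error (to be minimised)."""
    raise NotImplementedError  # would call build_generator / build_discriminator / train_cgan


# 10 random trial evaluations followed by 40 model-guided steps (50 calls in total):
# result = gp_minimize(objective, space, n_initial_points=10, n_calls=50, random_state=0)
# best_parameter_combination = result.x
```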
In one embodiment of the present invention, the MMP prediction model (i.e. the convolutional generator) is built with the optimal parameter combination obtained through Bayesian hyperparameter optimization. This model is the one used to predict the MMP of a new reservoir: the corresponding MMP value is predicted by inputting the MMP influencing factor data of the new reservoir.
In one embodiment of the present invention, the optimal parameter combination obtained by Bayesian hyperparameter optimization may be as follows:
In the convolutional generator network, the number of convolutional layers A1 is set to 1, the number of convolution kernels per convolutional layer B1 is set to 26, the kernel size C1 is set to 2, and all activation functions are ReLU. The initial learning rate G1 of the optimizer in the convolutional generator is set to 0.0002190.
In the convolutional discriminator network, the number of convolutional layers A2 is set to 2, the number of convolution kernels per convolutional layer B2 is set to 88, the kernel size C2 is set to 4, and all activation functions are ReLU. The number of fully connected layers D2 is set to 3; the first two layers have 91 neurons each with a dropout rate of 0.2021, and the last layer has a single neuron. The activation function of the first two fully connected layers is ReLU, and that of the last layer is sigmoid. The initial learning rate G2 of the optimizer in the convolutional discriminator is set to 0.0009755.
The number of training iterations (H1) is set to 482 after Bayesian optimization, and the number of training samples per batch in each iteration (H2) is set to 45 after Bayesian optimization.
Furthermore, based on the same training set data, MMP prediction models were also built with three other machine learning methods: a fully connected neural network, a support vector machine and a conditional fully connected neural network. The structure of each model was optimized using the Bayesian algorithm and the validation data set. Finally, the prediction accuracy of each optimized model was evaluated on the same test set data, on which none of the models had been trained.
Table 1. Mean absolute percentage error of each MMP prediction model on the test sample set
As can be seen from Table 1, compared with the prediction results of the MMP models built with the fully connected neural network, the support vector machine and the conditional fully connected neural network, the MMP prediction model of the present invention based on the improved conditional convolutional generative adversarial network improves on the test-set error by 3, 10 and 4 percentage points respectively, making it the most accurate of the four machine learning methods. This demonstrates the strong fitting ability of the improved conditional convolutional generative adversarial network of the present invention, which achieves higher prediction accuracy than the fully connected neural network, the support vector machine and the conditional fully connected neural network, together with strong generalization ability.
Figs. 10, 11 and 12 show the curves of the MMP predicted by the MMP prediction model built with the improved conditional convolutional generative adversarial network as functions of temperature, the mole fraction of N2 in the CO2 stream and the mole fraction of H2S in the CO2 stream, respectively. It can be seen that the predicted MMP increases with temperature and with the mole fraction of N2 in the CO2 stream, and decreases as the mole fraction of H2S in the CO2 stream increases. These trends agree with the actual physical behavior, demonstrating the reliability and effectiveness of the model of the present invention, which can therefore be used for MMP prediction and for analysis of influencing factors.
As can be seen from the above embodiments, the MMP prediction method based on a conditional convolutional generative adversarial network of the present invention achieves at least the following beneficial effects:
1. The present invention combines the conditional convolutional generative adversarial network, a machine learning method, with reservoir MMP prediction. This is a new idea and method for MMP prediction, sets a precedent for applying conditional convolutional generative adversarial networks to MMP prediction, and is of great significance for reservoir MMP prediction and the design of reservoir development plans.
2. The present invention adapts the conditional convolutional generative adversarial network to the MMP prediction scenario. First, the generator is improved by removing its random noise input; then the labels of the real and fake data are smoothed. Together these changes improve the prediction accuracy of the model, yielding the improved conditional convolutional generative adversarial network. The experimental results show that the improved network not only improves the MMP prediction accuracy but also accurately reflects how the MMP varies with each influencing factor, making it more widely applicable.
Overall, the model of the present invention is simple to build, computationally efficient and highly accurate, and it is comprehensive and widely applicable, giving it broad application prospects.
需要说明的是,在附图的流程图示出的步骤可以在诸如一组计算机可执行指令的计算机系统中执行,并且,虽然在流程图中示出了逻辑顺序,但是在某些情况下,可以以不同于此处的顺序执行所示出或描述的步骤。It should be noted that the steps shown in the flowcharts of the accompanying drawings may be executed in a computer system, such as a set of computer-executable instructions, and, although a logical sequence is shown in the flowcharts, in some cases, Steps shown or described may be performed in an order different from that herein.
Based on the same inventive concept, an embodiment of the present invention further provides an MMP prediction apparatus based on a conditional convolutional generative adversarial network, which can be used to implement the MMP prediction method based on a conditional convolutional generative adversarial network described in the above embodiments, as set out in the embodiments below. Since the principle by which the MMP prediction apparatus solves the problem is similar to that of the MMP prediction method, reference may be made to the embodiments of the method for the embodiments of the apparatus, and repeated descriptions are omitted. As used below, the term "unit" or "module" may be a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the following embodiments is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
FIG. 13 is a first structural block diagram of an MMP prediction apparatus based on a conditional convolutional generative adversarial network according to an embodiment of the present invention. As shown in FIG. 13, in one embodiment of the present invention, the MMP prediction apparatus based on the conditional convolutional generative adversarial network comprises:
a data acquisition unit 1, configured to acquire MMP influencing factor data of a target oil reservoir;
a prediction unit 2, configured to input the MMP influencing factor data into a pre-trained convolutional generator and obtain the MMP prediction value of the target oil reservoir output by the pre-trained convolutional generator, wherein the convolutional generator is built from a convolutional neural network and contains no random noise input, the pre-trained convolutional generator is obtained by iteratively training the convolutional generator multiple times on a training sample set, and each training sample in the training sample set contains the MMP value of an oil reservoir and the MMP influencing factor data of that oil reservoir.
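A minimal sketch of such a convolutional generator, assuming a Keras/TensorFlow implementation, might look as follows; the layer counts, filter numbers, and kernel sizes are illustrative choices, since the disclosure does not fix them.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_convolutional_generator(n_factors: int) -> tf.keras.Model:
    """Conditional convolutional generator with no random-noise input: it maps
    the MMP influencing factors directly to a predicted MMP value."""
    inputs = layers.Input(shape=(n_factors, 1))        # factors as a 1-D sequence
    x = layers.Conv1D(32, kernel_size=3, padding="same", activation="relu")(inputs)
    x = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    x = layers.Dense(64, activation="relu")(x)
    outputs = layers.Dense(1, activation="linear")(x)   # predicted MMP
    return tf.keras.Model(inputs, outputs, name="conv_generator")

# Prediction unit 2 (sketch): feed the influencing factor data acquired by
# unit 1 into the pre-trained generator.
# mmp_pred = generator.predict(factors.reshape(1, -1, 1), verbose=0)
```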
FIG. 14 is a second structural block diagram of an MMP prediction apparatus based on a conditional convolutional generative adversarial network according to an embodiment of the present invention. As shown in FIG. 14, in one embodiment of the present invention, the MMP prediction apparatus based on the conditional convolutional generative adversarial network further comprises:
a training sample set acquisition unit 3, configured to acquire the training sample set;
a model training unit 4, configured to perform H1 iterations of training according to the training sample set to obtain the pre-trained convolutional generator, wherein each iteration of training is divided into multiple batches; for each batch, H2 training samples are first selected from the training sample set, the network weights of a convolutional discriminator are then trained on the selected training samples, and finally the network weights of the convolutional generator are trained on the selected training samples within a combined model composed of the convolutional discriminator and the convolutional generator, the convolutional discriminator being built from a combination of a convolutional neural network and a fully connected neural network, and H1 and H2 both being positive integers.
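The batch-wise training described for the model training unit might be organized along the lines of the following skeleton, assuming Keras-style `train_on_batch` calls, a two-input discriminator that takes (factors, MMP) pairs, and illustrative smoothed label values; none of these choices is mandated by the disclosure.

```python
import numpy as np

def train_cgan(generator, discriminator, combined, X, y, H1=482, H2=45):
    """Skeleton of the batch-wise adversarial training performed by the model
    training unit.  `discriminator` scores (factors, MMP) pairs; `combined` is
    the generator followed by the frozen discriminator.  X holds the first data
    (influencing factors) and y the second data (measured MMP), already shaped
    for the networks in use.  Loss functions and label values are illustrative."""
    n = len(X)
    real_label = np.full((H2, 1), 0.9)   # label 1 after smoothing
    fake_label = np.full((H2, 1), 0.1)   # label 0 after smoothing
    for _ in range(H1):                              # H1 training iterations
        for _ in range(max(1, n // H2)):             # batches of H2 samples
            idx = np.random.choice(n, H2, replace=False)
            xb, yb = X[idx], y[idx]
            # (1) train the convolutional discriminator on real and fake pairs
            y_fake = generator.predict(xb, verbose=0)
            discriminator.train_on_batch([xb, yb], real_label)
            discriminator.train_on_batch([xb, y_fake], fake_label)
            # (2) train the convolutional generator through the combined model
            combined.train_on_batch(xb, real_label)
    return generator
```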
In one embodiment of the present invention, each training sample consists of first data and second data, the first data being the MMP influencing factor data of an oil reservoir and the second data being the MMP value of that oil reservoir. In one embodiment of the present invention, the model training unit specifically comprises:
a first label setting module, configured to, for each selected training sample, combine the MMP prediction value output by the convolutional generator from the first data of that training sample with the first data of that training sample to obtain combined data, set the label of the combined data to 0, and smooth the label of the combined data;
a second label setting module, configured to set the label of each selected training sample to 1 and smooth the labels of the training samples;
a convolutional discriminator training module, configured to input the label-smoothed combined data and the label-smoothed training samples into the convolutional discriminator and train the network weights of the convolutional discriminator.
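Label smoothing itself can be implemented as a small helper, sketched below under the assumption of a smoothing factor of 0.1; the disclosure states only that the labels are smoothed, not the factor used.

```python
import numpy as np

def smooth_labels(labels, eps=0.1):
    """Label smoothing for the discriminator targets: hard 1/0 labels become
    1 - eps and eps (0.9 and 0.1 for eps = 0.1), which keeps the discriminator
    from becoming over-confident.  The value of eps is an assumption."""
    labels = np.asarray(labels, dtype=float)
    return labels * (1.0 - eps) + (1.0 - labels) * eps

# Discriminator step for one batch (sketch): real pairs (factors, measured MMP)
# get the smoothed label 1, fake pairs (factors, generated MMP) the smoothed label 0.
# d_loss_real = discriminator.train_on_batch([xb, yb], smooth_labels(np.ones((len(xb), 1))))
# d_loss_fake = discriminator.train_on_batch([xb, y_fake], smooth_labels(np.zeros((len(xb), 1))))
```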
In one embodiment of the present invention, the model training unit specifically comprises:
a prediction value acquisition module, configured to, for each selected training sample, input the first data of that training sample into the convolutional generator and obtain the MMP prediction value corresponding to that training sample output by the convolutional generator;
a combined data acquisition module, configured to, for each selected training sample, combine the first data of that training sample with the MMP prediction value corresponding to that training sample to obtain combined data, set the label of the combined data to 1, and smooth the label of the combined data;
a convolutional generator training module, configured to input the label-smoothed combined data into the convolutional discriminator and obtain the probability, output by the convolutional discriminator, that the combined data are real data.
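The combined model through which the generator is updated, with the discriminator weights frozen, could be assembled as in the following sketch; the optimizer, the loss function, and the exact input wiring of the discriminator are illustrative assumptions rather than requirements of the disclosure.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_combined(generator, discriminator):
    """Generator followed by the frozen discriminator: the generated MMP is
    paired with its influencing factors and scored by the discriminator as the
    probability of being real data; only the generator's weights are updated."""
    discriminator.trainable = False
    factors = layers.Input(shape=generator.input_shape[1:])
    mmp_fake = generator(factors)
    p_real = discriminator([factors, mmp_fake])
    combined = tf.keras.Model(factors, p_real, name="combined")
    combined.compile(optimizer="adam", loss="binary_crossentropy")
    return combined

# One generator update (sketch): the target is the smoothed label 1, so the
# generator is pushed to produce MMP values the discriminator accepts as real.
# g_loss = combined.train_on_batch(xb, smooth_labels(np.ones((len(xb), 1))))
```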
Optionally, the model training unit further comprises:
a hyperparameter optimization module, configured to optimize the number of training iterations H1, the number of training samples H2, the hyperparameters of the convolutional generator, and the hyperparameters of the convolutional discriminator by means of a hyperparameter optimization method, to obtain an optimal parameter combination.
To achieve the above object, according to another aspect of the present application, a computer device is further provided. As shown in FIG. 15, the computer device includes a memory, a processor, a communication interface, and a communication bus. The memory stores a computer program that can run on the processor, and the processor, when executing the computer program, implements the steps of the methods in the above embodiments.
The processor may be a central processing unit (CPU). The processor may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, or a combination of the above types of chips.
The memory, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and units, such as the program units corresponding to the above method embodiments of the present invention. The processor executes the various functional applications of the processor and performs data processing by running the non-transitory software programs, instructions, and units stored in the memory, that is, implements the methods in the above method embodiments.
The memory may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created by the processor, and the like. In addition, the memory may include a high-speed random access memory and may also include a non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory may optionally include memories arranged remotely from the processor, and these remote memories may be connected to the processor through a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
The one or more units are stored in the memory and, when executed by the processor, perform the methods in the above embodiments.
The specific details of the above computer device can be understood with reference to the corresponding descriptions and effects in the above embodiments and are not repeated here.
To achieve the above object, according to another aspect of the present application, a computer-readable storage medium is further provided. The computer-readable storage medium stores a computer program, and the computer program, when executed by a computer processor, implements the steps of the above MMP prediction method based on the conditional convolutional generative adversarial network. Those skilled in the art will understand that all or part of the processes in the methods of the above embodiments may be implemented by instructing relevant hardware through a computer program, and the program may be stored in a computer-readable storage medium; when executed, the program may include the processes of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD), and may also include a combination of the above types of memories.
To achieve the above object, according to another aspect of the present application, a computer program product is further provided, comprising a computer program/instructions which, when executed by a processor, implement the steps of the above MMP prediction method based on the conditional convolutional generative adversarial network.
Obviously, those skilled in the art will understand that each of the above modules or steps of the present invention may be implemented by a general-purpose computing device; they may be concentrated on a single computing device or distributed over a network formed by multiple computing devices; optionally, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by the computing device, or they may be fabricated into individual integrated circuit modules, or several of the modules or steps may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention. For those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210055932.0A CN114399119B (en) | 2022-01-18 | 2022-01-18 | MMP prediction method and device based on condition convolution generation type countermeasure network |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN114399119A true CN114399119A (en) | 2022-04-26 |
| CN114399119B CN114399119B (en) | 2025-07-29 |
Family
ID=81230868
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202210055932.0A Active CN114399119B (en) | 2022-01-18 | 2022-01-18 | MMP prediction method and device based on condition convolution generation type countermeasure network |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN114399119B (en) |
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109741328A (en) * | 2019-02-02 | 2019-05-10 | 东北大学 | A method for vehicle appearance quality detection based on generative adversarial network |
| WO2021007812A1 (en) * | 2019-07-17 | 2021-01-21 | 深圳大学 | Deep neural network hyperparameter optimization method, electronic device and storage medium |
| CN111814347A (en) * | 2020-07-20 | 2020-10-23 | 中国石油大学(华东) | Method and system for predicting gas channeling in a reservoir |
| CN113435128A (en) * | 2021-07-15 | 2021-09-24 | 中国石油大学(北京) | Oil and gas reservoir yield prediction method and device based on condition generation type countermeasure network |
Non-Patent Citations (4)
| Title |
|---|
| Ren Shuangshuang; Yang Shenglai; Shen Fei: "Prediction of minimum miscibility pressure with a BP neural network", Fault-Block Oil & Gas Field, no. 02, 25 March 2010 (2010-03-25), pages 216 - 218 * |
| Liu Xiaojie: "Experiments and neural network prediction of the minimum miscibility pressure of CO2-alkane systems", China Master's Theses Full-text Database, Engineering Science and Technology I, no. 1, 15 January 2022 (2022-01-15), pages 019 - 58 * |
| Li Hu et al.: "Prediction of the minimum miscibility pressure of CO2 flooding based on a generalized regression neural network", Lithologic Reservoirs, vol. 24, no. 1, 28 February 2012 (2012-02-28), pages 108 - 111 * |
| Wang Shuai; Wang Taichao; Gan Yunyan; Li Hao; Xia Yang: "A review of methods for predicting the minimum miscibility pressure of CO2 flooding", Petrochemical Industry Technology, no. 11, 28 November 2019 (2019-11-28), pages 52 - 53 * |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115034368A (en) * | 2022-06-10 | 2022-09-09 | 小米汽车科技有限公司 | Vehicle-mounted model training method and device, electronic equipment, storage medium and chip |
| CN115034368B (en) * | 2022-06-10 | 2023-09-29 | 小米汽车科技有限公司 | Vehicle model training method and device, electronic equipment, storage medium and chip |
Also Published As
| Publication number | Publication date |
|---|---|
| CN114399119B (en) | 2025-07-29 |
Similar Documents
| Publication | Title | Publication Date |
|---|---|---|
| Wu et al. | Object detection based on RGC mask R‐CNN | |
| CN111819580B (en) | Neural architecture search for dense image prediction tasks | |
| CN117892774A (en) | Neural architecture search for convolutional neural networks | |
| Phillipson | Quantum Machine Learning: Benefits and Practical Examples. | |
| CN117669700B (en) | Deep learning model training method and deep learning model training system | |
| CN113435128B (en) | Oil and gas reservoir production prediction method and device based on conditional generative adversarial network | |
| CN114358319B (en) | Machine learning framework-based classification method and related device | |
| Singh et al. | Edge proposal sets for link prediction | |
| CN113160795B (en) | Language feature extraction model training method, device, equipment and storage medium | |
| Wu et al. | Optimized deep learning framework for water distribution data-driven modeling | |
| Yu et al. | Boosted dynamic neural networks | |
| CN114358216A (en) | Quantum clustering method and related device based on machine learning framework | |
| CN116822742A (en) | A power load forecasting method based on dynamic decomposition-reconstruction integrated processing | |
| Liu et al. | Deep Boltzmann machines aided design based on genetic algorithms | |
| CN114169240B (en) | MMP prediction method and device based on conditional generative adversarial network | |
| CN116826734A (en) | A method and device for predicting photovoltaic power generation based on multiple input models | |
| CN117593275A (en) | A medical image segmentation system | |
| CN116305939A (en) | High-precision inversion method and system for carbon water flux of land ecological system and electronic equipment | |
| CN120356019A (en) | Image classification method based on quantum circuit convolution | |
| CN114399119A (en) | MMP prediction method and device based on conditional convolution generative adversarial network | |
| Wu et al. | Temporally correlated task scheduling for sequence learning | |
| CN114595641A (en) | Method and system for solving combined optimization problem | |
| CN117971354B (en) | Heterogeneous acceleration method, device, equipment and storage medium based on end-to-end learning | |
| Liu et al. | SuperPruner: Automatic neural network pruning via super network | |
| CN116911288B (en) | Discrete text recognition method based on natural language processing technology |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |