
CN111310566A - A method and system for wildfire detection based on static and dynamic multi-feature fusion - Google Patents


Info

Publication number
CN111310566A
Authority
CN
China
Prior art keywords
static
dynamic
features
sample
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010046385.0A
Other languages
Chinese (zh)
Inventor
李永祥
张申
杨罡
李强
晋涛
白耀鹏
刘志祥
张振宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Electric Power Research Institute of State Grid Shanxi Electric Power Co Ltd
Original Assignee
Shanxi Zhenzhong Electric Power Co ltd
Electric Power Research Institute of State Grid Shanxi Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanxi Zhenzhong Electric Power Co ltd, Electric Power Research Institute of State Grid Shanxi Electric Power Co Ltd filed Critical Shanxi Zhenzhong Electric Power Co ltd
Priority to CN202010046385.0A
Publication of CN111310566A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/254 Analysis of motion involving subtraction of images
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/40 Analysis of texture
    • G06T7/41 Analysis of texture based on statistical description of texture
    • G06T7/45 Analysis of texture based on statistical description of texture using co-occurrence matrix computation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/30 Noise filtering


Abstract

The invention discloses a wildfire detection method and system based on static and dynamic multi-feature fusion. The method first performs frame sampling on a wildfire video to generate an initial image sequence and preprocesses it to produce a preprocessed image sequence; divides the preprocessed image sequence into static samples and dynamic samples; extracts static features of the static samples with a convolutional neural network and extracts dynamic features of the dynamic samples; fuses the static and dynamic features using kernel canonical correlation analysis to generate fused features; inputs the fused features into a support vector machine classifier for training, yielding a trained wildfire detection classifier; and uses the trained classifier for wildfire detection. When extracting image features from video frames, the method extracts static and dynamic features of the preprocessed images separately and then fuses them, which allows wildfire features to be captured more comprehensively and improves the precision and accuracy of wildfire detection.

Figure 202010046385

Description

A method and system for wildfire detection based on static and dynamic multi-feature fusion

Technical Field

The present invention relates to the technical field of wildfire detection, and in particular to a method and system for wildfire detection based on static and dynamic multi-feature fusion.

Background

With the rapid development of the power grid and the continuous expansion of its structure, transmission lines inevitably pass through mountains and forests, so wildfires have become a hazard that must be guarded against when laying out the grid. Because the terrain in mountainous forest regions is complex and the area vast, manual monitoring is difficult, and automatic wildfire monitoring can greatly reduce the waste of manpower. The rapid development of image processing and computer vision provides a technical basis for automatic wildfire detection by computers.

Most current wildfire identification and detection methods either use a convolutional neural network alone or apply traditional feature extraction methods to the moving regions of the image. At the same time, because wildfire datasets are scarce and the backgrounds and weather conditions at fire locations are complex, current wildfire detection suffers from shortcomings such as incomplete feature extraction and low recognition accuracy.

Summary of the Invention

The purpose of the present invention is to provide a wildfire detection method and system based on static and dynamic multi-feature fusion, so as to solve the problems that existing wildfire identification and detection methods extract wildfire features incompletely and have low recognition accuracy.

To achieve the above purpose, the present invention provides the following scheme:

A wildfire detection method based on static and dynamic multi-feature fusion, the method comprising:

acquiring an initial image sequence generated by frame sampling of a wildfire video, the initial image sequence comprising a plurality of consecutively captured original images;

preprocessing the initial image sequence to generate a preprocessed image sequence;

dividing the preprocessed image sequence into static samples and dynamic samples;

extracting static features of the static samples by using a convolutional neural network;

extracting dynamic features of the dynamic samples;

fusing the static features and the dynamic features by using kernel canonical correlation analysis to generate fused features;

inputting the fused features into a support vector machine classifier for training to generate a trained wildfire detection classifier;

performing wildfire detection by using the trained wildfire detection classifier.

Optionally, preprocessing the initial image sequence to generate a preprocessed image sequence specifically comprises:

denoising each original image in the initial image sequence by median filtering to generate denoised images;

performing image enhancement on the denoised images by the single-scale Retinex method to generate enhanced images, a plurality of consecutive enhanced images constituting the preprocessed image sequence.

Optionally, dividing the preprocessed image sequence into static samples and dynamic samples specifically comprises:

dividing the plurality of consecutive enhanced images in the preprocessed image sequence into training samples and test samples at a ratio of 4:1;

dividing the plurality of consecutive enhanced images in the training samples into static samples and dynamic samples at a ratio of 3:2.

Optionally, extracting the static features of the static samples by using a convolutional neural network specifically comprises:

inputting the static samples into a VGG-19 convolutional neural network and obtaining the feature maps output by the VGG-19 convolutional neural network;

performing inner-product operations between the features of different channels according to the height, width and number of channels of the feature map to obtain a feature matrix;

vectorizing the feature matrix into the static features of the static samples.

Optionally, extracting the dynamic features of the dynamic samples specifically comprises:

extracting motion regions in the dynamic samples by using the single Gaussian background model method;

extracting flame texture features from the motion regions by using a gray-level co-occurrence matrix;

extracting area-change features and flicker features of the motion regions;

combining the flame texture features, the area-change features and the flicker features by weighting into the dynamic features of the dynamic samples.

A wildfire detection system based on static and dynamic multi-feature fusion, the system comprising:

an initial image sequence acquisition module, configured to acquire an initial image sequence generated by frame sampling of a wildfire video, the initial image sequence comprising a plurality of consecutively captured original images;

an image preprocessing module, configured to preprocess the initial image sequence to generate a preprocessed image sequence;

a static and dynamic sample division module, configured to divide the preprocessed image sequence into static samples and dynamic samples;

a static feature extraction module, configured to extract static features of the static samples by using a convolutional neural network;

a dynamic feature extraction module, configured to extract dynamic features of the dynamic samples;

a static and dynamic feature fusion module, configured to fuse the static features and the dynamic features by using kernel canonical correlation analysis to generate fused features;

a model training module, configured to input the fused features into a support vector machine classifier for training to generate a trained wildfire detection classifier;

a wildfire detection module, configured to perform wildfire detection by using the trained wildfire detection classifier.

Optionally, the image preprocessing module specifically comprises:

a denoising unit, configured to denoise each original image in the initial image sequence by median filtering to generate denoised images;

an enhancement unit, configured to perform image enhancement on the denoised images by the single-scale Retinex method to generate enhanced images, a plurality of consecutive enhanced images constituting the preprocessed image sequence.

Optionally, the static and dynamic sample division module specifically comprises:

a training sample division unit, configured to divide the plurality of consecutive enhanced images in the preprocessed image sequence into training samples and test samples at a ratio of 4:1;

a static and dynamic sample division unit, configured to divide the plurality of consecutive enhanced images in the training samples into static samples and dynamic samples at a ratio of 3:2.

Optionally, the static feature extraction module specifically comprises:

a feature map acquisition unit, configured to input the static samples into a VGG-19 convolutional neural network and obtain the feature maps output by the VGG-19 convolutional neural network;

a feature matrix calculation unit, configured to perform inner-product operations between the features of different channels according to the height, width and number of channels of the feature map to obtain a feature matrix;

a static feature extraction unit, configured to vectorize the feature matrix into the static features of the static samples.

Optionally, the dynamic feature extraction module specifically comprises:

a motion region extraction unit, configured to extract motion regions in the dynamic samples by using the single Gaussian background model method;

a flame texture feature extraction unit, configured to extract flame texture features from the motion regions by using a gray-level co-occurrence matrix;

an area-change and flicker feature extraction unit, configured to extract area-change features and flicker features of the motion regions;

a dynamic feature extraction unit, configured to combine the flame texture features, the area-change features and the flicker features by weighting into the dynamic features of the dynamic samples.

According to specific embodiments provided by the present invention, the present invention discloses the following technical effects:

The present invention provides a wildfire detection method and system based on static and dynamic multi-feature fusion. The method first performs frame sampling on a wildfire video to generate an initial image sequence; preprocesses the initial image sequence to generate a preprocessed image sequence; divides the preprocessed image sequence into static samples and dynamic samples; extracts static features of the static samples by using a convolutional neural network and extracts dynamic features of the dynamic samples; fuses the static features and the dynamic features by using kernel canonical correlation analysis to generate fused features; inputs the fused features into a support vector machine classifier for training to generate a trained wildfire detection classifier; and performs wildfire detection by using the trained classifier. When extracting image features from video frames, the method extracts static and dynamic features of the preprocessed images separately and then fuses them, which allows wildfire features to be captured more comprehensively and improves the precision and accuracy of wildfire detection.

Brief Description of the Drawings

In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required in the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can also be obtained from these drawings without creative effort.

FIG. 1 is a flowchart of the wildfire detection method based on static and dynamic multi-feature fusion provided by the present invention;

FIG. 2 is a schematic diagram of the wildfire detection method based on static and dynamic multi-feature fusion provided by the present invention;

FIG. 3 is a block diagram of the image preprocessing process provided by the present invention;

FIG. 4 is a block diagram of the static feature extraction process provided by the present invention;

FIG. 5 is a block diagram of the dynamic feature extraction process provided by the present invention;

FIG. 6 is a block diagram of the static and dynamic feature fusion process provided by the present invention.

Detailed Description

The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.

The purpose of the present invention is to provide a wildfire detection method and system based on static and dynamic multi-feature fusion, so as to solve the problems that existing wildfire identification and detection methods extract wildfire features incompletely and have low recognition accuracy.

In order to make the above objects, features and advantages of the present invention easier to understand, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.

FIG. 1 is a flowchart, and FIG. 2 a schematic diagram, of the wildfire detection method based on static and dynamic multi-feature fusion provided by the present invention. Referring to FIG. 1 and FIG. 2, the method specifically includes:

Step 101: Acquire an initial image sequence generated by frame sampling of the wildfire video.

The present invention performs frame sampling on the collected wildfire video to obtain an initial image sequence; the initial image sequence includes a plurality of consecutively captured original images. The initial image sequence is divided into training samples and test samples at a ratio of 4:1 in preparation for the later classifier training. Note that when dividing training and test samples, the coherence of the video must not be broken: 4/5 of the consecutive original images are used as training samples and the remaining 1/5 as test samples, so as to facilitate the extraction of dynamic features.

Step 102: Preprocess the initial image sequence to generate a preprocessed image sequence.

FIG. 3 is a block diagram of the image preprocessing process provided by the present invention. Referring to FIG. 3, step 102 specifically includes:

Step 2.1: Denoise each original image in the initial image sequence by median filtering to generate denoised images.

All original images in the initial image sequence are preprocessed, and median filtering is used to denoise them: the pixel at each point of the original image is set to the median of all pixels in the 3×3 neighborhood centered on it, generating the denoised image.
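As an illustrative sketch only (not part of the patent), the 3×3 median denoising step described above could be written in NumPy as follows; the function name `median_denoise` and the edge-padding choice at the image borders are my own assumptions:

```python
import numpy as np

def median_denoise(img: np.ndarray) -> np.ndarray:
    """Set each pixel to the median of its 3x3 neighborhood (borders edge-padded)."""
    padded = np.pad(img, 1, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + 3, j:j + 3])
    return out
```

An isolated salt-noise pixel is outvoted by its eight neighbors and removed, while constant regions pass through unchanged, which is exactly why median filtering suits impulse noise.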

Step 2.2: Perform image enhancement on the denoised images by the single-scale Retinex method to generate enhanced images; a plurality of consecutive enhanced images constitute the preprocessed image sequence.

The single-scale Retinex method is used to enhance the denoised images. The specific steps are as follows:

① Read each denoised frame as the image to be enhanced I(x,y), and convert the pixel values of I(x,y) from integer to double (double-precision floating point); subsequent processing operates on the double-type image I(x,y). The conversion to double is made to facilitate the subsequent numerical computation.

② The image to be enhanced I(x,y) can be expressed as the product of a reflection image R(x,y) and an incident image L(x,y): I(x,y) = R(x,y) × L(x,y). Converting to the logarithmic domain gives:

r(x,y) = log R(x,y) = log I(x,y) - log L(x,y)

where r(x,y) is the output image.

③ The incident image L(x,y) can be expressed as the convolution of the original image I(x,y) with a Gaussian function G(x,y,c): L(x,y) = I(x,y)*G(x,y,c), where c is the Gaussian scale, taken here as 85. The Gaussian function is

G(x,y,c) = K exp(-(x^2 + y^2)/c^2)

where K is a normalization constant whose value is chosen so that ∫∫G(x,y,c)dxdy = 1 is satisfied.

④ After K is found, r(x,y) can be obtained and then converted back to the real domain.

That is, the image enhancement process applied to the denoised image in the present invention is: first compute the value of K from G(x,y,c) = K exp(-(x^2 + y^2)/c^2) and the condition ∫∫G(x,y,c)dxdy = 1; then obtain G(x,y,c) from K; further obtain the incident image L(x,y) from L(x,y) = I(x,y)*G(x,y,c); and then obtain the output image from

r(x,y) = log I(x,y) - log L(x,y).

Since the logarithmic transform r(x,y) = log R(x,y) is what converts R(x,y) into r(x,y), only an inverse logarithmic (exponential) transform is needed to convert the logarithmic r(x,y) back to the original real domain, i.e., R(x,y) = exp(r(x,y)). The resulting R(x,y) is the pixel value of the enhanced reflection image, and it replaces the original pixel value I(x,y); R(x,y) is the enhanced image.
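The four Retinex steps above can be sketched as follows. This is a minimal illustrative implementation, not the patent's own code: the finite kernel support `ksize` and the +1 offset that avoids log(0) are my own assumptions, while the scale c = 85 follows the text:

```python
import numpy as np

def gaussian_kernel(size: int, c: float) -> np.ndarray:
    """G(x, y, c) = K * exp(-(x^2 + y^2) / c^2), with K chosen so the kernel sums to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / c ** 2)
    return g / g.sum()  # dividing by the sum plays the role of the constant K

def single_scale_retinex(img: np.ndarray, c: float = 85.0, ksize: int = 15) -> np.ndarray:
    """r = log I - log(I * G); the enhanced image is R = exp(r)."""
    I = img.astype(np.float64) + 1.0            # offset avoids log(0); assumption
    k = gaussian_kernel(ksize, c)               # kernel is symmetric, so correlation == convolution
    pad = ksize // 2
    padded = np.pad(I, pad, mode="edge")
    L = np.empty_like(I)
    h, w = I.shape
    for i in range(h):
        for j in range(w):
            L[i, j] = np.sum(padded[i:i + ksize, j:j + ksize] * k)
    r = np.log(I) - np.log(L)                   # log-domain reflection image r(x, y)
    return np.exp(r)                            # back to the real domain: R(x, y)
```

On a uniformly lit (constant) image, L equals I and the output is flat, illustrating that Retinex suppresses the illumination component and keeps only the reflectance.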

The core of Retinex is to obtain the "reflection image" that carries the essential information. By separating out the incident image, the influence of illumination on the image can be weakened, the detail information of the image can be enhanced, and content representing the essential information of the image can be obtained.

The role of the single-scale Retinex method in the present invention: Retinex can strike a balance among color constancy, dynamic range compression and edge enhancement, weakening the influence of uneven illumination and enhancing the detail information of the image; it also has a certain dehazing effect, giving some tolerance to the weather conditions encountered in outdoor video monitoring.

At this point the image preprocessing stage ends; a plurality of consecutive enhanced images R(x,y) constitute the preprocessed image sequence.

Step 103: Divide the preprocessed image sequence into static samples and dynamic samples.

The present invention divides the plurality of consecutive enhanced images R(x,y) in the preprocessed image sequence into training samples and test samples at a ratio of 4:1, and then divides the plurality of consecutive enhanced images in the training samples into static samples and dynamic samples at a ratio of 3:2 (again keeping the video coherent), for static and dynamic feature extraction respectively.

Likewise, the test samples are also divided into static samples and dynamic samples at a ratio of 3:2, again keeping the video coherent, so that the static and dynamic features of the test samples can be extracted separately. When the support vector machine classifier is later trained, the fused features of the training samples are used, while during testing the fused features extracted from the test samples are used. Therefore, static and dynamic features must be extracted separately and then fused for both the training and test samples; only the subsequent use of the fused features differs.
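The two contiguous splits described above (4:1 train/test, then 3:2 static/dynamic, both preserving frame order so video coherence is kept) can be sketched as follows; the helper name `split_sequence` and the rounding rule are my own choices:

```python
def split_sequence(frames: list, ratio: tuple) -> tuple:
    """Split a frame sequence into two contiguous parts in proportion a:b, preserving order."""
    a, b = ratio
    cut = len(frames) * a // (a + b)
    return frames[:cut], frames[cut:]

# 4:1 train/test split, then 3:2 static/dynamic split of the training part,
# all on contiguous runs of frames so the video stays coherent.
frames = list(range(100))                        # stand-in for 100 enhanced frames
train, test = split_sequence(frames, (4, 1))     # 80 / 20 contiguous frames
static, dynamic = split_sequence(train, (3, 2))  # 48 / 32 contiguous frames
```

Splitting contiguously (rather than shuffling) matters because the dynamic features below are computed across consecutive frames.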

Step 104: Extract the static features of the static samples by using a convolutional neural network.

FIG. 4 is a block diagram of the static feature extraction process provided by the present invention. Referring to FIG. 4, since convolutional neural networks mine the static features of images well but have limited ability to extract dynamic features, the present invention uses a convolutional neural network to extract the static features of the static samples, which specifically includes:

Step 4.1: Input the static samples into a VGG-19 convolutional neural network and obtain the feature maps output by the VGG-19 convolutional neural network.

The static samples in the training set are fed directly into the widely adopted VGG-19 network for processing. VGG-19 comprises 16 convolutional layers with channel counts of 64, 128, 256 and 512, followed by three fully connected layers with channel counts of 4096, 4096 and 1000.

The features output by the fourth convolutional layer (low-level convolutional features) and by the last fully connected layer (high-level features) are extracted separately. The output of the convolutional neural network is denoted a^l; it takes the form of a feature map [n_H, n_W, n_C], where n_H, n_W and n_C are the height, width and number of channels of the feature map, respectively.

The rationale is that low-level convolutional features better preserve the position and spatial information of the target itself, while deep convolutional features contain more semantic information; extracting both allows the extracted features to carry position and spatial information as well as deep semantic information, achieving more comprehensive feature extraction and thus higher detection accuracy.

Step 4.2: Perform an inner product between the features of different channels, according to the height, width and channel count of the feature map, to obtain a feature matrix.

Let m = n_H, n = n_W, k = n_C. The Gram matrix can extract the texture and colour information of the image; its expression is

$$G_{k,k'} = \sum_{i=1}^{m}\sum_{j=1}^{n} a_{ijk}\, a_{ijk'}$$

where k' is another channel different from k. That is, an inner product is taken between the features of different channels, yielding the feature matrix G.

Step 4.3: Vectorize the feature matrix into the static features of the static samples.

The feature matrix G is vectorized into the static feature x_st of the static samples.

This method performs feature extraction only at the same positions across different channels, so it cannot capture enough spatial information and can only extract the global static features of the image. The point of extracting the two levels separately is that low-level convolutional features better preserve the position and spatial information of the target itself, while deep convolutional features carry more semantic information.
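The Gram-matrix computation of steps 4.1-4.3 can be sketched in NumPy as follows (the function name is illustrative, and the random array stands in for a real VGG-19 layer output):

```python
import numpy as np

def gram_static_feature(feature_map):
    """feature_map: [nH, nW, nC] output of a conv layer.
    Returns the flattened Gram matrix, where
    G[k, k'] = sum over all spatial positions (i, j) of a[i,j,k] * a[i,j,k']."""
    nH, nW, nC = feature_map.shape
    flat = feature_map.reshape(nH * nW, nC)  # each column is one channel
    G = flat.T @ flat                        # nC x nC channel inner products (step 4.2)
    return G.ravel()                         # vectorize into x_st (step 4.3)

x_st = gram_static_feature(np.random.rand(7, 7, 64))  # x_st has nC*nC = 4096 entries
```

Note how the inner product sums over spatial positions, which is why the result keeps only global texture/colour statistics and discards spatial layout, as the text above observes.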

Step 105: Extract the dynamic features of the dynamic samples.

Fig. 5 is a block diagram of the dynamic feature extraction process provided by the present invention. Referring to Fig. 5, the hand-crafted dynamic feature extraction of the present invention first locates the motion region using a Gaussian background model, then extracts the flame texture, area change and flicker features from that region. Step 105, extracting the dynamic features of the dynamic samples, specifically includes:

Step 5.1: Extract the motion region in the dynamic samples using the single-Gaussian background model method. The specific procedure is as follows:

a. Assume that the pixel value at each point of the dynamic sample images (arranged in video order) follows a Gaussian distribution. For a pixel in the image, let I(x, y, t) denote the value of pixel (x, y) at time t; then

$$P(I(x,y,t)) = \frac{1}{\sqrt{2\pi}\,\sigma_t} \exp\!\left(-\frac{\left(I(x,y,t)-\mu_t\right)^2}{2\sigma_t^2}\right)$$

where P(I(x, y, t)) is the single-Gaussian model built for image I(x, y, t), and μ_t and σ_t are the mean and standard deviation of the Gaussian distribution of pixel (x, y) at time t; μ_t is shorthand for μ_t(x, y) and σ_t for σ_t(x, y).

b. Take the first frame as the background model for data initialization:

$$\mu_0(x,y) = I(x,y,0)$$

Here t = 0, and the initial standard deviation σ_0(x, y) is set to 20.

c. Detect foreground and background pixels. A pixel (x, y) satisfying |I(x, y, t) − μ_{t−1}(x, y)| < T_h·σ_{t−1} is a foreground pixel; a pixel satisfying |I(x, y, t) − μ_{t−1}(x, y)| ≥ T_h·σ_{t−1} is a background pixel, where T_h is the chosen threshold (set to 0.75).

The purpose of detecting foreground and background pixels in step c is to separate foreground from background: the foreground region is the motion region to be extracted by the present invention, and the background region is the static region.

d. Update the background values μ_t, σ_t and σ_t², with the update formulas

$$\mu_t = (1-\alpha)\,\mu_{t-1} + \alpha\, I(x,y,t)$$
$$\sigma_t^2 = (1-\alpha)\,\sigma_{t-1}^2 + \alpha\,\left(I(x,y,t)-\mu_t\right)^2$$

where α is the update coefficient and reflects how fast the model adapts. If point (x, y) is detected as foreground, the original probability distribution in the background model should be preserved, so α should be small (here α = 0); if point (x, y) is detected as background, α should be larger (here α = 0.8) so that the background model can keep up with actual changes. That is, α = 0 when updating the background values of a foreground point (x, y), and α = 0.8 when updating those of a background point (x, y).

e. Return to step c and repeat (i.e., iteratively update the values of μ_t, σ_t and σ_t², then continue the comparison by the formula in step c, constantly judging whether each pixel belongs to foreground or background) so as to update the foreground and background models, until the whole dynamic sample image sequence has been processed; finally output the foreground image, which is the motion region of the present invention.
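The iteration of steps a-e can be sketched as below. Note that the code follows the inequality direction exactly as stated in step c (|I − μ| < T_h·σ marks foreground); the function name and the array-based formulation are assumptions of this sketch, not the patent's text.

```python
import numpy as np

def gaussian_bg_foreground(frames, th=0.75, alpha_bg=0.8, sigma0=20.0):
    """Single-Gaussian background model over an ordered frame sequence.
    Returns one boolean foreground mask per frame after the first."""
    frames = [np.asarray(f, dtype=float) for f in frames]
    mu = frames[0].copy()                    # step b: background mean from frame 0
    var = np.full_like(mu, sigma0 ** 2)      # step b: initial sigma set to 20
    masks = []
    for frame in frames[1:]:
        sigma = np.sqrt(var)
        fg = np.abs(frame - mu) < th * sigma  # step c, as stated in the text
        # step d: alpha = 0 for foreground points, alpha_bg for background points
        alpha = np.where(fg, 0.0, alpha_bg)
        mu = (1 - alpha) * mu + alpha * frame
        var = (1 - alpha) * var + alpha * (frame - mu) ** 2
        masks.append(fg)                     # step e: repeat over the whole sequence
    return masks
```

The per-pixel α selection implements the rule in step d: foreground pixels leave the background model untouched, background pixels pull it toward the current frame.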

Step 5.2: Extract flame texture features from the motion region using the grey-level co-occurrence matrix, specifically including:

a. Convert the dynamic sample images to greyscale and compress them to 8 grey levels (there are normally 256 grey levels; dividing the pixel values by 32 reduces them to 8 levels).

b. Slide a 5×5 window over the compressed greyscale image (i.e., observe the image grey-value matrix through a 5×5 window at each position) and compute the grey-level matrices in each direction (0°, 45°, 90°, 135°). Count how many times each grey-value pair (a, b) of two pixels occurs within the window; this count becomes the value of the corresponding grey-level co-occurrence matrix at point (a, b), so the 8 grey levels produce an 8×8 co-occurrence matrix. The window step is set to 1. The motion region of every greyscale frame yields one grey-level co-occurrence matrix.

c. Compute the feature values of the grey-level co-occurrence matrix. Let P denote the normalized frequency matrix of the co-occurrence matrix, where i and j are two grey levels that co-occur at a pair of pixels along a given direction, so P_ij is the probability that such a pair of pixels occurs.

d. Compute the contrast from the probabilities P_ij:

$$f_1 = \sum_{i=1}^{N}\sum_{j=1}^{N}(i-j)^2\, P_{ij}$$

where N is the number of grey levels.

e. Compute the energy from the probabilities P_ij:

$$f_2 = \sum_{i=1}^{N}\sum_{j=1}^{N} P_{ij}^2$$

f. The texture feature vector of the flame is then f = [f_1  f_2].
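Steps a-f can be sketched as follows. For brevity the co-occurrence counts are accumulated over the whole region in one direction rather than through the patent's 5×5 sliding window, and the function name is an assumption:

```python
import numpy as np

def glcm_features(gray, levels=8, offset=(0, 1)):
    """Grey-level co-occurrence texture features f = [contrast, energy].
    gray: 2-D array of 8-bit pixel values. offset (0, 1) is the 0-degree
    direction; (1, 0), (1, 1), (-1, 1) would give 90, 135 and 45 degrees."""
    q = (np.asarray(gray) // 32).astype(int)  # step a: compress 256 -> 8 levels
    dr, dc = offset
    glcm = np.zeros((levels, levels))
    rows, cols = q.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                glcm[q[r, c], q[r2, c2]] += 1  # step b: count grey-pair (a, b)
    P = glcm / glcm.sum()                      # step c: normalized frequencies P_ij
    i, j = np.indices(P.shape)
    contrast = np.sum((i - j) ** 2 * P)        # step d: f1
    energy = np.sum(P ** 2)                    # step e: f2
    return np.array([contrast, energy])       # step f: f = [f1 f2]
```

A perfectly uniform region gives zero contrast and maximal energy, which matches the intuition that flame regions show high contrast and low energy relative to smooth backgrounds.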

Step 5.3: Extract the area change feature and the flicker feature of the motion region. The specific procedure is:

a. Let S_t be the area of the currently extracted motion region and S_{t−1} the area of the motion region in the previous frame, areas being counted in pixels. The area change rate is

$$\Delta S = \frac{S_t - S_{t-1}}{S_{t-1}}$$

so the area change feature vector of the flame is [ΔS].

b. Let L_t be the average brightness of the currently extracted motion region and L_{t−1} that of the previous frame. The flicker feature can then be expressed by the change in brightness:

$$\Delta L = \frac{L_t - L_{t-1}}{L_{t-1}}$$

so the flicker feature vector of the flame is [ΔL].

Step 5.4: Weight and combine the flame texture feature, the area change feature and the flicker feature into the dynamic feature of the dynamic samples.

The three groups of hand-crafted dynamic features are summed with weights, the weighting coefficients all being set to 1, forming a new dynamic feature y_dy, where y_dy = [f  ΔS  ΔL].
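Steps 5.3-5.4 can be sketched as follows. Because the patent's formulas for ΔS and ΔL survive only as image placeholders, the relative-change form used here is an assumption consistent with the surrounding text ("area change rate", "change in brightness"); the function name is likewise illustrative.

```python
import numpy as np

def dynamic_feature(mask_t, mask_prev, gray_t, gray_prev, texture):
    """Assemble y_dy = [f, dS, dL] from two consecutive frames.
    mask_*: boolean foreground masks; gray_*: grayscale frames;
    texture: the GLCM vector f = [f1, f2] from step 5.2."""
    s_t, s_prev = mask_t.sum(), mask_prev.sum()    # areas counted in pixels
    d_s = (s_t - s_prev) / s_prev                  # step 5.3a: area change rate
    l_t = gray_t[mask_t].mean()                    # mean brightness in the region
    l_prev = gray_prev[mask_prev].mean()
    d_l = (l_t - l_prev) / l_prev                  # step 5.3b: flicker via brightness
    # step 5.4: concatenate with all weighting coefficients set to 1
    return np.concatenate([texture, [d_s, d_l]])
```

With unit weights the "weighted sum" amounts to a plain concatenation of the three feature groups into one vector.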

Step 106: Fuse the static features and the dynamic features using the kernel canonical correlation analysis method to generate the fused features.

Fig. 6 is a block diagram of the static/dynamic feature fusion process provided by the present invention. Referring to Fig. 6, the present invention uses Kernel Canonical Correlation Analysis (KCCA) to fuse the static and dynamic features and generate the fused features, specifically including:

Step 6.1: Take the feature x_st extracted by VGG-19 as the static feature; sum the three groups of hand-crafted dynamic features with weighting coefficients set to 1, forming a new dynamic feature y_dy, where y_dy = [f  ΔS  ΔL].

Step 6.2: Apply two nonlinear mappings A and B to x_st and y_dy:

$$x_{st} \mapsto \mathrm{A}(x_{st}), \qquad y_{dy} \mapsto \mathrm{B}(y_{dy})$$

Let the kernel functions be

$$k_A(x_i, x_j) = \langle \mathrm{A}(x_i), \mathrm{A}(x_j)\rangle, \qquad k_B(y_i, y_j) = \langle \mathrm{B}(y_i), \mathrm{B}(y_j)\rangle$$

and define the corresponding kernel matrices K_A and K_B, whose entries are given by the kernel functions k_A and k_B respectively.

Step 6.3: Let the vector a lie in the space of A(x_st). By the reproducing-kernel theory there must exist a vector ξ such that a = ξᵀA(x_st); likewise there exists a vector η such that b = ηᵀB(y_dy). It therefore suffices to maximize the correlation coefficient ρ between a and b:

$$\rho = \frac{\operatorname{cov}(a,b)}{\sqrt{D(a)\,D(b)}}$$

where cov(a, b) is the covariance of a and b, and D(a) and D(b) are the variances of a and b respectively.

When the correlation coefficient ρ between a and b reaches its maximum, the two features to be fused have maximum correlation in the mapped high-dimensional space, i.e., they are fused best.

Step 6.4: Standardize the original data (so that A(x_st) and B(y_dy) have mean 0 and variance 1). Maximizing ρ then reduces to the constrained maximization

$$\max_{\xi,\eta}\ \xi^{T} S_{XY}\,\eta \quad \text{s.t.}\quad \xi^{T} S_{XX}\,\xi = 1,\ \ \eta^{T} S_{YY}\,\eta = 1$$

where S_XY = cov(A(x_st), B(y_dy)).

Step 6.5: Introduce the Lagrangian function J(ξ, η); the maximization problem becomes maximizing

$$J(\xi,\eta) = \xi^{T} S_{XY}\,\eta - \frac{\lambda}{2}\left(\xi^{T} S_{XX}\,\xi - 1\right) - \frac{\theta}{2}\left(\eta^{T} S_{YY}\,\eta - 1\right)$$

Setting the derivatives with respect to ξ and η to zero gives

$$S_{XY}\,\eta - \lambda\, S_{XX}\,\xi = 0 \quad (a)$$
$$S_{YX}\,\xi - \theta\, S_{YY}\,\eta = 0 \quad (b)$$

where λ and θ are the Lagrange multipliers.

Step 6.6: Combining equations (a) and (b) and rearranging yields λ = θ = ξᵀS_XYη (c); that is, the Lagrange multiplier is exactly the quantity being optimized.

Step 6.7: Substituting equation (c) into equation (b) and rearranging gives

$$S_{XX}^{-1}\, S_{XY}\, S_{YY}^{-1}\, S_{YX}\, \xi = \lambda^{2}\, \xi$$

where S_XX = cov(A(x_st), A(x_st)), S_YY = cov(B(y_dy), B(y_dy)) and S_YX = cov(B(y_dy), A(x_st)). Solving this eigenproblem for ξ and η and then applying

$$a = \xi^{T}\mathrm{A}(x_{st}), \qquad b = \eta^{T}\mathrm{B}(y_{dy})$$

yields a and b, i.e., the fused features.
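Steps 6.4-6.7 reduce to a generalized eigenproblem. The sketch below solves the linear CCA version directly on the feature vectors; a full KCCA would replace the covariance matrices with centred kernel matrices plus regularization. The function name and the small ridge term are assumptions of this sketch.

```python
import numpy as np

def cca_fuse(X, Y, reg=1e-6):
    """Solve the CCA problem of steps 6.4-6.7 for the top canonical pair.
    X: (n, dx) static features, Y: (n, dy) dynamic features, one row per sample.
    Returns (a, b): the fused 1-D projections of X and Y."""
    X = (X - X.mean(0)) / X.std(0)                # step 6.4: standardize
    Y = (Y - Y.mean(0)) / Y.std(0)
    n = X.shape[0]
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])  # small ridge for invertibility
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n
    # step 6.7: Sxx^-1 Sxy Syy^-1 Syx xi = lambda^2 xi
    M = np.linalg.solve(Sxx, Sxy) @ np.linalg.solve(Syy, Sxy.T)
    w, V = np.linalg.eig(M)
    xi = np.real(V[:, np.argmax(np.real(w))])
    eta = np.linalg.solve(Syy, Sxy.T @ xi)        # from (b): Syx xi = theta Syy eta
    eta /= np.linalg.norm(eta)
    return X @ xi, Y @ eta                        # a = xi^T A(x), b = eta^T B(y)
```

When the two feature sets are genuinely related, the returned projections a and b are strongly correlated, which is exactly the maximization criterion ρ of step 6.3.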

Step 107: Input the fused features into a support vector machine classifier for learning and training, generating the trained wildfire detection classifier.

First, the training samples are labelled as positive and negative samples: positive samples are images containing fire and negative samples are images without fire; labelling means manually screening and annotating each image as fire or no fire. The static and dynamic feature extraction and fusion of steps 104, 105 and 106 are then performed on the training samples, and finally the fused features a and b are input into a support vector machine (SVM) classifier for learning and training, giving an SVM classifier capable of detecting wildfires.

The test samples undergo the same feature processing (static and dynamic feature extraction and fusion) and are input into the SVM classifier to detect whether a wildfire is present. The input of the SVM classifier is the fused features and its output is the image category (fire or no fire); comparing the output with the original manual labels gives the classification accuracy.

If the detection accuracy on the test samples is above the accuracy threshold, the SVM classifier is used as the trained wildfire detection classifier; if it is not above the threshold, the method returns to step 101 to acquire new samples and retrain the classifier.
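Step 107's train-then-validate loop can be sketched as follows, assuming scikit-learn is available; the function name, the RBF kernel choice and the 0.9 threshold are illustrative assumptions (the patent specifies neither a kernel nor a threshold value).

```python
import numpy as np
from sklearn.svm import SVC

def train_fire_classifier(fused_train, labels_train,
                          fused_test, labels_test, acc_threshold=0.9):
    """Fit an SVM on fused training features, then check test accuracy
    against a threshold. Labels: 1 = fire, 0 = no fire.
    Returns (classifier, accuracy); classifier is None if the accuracy
    check fails, signalling that new samples should be collected."""
    clf = SVC(kernel="rbf", gamma="scale")
    clf.fit(fused_train, labels_train)
    acc = clf.score(fused_test, labels_test)
    return (clf, acc) if acc >= acc_threshold else (None, acc)
```

Returning `None` on failure mirrors the patent's loop back to step 101: the caller re-acquires samples and retrains rather than deploying a weak classifier.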

Step 108: Use the trained wildfire detection classifier to perform wildfire detection.

The input of the wildfire detection classifier is the fused features, and its output is the image category (fire or no fire). For detection, the wildfire video to be examined is first acquired and frame-sampled to generate an initial image sequence; the initial image sequence is preprocessed to generate a preprocessed image sequence; the preprocessed image sequence is divided into static and dynamic samples; a convolutional neural network extracts the static features of the static samples, and the dynamic features of the dynamic samples are extracted; the static and dynamic features are fused by the kernel canonical correlation analysis method to generate the fused features of the video under test; these fused features are input into the trained wildfire detection classifier, which outputs the detection result (fire or no fire).

The static/dynamic multi-feature fusion wildfire detection method provided by the present invention extracts static and dynamic features separately and then fuses them when extracting image features from video frames, and finally performs wildfire detection with a support vector machine; features can thus be extracted more comprehensively, improving the precision and accuracy of wildfire detection.

Based on the static/dynamic multi-feature fusion wildfire detection method provided by the present invention, the present invention also provides a static/dynamic multi-feature fusion wildfire detection system, comprising:

an initial image sequence acquisition module, used to acquire an initial image sequence generated by frame-sampling a wildfire video, the initial image sequence comprising a plurality of continuously captured original images;

an image preprocessing module, used to preprocess the initial image sequence and generate a preprocessed image sequence;

a static/dynamic sample division module, used to divide the preprocessed image sequence into static samples and dynamic samples;

a static feature extraction module, used to extract the static features of the static samples with a convolutional neural network;

a dynamic feature extraction module, used to extract the dynamic features of the dynamic samples;

a static/dynamic feature fusion module, used to fuse the static features and the dynamic features with the kernel canonical correlation analysis method to generate the fused features;

a model training module, used to input the fused features into a support vector machine classifier for learning and training, generating the trained wildfire detection classifier;

a wildfire detection module, used to perform wildfire detection with the wildfire detection classifier.

The image preprocessing module specifically comprises:

a denoising unit, used to denoise each original image in the initial image sequence with the median filtering method and generate denoised images;

an enhancement unit, used to enhance the denoised images with the single-scale Retinex method and generate enhanced images; a plurality of consecutive enhanced images constitutes the preprocessed image sequence.

The static/dynamic sample division module specifically comprises:

a training sample division unit, used to divide the plurality of consecutive enhanced images in the preprocessed image sequence into training samples and test samples at a ratio of 4:1;

a static/dynamic sample division unit, used to divide the plurality of consecutive enhanced images in the training samples into static samples and dynamic samples at a ratio of 3:2.

The static feature extraction module specifically comprises:

a feature map acquisition unit, used to input the static samples into the VGG-19 convolutional neural network and obtain the feature maps it outputs;

a feature matrix calculation unit, used to perform an inner product between the features of different channels, according to the height, width and channel count of the feature map, to obtain the feature matrix;

a static feature extraction unit, used to vectorize the feature matrix into the static features of the static samples.

The dynamic feature extraction module specifically comprises:

a motion region extraction unit, used to extract the motion region in the dynamic samples with the single-Gaussian background model method;

a flame texture feature extraction unit, used to extract flame texture features from the motion region with the grey-level co-occurrence matrix;

an area change and flicker feature extraction unit, used to extract the area change feature and flicker feature of the motion region;

a dynamic feature extraction unit, used to weight and combine the flame texture feature, the area change feature and the flicker feature into the dynamic feature of the dynamic samples.

Compared with the prior art, the static/dynamic multi-feature fusion wildfire detection method provided by the present invention has the following advantages:

1. Step 2.2 of step 102 uses the single-scale Retinex method to enhance the denoised images. Because wildfire detection is based on video captured in the wild, where backgrounds are complex, the single-scale Retinex method can enhance the detail of the image and weaken the effect of uneven illumination; it also has a certain defogging effect, giving some tolerance to the weather conditions of outdoor video monitoring.

2. Steps 104 and 105 extract static features with a convolutional neural network and dynamic features by hand, so image features can be extracted fairly comprehensively. A convolutional neural network extracts static image features well but has limited ability to extract dynamic features; adding hand-crafted dynamic extraction of motion-region features compensates for this deficiency.

3. Step 106 applies kernel canonical correlation analysis (KCCA) for feature fusion, which can map two uncorrelated kinds of features into a high-dimensional space and, while striving for their maximum correlation, fuse them into a new feature vector, ensuring the soundness of the feature fusion.

4. The image preprocessing of the method (embodied in step 102) handles the images in light of the characteristics of places where wildfires occur (complex backgrounds, fog-prone forests, etc.), which is closer to reality and also lays a good foundation for training the classifier.

5. Because its feature extraction is comprehensive, the method greatly improves the accuracy of wildfire detection.

The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and identical or similar parts of the embodiments may be referred to one another. Since the system disclosed in an embodiment corresponds to the method disclosed in that embodiment, its description is relatively brief; for relevant details, refer to the description of the method.

Specific examples have been used herein to explain the principles and implementations of the present invention; the above embodiments are described only to aid understanding of the method and its core ideas. Meanwhile, those of ordinary skill in the art may make changes to the specific implementation and scope of application in accordance with the ideas of the present invention. In summary, the contents of this specification should not be construed as limiting the present invention.

Claims (10)

1. A mountain fire detection method based on static and dynamic multi-feature fusion is characterized by comprising the following steps:
acquiring an initial image sequence generated by performing frame sampling on a forest fire video; the initial image sequence comprises a plurality of continuously shot original images;
preprocessing the initial image sequence to generate a preprocessed image sequence;
dividing the preprocessed image sequence into static samples and dynamic samples;
extracting static characteristics of the static sample by using a convolutional neural network;
extracting dynamic features of the dynamic sample;
fusing the static features and the dynamic features by using a kernel canonical correlation analysis method to generate fused features;
inputting the fused features into a support vector machine classifier for learning training to generate a trained forest fire detection classifier;
and performing mountain fire detection by adopting the trained mountain fire detection classifier.
2. The mountain fire detection method according to claim 1, wherein the preprocessing the initial image sequence to generate a preprocessed image sequence specifically comprises:
denoising each original image in the initial image sequence by adopting a median filtering method to generate a denoised image;
performing image enhancement processing on the denoised image by adopting a single-scale Retinex method to generate an enhanced image; a plurality of successive enhanced images constitutes the pre-processed image sequence.
3. The mountain fire detection method according to claim 2, wherein the dividing the preprocessed image sequence into static samples and dynamic samples specifically comprises:
dividing a plurality of continuous enhanced images in the preprocessed image sequence into a training sample and a test sample according to a ratio of 4: 1;
dividing a plurality of continuous enhanced images in the training sample into a static sample and a dynamic sample according to a ratio of 3: 2.
4. The mountain fire detection method according to claim 3, wherein the extracting the static feature of the static sample using a convolutional neural network specifically comprises:
inputting the static sample into a VGG-19 convolutional neural network to obtain a feature map output by the VGG-19 convolutional neural network;
performing inner product operation on the features of different channels according to the height, the width and the channel number of the feature map to obtain a feature matrix;
vectorizing the feature matrix into static features of the static sample.
5. The mountain fire detection method according to claim 4, wherein the extracting of the dynamic feature of the dynamic sample specifically includes:
extracting a motion area in the dynamic sample by using a single Gaussian background model method;
extracting flame texture characteristics from the motion area by utilizing a gray level co-occurrence matrix;
extracting area change characteristics and flicker characteristics of the motion area;
and weighting and combining the flame texture feature, the area change feature and the flicker feature into the dynamic feature of the dynamic sample.
6. A mountain fire detection system with static and dynamic multi-feature fusion is characterized by comprising:
the system comprises an initial image sequence acquisition module, a frame sampling module and a frame sampling module, wherein the initial image sequence acquisition module is used for acquiring an initial image sequence generated by performing frame sampling on a forest fire video; the initial image sequence comprises a plurality of continuously shot original images;
the image preprocessing module is used for preprocessing the initial image sequence to generate a preprocessed image sequence;
a static and dynamic sample dividing module, configured to divide the preprocessed image sequence into a static sample and a dynamic sample;
the static characteristic extraction module is used for extracting the static characteristics of the static sample by utilizing a convolutional neural network;
the dynamic characteristic extraction module is used for extracting dynamic characteristics of the dynamic sample;
the static and dynamic feature fusion module is used for fusing the static features and the dynamic features by utilizing a kernel canonical correlation analysis method to generate fused features;
the model training module is used for inputting the fused features into a support vector machine classifier for learning training to generate a trained mountain fire detection classifier;
and the mountain fire detection module is used for detecting mountain fire by adopting the trained mountain fire detection classifier.
7. The wildfire detection system as claimed in claim 6, wherein the image pre-processing module specifically comprises:
the denoising processing unit is used for denoising each original image in the initial image sequence by adopting a median filtering method to generate a denoised image;
the enhancement processing unit is used for carrying out image enhancement processing on the denoised image by adopting a single-scale Retinex method to generate an enhanced image; a plurality of successive enhanced images constitutes the pre-processed image sequence.
8. The wildfire detection system as claimed in claim 7, wherein the static and dynamic sample division module specifically comprises:
a training sample dividing unit, configured to divide a plurality of continuous enhanced images in the preprocessed image sequence into a training sample and a test sample according to a ratio of 4: 1;
and the static and dynamic sample dividing unit is used for dividing the plurality of continuous enhanced images in the training sample into a static sample and a dynamic sample according to the proportion of 3: 2.
9. The wildfire detection system as claimed in claim 8, wherein the static feature extraction module specifically comprises:
the feature map acquisition unit is used for inputting the static sample into a VGG-19 convolutional neural network and acquiring a feature map output by the VGG-19 convolutional neural network;
the feature matrix calculation unit is used for performing inner product operations between the features of different channels, according to the height, width and channel number of the feature map, to obtain a feature matrix;
and the static feature extraction unit is used for vectorizing the feature matrix into the static features of the static sample.
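The feature matrix of claim 9 is what is commonly called a Gram matrix: inner products between the channels of a CNN feature map. A hedged sketch follows, in which the normalisation by H*W*C and the upper-triangular vectorisation are illustrative choices not specified in the claim; in the claimed system the map would come from a VGG-19 forward pass, for which a random array stands in here.

```python
import numpy as np

def gram_static_feature(feature_map):
    """Channel-wise inner products of a CNN feature map of shape (H, W, C):
    G[i, j] = <channel i, channel j>, then vectorised. The Gram matrix is
    symmetric, so only the upper triangle is kept."""
    H, W, C = feature_map.shape
    F = feature_map.reshape(H * W, C)     # each column is one flattened channel
    G = F.T @ F / (H * W * C)             # C x C feature (Gram) matrix
    return G[np.triu_indices(C)]
```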
10. The wildfire detection system as claimed in claim 9, wherein the dynamic feature extraction module specifically comprises:
the motion region extraction unit is used for extracting motion regions in the dynamic sample using a single Gaussian background model;
the flame texture feature extraction unit is used for extracting flame texture features from the motion region using a gray-level co-occurrence matrix;
the area change and flicker feature extraction unit is used for extracting area change features and flicker features of the motion region;
and the dynamic feature extraction unit is used for weighting and combining the flame texture features, area change features and flicker features into the dynamic features of the dynamic sample.
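The first two units of claim 10 can be sketched as below: a per-pixel single-Gaussian background model for motion extraction, and one GLCM texture statistic (contrast) over the extracted region. The learning rate alpha, threshold k, initial variance, and quantisation level count are illustrative parameters, not values from the patent.

```python
import numpy as np

class SingleGaussianBackground:
    """Per-pixel single-Gaussian background model: a pixel is foreground when
    it deviates from its running mean by more than k standard deviations."""
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mu = first_frame.astype(float)
        self.var = np.full(first_frame.shape, 20.0)
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        frame = frame.astype(float)
        d2 = (frame - self.mu) ** 2
        fg = d2 > (self.k ** 2) * self.var
        a = np.where(fg, 0.0, self.alpha)   # update only background pixels
        self.mu += a * (frame - self.mu)
        self.var += a * (d2 - self.var)
        return fg

def glcm_contrast(region, levels=8):
    """Contrast of the gray-level co-occurrence matrix (right-neighbour
    offset) for a motion region with intensities in [0, 1)."""
    q = np.clip((region * levels).astype(int), 0, levels - 1)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    return float(((i - j) ** 2 * glcm).sum())
```

The remaining area change and flicker features would be computed from the sequence of foreground masks (e.g. frame-to-frame change in foreground pixel count, and the oscillation rate of region boundary pixels) before the weighted combination the claim describes.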
CN202010046385.0A 2020-01-16 2020-01-16 A method and system for wildfire detection based on static and dynamic multi-feature fusion Pending CN111310566A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010046385.0A CN111310566A (en) 2020-01-16 2020-01-16 A method and system for wildfire detection based on static and dynamic multi-feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010046385.0A CN111310566A (en) 2020-01-16 2020-01-16 A method and system for wildfire detection based on static and dynamic multi-feature fusion

Publications (1)

Publication Number Publication Date
CN111310566A true CN111310566A (en) 2020-06-19

Family

ID=71144877

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010046385.0A Pending CN111310566A (en) 2020-01-16 2020-01-16 A method and system for wildfire detection based on static and dynamic multi-feature fusion

Country Status (1)

Country Link
CN (1) CN111310566A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111797761A (en) * 2020-07-02 2020-10-20 温州智视科技有限公司 A three-stage smoke detection system, method and readable medium
CN112101145A (en) * 2020-08-28 2020-12-18 西北工业大学 SVM classifier based pose estimation method for mobile robot
CN112733616A (en) * 2020-12-22 2021-04-30 北京达佳互联信息技术有限公司 Dynamic image generation method and device, electronic equipment and storage medium
CN115512148A (en) * 2021-06-03 2022-12-23 中国石油大学(华东) Pumping unit well pump detection period prediction method based on feature fusion

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8965115B1 (en) * 2013-03-14 2015-02-24 Hrl Laboratories, Llc Adaptive multi-modal detection and fusion in videos via classification-based-learning
CN107480729A * 2017-09-05 2017-12-15 Jiangsu Electric Power Information Technology Co., Ltd. A transmission line forest fire detection method based on deep spatio-temporal features
CN109165577A * 2018-08-07 2019-01-08 Northeastern University An early-stage forest fire detection method based on video images
US20190042895A1 (en) * 2016-06-12 2019-02-07 Grg Banking Equipment Co., Ltd. Offline identity authentication method and apparatus
CN110427825A * 2019-07-01 2019-11-08 Shanghai Baosteel Industry Technological Service Co., Ltd. A video flame recognition method based on key frames fused with fast support vector machines
CN110516609A (en) * 2019-08-28 2019-11-29 南京邮电大学 A fire video detection and early warning method based on image multi-feature fusion

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8965115B1 (en) * 2013-03-14 2015-02-24 Hrl Laboratories, Llc Adaptive multi-modal detection and fusion in videos via classification-based-learning
US20190042895A1 (en) * 2016-06-12 2019-02-07 Grg Banking Equipment Co., Ltd. Offline identity authentication method and apparatus
CN107480729A * 2017-09-05 2017-12-15 Jiangsu Electric Power Information Technology Co., Ltd. A transmission line forest fire detection method based on deep spatio-temporal features
CN109165577A * 2018-08-07 2019-01-08 Northeastern University An early-stage forest fire detection method based on video images
CN110427825A * 2019-07-01 2019-11-08 Shanghai Baosteel Industry Technological Service Co., Ltd. A video flame recognition method based on key frames fused with fast support vector machines
CN110516609A (en) * 2019-08-28 2019-11-29 南京邮电大学 A fire video detection and early warning method based on image multi-feature fusion

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Tang Yanyan, "Research on Video-Image-Based Fire Detection Methods", China Master's Theses Full-text Database, Information Science and Technology, no. 1, 15 December 2013 (2013-12-15), pages 138-613 *
Xu Jie et al., "Kernel Canonical Correlation Analysis Feature Fusion Method and Its Application", Computer Science, vol. 43, no. 01, pages 141-142 *
Zhao Xiaochuan, "MATLAB Image Processing: Program Implementation and Modular Simulation", Beihang University Press, pages 209-210 *
Zhong Ling et al., "SVM-Based Flame Detection in Video Images", Software Engineering, no. 06, 5 June 2017 (2017-06-05), pages 1-4 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111797761A (en) * 2020-07-02 2020-10-20 温州智视科技有限公司 A three-stage smoke detection system, method and readable medium
CN111797761B (en) * 2020-07-02 2023-05-16 温州智视科技有限公司 Three-stage smoke detection system, method and readable medium
CN112101145A (en) * 2020-08-28 2020-12-18 西北工业大学 SVM classifier based pose estimation method for mobile robot
CN112101145B (en) * 2020-08-28 2022-05-17 西北工业大学 SVM classifier based pose estimation method for mobile robot
CN112733616A (en) * 2020-12-22 2021-04-30 北京达佳互联信息技术有限公司 Dynamic image generation method and device, electronic equipment and storage medium
CN112733616B (en) * 2020-12-22 2022-04-01 北京达佳互联信息技术有限公司 Dynamic image generation method and device, electronic equipment and storage medium
CN115512148A (en) * 2021-06-03 2022-12-23 中国石油大学(华东) Pumping unit well pump detection period prediction method based on feature fusion

Similar Documents

Publication Publication Date Title
CN111339882B (en) Power transmission line hidden danger detection method based on example segmentation
CN110119728B (en) Remote sensing image cloud detection method based on multi-scale fusion semantic segmentation network
CN109902715B (en) Infrared dim target detection method based on context aggregation network
Liu et al. Remote sensing image change detection based on information transmission and attention mechanism
CN111310566A (en) A method and system for wildfire detection based on static and dynamic multi-feature fusion
CN113065558A (en) Lightweight small target detection method combined with attention mechanism
CN107563433B (en) Infrared small target detection method based on convolutional neural network
CN111160249A (en) Multi-class target detection method in optical remote sensing images based on cross-scale feature fusion
CN110309781A (en) Remote sensing recognition method for house damage based on multi-scale spectral texture adaptive fusion
CN109684922A A recognition method for finished dishes based on multiple convolutional neural network models
CN105528794A (en) Moving object detection method based on Gaussian mixture model and superpixel segmentation
CN110298297A (en) Flame identification method and device
CN112766223B (en) Hyperspectral Image Target Detection Method Based on Sample Mining and Background Reconstruction
CN110533100A A machine-learning-based CME detection and tracking method
CN115661443A (en) Multi-scale forward characteristic gain infrared dim target detection method in complex environment
CN110796677A (en) Cirrus cloud false alarm source detection method based on multiband characteristics
CN115019163A (en) Identification method of urban elements based on multi-source big data
CN108734122B (en) A hyperspectral urban water detection method based on adaptive sample selection
CN110427868A A feature extraction method for pedestrian re-identification
CN116721314A (en) Small target detection method based on smooth interactive compression network
CN110910497B (en) Method and system for realizing augmented reality map
CN106952251B (en) An Image Saliency Detection Method Based on Adsorption Model
CN105930793A (en) Human body detection method based on SAE characteristic visual learning
CN110458064B (en) Combining data-driven and knowledge-driven low-altitude target detection and recognition methods
CN111460943A (en) Remote sensing image ground object classification method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220208

Address after: 030024 No. 6, Qingnian Road, Shanxi, Taiyuan

Applicant after: STATE GRID ELECTRIC POWER Research Institute OF SEPC

Address before: 030000 Shanxi Electric Power Research Institute of State Grid, No. 6, Qingnian Road, Yingze District, Taiyuan City, Shanxi Province

Applicant before: STATE GRID ELECTRIC POWER Research Institute OF SEPC

Applicant before: SHANXI ZHENZHONG ELECTRIC POWER Co.,Ltd.

DD01 Delivery of document by public notice

Addressee: Li Bing

Document name: Notice of Priority Review in the Reexamination Procedure
