
CN114821042B - R-FCN knife switch detection method combining local features and global features - Google Patents

R-FCN knife switch detection method combining local features and global features

Info

Publication number: CN114821042B
Authority: CN (China)
Prior art keywords: fcn, knife switch, global, features, local
Legal status: Active
Application number: CN202210453322.6A
Other languages: Chinese (zh)
Other versions: CN114821042A
Inventors: 肖振远, 宗起振, 陶征勇, 李佑文, 褚红健, 曾清旋
Current Assignee: Nanjing Sac Rail Traffic Engineering Co ltd
Original Assignee: Nanjing Sac Rail Traffic Engineering Co ltd
Application filed by Nanjing Sac Rail Traffic Engineering Co ltd
Priority to CN202210453322.6A priority Critical patent/CN114821042B/en
Publication of CN114821042A publication Critical patent/CN114821042A/en
Application granted
Publication of CN114821042B publication Critical patent/CN114821042B/en

Classifications

    • G06V10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]
    • G01R31/327 — Testing of circuit interrupters, switches or circuit-breakers
    • G06N3/045 — Combinations of networks (neural network architectures)
    • G06V10/42 — Global feature extraction by analysis of the whole pattern
    • G06V10/44 — Local feature extraction by analysis of parts of the pattern
    • G06V10/806 — Fusion of extracted features at the feature extraction or classification level
    • G06V10/82 — Image or video recognition or understanding using neural networks


Abstract


The present invention provides an R-FCN knife switch detection method that combines local features and global features; the method performs state detection of substation knife switches with the deep-learning network model R-FCN. Network-camera centralized-control software embedded in the substation auxiliary monitoring system lets the cameras capture knife switch images from multiple positions and angles under different outdoor weather conditions and backgrounds, building a diverse knife switch dataset. A global feature prediction module is fused in parallel into the R-FCN output prediction network, compensating for the insufficient receptive field of the original network, which predicts the knife switch from local features only; this improves detection accuracy for partially occluded knife switches and reduces the missed-detection and false-detection rates for knife switches in complex backgrounds. The local and global feature prediction results are regularized and then summed, so that the global predictions supplement the local predictions, meeting the needs of remote knife switch detection and unmanned operation of substations.

Description

R-FCN knife switch detection method combining local features and global features
Technical Field
The invention relates to the technical field of disconnecting link (knife switch) detection in substation auxiliary monitoring systems, and in particular to an R-FCN knife switch detection method combining local features and global features.
Background
A transformer substation is an important transit point in high-voltage power transmission and houses a variety of vital on-off control devices. When a substation outage occurred in the past, operators had to search for the fault device by device; the length of the fault-finding process determined when the substation could resume normal operation, seriously affecting the daily life of residents. The disconnecting link (knife switch) is an important power control switch of the substation, and its open or closed state directly determines whether the whole energized circuit operates. In a traditional investigation of a substation power failure, operators first have to enter the high-voltage area to check whether a switch has opened and then inspect the other connected devices one by one, which easily exposes them to the danger of electric shock. With the demand for unmanned substations, research on automatically detecting the knife switch state in captured images helps identify the current state quickly, shortens fault-finding time, safeguards the life of operators, and effectively improves the safe operation of the power grid.
Automatic knife switch detection faces several problems when an auxiliary monitoring system is deployed at a substation: the knife switch operates outdoors among various other metal devices of similar color, making it hard to distinguish; a knife switch at the same position is affected by the weather and is recognized with varying difficulty under different conditions; and, because of the many voltage-control devices, wires, and rectangular structures present outdoors, the captured knife switch is often occluded by other equipment or its full appearance cannot be captured.
Knife switch detection methods fall into two categories: 1) image-processing-based methods and 2) deep-learning-based methods. Constrained by the difficulty of collecting knife switch data, most current methods are image-processing based: a knife switch template is first built from a standard image captured at a fixed angle and position, and an image-processing algorithm such as feature-point matching or template matching then performs localization and state recognition. Such methods can achieve high accuracy over short periods in good weather, but as weather conditions change frequently, the camera position drifts from the angle of the pre-built template image and the knife switch state can no longer be recognized, so the camera position must be frequently checked and adjusted manually, or the template rebuilt. In addition, most current deep-learning network models, such as the Faster R-CNN series and the YOLO series, predict through global receptive fields; when a knife switch is occluded or only partially captured, the lack of complete feature information causes missed or false detections. Other deep-learning models, such as R-FCN and ViT, predict through local features; although they can detect incomplete knife switches effectively, when the knife switch fills the whole image, local feature prediction can also produce missed detections.
Disclosure of Invention
In view of these technical problems, the invention aims to provide a knife switch detection method combining local features and global features, based on the region-based fully convolutional object detection network R-FCN (R-FCN: Object Detection via Region-based Fully Convolutional Networks). A global feature prediction branch is embedded into the R-FCN network model, and the advantages of local and global feature prediction are combined to improve the accuracy of existing knife switch detection methods.
In order to achieve this purpose, the technical scheme adopted by the invention is an R-FCN knife switch detection method combining local features and global features, comprising the following steps:
Step 1: build a substation auxiliary monitoring system to collect knife switch images;
Step 2: divide, clean, and label the image dataset;
Step 3: select the R-FCN backbone feature extraction network;
Step 4: adjust the R-FCN backbone feature extraction network;
Step 5: construct the local feature prediction branch;
Step 6: construct the global feature prediction branch;
Step 7: fuse the local and global prediction results;
Step 8: train and save the model.
Furthermore, the substation auxiliary monitoring system in step 1 integrates all network monitoring cameras and related control equipment in the substation. By adjusting the angles of the cameras, image datasets containing a knife switch and image datasets not containing a knife switch are acquired at different times and under different weather and backgrounds. The numbers of images should be balanced, and the knife switch images should cover both the open and the closed state.
Further, in step 2 the dataset is divided into images containing a knife switch and images not containing one; the dataset of knife switch images is split into a training set and a test set at a ratio of 8:2; the dataset is cleaned by removing blurred images; and only images containing a knife switch object are labeled.
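The split and labeling procedure above can be sketched in a few lines of Python; the file names and the fixed seed are hypothetical stand-ins for the collected images, not part of the patent:

```python
import random

def split_dataset(image_paths, train_ratio=0.8, seed=42):
    """Split knife switch images into training and test sets at the 8:2
    ratio described in step 2. A fixed seed keeps the split reproducible."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    cut = int(len(paths) * train_ratio)
    return paths[:cut], paths[cut:]

# Hypothetical file names standing in for the 800 collected images.
images = [f"switch_{i:03d}.jpg" for i in range(800)]
train, test = split_dataset(images)
```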
Further, in step 3 the R-FCN backbone feature extraction network is selected. The classification network ResNet101 is adopted as the backbone: a region proposal network (RPN: Region Proposal Network) is applied after the fourth group of convolutional layers of ResNet101 to generate regions of interest (used for the subsequent knife switch detection), the pooling layer and the fully connected layer after the fifth group of convolutional layers of ResNet101 are discarded, and the final output has 2048 channels. A region of interest is a region in which a target may be present.
Further, in step 4 the backbone feature extraction network of the R-FCN is adjusted: after the trimmed backbone network ResNet101 selected in step 3, a convolutional layer with a 1×1 kernel is appended to reduce the number of channels to 1024. This reduces the dimensionality of the feature data without changing the size of the feature map and improves computation speed.
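A 1×1 convolution is a per-pixel linear map over the channel axis, which is why the 2048 → 1024 reduction changes only the channel count and never the spatial size. A minimal NumPy sketch (with random stand-in weights, not the trained ones):

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(feature_map, weights):
    """Apply a 1x1 convolution as a per-pixel matrix multiply over the
    channel axis. feature_map: (C_in, H, W); weights: (C_out, C_in)."""
    c_in, h, w = feature_map.shape
    flat = feature_map.reshape(c_in, h * w)        # (C_in, H*W)
    out = weights @ flat                           # (C_out, H*W)
    return out.reshape(weights.shape[0], h, w)

features = rng.standard_normal((2048, 14, 14))     # stand-in backbone output
w = rng.standard_normal((1024, 2048)) * 0.01       # stand-in 1x1 kernel weights
reduced = conv1x1(features, w)
```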
Further, in step 5 the local feature prediction branch is constructed: the original R-FCN network model with local feature prediction outputs the local prediction results, and the local feature prediction divides each RPN proposal region into 7×7 local regions for prediction.
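The 7×7 local-region prediction corresponds to R-FCN's position-sensitive ROI pooling. Below is a much-simplified, single-class sketch assuming the standard R-FCN design (the patent does not spell out the pooling details): each of the 7×7 bins is averaged from its own dedicated score map, so every bin encodes one part of the object, and the bins then vote by averaging.

```python
import numpy as np

def ps_roi_pool(score_maps, roi, k=7):
    """Simplified position-sensitive ROI pooling for one class.
    score_maps: (k*k, H, W) -- one score map per local region;
    roi: (x0, y0, x1, y1) in feature-map coordinates."""
    x0, y0, x1, y1 = roi
    bins = np.zeros((k, k))
    bw, bh = (x1 - x0) / k, (y1 - y0) / k
    for i in range(k):          # bin row
        for j in range(k):      # bin column
            ya = int(y0 + i * bh)
            yb = max(int(y0 + (i + 1) * bh), ya + 1)
            xa = int(x0 + j * bw)
            xb = max(int(x0 + (j + 1) * bw), xa + 1)
            # Bin (i, j) reads only from score map i*k + j.
            bins[i, j] = score_maps[i * k + j, ya:yb, xa:xb].mean()
    return bins.mean()          # vote: average over the k*k bins

rng = np.random.default_rng(1)
maps = rng.standard_normal((49, 56, 56))   # hypothetical score maps
score = ps_roi_pool(maps, roi=(8, 8, 36, 36))
```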
Further, in step 6 the global feature prediction branch is constructed: a pooling operation is applied to the extracted semantic features to unify their size, and then convolutional layers with kernel sizes of 7×7 and 1×1 are used in series to output the global prediction results.
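Unifying the size of the extracted semantic features can be done with adaptive average pooling; the sketch below is an assumption about the unspecified pooling operation, with the 7×7 output size chosen to match the following 7×7 convolution:

```python
import numpy as np

def adaptive_avg_pool(feature_map, out_h, out_w):
    """Pool a (C, H, W) feature map to a fixed (C, out_h, out_w) size,
    averaging each output cell over its share of the input grid."""
    c, h, w = feature_map.shape
    out = np.zeros((c, out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            ya, yb = i * h // out_h, max((i + 1) * h // out_h, i * h // out_h + 1)
            xa, xb = j * w // out_w, max((j + 1) * w // out_w, j * w // out_w + 1)
            out[:, i, j] = feature_map[:, ya:yb, xa:xb].mean(axis=(1, 2))
    return out

rng = np.random.default_rng(2)
# Feature maps of arbitrary spatial size all come out 7x7.
pooled = adaptive_avg_pool(rng.standard_normal((1024, 20, 33)), 7, 7)
```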
Further, in step 7 the local and global prediction results are fused. L2 regularization is first applied to the prediction outputs of steps 5 and 6:

y = x / ‖x‖₂, i.e. yᵢ = xᵢ / √(Σⱼ xⱼ²)

where x = (x₀, x₁, …, xₙ) is a prediction output vector and y = (y₀, y₁, …, yₙ) is the regularized prediction output. The values of the two prediction outputs are then uniformly scaled to the interval 0 to 1, the two vectors are added, and a Softmax operation outputs the final prediction:

Softmax(rᵢ) = e^{rᵢ} / Σⱼ e^{rⱼ}

where rᵢ is the i-th value in the output result vector.
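The fusion pipeline of step 7 — L2-normalize each branch, scale to the 0-to-1 interval, add, then Softmax — can be sketched directly; the per-class score vectors here are hypothetical:

```python
import numpy as np

def l2_normalize(x):
    """y = x / ||x||_2, the regularization applied to each branch output."""
    return x / np.sqrt(np.sum(x ** 2))

def min_max_scale(x):
    """Uniformly scale values into the 0-to-1 interval."""
    return (x - x.min()) / (x.max() - x.min())

def softmax(r):
    r = r - r.max()             # subtract max for numerical stability
    e = np.exp(r)
    return e / e.sum()

def fuse(local_scores, global_scores):
    """Normalize, scale, add, then Softmax, as described in step 7."""
    a = min_max_scale(l2_normalize(local_scores))
    b = min_max_scale(l2_normalize(global_scores))
    return softmax(a + b)

local_pred = np.array([2.0, 5.0, 1.0])   # hypothetical local branch scores
global_pred = np.array([1.0, 4.0, 0.5])  # hypothetical global branch scores
final = fuse(local_pred, global_pred)
```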
Further, in step 8 the model is trained twice with the two divided datasets: the first training uses only the clear dataset containing knife switches, and the second training uses a fuzzy dataset in which images containing a knife switch and images not containing one but resembling one appear in a 1:1 ratio.
Further, in step 8 the model is trained with the loss function

L(s, t) = L_cls(s_{c*}) + λ[c* > 0] L_reg(t, t*)

where L(s, t) is the total of the classification loss and the regression loss; s is the vector of class prediction probabilities and s_{c*} the predicted probability of the ground-truth class c*; t is the regression box predicted by the model. The classification loss is the cross-entropy L_cls(s_{c*}) = −log(s_{c*}). λ is a multi-task balance factor; c* is the ground-truth class label, with c* = 0 denoting the background class and c* ≠ 0 a foreground class; the indicator [c* > 0] equals 1 for non-background objects, so the regression box is adjusted only for non-background objects. L_reg(t, t*) is the regression loss, where t* is the manually annotated ground-truth box.
Compared with the prior art, the beneficial effect of the method is that, for knife switches with partially occluded features or very large features, the R-FCN network combining local and global features can still predict correctly where the original R-FCN, based on local features alone, cannot. The R-FCN knife switch detection method combining local and global features effectively reduces the false-detection and missed-detection rates of knife switch detection in complex substation scenes.
Drawings
To illustrate the technical solution of the present invention more clearly, it is further described below with reference to the drawings used in the embodiments.
FIG. 1 is a schematic diagram of the operation flow of the R-FCN knife switch detection method combining local features and global features.
FIG. 2 is a block diagram of an R-FCN model combining local features and global features of the present invention.
FIG. 3 is a block diagram of the local feature prediction of the present invention.
FIG. 4 is a block diagram of global feature prediction of the present invention.
Detailed Description
The following describes embodiments of the present invention in detail with reference to FIGS. 1-4.
As shown in FIG. 1, the R-FCN knife switch detection method combining local features and global features comprises the following steps:
Step 1: when the substation auxiliary monitoring system collects knife switch images, they are captured from different angles, different fields of view, and different backgrounds and under varying natural conditions, ensuring the diversity of the collected pictures.
Step 2: the dataset collected in step 1 is cleaned to remove damaged image data and divided into a dataset containing only knife switch images and a dataset not containing a knife switch but containing similar objects. The collected dataset contains 800 images with a knife switch and 800 images without a knife switch but with similar objects; each is split into a training set and a test set at a ratio of 8:2, and the knife switch images are labeled with the position and state of the knife switch. The sample counts for the first training are shown in Table 1 and those for the second training in Table 2.
Step 3: the R-FCN backbone feature extraction network is adjusted. The classification network ResNet101 is adopted as the backbone; an RPN is applied after the fourth group of convolutional layers of ResNet101 to generate regions of interest (for the subsequent knife switch detection); the pooling layer and fully connected layer after the fifth group of convolutional layers are discarded; and the final output has 2048 channels, as shown in FIG. 2.
Step 4: after the trimmed backbone network ResNet101 selected in step 3, a convolutional layer with a 1×1 kernel is appended to reduce the number of channels to 1024, reducing the feature dimensionality without changing the feature map size, as shown in FIG. 2.
Step 5: the local feature prediction branch is constructed: the original R-FCN network model with local feature prediction outputs the local prediction results, and each RPN proposal region is divided into 7×7 local regions for prediction, as shown in FIG. 3.
Step 6: the global feature prediction branch is constructed: a pooling operation is applied to the extracted semantic features to unify their size, and then convolutional layers with 7×7 and 1×1 kernels are used in series to output the global prediction results, as shown in FIG. 4.
Step 7: the local and global prediction results are fused: both results are regularized, uniformly scaled to the same numerical interval, and added to complete the information-fusion prediction.
Further, the mathematical expressions are

y = x / ‖x‖₂, i.e. yᵢ = xᵢ / √(Σⱼ xⱼ²)

where x = (x₀, x₁, …, xₙ) is a prediction output vector and y = (y₀, y₁, …, yₙ) is the regularized prediction output. The values of the two prediction outputs are then uniformly scaled to the interval 0 to 1, the two vectors are added, and a Softmax operation outputs the final prediction:

Softmax(rᵢ) = e^{rᵢ} / Σⱼ e^{rⱼ}

where rᵢ is the i-th value in the output result vector.
Step 8: the model is trained. The backbone feature extraction network ResNet101 is initialized with parameters pre-trained on the ImageNet dataset, and the loss function is the same as that of the R-FCN network. The first training uses only the data containing knife switch images and runs until the loss function converges, after which the network model is saved. The second training, starting from the parameters of the first, uses both the knife switch data and the knife-switch-like data without a knife switch, further strengthening the model's ability to distinguish knife switches. The test results of the resulting model are shown in Table 3.
Further, in step 8 the loss function used for training is

L(s, t) = L_cls(s_{c*}) + λ[c* > 0] L_reg(t, t*)

where L(s, t) is the total of the classification loss and the regression loss; s is the vector of class prediction probabilities and s_{c*} the predicted probability of the ground-truth class c*; t is the regression box predicted by the model. The classification loss is the cross-entropy L_cls(s_{c*}) = −log(s_{c*}). λ is a multi-task balance factor; c* is the ground-truth class label, with c* = 0 denoting the background class and c* ≠ 0 a foreground class; the indicator [c* > 0] equals 1 for non-background objects, so the regression box is adjusted only for non-background objects. L_reg(t, t*) is the regression loss, where t* is the manually annotated ground-truth box.
Table 1 is the statistics of the number of samples of the first training set.
Table 2 is the statistics of the number of samples of the second training set.
Table 3 shows the comparison of the accuracy of the R-FCN knife switch detection combining the local feature and the global feature.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the spirit and scope of the invention as defined by the appended claims and their equivalents.

Claims (9)

1.一种联合局部特征和全局特征的R-FCN刀闸检测方法,其特征在于,包括以下步骤:1. A R-FCN knife switch detection method combining local features and global features, characterized in that it comprises the following steps: 步骤1:搭建变电站辅助监控系统,从不同的相机角度和不同时刻的天气情况对目标进行图像采集;Step 1: Build a substation auxiliary monitoring system to collect images of the target from different camera angles and weather conditions at different times; 步骤2:对步骤1采集的图像划分为训练集图像、测试集图像,并对图像数据集进行清洗以及标注;Step 2: Divide the images collected in step 1 into training set images and test set images, and clean and annotate the image data set; 步骤3:构建具有联合局部特征和全局特征的R-FCN刀闸检测模型:调整R-FCN主干特征提取网络:采用分类网络ResNet101作为主干特征提取网络,在ResNet101第四组卷积层后使用区域建议网络RPN操作产生感兴趣区域,抛弃使用ResNet101第五组卷积层后面的池化层和全连接层;Step 3: Build an R-FCN knife switch detection model with joint local features and global features: Adjust the R-FCN backbone feature extraction network: Use the classification network ResNet101 as the backbone feature extraction network, use the region proposal network RPN operation after the fourth group of convolutional layers of ResNet101 to generate the region of interest, and discard the pooling layer and fully connected layer after the fifth group of convolutional layers of ResNet101; 步骤4:基于步骤3调整的主干特征提取网络,输出的通道数为2048个,在其后面附加卷积核大小为1×1的卷积层将通道数降低为1024个;Step 4: Based on the backbone feature extraction network adjusted in step 3, the number of output channels is 2048, and a convolution layer with a convolution kernel size of 1×1 is added to it to reduce the number of channels to 1024; 步骤5:构建局部特征预测分支:采用原始具有局部特征预测的网络模型R-FCN输出局部预测结果,且局部特征预测采用RPN建议区域划分为7×7个局部区域进行预测;Step 5: Construct a local feature prediction branch: Use the original network model R-FCN with local feature prediction to output the local prediction results, and the local feature prediction uses the RPN recommended area to be divided into 7×7 local areas for prediction; 步骤6:构建全局特征预测分支:全局特征预测基于步骤4操作提取的语义特征,首先对提取的语义特征进行池化操作,统一提取的语义特征大小;然后,串联使用卷积核大小为7×7和1×1的卷积层输出全局预测结果;Step 6: Construct a global feature 
prediction branch: The global feature prediction is based on the semantic features extracted in step 4. First, the extracted semantic features are pooled to unify the size of the extracted semantic features. Then, convolutional layers with kernel sizes of 7×7 and 1×1 are used in series to output the global prediction results. 步骤7:融合局部预测结果和全局预测结果:对局部预测结果和全局预测结果进行正则化,统一缩放至同一数值区间并进行相加,完成信息融合预测;Step 7: Fusion of local prediction results and global prediction results: Regularization of local prediction results and global prediction results, uniform scaling to the same numerical range and addition, to complete information fusion prediction; 步骤8:训练模型:模型训练使用的损失函数选择与R-FCN网络相同的损失函数,用于指导模型参数的优化;网络训练参数更新至损失函数收敛,保存网络模型。Step 8: Training model: The loss function used for model training is the same as that of the R-FCN network to guide the optimization of model parameters; the network training parameters are updated until the loss function converges and the network model is saved. 2.根据权利要求1所述的一种联合局部特征和全局特征的R-FCN刀闸检测方法,其特征在于:所述步骤1中的辅助监控系统是由多个网络摄像机组合对变电站各角度实行全面覆盖、实时监控,所述辅助监控系统嵌入相机集控程序,通过多个网络摄像机实现不同角度、不同背景、不同天气因素的数据采集。2. According to the R-FCN knife switch detection method combining local features and global features described in claim 1, it is characterized in that: the auxiliary monitoring system in step 1 is composed of a combination of multiple network cameras to implement comprehensive coverage and real-time monitoring of all angles of the substation, and the auxiliary monitoring system is embedded in the camera centralized control program to realize data collection from different angles, different backgrounds, and different weather factors through multiple network cameras. 3.根据权利要求1所述的一种联合局部特征和全局特征的R-FCN刀闸检测方法,其特征在于:所述步骤2中的清洗数据集具体为:将相机震动造成的模糊图像进行剔除,将包含刀闸的图像归为一个数据集,将未包含刀闸但存在类似刀闸的图像归为一个数据集。3. 
According to the R-FCN knife switch detection method combining local features and global features described in claim 1, it is characterized in that: the cleaning data set in the step 2 is specifically: the blurred image caused by camera vibration is eliminated, the images containing knife switches are classified into one data set, and the images that do not contain knife switches but have similar knife switches are classified into one data set. 4.根据权利要求1所述的一种联合局部特征和全局特征的R-FCN刀闸检测方法,其特征在于:所述步骤5中的局部特征预测是将RPN建议区域划分为7×7个局部区域进行预测。4. According to the R-FCN knife switch detection method combining local features and global features described in claim 1, it is characterized in that: the local feature prediction in step 5 is to divide the RPN recommended area into 7×7 local areas for prediction. 5.根据权利要求1所述的一种联合局部特征和全局特征的R-FCN刀闸检测方法,其特征在于:所述步骤7中的融合局部预测结果和全局预测结果具体为:使用正则化方式分别对局部预测结果和全局预测结果进行正则化,且使用的是L2正则化,数学表达式为:5. According to the R-FCN knife switch detection method combining local features and global features of claim 1, it is characterized in that: the fusion of local prediction results and global prediction results in step 7 is specifically: regularizing the local prediction results and the global prediction results respectively using a regularization method, and using L2 regularization, the mathematical expression is: 式中x、y为向量,x=(x0,x1,x2...xn),y=(y0,y1,y2...yn),表示预测输出结果或者,最终将两种预测结果数值统一缩放至0到1区间。 Wherein x and y are vectors, x=( x0 , x1 , x2 ... xn ), y=( y0 , y1 , y2 ... yn ), representing the predicted output result or, finally, the two predicted results are uniformly scaled to the interval of 0 to 1. 6.根据权利要求5所述的一种联合局部特征和全局特征的R-FCN刀闸检测方法,其特征在于,使用的融合方式为:对相同维度的向量相加。6. According to the R-FCN knife switch detection method combining local features and global features as described in claim 5, it is characterized in that the fusion method used is: adding vectors of the same dimension. 
7.根据权利要求1所述的一种联合局部特征和全局特征的R-FCN刀闸检测方法,其特征在于:所述步骤8中模型训练使用划分的两种数据集进行两次训练:第一次训练只使用包含刀闸的清晰数据集,第二次训练使用包含刀闸的和未包含刀闸但类似刀闸的数据数量比例为1:1的模糊数据集。7. According to the R-FCN knife switch detection method combining local features and global features described in claim 1, it is characterized in that: the model training in step 8 uses two divided data sets for two trainings: the first training uses only the clear data set containing the knife switch, and the second training uses the fuzzy data set with a ratio of 1:1 between the data containing the knife switch and the data not containing the knife switch but similar to the knife switch. 8.根据权利要求7所述的一种联合局部特征和全局特征的R-FCN刀闸检测方法,其特征在于:所述训练模型具体为:训练时使用迁移学习思想,对主干特征提取网络采用再ImageNet数据集上训练好的模型权重;所述清晰数据集指的是所有训练图像至少包含一个刀闸图像,所述模糊数据集指的是包含刀闸图像和未包含刀闸且类似刀闸的图像各占50%。8. According to the R-FCN knife switch detection method combining local features and global features described in claim 7, it is characterized in that: the training model is specifically: the idea of transfer learning is used during training, and the model weights trained on the ImageNet data set are used for the backbone feature extraction network; the clear data set refers to all training images containing at least one knife switch image, and the fuzzy data set refers to images containing knife switch images and images that do not contain knife switches and are similar to knife switches, each accounting for 50%. 9.根据权利要求7所述的一种联合局部特征和全局特征的R-FCN刀闸检测方法,其特征在于:所述训练模型:训练使用的损失函数数学表达式为:9. 
The R-FCN knife switch detection method combining local features and global features according to claim 7, characterized in that, in training the model, the loss function used in training has the mathematical expression L(s, t) = L_cls(s_c*) + λ[c* > 0]·L_reg(t, t*), where L(s, t) is the total of the classification loss and the regression loss, s is the category prediction probability, s_c* is the predicted probability of category c*, and t is the regression box predicted by the model; L_cls(s_c*) is the classification loss, with the mathematical expression L_cls(s_c*) = −log(s_c*); λ is the multi-task balancing factor; c* is the true category label, where c* = 0 denotes the background category and c* ≠ 0 the corresponding object category; [c* > 0] is an indicator factor whose value is 1 for non-background categories, meaning the regression box is adjusted only for non-background objects; L_reg(t, t*) is the regression loss, where t is the target box predicted by the model and t* is the manually annotated ground-truth box.
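A minimal numeric sketch of the multi-task loss in claim 9, assuming the standard smooth-L1 regression loss commonly paired with this formulation (the patent text does not spell out L_reg's inner form):

```python
import math

def smooth_l1(d):
    """Smooth-L1 on one box-coordinate difference (assumed form of L_reg)."""
    return 0.5 * d * d if abs(d) < 1 else abs(d) - 0.5

def rfcn_loss(s_cstar, c_star, t, t_star, lam=1.0):
    """L(s,t) = -log(s_{c*}) + lam * [c* > 0] * L_reg(t, t*).

    s_cstar: predicted probability of the true class c*;
    c_star = 0 marks background, so regression is skipped for it."""
    l_cls = -math.log(s_cstar)
    l_reg = sum(smooth_l1(a - b) for a, b in zip(t, t_star))
    return l_cls + lam * (1 if c_star > 0 else 0) * l_reg
```

For a background proposal predicted with probability 1, both terms vanish; for a foreground proposal the box term enters weighted by λ.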
CN202210453322.6A 2022-04-27 2022-04-27 R-FCN knife switch detection method combining local features and global features Active CN114821042B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210453322.6A CN114821042B (en) 2022-04-27 2022-04-27 R-FCN knife switch detection method combining local features and global features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210453322.6A CN114821042B (en) 2022-04-27 2022-04-27 R-FCN knife switch detection method combining local features and global features

Publications (2)

Publication Number Publication Date
CN114821042A CN114821042A (en) 2022-07-29
CN114821042B true CN114821042B (en) 2025-07-22

Family

ID=82509689

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210453322.6A Active CN114821042B (en) 2022-04-27 2022-04-27 R-FCN knife switch detection method combining local features and global features

Country Status (1)

Country Link
CN (1) CN114821042B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115359441A (en) * 2022-08-29 2022-11-18 安徽大学 Anomaly detection method for spilled objects based on Vit network heuristic self-supervised training
CN118015555A (en) * 2024-04-10 2024-05-10 南京国电南自轨道交通工程有限公司 A switch state recognition method based on visual detection and mask image direction vector

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106447658A (en) * 2016-09-26 2017-02-22 西北工业大学 Significant target detection method based on FCN (fully convolutional network) and CNN (convolutional neural network)
CN113221969A (en) * 2021-04-25 2021-08-06 浙江师范大学 Semantic segmentation system and method based on Internet of things perception and based on dual-feature fusion

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150104102A1 (en) * 2013-10-11 2015-04-16 Universidade De Coimbra Semantic segmentation method with second-order pooling

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106447658A (en) * 2016-09-26 2017-02-22 西北工业大学 Significant target detection method based on FCN (fully convolutional network) and CNN (convolutional neural network)
CN113221969A (en) * 2021-04-25 2021-08-06 浙江师范大学 Semantic segmentation system and method based on Internet of things perception and based on dual-feature fusion

Also Published As

Publication number Publication date
CN114821042A (en) 2022-07-29

Similar Documents

Publication Publication Date Title
CN112380952B (en) Real-time detection and recognition method of infrared image of power equipment based on artificial intelligence
CN113436184B (en) Power equipment image defect identification method and system based on improved twin network
CN114821042B (en) R-FCN knife switch detection method combining local features and global features
CN110334661A (en) Infrared Power Transmission and Transformation Abnormal Hot Spot Target Detection Method Based on Deep Learning
CN109101906A (en) A kind of converting station electric power equipment infrared image exception real-time detection method and device
CN113160184B (en) Unmanned aerial vehicle intelligent inspection cable surface defect detection method based on deep learning
CN109544501A (en) A kind of transmission facility defect inspection method based on unmanned plane multi-source image characteristic matching
CN110378221A (en) A kind of power grid wire clamp detects and defect identification method and device automatically
CN112332541B (en) Monitoring system and method for transformer substation
CN112697798A (en) Infrared image-oriented diagnosis method and device for current-induced thermal defects of power transformation equipment
CN111539355A (en) Photovoltaic panel foreign matter detection system and detection method based on deep neural network
CN116681885B (en) Infrared image target identification method and system for power transmission and transformation equipment
CN109389322A (en) The disconnected broken lot recognition methods of grounded-line based on target detection and long memory models in short-term
CN113205039A (en) Power equipment fault image identification and disaster investigation system and method based on multiple DCNNs
CN118468241A (en) A distribution station visualization system and method based on digital twin technology
Shan et al. Research on efficient detection method of foreign objects on transmission lines based on improved YOLOv4 network
CN116664490A (en) Defect detection method for power equipment based on structural reparameterization
CN115690659A (en) Substation safety detection method and device under occlusion conditions
CN110618129A (en) Automatic power grid wire clamp detection and defect identification method and device
CN117036665B (en) Knob switch state identification method based on twin neural network
CN118552894A (en) Substation indoor state identification method and device
Yang et al. Abnormal scene image recognition method for intelligent operation and maintenance of electrical equipment in substations
CN117791864A (en) Remote inspection system for power line
CN116205905A (en) Power distribution network construction safety and quality image detection method and system based on mobile terminal
Zhang et al. Intelligent Detection Model for Power Grids Based on Graph Neural Networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant