
CN110863935B - Recognition method of blade attachments of ocean current machine based on VGG16-SegUnet and dropout - Google Patents


Info

Publication number: CN110863935B (application CN201911132810.1A; authority: CN, China)
Prior art keywords: attachments, vgg16, image, segunet, label
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN110863935A
Inventors: 彭海洋 (Peng Haiyang), 王天真 (Wang Tianzhen)
Original and current assignee: Shanghai Maritime University
Application filed by Shanghai Maritime University; priority to CN201911132810.1A; published as CN110863935A and granted as CN110863935B


Classifications

    • FMECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F03MACHINES OR ENGINES FOR LIQUIDS; WIND, SPRING, OR WEIGHT MOTORS; PRODUCING MECHANICAL POWER OR A REACTIVE PROPULSIVE THRUST, NOT OTHERWISE PROVIDED FOR
    • F03BMACHINES OR ENGINES FOR LIQUIDS
    • F03B11/00Parts or details not provided for in, or of interest apart from, the preceding groups, e.g. wear-protection couplings, between turbine and generator
    • F03B11/008Measuring or testing arrangements
    • FMECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F03MACHINES OR ENGINES FOR LIQUIDS; WIND, SPRING, OR WEIGHT MOTORS; PRODUCING MECHANICAL POWER OR A REACTIVE PROPULSIVE THRUST, NOT OTHERWISE PROVIDED FOR
    • F03BMACHINES OR ENGINES FOR LIQUIDS
    • F03B11/00Parts or details not provided for in, or of interest apart from, the preceding groups, e.g. wear-protection couplings, between turbine and generator
    • FMECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F03MACHINES OR ENGINES FOR LIQUIDS; WIND, SPRING, OR WEIGHT MOTORS; PRODUCING MECHANICAL POWER OR A REACTIVE PROPULSIVE THRUST, NOT OTHERWISE PROVIDED FOR
    • F03BMACHINES OR ENGINES FOR LIQUIDS
    • F03B13/00Adaptations of machines or engines for special use; Combinations of machines or engines with driving or driven apparatus; Power stations or aggregates
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02EREDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
    • Y02E10/00Energy generation through renewable energy sources
    • Y02E10/20Hydro energy

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Mechanical Engineering (AREA)
  • Combustion & Propulsion (AREA)
  • Chemical & Material Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the field of fault diagnosis of ocean current machines, and in particular relates to a method for recognizing blade attachments of an ocean current machine based on VGG16-SegUnet and dropout. The steps are as follows: semantically annotate ocean current machine images to create the original dataset; augment the original dataset by rotation and apply standardized preprocessing; build the VGG16-SegUnet network; train the network with the Adadelta optimizer; test the trained network to complete the recognition of the location and size of the attachments on the machine's blades while estimating the uncertainty of the recognition results; finally, compute the exact attachment area percentage and the mean intersection-over-union. The invention solves the problems that existing image-based diagnosis methods for ocean current machine blade attachments cannot locate the attachments, output an accurate attachment percentage, or estimate recognition uncertainty, and it provides guidance for condition-based maintenance of the blades and for subsequent fault-tolerant control.

Description

Recognition method of blade attachments of ocean current machine based on VGG16-SegUnet and dropout

Technical Field

The invention relates to the field of fault diagnosis of ocean current machines, and in particular to a method for recognizing blade attachments of an ocean current machine based on VGG16-SegUnet and dropout.

Background

Ocean current energy is a renewable, clean energy source known as the "blue oil field" and the "Saudi Arabia of the sea". It mainly forms in two ways: the relatively stable flow of seawater in submarine channels and straits, and the regular seawater flow produced by tidal motion. Compared with wind and solar energy, ocean current energy has the advantages of predictability and high energy density. As an ocean current power generation device, the ocean current machine offers low noise, reliable operation, and no strict siting requirements. Its generation principle is to absorb the energy of flowing seawater through rotating machinery and convert it into electrical energy that is transmitted to the grid for grid-connected generation. Unlike wind turbines installed on land, an ocean current machine, once put into operation, stays underwater for long periods, which raises several potential problems: (1) small marine organisms are likely to breed on the blade surfaces in the form of attachments, which may trigger blade imbalance faults; (2) the blades are generally metallic, so year-round seawater immersion corrodes them and degrades their mechanical properties. Specifically, an imbalance fault caused by attachments reduces the frequency and amplitude of the generator output voltage and distorts its waveform, ultimately affecting generation quality and efficiency and even causing grid fluctuations. Therefore, for an ocean current power generation system, it is particularly important to detect the corresponding fault states effectively and issue early warnings while these faults are still in their "germination" stage.

At present, there are relatively few methods for fault detection and diagnosis of ocean current machines; they fall mainly into two types, based on electrical signals (stator current and voltage) or on image signals (underwater images of the machine). However, in the complex underwater environment, analyzing the stator current and voltage signals alone is not enough to diagnose the degree of attachment precisely. In addition, existing image-based methods for diagnosing blade attachments have the following problems: (1) the location and size of the attachments are not identified; (2) an accurate attachment area percentage is not diagnosed; (3) different attachment distributions cannot be distinguished, and there is no analysis of the uncertainty of the diagnostic results.

Summary of the Invention

To solve the above problems of image-based diagnosis of ocean current machine blade attachments and to achieve a more intuitive and precise recognition of the degree of attachment, the present invention provides a method for recognizing blade attachments of an ocean current machine based on VGG16-SegUnet and dropout.

The method for recognizing blade attachments of an ocean current machine based on VGG16-SegUnet and dropout comprises the following steps:

Step 1. First, collect underwater images of the ocean current machine with different attachment types, then semantically annotate them with the open-source tool labelme to create the original image/semantic-label dataset: background, blade, and attachments are labeled 0, 1, and 2, respectively.

Step 2. Augment the original image/semantic-label dataset with rotations over [0°, 360°], then apply standardized preprocessing to the original images:

x* = 2(x − x_min)/(x_max − x_min) − 1

where x denotes the data of any one of the R, G, B channels of an ocean current machine image; x_min and x_max denote the minimum and maximum pixel values in x, respectively; x is thus normalized to [−1, 1].

The augmented data are then divided into training, validation, and test sets in a 3:1:1 ratio.
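The preprocessing of Step 2 can be sketched in a few lines of numpy. The helper names and the toy 2x2 channel are illustrative, and the min-max form of the normalization is assumed from the stated [−1, 1] target range:

```python
import numpy as np

def normalize_channel(x):
    """Min-max normalize one image channel (R, G, or B) to [-1, 1]."""
    x = np.asarray(x, dtype=np.float64)
    x_min, x_max = x.min(), x.max()
    return 2.0 * (x - x_min) / (x_max - x_min) - 1.0

def split_3_1_1(samples, seed=0):
    """Shuffle and split the augmented dataset into train/val/test 3:1:1."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))
    n_train = 3 * len(samples) // 5
    n_val = len(samples) // 5
    train = [samples[i] for i in idx[:n_train]]
    val = [samples[i] for i in idx[n_train:n_train + n_val]]
    test = [samples[i] for i in idx[n_train + n_val:]]
    return train, val, test

channel = np.array([[0, 128], [255, 64]])
norm = normalize_channel(channel)            # 0 maps to -1.0, 255 maps to 1.0
train, val, test = split_3_1_1(list(range(100)))
```

On 100 augmented samples the 3:1:1 split yields 60/20/20 train/validation/test items.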

Step 3. Build the new VGG16-SegUnet semantic segmentation network: the convolution and max-pooling layers of the first 13 layers of VGG16 serve as the feature-extraction encoder, and these convolutional structures are initialized with ImageNet pre-trained weights; the decoder has the same structure as the decoder in SegNet and uses inverse max pooling for feature recovery; besides the forward connection between the encoder and the decoder, the network fuses the feature concatenation of Unet with the max-pooling index retention of SegNet, and a 30% dropout layer is inserted at the middle position. Introducing dropout alleviates overfitting during training while also providing different probabilistic classification results.
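The max-pooling index retention and the decoder's inverse max pooling can be illustrated with a minimal numpy sketch (single channel, 2x2 windows; in the real network this is applied per feature map, and the function names here are illustrative):

```python
import numpy as np

def max_pool_with_indices(feat, k=2):
    """2x2 max pooling that also records the argmax location in each
    window -- the max-pooling index retention borrowed from SegNet."""
    h, w = feat.shape
    pooled = np.zeros((h // k, w // k))
    indices = np.zeros((h // k, w // k), dtype=np.int64)  # flat index in window
    for i in range(h // k):
        for j in range(w // k):
            window = feat[i*k:(i+1)*k, j*k:(j+1)*k]
            indices[i, j] = window.argmax()
            pooled[i, j] = window.max()
    return pooled, indices

def max_unpool(pooled, indices, k=2):
    """Inverse max pooling used in the decoder: each pooled value is
    written back to the exact position it came from; the rest stay zero."""
    h, w = pooled.shape
    out = np.zeros((h * k, w * k))
    for i in range(h):
        for j in range(w):
            di, dj = divmod(int(indices[i, j]), k)
            out[i*k + di, j*k + dj] = pooled[i, j]
    return out

feat = np.array([[1., 5., 2., 0.],
                 [3., 4., 8., 6.],
                 [0., 2., 1., 1.],
                 [9., 7., 3., 4.]])
pooled, idx = max_pool_with_indices(feat)
restored = max_unpool(pooled, idx)
```

Because the saved indices pinpoint where each maximum came from, the decoder restores spatial detail more faithfully than upsampling to a fixed corner of each window.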

Step 4. Feed the preprocessed training-set images into VGG16-SegUnet and output pixel-wise softmax probability classification results:

h_θ(x^(i)) = [p(y^(i)=1 | x^(i); θ), p(y^(i)=2 | x^(i); θ), …, p(y^(i)=N | x^(i); θ)]^T
           = (1 / Σ_{j=1}^{N} exp(θ_j^T x^(i))) · [exp(θ_1^T x^(i)), exp(θ_2^T x^(i)), …, exp(θ_N^T x^(i))]^T

where x^(i) denotes the i-th pixel of a training image x; θ = [θ_1, θ_2, …, θ_N]^T is the weight parameter matrix of the softmax classifier; p(y^(i) = l | x^(i); θ) is the probability that the prediction y^(i) for x^(i) is the semantic label l; N is the number of classes to be semantically labeled; exp(·) is the exponential function; and h_θ(x^(i)) is the softmax prediction vector.

Then, train the whole network globally with the Adadelta optimizer, reducing the cross-entropy loss until the number of training iterations reaches the set maximum, and record the final training weights:

Loss(θ) = −(1/N_train) Σ_{n=1}^{N_train} Σ_{i=1}^{N_n} Σ_{l=1}^{N} 1{y^(i) = l} · log p(y^(i) = l | x^(i); θ)

where Loss(θ) is the cross-entropy loss function; N_train is the number of training images; N_n is the total number of pixels in the n-th image; log(·) is the logarithm; and 1{·} is an indicator function that outputs 1 when the expression inside {·} holds and 0 otherwise.
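A minimal numpy sketch of the pixel-wise softmax and the indicator-based cross-entropy of Step 4 (the toy logits and label map are illustrative; in the patent the loss is minimized by Adadelta over the whole network):

```python
import numpy as np

def pixel_softmax(logits):
    """Per-pixel softmax over N classes.  `logits` has shape (H, W, N)."""
    z = logits - logits.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, labels, n_classes=3):
    """Cross-entropy loss: the indicator 1{y=l} selects, for each pixel,
    the log-probability of its true label."""
    one_hot = np.eye(n_classes)[labels]               # realizes 1{y^(i) = l}
    return -np.mean(np.sum(one_hot * np.log(probs), axis=-1))

logits = np.zeros((2, 2, 3))
logits[..., 2] = 5.0                                  # strongly favors class 2
probs = pixel_softmax(logits)
labels = np.full((2, 2), 2)                           # every pixel is 'attachment'
loss = cross_entropy(probs, labels)
```

When the predicted distribution concentrates on the true label, the selected log-probabilities approach zero and so does the loss.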

Step 5. Feed the preprocessed test-set images into the VGG16-SegUnet loaded with the training weights and output the semantic segmentation map, completing the recognition of the location and size of the background, blade, and attachments in the image while estimating the uncertainty of the recognition results. The concrete procedure is as follows:

i. Feed each test image into the VGG16-SegUnet with its 30% dropout active and repeat the test 50 times, obtaining 50 softmax probability classification results, denoted Test_50.

ii. Compute the pixel-wise mean and variance of Test_50.

iii. From the mean of Test_50, find the maximum-probability class of each pixel and render the semantic segmentation map by visualization; the variance corresponding to the maximum-probability class, displayed as an image, is the uncertainty map.

Step 6. Finally, compute the exact attachment area percentage and the recognition accuracy metric MIoU from the semantic segmentation map:

AAP = Count(attachment) / Count(blade) × 100%

where AAP is the attachment area percentage; attachment and blade denote the attachment region and the entire blade region of the ocean current machine, respectively; and Count(·) counts the pixels in a given region.

MIoU = (1/N) Σ_{i=1}^{N} [ p_ii / (Σ_{j=1}^{N} p_ij + Σ_{j=1}^{N} p_ji − p_ii) ]

where MIoU is the mean intersection-over-union; p_ij is the number of pixels whose true label is i but which are misidentified as label j; p_ii is the number of pixels whose true label is i and which are identified as label i; and p_ji is the number of pixels whose true label is j but which are misidentified as label i.
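Both metrics of Step 6 are simple pixel counts over the segmentation map. A numpy sketch with a 2x4 toy example; whether the blade region includes the attachment pixels is an assumption made here, since the text only says "the entire blade region":

```python
import numpy as np

def attachment_area_percentage(seg, attachment=2, blade=1):
    """AAP: attachment pixels over the whole blade region (assumed here
    to be blade + attachment pixels), as a percentage."""
    n_att = np.count_nonzero(seg == attachment)
    n_blade = np.count_nonzero((seg == blade) | (seg == attachment))
    return 100.0 * n_att / n_blade

def mean_iou(pred, truth, n_classes=3):
    """MIoU from the confusion counts: p_ii / (sum_j p_ij + sum_j p_ji - p_ii)."""
    ious = []
    for c in range(n_classes):
        inter = np.count_nonzero((pred == c) & (truth == c))        # p_ii
        union = (np.count_nonzero(pred == c)
                 + np.count_nonzero(truth == c) - inter)
        ious.append(inter / union if union else 1.0)
    return sum(ious) / n_classes

truth = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 2]])
pred = np.array([[0, 0, 1, 1],
                 [0, 0, 2, 2]])
aap = attachment_area_percentage(pred)   # 2 attachment px / 4 blade-region px
miou = mean_iou(pred, truth)
```

In the toy example two of the four blade-region pixels are predicted as attachment, so AAP is 50%, while MIoU averages the per-class IoUs of background, blade, and attachment.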

Beneficial Effects

Compared with the prior art, the invention has the following technical effects:

1) The invention uses image data with three channels (R, G, B), so richer and more intuitive attachment features can be extracted than from one-dimensional current and voltage signals.

2) The semantic annotation method adopted by the invention effectively improves network training speed and generalization; the rotation data augmentation simulates the rotating operation of the machine while greatly reducing the annotation workload.

3) The proposed VGG16-SegUnet semantic segmentation network effectively segments ocean current machine images, recognizes the location and size of the background, blade, and attachments, and outputs an accurate attachment area percentage.

4) The attachment recognition method can recognize different blade attachment distributions and can estimate the uncertainty of the recognition results, providing guidance for subsequent condition-based maintenance and fault-tolerant control of the machine.

Brief Description of the Drawings

The above and/or additional aspects and advantages of the invention will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:

Figure 1 is a flow chart of the algorithm of the method for recognizing blade attachments of an ocean current machine based on VGG16-SegUnet and dropout;

Figure 2 is a schematic diagram of the image semantic annotation method;

Figure 3 is a schematic diagram of the architecture of the proposed semantic segmentation network VGG16-SegUnet.

Detailed Description

The embodiments of the invention are described in detail below; they are exemplary, intended to explain the invention, and are not to be construed as limiting it.

As shown in Figure 1, the invention provides a method for recognizing blade attachments of an ocean current machine based on VGG16-SegUnet and dropout, comprising the following steps:

Step 1. First, collect underwater images of the ocean current machine with different attachment types, then semantically annotate them with the open-source tool labelme to create the original image/semantic-label dataset: background, blade, and attachments are labeled 0, 1, and 2, respectively, as shown in Figure 2.

Step 2. Augment the original image/semantic-label dataset with rotations over [0°, 360°], then apply standardized preprocessing to the original images:

x* = 2(x − x_min)/(x_max − x_min) − 1

where x denotes the data of any one of the R, G, B channels of an ocean current machine image; x_min and x_max denote the minimum and maximum pixel values in x, respectively; x is thus normalized to [−1, 1].

The augmented data are then divided into training, validation, and test sets in a 3:1:1 ratio.

Step 3. Build the new VGG16-SegUnet semantic segmentation network: the convolution and max-pooling layers of the first 13 layers of VGG16 serve as the feature-extraction encoder, and these convolutional structures are initialized with ImageNet pre-trained weights; the decoder has the same structure as the decoder in SegNet and uses inverse max pooling for feature recovery; besides the forward connection between the encoder and the decoder, the network fuses the feature concatenation of Unet with the max-pooling index retention of SegNet, and a 30% dropout layer is inserted at the middle position. Introducing dropout alleviates overfitting during training while also providing different probabilistic classification results. The specific architecture of VGG16-SegUnet is shown in Figure 3.

Step 4. Feed the preprocessed training-set images into VGG16-SegUnet and output pixel-wise softmax probability classification results:

h_θ(x^(i)) = [p(y^(i)=1 | x^(i); θ), p(y^(i)=2 | x^(i); θ), …, p(y^(i)=N | x^(i); θ)]^T
           = (1 / Σ_{j=1}^{N} exp(θ_j^T x^(i))) · [exp(θ_1^T x^(i)), exp(θ_2^T x^(i)), …, exp(θ_N^T x^(i))]^T

where x^(i) denotes the i-th pixel of a training image x; θ = [θ_1, θ_2, …, θ_N]^T is the weight parameter matrix of the softmax classifier; p(y^(i) = l | x^(i); θ) is the probability that the prediction y^(i) for x^(i) is the semantic label l; N is the number of classes to be semantically labeled; exp(·) is the exponential function; and h_θ(x^(i)) is the softmax prediction vector.

Then, train the whole network globally with the Adadelta optimizer, reducing the cross-entropy loss until the number of training iterations reaches the set maximum, and record the final training weights:

Loss(θ) = −(1/N_train) Σ_{n=1}^{N_train} Σ_{i=1}^{N_n} Σ_{l=1}^{N} 1{y^(i) = l} · log p(y^(i) = l | x^(i); θ)

where Loss(θ) is the cross-entropy loss function; N_train is the number of training images; N_n is the total number of pixels in the n-th image; log(·) is the logarithm; and 1{·} is an indicator function that outputs 1 when the expression inside {·} holds and 0 otherwise.

Step 5. Feed the preprocessed test-set images into the VGG16-SegUnet loaded with the training weights and output the semantic segmentation map, completing the recognition of the location and size of the background, blade, and attachments in the image while estimating the uncertainty of the recognition results. The concrete procedure is as follows:

i. Feed each test image into the VGG16-SegUnet with its 30% dropout active and repeat the test 50 times, obtaining 50 softmax probability classification results, denoted Test_50.

ii. Compute the pixel-wise mean and variance of Test_50.

iii. From the mean of Test_50, find the maximum-probability class of each pixel and render the semantic segmentation map by visualization; the variance corresponding to the maximum-probability class, displayed as an image, is the uncertainty map.

Step 6. Finally, compute the exact attachment area percentage and the recognition accuracy metric MIoU from the semantic segmentation map:

AAP = Count(attachment) / Count(blade) × 100%

where AAP is the attachment area percentage; attachment and blade denote the attachment region and the entire blade region of the ocean current machine, respectively; and Count(·) counts the pixels in a given region.

MIoU = (1/N) Σ_{i=1}^{N} [ p_ii / (Σ_{j=1}^{N} p_ij + Σ_{j=1}^{N} p_ji − p_ii) ]

where MIoU is the mean intersection-over-union; p_ij is the number of pixels whose true label is i but which are misidentified as label j; p_ii is the number of pixels whose true label is i and which are identified as label i; and p_ji is the number of pixels whose true label is j but which are misidentified as label i.

Claims (1)

1.一种基于VGG16-SegUnet和dropout的海流机叶片附着物识别方法,其特征在于,包括以下步骤:1. a method for identifying the attachments of ocean current machine blades based on VGG16-SegUnet and dropout, is characterized in that, comprises the following steps: 步骤一、首先,采集不同附着类型的海流机水下图像,然后使用开源工具labelme进行语义标注,从而完成原始图像-语义标签数据集的创建:背景,叶片,附着物分别被标注为0,1,2;Step 1. First, collect underwater images of current machines with different attachment types, and then use the open source tool labelme for semantic labeling to complete the creation of the original image-semantic label dataset: the background, leaves, and attachments are marked as 0, 1, respectively ,2; 步骤二、采用[0°,360°]的旋转数据增强技术扩充原始图像-语义标签数据集,然后对原始图像进行标准化预处理:Step 2: Expand the original image-semantic label dataset using the [0°, 360°] rotation data enhancement technique, and then standardize the original image:
Figure FDA0002278797240000011
Figure FDA0002278797240000011
其中,x表示海流机图像中R,G,B任意一个维度的数据;xmin,xmax分别表示x中的最小,最大像素值;x最终被标准化到[-1,1];Among them, x represents the data of any dimension of R, G, B in the current machine image; x min , x max represent the minimum and maximum pixel values in x respectively; x is finally normalized to [-1, 1]; 再将增强后的数据按3:1:1的比例划分为训练集、验证集和测试集;Then the enhanced data is divided into training set, validation set and test set according to the ratio of 3:1:1; 步骤三、搭建VGG16-SegUnet新型语义分割网络:将VGG16前13层的卷积和最大池化模型设定为特征提取编码层并使用ImageNet预训练权重初始化这些卷积结构;解码层的结构与SegNet中的解码层相同,采用反最大池化进行特征恢复;编码层与解码层之间除前向连接外,还融合了Unet中的特征级联以及SegNet中的最大池化索引保留技术,并在中间位置插入了一个30%dropout层;Dropout的引入在缓解训练过拟合现象的同时,也提供了不同的概率分类结果;Step 3. Build a new semantic segmentation network of VGG16-SegUnet: set the convolution and max pooling models of the first 13 layers of VGG16 as the feature extraction coding layer and use ImageNet pre-training weights to initialize these convolutional structures; the structure of the decoding layer is the same as that of SegNet. The decoding layer is the same, and the inverse max pooling is used for feature recovery; in addition to the forward connection between the encoding layer and the decoding layer, the feature cascade in Unet and the maximum pooling index retention technology in SegNet are also integrated, and in the A 30% dropout layer is inserted in the middle position; the introduction of dropout not only alleviates the phenomenon of training overfitting, but also provides different probability classification results; 步骤四、将预处理好的训练集图像数据输入至VGG16-SegUnet中,输出逐像素softmax概率分类结果:Step 4. Input the preprocessed training set image data into VGG16-SegUnet, and output the pixel-by-pixel softmax probability classification result:
Figure FDA0002278797240000012
Figure FDA0002278797240000012
其中,x(i)表示一张训练图像x中的第i个像素点;θ为softmax分类器的权重参数矩阵,且
Figure FDA0002278797240000013
p(y(i)=l|x(i);θ)表示x(i)的预测结果y(i)为语义标签l的概率;N表示待语义标注的类别个数;exp(·)表示指数函数;hθ(x(i))为softmax预测结果向量;
Among them, x (i) represents the ith pixel in a training image x; θ is the weight parameter matrix of the softmax classifier, and
Figure FDA0002278797240000013
p(y (i) =l|x (i) ; θ) represents the probability that the prediction result y (i) of x ( i) is the semantic label 1; N represents the number of categories to be semantically labeled; exp ( ) represents Exponential function; h θ (x (i) ) is the vector of softmax prediction results;
Then, the entire network is trained globally with the Adadelta optimizer, reducing the cross-entropy loss until the number of training iterations reaches the set maximum, and the final training weights are recorded:

Loss(\theta) = -\frac{1}{N_{train}} \sum_{n=1}^{N_{train}} \sum_{i=1}^{N_n} \sum_{l=1}^{N} 1\{y^{(i)} = l\} \log p(y^{(i)} = l \mid x^{(i)}; \theta)

where Loss(θ) is the cross-entropy loss function; N_train is the number of training images; N_n is the total number of pixels in the n-th image; log(·) is the logarithmic function; and 1{·} is an indicator function that outputs 1 when the expression inside {·} holds, and 0 otherwise;

Step 5. Input the preprocessed test-set images into the VGG16-SegUnet loaded with the trained weights and output the semantic segmentation map, completing the recognition of the background, the blade, and the position and size of the attachments, while estimating the uncertainty of the recognition result. The specific procedure is as follows:

i. Input each test image into the VGG16-SegUnet with 30% dropout active and repeat the test 50 times, obtaining 50 softmax probability classification results, denoted Test_50;

ii. Compute the mean \overline{Test_{50}} and the variance Var(Test_{50}) of the 50 results;

iii. From \overline{Test_{50}}, find the maximum-probability class of each pixel and display the semantic segmentation map via visualization; the variance corresponding to the maximum-probability class is displayed intuitively in image form, which constitutes the uncertainty image;
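Steps i–iii amount to Monte-Carlo dropout: dropout is kept active at test time and the stochastic forward passes are averaged. A minimal NumPy sketch follows; the stochastic model here is a stand-in random function, not the patent's VGG16-SegUnet, and the image size and class count are assumed toy values.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, N, T = 4, 4, 3, 50   # toy image size, class count, number of MC passes

def stochastic_forward(rng):
    """Stand-in for one forward pass of VGG16-SegUnet with 30% dropout active:
    returns per-pixel softmax probabilities of shape (H, W, N)."""
    logits = rng.normal(size=(H, W, N))
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Step i: repeat the stochastic test T = 50 times
test50 = np.stack([stochastic_forward(rng) for _ in range(T)])  # (T, H, W, N)

# Step ii: mean and variance over the 50 results
mean = test50.mean(axis=0)
var = test50.var(axis=0)

# Step iii: max-probability class per pixel -> segmentation map;
# the variance of that chosen class forms the uncertainty map
seg_map = mean.argmax(axis=-1)
uncertainty = np.take_along_axis(var, seg_map[..., None], axis=-1)[..., 0]
print(seg_map.shape, uncertainty.shape)
```

High values in `uncertainty` flag pixels where the 50 passes disagree, i.e. where the recognition result is least reliable.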
Step 6. Finally, the exact attachment area percentage and the recognition accuracy metric MIoU are computed from the semantic segmentation map:

AAP = \frac{Count(attachment)}{Count(blade)} \times 100\%

where AAP is the attachment area percentage; attachment and blade denote the attachment region and the entire blade region of the marine current turbine, respectively; and Count(·) computes the number of pixels in the specified region;
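The AAP formula is a simple pixel count over the segmentation map. A sketch under assumed conventions: integer labels 0 = background, 1 = blade, 2 = attachment, and the "entire blade region" taken to include the attached area (both are assumptions for illustration, not stated in the claims).

```python
import numpy as np

# Assumed label convention: 0 = background, 1 = blade, 2 = attachment
def attachment_area_percentage(seg_map):
    """AAP = Count(attachment) / Count(blade) * 100, where the 'entire
    blade region' is assumed to include the attached area."""
    attachment = np.count_nonzero(seg_map == 2)
    blade = np.count_nonzero((seg_map == 1) | (seg_map == 2))
    return 100.0 * attachment / blade

seg = np.array([[0, 0, 1, 1],
                [0, 1, 1, 2],
                [0, 1, 2, 2],
                [0, 0, 1, 1]])
print(attachment_area_percentage(seg))  # -> 30.0 (3 attachment / 10 blade pixels)
```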
MIoU = \frac{1}{N} \sum_{i=1}^{N} \frac{p_{ii}}{\sum_{j=1}^{N} p_{ij} + \sum_{j=1}^{N} p_{ji} - p_{ii}}

where MIoU is the mean intersection-over-union; p_ij is the number of pixels whose true label is i but which are misidentified as label j; p_ii is the number of pixels whose true label is i and which are identified as label i; and p_ji is the number of pixels whose true label is j but which are misidentified as label i.
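The MIoU metric is naturally computed from a confusion matrix whose entry (i, j) counts pixels of true label i predicted as label j. A minimal sketch with hypothetical counts (the matrix values are made up for illustration):

```python
import numpy as np

def mean_iou(conf):
    """MIoU = (1/N) * sum_i  p_ii / (sum_j p_ij + sum_j p_ji - p_ii),
    with conf[i, j] = number of pixels of true label i predicted as label j."""
    conf = np.asarray(conf, dtype=float)
    tp = np.diag(conf)                              # p_ii: correct pixels per class
    denom = conf.sum(axis=1) + conf.sum(axis=0) - tp  # union of truth and prediction
    return (tp / denom).mean()

# Hypothetical 3-class confusion matrix (background, blade, attachment)
conf = np.array([[50,  2,  1],
                 [ 3, 40,  4],
                 [ 0,  5, 20]])
print(mean_iou(conf))
```

Per class, the ratio is the intersection (diagonal) over the union (row sum plus column sum minus the diagonal, so the intersection is not counted twice); MIoU is the unweighted mean over the N classes.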
CN201911132810.1A 2019-11-19 2019-11-19 Recognition method of blade attachments of ocean current machine based on VGG16-SegUnet and dropout Active CN110863935B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911132810.1A CN110863935B (en) 2019-11-19 2019-11-19 Recognition method of blade attachments of ocean current machine based on VGG16-SegUnet and dropout

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911132810.1A CN110863935B (en) 2019-11-19 2019-11-19 Recognition method of blade attachments of ocean current machine based on VGG16-SegUnet and dropout

Publications (2)

Publication Number Publication Date
CN110863935A CN110863935A (en) 2020-03-06
CN110863935B true CN110863935B (en) 2020-09-22

Family

ID=69655089

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911132810.1A Active CN110863935B (en) 2019-11-19 2019-11-19 Recognition method of blade attachments of ocean current machine based on VGG16-SegUnet and dropout

Country Status (1)

Country Link
CN (1) CN110863935B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111666985B (en) * 2020-05-21 2022-10-21 武汉大学 A deep learning adversarial image classification defense method based on dropout
CN111914948B (en) * 2020-08-20 2024-07-26 上海海事大学 Ocean current machine blade attachment self-adaptive identification method based on rough and fine semantic segmentation network
CN112950617B (en) * 2021-03-24 2024-05-10 上海海事大学 Tidal current machine blade attachment identification method based on continuous rotation image enhancement and condition generation countermeasure network
CN113971774B (en) * 2021-10-11 2024-07-02 天津大学 Water delivery structure surface limnoperna lacustris spatial distribution characteristic identification method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8649612B1 (en) * 2010-01-06 2014-02-11 Apple Inc. Parallelizing cascaded face detection
CN103793700A (en) * 2014-02-27 2014-05-14 彭大维 Wind turbine blade image automatic recognition system based on neural network technology
CN107256546A (en) * 2017-05-23 2017-10-17 上海海事大学 Ocean current machine blade attachment method for diagnosing faults based on PCA convolution pond SOFTMAX
CN108510001A (en) * 2018-04-04 2018-09-07 北京交通大学 A kind of blade of wind-driven generator defect classification method and its categorizing system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2233198A4 (en) * 2007-12-17 2012-09-05 Nitto Denko Corp Spiral type film filtering device and mounting member, and film filtering device managing system and film filtering device managing method using the same
WO2018009202A1 (en) * 2016-07-07 2018-01-11 Flagship Biosciences Inc. Continuous tissue analysis scoring scheme based on cell classifications
US11676296B2 (en) * 2017-08-11 2023-06-13 Sri International Augmenting reality using semantic segmentation
CN109100648B (en) * 2018-05-16 2020-07-24 上海海事大学 Fusion diagnosis method of turbine impeller winding fault based on CNN-ARMA-Softmax
CN110070091B (en) * 2019-04-30 2022-05-24 福州大学 Semantic segmentation method and system based on dynamic interpolation reconstruction and used for street view understanding

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8649612B1 (en) * 2010-01-06 2014-02-11 Apple Inc. Parallelizing cascaded face detection
CN103793700A (en) * 2014-02-27 2014-05-14 彭大维 Wind turbine blade image automatic recognition system based on neural network technology
CN107256546A (en) * 2017-05-23 2017-10-17 上海海事大学 Ocean current machine blade attachment method for diagnosing faults based on PCA convolution pond SOFTMAX
CN108510001A (en) * 2018-04-04 2018-09-07 北京交通大学 A kind of blade of wind-driven generator defect classification method and its categorizing system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A Novel Method for Detection of Wind Turbine Blade Imbalance Based on Multi-Variable Spectrum Imaging and Convolutional Neural Network;Zhe Cao;《Proceedings of the 38th Chinese Control Conference》;20190730;full text *
A Sparse Autoencoder and Softmax Regression Based Diagnosis Method for the Attachment on the Blades of Marine Current Turbine;YiLai Zheng;《sensors》;20190217;full text *
U-SEGNET: FULLY CONVOLUTIONAL NEURAL NETWORK BASED AUTOMATED BRAIN TISSUE SEGMENTATION TOOL;Pulkit Kumar;《arXiv》;20180612;full text *

Also Published As

Publication number Publication date
CN110863935A (en) 2020-03-06

Similar Documents

Publication Publication Date Title
CN110863935B (en) Recognition method of blade attachments of ocean current machine based on VGG16-SegUnet and dropout
CN112200244B (en) Intelligent detection method for anomaly of aerospace engine based on hierarchical countermeasure training
CN112508429B (en) A fault diagnosis method for buried pipeline cathodic protection system based on convolutional neural network
CN108896296A (en) A kind of wind turbine gearbox method for diagnosing faults based on convolutional neural networks
CN109461458B (en) Audio anomaly detection method based on generation countermeasure network
CN110543860A (en) Mechanical fault diagnosis method and system based on TJM transfer learning
CN110232188A (en) The Automatic document classification method of power grid user troublshooting work order
CN111368690A (en) Deep learning-based video image ship detection method and system under influence of sea waves
CN111931805B (en) Knowledge-guided CNN-based small sample similar abrasive particle identification method
CN111914948A (en) Ocean current machine blade attachment self-adaptive identification method based on rough and fine semantic segmentation network
CN113255661B (en) Bird species image identification method related to bird-involved fault of power transmission line
CN115953666B (en) A Substation Field Progress Recognition Method Based on Improved Mask-RCNN
CN107256546A (en) Ocean current machine blade attachment method for diagnosing faults based on PCA convolution pond SOFTMAX
CN113159046A (en) Method and device for detecting foreign matters in ballastless track bed
CN118485212A (en) Intelligent agent autonomous inspection method and system based on large model
CN115017828A (en) Power cable fault identification method and system based on bidirectional long short-term memory network
CN109919921B (en) Environmental impact degree modeling method based on generation countermeasure network
CN110318731A (en) A kind of oil well fault diagnostic method based on GAN
CN115240069A (en) A real-time obstacle detection method in foggy scene
CN117274192A (en) A pipeline magnetic leakage defect detection method based on improved YOLOv5
CN116152674A (en) Dam unmanned aerial vehicle image crack intelligent recognition method based on improved U-Net model
CN119878471B (en) Marine wind power blade health monitoring method and system
CN119513780A (en) A method and system for fault diagnosis of main power grid equipment based on voiceprint cloud-edge collaboration
Liu et al. Channel-Spatial attention convolutional neural networks trained with adaptive learning rates for surface damage detection of wind turbine blades
CN109829887B (en) Image quality evaluation method based on deep neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant