
CN117152139A - Patch inductance defect detection method based on example segmentation technology - Google Patents


Publication number
CN117152139A
Authority
CN
China
Prior art keywords
inductance
patch
bounding box
detection model
defect
Prior art date
Legal status: Pending
Application number
CN202311414253.9A
Other languages
Chinese (zh)
Inventor
喻璟怡
邱俊航
夏军
胡黎明
Current Assignee
Jiangxi Weichuang Electronics Co ltd
East China Jiaotong University
Original Assignee
Jiangxi Weichuang Electronics Co ltd
East China Jiaotong University
Priority date
Filing date
Publication date
Application filed by Jiangxi Weichuang Electronics Co ltd and East China Jiaotong University
Priority to CN202311414253.9A
Publication of CN117152139A


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging


Abstract

The invention belongs to the technical field of computer vision inspection, and specifically discloses a chip inductor defect detection method based on instance segmentation technology, comprising the following steps: collecting various chip inductor images, then preprocessing and labeling them to obtain a chip inductor image data set; constructing a network detection model based on instance segmentation technology and training it with a training set; validating the trained model to obtain the final network detection model; and preprocessing chip inductor images acquired during real-time production and feeding them into the final model to detect chip inductor defects in real time. By detecting chip inductor defects with computer vision, the invention effectively improves detection efficiency, classifies the defective inductors it finds, improves detection stability and accuracy, supports real-time detection during production, and helps raise the yield of manufactured products.

Description

A chip inductor defect detection method based on instance segmentation technology

Technical Field

The invention belongs to the technical field of computer vision inspection, and specifically relates to a chip inductor defect detection method based on instance segmentation technology.

Background

Chip inductors are key components in electronic equipment; their performance directly affects the stability, reliability, and service life of that equipment. The quality requirements for chip inductors during electronics manufacturing are therefore high, and they must be strictly inspected. Defect detection allows defective products to be found and removed promptly, improving the overall product yield. Traditional chip inductor inspection relies mainly on manual visual inspection: operators examine the inductors one by one on the production line, identifying and marking defective parts. Manual inspection is inefficient and is easily affected by operator fatigue and experience, leading to unstable results, missed detections, and false detections. Eddy-current testing, magnetic-flux-leakage testing, and infrared testing have also appeared; eddy-current and flux-leakage testing are prone to false detections on rough surfaces, while infrared testing is subject to many constraints and is usually used only for small-scale offline inspection, so none of these meet the current need for real-time monitoring in industrial production. Instance segmentation networks have achieved good results on standard data sets: the widely used YOLOv7 employs a deeper network structure, a finer-grained feature pyramid, and more effective loss functions, giving it higher accuracy on object detection tasks. However, because chip inductor defects are varied and the targets are relatively small, such instance segmentation networks alone cannot satisfy chip inductor defect detection.

Summary of the Invention

The technical problem to be solved by the present invention, in view of the above shortcomings of the prior art, is to provide a chip inductor defect detection method based on instance segmentation technology that effectively improves detection efficiency while also improving detection stability and accuracy.

The technical solution adopted by the present invention is a chip inductor defect detection method based on instance segmentation technology, comprising the following steps:

Step 1: Capture and collect various chip inductor images with a vision camera, preprocess them, and after preprocessing annotate the images that contain chip inductor defects to obtain a chip inductor image data set; divide the data set into a training set and a test set.

Step 2: Construct a network detection model based on instance segmentation technology; the model comprises a backbone network and a head network.

A CA-ELAN module is added to the backbone network.

In the head network, each of the three final prediction heads gains a layer containing one 3×3 convolution kernel and one 1×1 convolution kernel, forming a residual structure in which batch normalization is applied before the LeakyReLU activation function. This speeds up training and raises the model's prediction accuracy while preserving prediction speed.

Step 3: Feed the training set from step 1 into the network detection model and train it, continually computing the total loss function during training and updating the model parameters by mini-batch gradient descent to obtain the trained network detection model.

Step 4: Feed the test set from step 1 into the trained network detection model for validation; the validated model is the final network detection model.

Step 5: Preprocess the chip inductor images acquired during real-time production and feed them into the final network detection model for visualized, real-time detection of chip inductor defects.

Step 6: Based on the detection results of step 5, periodically update the chip inductor image data set and retrain the network detection model on the updated data set.

Preferably, the preprocessing in step 1 comprises scaling, denoising, and image enhancement, applied in that order.

Preferably, the data set obtained in step 1 also uses mosaic data augmentation: four chip inductor images are drawn at random from the data set and combined into a new chip inductor image, which is then added to the data set.
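A minimal sketch of the mosaic step just described, combining four images into one 2×2 composite. This is an illustration only: the real augmentation must also remap the annotations (boxes/masks) onto the new image, and the function name and the naive nearest-neighbour resize here are our own simplifications, not the patent's implementation.

```python
import numpy as np

def mosaic(images, out_size=640):
    """Combine four images into a single 2x2 mosaic of side out_size.

    Sketch of mosaic augmentation: each of the four source images is
    resized (nearest neighbour) to a quarter tile and placed in one
    corner of the output canvas. Label remapping is omitted.
    """
    assert len(images) == 4
    half = out_size // 2
    canvas = np.zeros((out_size, out_size, 3), dtype=images[0].dtype)
    for k, img in enumerate(images):
        h, w = img.shape[:2]
        rows = np.arange(half) * h // half   # nearest-neighbour row indices
        cols = np.arange(half) * w // half   # nearest-neighbour column indices
        tile = img[rows][:, cols]
        r, c = (k // 2) * half, (k % 2) * half
        canvas[r:r + half, c:c + half] = tile
    return canvas
```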

Preferably, the chip inductor defects of step 1 include hidden cracks in the magnetic ring, exposed copper on the electrodes, exposed wire on the electrodes, and magnetic ring breakage.

Preferably, the CA-ELAN module introduces a CA module into the E-ELAN module of the backbone network. The CA module captures the width and height of the chip inductor image and encodes precise positions, letting the network capture multi-scale contextual information from the image; the E-ELAN module then performs feature fusion, yielding chip inductor defect image features that combine multi-scale information with context.

The CA module is a coordinate attention module, which strengthens a neural network's awareness of positional information. Traditional attention mechanisms compute attention weights from channel information alone, ignoring position; yet in object detection and segmentation of chip inductor images, positional information is essential for correctly interpreting image content. By introducing the CA module, the detection model automatically learns which feature channels matter most for the detection task and weights the channels accordingly, so the model focuses on the key features that contribute to detection while the influence of noise and useless features is reduced. In short, the CA module helps the model select important feature channels, strengthens target representation, and improves detection performance, allowing the model to exploit feature information more effectively, locate the features useful for defect detection, and suppress useless ones.

Preferably, the CA module comprises the following steps:

Step a: Take as input a chip inductor defect feature map of size H×W×C and apply global average pooling with pooling kernels of size (1, W) and (H, 1) to obtain horizontal and vertical direction feature vectors encoding channel-wise global averages:

z_c^h(i) = (1/W) Σ_{0≤j<W} x_c(i, j),   z_c^w(j) = (1/H) Σ_{0≤i<H} x_c(i, j)

where H is the height, W the width, and C the number of channels; z_c denotes the global average of the input feature map X on channel c, and x_c(i, j) the feature of channel c at coordinate (i, j).

Step b: Concatenate the horizontal and vertical feature vectors from step a and apply a convolution with a 1×1 kernel, batch normalization, and the LeakyReLU activation function to obtain the feature mapping:

f = δ(BN(F_1([z^h, z^w])))

where f denotes the attention response of the input feature map X per channel, F_1 denotes the 1×1 convolution kernel, BN batch normalization, and δ the LeakyReLU activation.

Step c: Split the feature f obtained in step b back along the horizontal and vertical directions, according to the original W and H, into two independent decomposed features f^h and f^w, and apply two 1×1 convolutions and LeakyReLU activations to transform each:

g^h = δ(F_h(f^h)),   g^w = δ(F_w(f^w))

where g^h and g^w are the attention weights of the input chip inductor defect feature map in the horizontal and vertical directions, and F_h, F_w are 1×1 convolution kernels.

Step d: Obtain the final feature map by reweighting the input with both attention maps; the final output chip inductor defect feature map is

y_c(i, j) = x_c(i, j) × g_c^h(i) × g_c^w(j).
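The coordinate pooling of step a and the reweighting of step d can be sketched in NumPy as below. Steps b and c are omitted because they require learned 1×1 convolution weights; the function names and the (C, H, W) layout are our own conventions, not the patent's.

```python
import numpy as np

def coordinate_pool(x):
    """Step a of the CA module: directional global average pooling.

    x: feature map of shape (C, H, W). Returns
    z_h of shape (C, H), the average over the width  (pooling kernel (1, W)),
    z_w of shape (C, W), the average over the height (pooling kernel (H, 1)).
    """
    z_h = x.mean(axis=2)   # z_h[c, i] = (1/W) * sum_j x[c, i, j]
    z_w = x.mean(axis=1)   # z_w[c, j] = (1/H) * sum_i x[c, i, j]
    return z_h, z_w

def reweight(x, g_h, g_w):
    """Step d: y[c, i, j] = x[c, i, j] * g_h[c, i] * g_w[c, j]."""
    return x * g_h[:, :, None] * g_w[:, None, :]
```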

Preferably, the total loss function in step 3 comprises a classification loss, a localization loss, and an objectness (target confidence) loss.

Preferably, a width-height loss is introduced into the localization loss to minimize the difference between the widths and heights of the ground-truth and predicted bounding boxes. Specifically:

L_CIoU = 1 − IOU + ρ²(b, b_gt)/c² + ρ²(w_p, w_gt)/C_w² + ρ²(h_p, h_gt)/C_h²

where b is the predicted box and b_gt the ground-truth box; IOU is the degree of overlap between the predicted and ground-truth bounding boxes; ρ²(b, b_gt) is the squared Euclidean distance between the center points of the predicted and ground-truth boxes; c² is the squared diagonal length of the smallest enclosing rectangle containing both boxes; ρ²(w_p, w_gt) is the squared difference between the predicted and ground-truth box widths and ρ²(h_p, h_gt) the squared difference between their heights; C_w² and C_h² are the squared width and height of the smallest enclosing rectangle; w_gt and h_gt are the width and height of the ground-truth box, and w_p and h_p the width and height of the predicted box.
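A sketch of this localization loss for a single box pair, in plain Python. The corner-coordinate box format and the function name are our own choices; the term structure (IoU, normalized center distance, normalized width and height differences) follows the description above.

```python
def box_loss(p, g):
    """Localization loss with width-height terms for boxes (x1, y1, x2, y2).

    Implements L = 1 - IoU + rho^2(b, b_gt)/c^2
                 + rho^2(w_p, w_gt)/C_w^2 + rho^2(h_p, h_gt)/C_h^2
    """
    # intersection and union for IoU
    ix1, iy1 = max(p[0], g[0]), max(p[1], g[1])
    ix2, iy2 = min(p[2], g[2]), min(p[3], g[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (p[2] - p[0]) * (p[3] - p[1])
    area_g = (g[2] - g[0]) * (g[3] - g[1])
    iou = inter / (area_p + area_g - inter)
    # squared center distance over squared enclosing-box diagonal
    cpx, cpy = (p[0] + p[2]) / 2, (p[1] + p[3]) / 2
    cgx, cgy = (g[0] + g[2]) / 2, (g[1] + g[3]) / 2
    ex1, ey1 = min(p[0], g[0]), min(p[1], g[1])
    ex2, ey2 = max(p[2], g[2]), max(p[3], g[3])
    cw, ch = ex2 - ex1, ey2 - ey1
    center = ((cpx - cgx) ** 2 + (cpy - cgy) ** 2) / (cw ** 2 + ch ** 2)
    # width/height differences normalized by the enclosing box
    wp, hp = p[2] - p[0], p[3] - p[1]
    wg, hg = g[2] - g[0], g[3] - g[1]
    wh = (wp - wg) ** 2 / cw ** 2 + (hp - hg) ** 2 / ch ** 2
    return 1 - iou + center + wh
```

For identical boxes every term vanishes and the loss is zero, which is the intended optimum.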

The beneficial effects of the present invention: by using computer vision to detect chip inductor defects, the invention effectively improves detection efficiency and classifies the defective inductors it finds, while also improving detection stability and accuracy; it supports real-time detection during production, helping to raise the yield of manufactured products.

Brief Description of the Drawings

Figure 1 is a flow chart of the present invention;

Figure 2 is a structural block diagram of the backbone network in the network detection model of the present invention;

Figure 3 is a structural block diagram of the head network in the network detection model of the present invention;

Figure 4 is a structural block diagram of the CA-ELAN module in the network detection model of the present invention;

Figure 5 is a structural block diagram of the CA module within the CA-ELAN module of the present invention.

Detailed Description

The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.

As shown in Figures 1 through 5, the chip inductor defect detection method based on instance segmentation technology provided by this embodiment comprises the following steps:

Step 1: Capture and collect various chip inductor images with a vision camera. The collected images include defect-free and defective chip inductor images; the rules defining a defective image were formulated by consulting the factory's production specifications and through detailed consultation with shop-floor workers at the factory. The defective images include chip inductors with hidden magnetic ring cracks, with exposed copper on the electrodes, with exposed wire on the electrodes, and with magnetic ring breakage.

Construct the chip inductor image data set by collecting sufficiently many defect-free images and sufficiently many images of each defect type. Preprocess the collected defective images: first scale each image to 640×640 pixels, then apply denoising and image enhancement in turn. After preprocessing, annotate the defective images with the position, type, and defect type of each chip inductor to obtain the data set. The preprocessed images additionally undergo mosaic data augmentation: four images are selected at random from the collection and combined into a single new image, which is annotated and added to the data set as a new training sample. Combining images in this way supplies the model with more contextual information during training, improving its generalization ability.

Divide the constructed chip inductor image data set into a training set and a test set at a ratio of 8:2.
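The 8:2 split can be sketched as a seeded shuffle followed by a cut; the function name and the fixed seed are our own illustrative choices.

```python
import random

def split_dataset(samples, train_ratio=0.8, seed=0):
    """Shuffle the annotated sample list and split it into train/test.

    With train_ratio=0.8 this is the 8:2 split described in the text.
    """
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)       # deterministic shuffle
    cut = int(len(samples) * train_ratio)
    train = [samples[i] for i in idx[:cut]]
    test = [samples[i] for i in idx[cut:]]
    return train, test
```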

Step 2: Construct a network detection model based on instance segmentation technology, comprising a backbone network and a head network. Images from the chip inductor data set are fed into the backbone network, whose E-ELAN module incorporates a CA module; the CA module captures the width and height of the chip inductor image and encodes precise positions so that the network captures multi-scale contextual information. The specific steps are:

Step a: Take as input a chip inductor defect feature map of size H×W×C and apply global average pooling with pooling kernels of size (1, W) and (H, 1) to obtain horizontal and vertical direction feature vectors encoding channel-wise global averages:

z_c^h(i) = (1/W) Σ_{0≤j<W} x_c(i, j),   z_c^w(j) = (1/H) Σ_{0≤i<H} x_c(i, j)

where H is the height, W the width, and C the number of channels; z_c denotes the global average of the input feature map X on channel c, and x_c(i, j) the feature of channel c at coordinate (i, j).

Step b: Concatenate the horizontal and vertical feature vectors from step a and apply a convolution with a 1×1 kernel, batch normalization, and the LeakyReLU activation function to obtain the feature mapping:

f = δ(BN(F_1([z^h, z^w])))

where f denotes the attention response of the input feature map X per channel, F_1 denotes the 1×1 convolution kernel, BN batch normalization, and δ the LeakyReLU activation.

Step c: Split the feature f obtained in step b back along the horizontal and vertical directions, according to the original W and H, into two independent decomposed features f^h and f^w, and apply two 1×1 convolutions and LeakyReLU activations to transform each:

g^h = δ(F_h(f^h)),   g^w = δ(F_w(f^w))

where g^h and g^w are the attention weights of the input chip inductor defect feature map in the horizontal and vertical directions, and F_h, F_w are 1×1 convolution kernels.

Step d: Obtain the final feature map by reweighting the input with both attention maps; the final output chip inductor defect feature map is

y_c(i, j) = x_c(i, j) × g_c^h(i) × g_c^w(j).

The output chip inductor defect feature map then passes through the E-ELAN module for feature fusion, yielding defect image features that combine multi-scale information with context.

The backbone network thus outputs three feature maps of different sizes. After processing by the RepVGG module, each passes through a residual network layer containing one 3×3 convolution kernel and one 1×1 convolution kernel, with batch normalization applied before the LeakyReLU activation function, and then through a convolution layer, yielding output feature maps of 20×20×27, 40×40×27, and 80×80×27 respectively.
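A quick sanity check on the 27-channel heads: the count is consistent with the four defect classes of step 1 if one assumes the conventional three anchor boxes per scale and the usual per-anchor layout of 4 box parameters plus 1 objectness score plus the class scores. The three-anchor assumption is ours; the patent does not state the anchor count explicitly.

```python
def head_channels(num_anchors, num_classes, box_params=4, obj=1):
    """Channels per prediction head: anchors * (box + objectness + classes)."""
    return num_anchors * (box_params + obj + num_classes)

# Four defect classes and three anchors per scale recover the
# 27-channel output maps (20x20x27, 40x40x27, 80x80x27) in the text.
assert head_channels(3, 4) == 27
```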

Step 3: Feed the training set from step 1 into the network detection model and train it, continually computing the total loss function, which comprises a classification loss, a localization loss, and an objectness (target confidence) loss.

The total loss function:

L_total = Σ_{k=1}^{K} a_k Σ_{i=1}^{S²} Σ_{j=1}^{B} 1_{kij}^{obj} (λ_1 L_CIoU(t_p, t_gt) + λ_2 L_obj + λ_3 L_cls)

where t_p is the prediction vector and t_gt the ground-truth vector; K is the number of output feature maps, S² the grid, and B the number of anchor boxes per grid cell; L_CIoU is the localization loss, L_obj the objectness loss, and L_cls the classification loss; λ_1, λ_2, λ_3 are the weights of the corresponding terms; 1_{kij}^{obj} indicates whether the j-th anchor box of the i-th grid cell of the k-th output feature map is a positive sample (1 if positive, 0 otherwise); and a_k balances the weight of each scale's output feature map, corresponding in turn to the 80×80×27, 40×40×27, and 20×20×27 outputs.

The classification loss:

L_cls = −Σ_{i=1}^{C} w_i [c_gt,i log(c_p,i) + (1 − c_gt,i) log(1 − c_p,i)]

where c_p is the class score of the predicted box, c_gt the true class of the target box, C the number of classes, and w_i the weight of the i-th class.

A width-height loss is introduced into the localization loss. Specifically:

L_CIoU = 1 − IOU + ρ²(b, b_gt)/c² + ρ²(w_p, w_gt)/C_w² + ρ²(h_p, h_gt)/C_h²

where b is the predicted box and b_gt the ground-truth box; IOU is the degree of overlap between the predicted and ground-truth bounding boxes; ρ²(b, b_gt) is the squared Euclidean distance between the center points of the predicted and ground-truth boxes; c² is the squared diagonal length of the smallest enclosing rectangle containing both boxes; ρ²(w_p, w_gt) is the squared difference between the predicted and ground-truth box widths and ρ²(h_p, h_gt) the squared difference between their heights; C_w² and C_h² are the squared width and height of the smallest enclosing rectangle; w_gt and h_gt are the width and height of the ground-truth box, and w_p and h_p the width and height of the predicted box.

The objectness (target confidence) loss:

L_obj = −w_obj [p_iou log(p_o) + (1 − p_iou) log(1 − p_o)]

where p_o is the objectness confidence score of the predicted box, p_iou the IoU between the predicted box and its corresponding target box, and w_obj the weight of positive samples.
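A single-prediction sketch of this confidence loss as a weighted binary cross-entropy with the IoU as soft target; the function name and the epsilon guard are ours.

```python
import math

def obj_loss(p_o, p_iou, w_obj=1.0):
    """Binary cross-entropy between predicted objectness p_o and the
    IoU target p_iou, scaled by the positive-sample weight w_obj."""
    eps = 1e-12  # guard against log(0)
    return -w_obj * (p_iou * math.log(p_o + eps)
                     + (1 - p_iou) * math.log(1 - p_o + eps))
```

At p_o = p_iou the loss reaches its minimum for that target, so a well-calibrated head learns to predict its own localization quality.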

After computing the total loss, the detection model is optimized by mini-batch gradient descent: a small batch of chip inductor images is used at each step to update the parameters and network weights, accelerating convergence and reducing oscillation during the convergence process, thereby improving the accuracy of the detection model and yielding the trained network detection model.

Step 4: Feed the test set from step 1 into the trained network detection model for validation; the validated model is the final network detection model.

Step 5: Preprocess the chip inductor images acquired during real-time production and feed them into the final network detection model, which predicts possible defects on the chip inductors in real time and outputs its predictions, including the position and size of each bounding box and the corresponding defect class. To improve detection accuracy, the results can be post-processed: a non-maximum suppression algorithm first filters the bounding boxes by confidence, keeping the most credible boxes while eliminating heavily overlapping ones; the processed results are then drawn onto the original image with the box-drawing component of the OpenCV framework, which draws a bounding box around each chip inductor defect region and labels the corresponding defect class beside it.
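The non-maximum suppression step just described can be sketched as a greedy filter in plain Python; the corner-coordinate box format and the 0.5 IoU threshold are our illustrative choices (the patent does not specify a threshold), and the OpenCV drawing step is omitted.

```python
def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression.

    boxes: list of (x1, y1, x2, y2); scores: matching confidences.
    Returns the indices of kept boxes, highest score first.
    """
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0 else 0.0

    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        # drop every remaining box that overlaps the kept one too much
        order = [j for j in order if iou(boxes[i], boxes[j]) < iou_thresh]
    return keep
```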

Step 6: After the network detection model has been in use for a period of time, new defects may appear due to changes in production equipment, raw materials, or processes. To ensure that the model maintains high recognition performance on new defect characteristics, the patch inductance image sample set must be updated promptly and the network detection model retrained and fine-tuned.

Claims (8)

1. A patch inductance defect detection method based on an instance segmentation technique, characterized by comprising the following steps:
step 1: shooting and collecting various patch inductance images through a vision camera, preprocessing the patch inductance images, marking the patch inductance images with patch inductance defects after preprocessing, obtaining a patch inductance image data set, and dividing the patch inductance image data set into a training set and a test set;
step 2: constructing a network detection model based on an instance segmentation technology, wherein the network detection model comprises a backbone network and a head network;
the CA-ELAN module is added into the backbone network layer;
adding a layer containing a 3×3 convolution kernel and a 1×1 convolution kernel to each of the final three prediction heads in the head network, with batch normalization added as a pre-operation before the LeakyReLU activation function to form a residual structure;
step 3: inputting the training set in the step 1 into a network detection model, training the network detection model, continuously calculating a total loss function in training, and continuously updating detection model parameters through small-batch gradient descent to obtain a trained network detection model;
step 4: inputting the test set in the step 1 into the trained network detection model for verification, and obtaining a final network detection model after verification;
step 5: preprocessing a patch inductance image acquired in real-time production, and inputting the patch inductance image into a final network detection model for visual real-time detection of patch inductance defects;
step 6: and (3) updating the patch inductance image data set of the network detection model at regular intervals according to the detection result in the step (5), and retraining the network detection model with the updated patch inductance image data set.
2. The method for detecting a patch inductance defect based on an instance segmentation technique according to claim 1, wherein the preprocessing of the patch inductance image in step 1 comprises: sequentially scaling, denoising, and enhancing the patch inductance image.
3. The method for detecting a patch inductance defect based on an instance segmentation technique according to claim 1 or 2, wherein the patch inductance image data set obtained in step 1 is further augmented using mosaic data augmentation, that is, 4 patch inductance images are randomly drawn from the data set and combined into a new patch inductance image, which is added to the data set.
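The mosaic augmentation of claim 3 can be sketched as a 2×2 tiling of four sampled images. A production pipeline would additionally jitter the split point, rescale each tile, and remap the defect annotations; this minimal sketch omits those steps.

```python
import numpy as np

def mosaic(images):
    """Naive mosaic: tile four equally sized (H, W, C) images into a 2x2 grid."""
    assert len(images) == 4
    top = np.concatenate([images[0], images[1]], axis=1)     # left | right
    bottom = np.concatenate([images[2], images[3]], axis=1)
    return np.concatenate([top, bottom], axis=0)             # top over bottom

# four constant-valued 4x4 "images" stand in for sampled patch inductance photos
imgs = [np.full((4, 4, 3), i, dtype=np.uint8) for i in range(4)]
m = mosaic(imgs)
```

The composite is twice the tile size in each dimension, with each source image occupying one quadrant.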
4. The method for detecting a patch inductance defect based on an instance segmentation technique as claimed in claim 3, wherein the patch inductance defects in step 1 include a magnetic ring dark-crack defect, an electrode copper-exposure defect, an electrode wire-exposure defect, and a magnetic ring breakage defect.
5. The method for detecting patch inductance defects based on the instance segmentation technique according to claim 1, wherein in the CA-ELAN module a CA module is introduced into the E-ELAN module of the backbone network layer; the CA module obtains the width and height of the patch inductance image from the patch inductance defect feature map of the backbone network layer and encodes the precise positions, so that the network captures multi-scale context information of the patch inductance image; the E-ELAN module then performs feature fusion to obtain patch inductance defect image features in which the multi-scale context information is fused.
6. The method for detecting patch inductance defects based on the instance segmentation technique according to claim 5, wherein the CA module comprises the steps of:
step a, inputting a feature map X of size C×H×W, and performing global average pooling with pooling kernels of sizes (1, W) and (H, 1) to obtain the horizontal-direction and vertical-direction feature vectors encoding the channel-wise global averages:

z_c^h(h) = (1/W) · Σ_{0≤i<W} x_c(h, i)

z_c^w(w) = (1/H) · Σ_{0≤j<H} x_c(j, w)

wherein H represents the height, W the width, and C the number of channels; z_c represents the global average of the input feature map X at the c-th channel, and x_c(i, j) represents the feature information of the c-th channel at coordinates (i, j);
step b, concatenating the horizontal-direction and vertical-direction feature vectors obtained in step a, and applying a convolution with a 1×1 kernel, batch normalization, and a LeakyReLU activation function to perform feature mapping:

f = LeakyReLU(BN(F_1([z^h, z^w])))

wherein f represents the attention response of the input feature map X, and F_1 represents a 1×1 convolution kernel;
step c, decomposing the feature f mapped in step b along the horizontal and vertical directions, according to the original W and H, into two independent decomposed features f^h and f^w, and performing feature transformation on each with its own 1×1 convolution kernel and a LeakyReLU activation function:

g^h = LeakyReLU(F_h(f^h)), g^w = LeakyReLU(F_w(f^w))

wherein g^h and g^w are the attention weights of the input patch inductance defect feature map in the horizontal and vertical directions, and F_h and F_w represent 1×1 convolution kernels;
step d, obtaining the final feature map by reweighting the input feature map with the two directional attention weights:

y_c(i, j) = x_c(i, j) × g_c^h(i) × g_c^w(j)

and outputting the final patch inductance defect feature map Y.
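The direction-wise pooling and gating of claim 6 can be sketched in NumPy. This sketch collapses the concat/convolution/batch-normalization/split pipeline of steps b and c into one assumed (C, C) weight matrix per direction and uses a sigmoid gate, as in the original coordinate-attention formulation, so it is a simplified illustration rather than the claimed module.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def coordinate_attention(x, w_h, w_w):
    """Simplified coordinate attention on a (C, H, W) feature map.

    w_h and w_w are assumed (C, C) 1x1-convolution weight matrices,
    one per spatial direction; BN/LeakyReLU steps are omitted.
    """
    C, H, W = x.shape
    z_h = x.mean(axis=2)                 # (C, H): pool over width  -> (1, W) kernel
    z_w = x.mean(axis=1)                 # (C, W): pool over height -> (H, 1) kernel
    g_h = sigmoid(w_h @ z_h)             # horizontal attention weights
    g_w = sigmoid(w_w @ z_w)             # vertical attention weights
    # reweight every position by its row and column attention
    return x * g_h[:, :, None] * g_w[:, None, :]

x = np.ones((2, 3, 4))
y = coordinate_attention(x, np.eye(2), np.eye(2))
```

With identity weights and an all-ones input, every position is scaled by sigmoid(1) twice, which makes the reweighting easy to verify by hand.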
7. The method of claim 1, wherein the total loss function in step 3 comprises a classification loss function, a localization loss function, and a target confidence loss function.
8. The method for detecting patch inductance defects based on the instance segmentation technique according to claim 7, wherein the localization loss function introduces a width-height loss, specifically:
L_loc = 1 − IoU + ρ²(b, b^gt)/c² + ρ²(w_p, w^gt)/C_w² + ρ²(h_p, h^gt)/C_h²

wherein b is the prediction bounding box and b^gt is the ground-truth bounding box; IoU is the degree of overlap between the prediction bounding box and the ground-truth bounding box; ρ²(b, b^gt) is the squared Euclidean distance between the center points of the prediction and ground-truth bounding boxes; c² is the squared diagonal length of the smallest enclosing rectangle that contains both the prediction and ground-truth bounding boxes; ρ²(w_p, w^gt) is the squared difference between the widths of the prediction and ground-truth bounding boxes; ρ²(h_p, h^gt) is the squared difference between their heights; C_w² and C_h² are the squared width and squared height of the smallest enclosing rectangle; w^gt and h^gt are the width and height of the ground-truth bounding box, and w_p and h_p are the width and height of the prediction bounding box.
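The terms listed in claim 8 match an EIoU-style localization loss; a direct transcription of those terms for a single box pair might look like the following illustrative sketch.

```python
def eiou_loss(pred, gt):
    """EIoU-style localization loss for [x1, y1, x2, y2] boxes:
    1 - IoU plus normalized center-distance, width-difference, and
    height-difference penalties, as enumerated in the claim."""
    px1, py1, px2, py2 = pred
    gx1, gy1, gx2, gy2 = gt
    inter = (max(min(px2, gx2) - max(px1, gx1), 0.0)
             * max(min(py2, gy2) - max(py1, gy1), 0.0))
    union = (px2 - px1) * (py2 - py1) + (gx2 - gx1) * (gy2 - gy1) - inter
    iou = inter / union
    # smallest enclosing rectangle of the two boxes
    cw = max(px2, gx2) - min(px1, gx1)
    ch = max(py2, gy2) - min(py1, gy1)
    c2 = cw ** 2 + ch ** 2                     # squared diagonal length
    rho2 = ((px1 + px2 - gx1 - gx2) ** 2       # squared center distance
            + (py1 + py2 - gy1 - gy2) ** 2) / 4.0
    wp, hp = px2 - px1, py2 - py1
    wg, hg = gx2 - gx1, gy2 - gy1
    return (1.0 - iou + rho2 / c2
            + (wp - wg) ** 2 / cw ** 2 + (hp - hg) ** 2 / ch ** 2)
```

Identical boxes give a loss of exactly zero, and the loss grows as the boxes drift apart in position or shape.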
CN202311414253.9A 2023-10-30 2023-10-30 Patch inductance defect detection method based on example segmentation technology Pending CN117152139A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311414253.9A CN117152139A (en) 2023-10-30 2023-10-30 Patch inductance defect detection method based on example segmentation technology


Publications (1)

Publication Number Publication Date
CN117152139A true CN117152139A (en) 2023-12-01

Family

ID=88904666

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311414253.9A Pending CN117152139A (en) 2023-10-30 2023-10-30 Patch inductance defect detection method based on example segmentation technology

Country Status (1)

Country Link
CN (1) CN117152139A (en)

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111223088A (en) * 2020-01-16 2020-06-02 东南大学 A casting surface defect recognition method based on deep convolutional neural network
CN112733924A (en) * 2021-01-04 2021-04-30 哈尔滨工业大学 Multi-patch component detection method
CN113674247A (en) * 2021-08-23 2021-11-19 河北工业大学 An X-ray weld defect detection method based on convolutional neural network
CN114548231A (en) * 2022-01-26 2022-05-27 广东工业大学 Patch resistor micro-cavity and welding spot feature extraction method based on multilayer convolution network
CN115457026A (en) * 2022-10-11 2022-12-09 陕西科技大学 Paper defect detection method based on improved YOLOv5
CN115511812A (en) * 2022-09-19 2022-12-23 华侨大学 Industrial product surface defect detection method based on deep learning
CN115546144A (en) * 2022-09-30 2022-12-30 湖南科技大学 PCB surface defect detection method based on improved Yolov5 algorithm
CN115908382A (en) * 2022-12-20 2023-04-04 东华大学 A Fabric Defect Detection Method Based on HCS-YOLOV5
CN116309451A (en) * 2023-03-20 2023-06-23 佛山科学技术学院 Method and system for surface defect detection of chip inductors based on token fusion
CN116385401A (en) * 2023-04-06 2023-07-04 浙江理工大学桐乡研究院有限公司 High-precision visual detection method for textile defects
CN116399888A (en) * 2023-04-20 2023-07-07 广东工业大学 A detection method and device based on chip resistor solder joint voids
CN116416613A (en) * 2023-04-13 2023-07-11 广西壮族自治区农业科学院 Citrus fruit identification method and system based on improved YOLO v7
CN116468716A (en) * 2023-04-26 2023-07-21 山东省计算中心(国家超级计算济南中心) YOLOv 7-ECD-based steel surface defect detection method
CN116630263A (en) * 2023-05-18 2023-08-22 西安工程大学 Weld X-ray image defect detection and identification method based on deep neural network
WO2023173598A1 (en) * 2022-03-15 2023-09-21 中国华能集团清洁能源技术研究院有限公司 Fan blade defect detection method and system based on improved ssd model
CN116843636A (en) * 2023-06-26 2023-10-03 三峡大学 Insulator defect detection method based on improved YOLOv7 algorithm in foggy weather scene



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20231201
