
CN113469287A - Spacecraft multi-local component detection method based on instance segmentation network - Google Patents

Spacecraft multi-local component detection method based on instance segmentation network

Info

Publication number
CN113469287A
Authority
CN
China
Prior art keywords
bounding box
spacecraft
information
local
instance segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110850338.6A
Other languages
Chinese (zh)
Inventor
陈榆琅
郭淼
高晶敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Information Science and Technology University
Original Assignee
Beijing Information Science and Technology University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Information Science and Technology University filed Critical Beijing Information Science and Technology University
Priority to CN202110850338.6A
Publication of CN113469287A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a method for detecting multiple local components of a spacecraft based on an instance segmentation network, comprising the following steps: extracting features from a spacecraft input image to obtain feature maps S3 to S5; feeding S3 to S5 into the FPN structure of the backbone network to obtain multi-scale feature maps P3 to P6; feeding the multi-scale feature maps P3 to P6 into the object detection head to detect local components of the spacecraft, thereby obtaining predictions of the target's category, bounding box and centerness; optimizing the bounding box using the category and centerness information to obtain an optimized bounding box; and feeding the optimized bounding box into the mask generation branch for instance segmentation to obtain mask information of the spacecraft's local components, from which the contour information of the components can further be obtained. Compared with existing network models, the proposed local component detection and segmentation method achieves better detection accuracy and speed.

Description

Spacecraft multi-local component detection method based on instance segmentation network
Technical Field
The invention belongs to the field of computer vision, and particularly relates to a spacecraft multi-local-component detection method based on an instance segmentation network.
Background
Obtaining the relative position and relative attitude (hereinafter referred to as relative pose) between spacecraft is an important prerequisite for space interaction tasks such as spacecraft pointing and tracking, and the detection and identification of local spacecraft components (solar wings, antennas, etc.) is a key enabling technology. Accurately identifying the category of a target component and effectively detecting the high-level feature information of local components, such as edge contours, corner points and sizes, provides strong data support for accurate estimation of the relative pose between spacecraft.
Object detection for a spacecraft in space faces two main characteristics and difficulties. 1. An on-orbit spacecraft moves at high speed in real time and its attitude changes constantly, so the overall shape and size of the spacecraft vary greatly and components are often partially occluded. 2. Continuous changes in space illumination, jitter of the imaging payload and image-quality degradation of the imaging system cause severe noise pollution, low contrast and low average brightness in space images. These problems greatly limit the detection accuracy of local spacecraft components.
Referring to FIG. 1, the CenterMask detection model is composed of a feature extractor, an object detection head and a mask generation head. The backbone network of the feature extractor is VoVNetV2, which merges a residual structure and an eSE (Effective Squeeze-Excitation) attention module into VoVNet; the object detection head consists of a class prediction branch, a centerness prediction branch and a bounding-box regression branch. The authors of CenterMask also proposed a spatial attention module (SAM) for guiding the mask generation branch to highlight pixels that carry valid information and suppress pixels that do not. In CenterMask, the feature extractor first performs 6-layer down-sampling of the input image and outputs feature maps P3 to P7; the detector then performs class prediction, bounding-box regression and centerness prediction on the feature map of each level, and the mask generation head finally produces the image segmentation result. Although CenterMask introduces an attention mechanism in the mask generation head, it still pays insufficient attention to inter-channel information, which affects detection accuracy.
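The prior-art SAM described above can be sketched roughly as follows: the feature map is pooled along the channel axis, a convolution produces a single-channel attention map, and the Sigmoid-activated map re-weights the pixels. This is a minimal illustrative sketch; the kernel size and the exact pooling combination are assumptions, not a reproduction of the CenterMask implementation.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Re-weight pixels of a feature map by a learned spatial attention map."""
    def __init__(self, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                      # x: (N, C, H, W)
        avg = x.mean(dim=1, keepdim=True)      # channel-wise average, (N, 1, H, W)
        mx, _ = x.max(dim=1, keepdim=True)     # channel-wise max, (N, 1, H, W)
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn                        # highlight pixels with valid information
```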
Therefore, it is desirable to provide an improved detection model.
Disclosure of Invention
The invention aims to provide a spacecraft multi-local-component detection method based on an instance segmentation network, which obtains the category information, bounding box information and contour information of target components more accurately and further improves the accuracy and speed of component detection.
In order to achieve the above object, the present invention provides a method for detecting multiple local components of a spacecraft based on an instance segmentation network, comprising the following steps:
Step 1: perform feature extraction on the spacecraft input image to obtain feature maps S3 to S5;
Step 2: input S3 to S5 into the FPN structure of the backbone network to obtain multi-scale feature maps P3 to P6 (see the sketch after these steps); this process is defined as follows:
P5 = Conv1×1(S5)
P6 = Maxpooling(P5)
Pi = Upsample(Si+1) + Conv1×1(Si), i = 3, 4
where Conv1×1 denotes a convolution layer with a 1×1 kernel and Upsample denotes a non-linear up-sampling layer;
Step 3: input the multi-scale feature maps P3 to P6 into the object detection head to detect local components of the spacecraft, thereby obtaining predictions of the category, bounding box and centerness of the target;
Step 4: optimize the bounding box using the category and centerness information to obtain an optimized bounding box;
Step 5: input the optimized bounding box into the mask generation branch for instance segmentation to obtain mask information of the spacecraft's local components, from which the contour information of the components can further be obtained.
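The P3 to P6 construction in step 2 can be sketched as follows in PyTorch (assumed here as the implementation framework). The backbone channel counts are illustrative, and the up-sampled term is taken from the already projected upper-level map so that channel dimensions match; the formula writes S(i+1), which is read here as that projected map.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleFPN(nn.Module):
    def __init__(self, in_channels=(512, 768, 1024), out_channels=256):
        super().__init__()
        # One 1x1 lateral convolution per backbone stage S3, S4, S5.
        self.lateral = nn.ModuleList(
            [nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels]
        )

    def forward(self, s3, s4, s5):
        l3, l4 = self.lateral[0](s3), self.lateral[1](s4)
        p5 = self.lateral[2](s5)                        # P5 = Conv1x1(S5)
        p6 = F.max_pool2d(p5, kernel_size=1, stride=2)  # P6 = Maxpooling(P5)
        # Pi = Upsample(upper level) + Conv1x1(Si), i = 3, 4 (top-down pathway).
        p4 = F.interpolate(p5, size=l4.shape[-2:], mode="nearest") + l4
        p3 = F.interpolate(p4, size=l3.shape[-2:], mode="nearest") + l3
        return p3, p4, p5, p6

# Example: S3/S4/S5 at strides 8/16/32 for a 256x256 input.
feats = [torch.randn(1, c, s, s) for c, s in [(512, 32), (768, 16), (1024, 8)]]
p3, p4, p5, p6 = SimpleFPN()(*feats)
```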
Specifically, step 1 comprises: extracting features from the input image using VoVNetV2 to obtain the feature maps S3 to S5.
Specifically, step 4 comprises: multiplying the classification score of each bounding box obtained in step 3 by its predicted centerness to obtain a final evaluation score for each bounding box. Bounding boxes located far from the object centre receive lower scores, while those located near the object centre receive higher scores; the bounding boxes are ranked and filtered using these scores to obtain the optimized bounding boxes.
In step 4, the bounding boxes are filtered by means of non-maximum suppression.
Specifically, step 5 comprises: inputting the optimized bounding-box feature map obtained in step 4 into the SCAM mask branch to obtain an information-enhanced feature map; then predicting the class of each pixel with a 1×1 convolution to generate a class-specific mask of dimension 28×28×2. After the mask information of the spacecraft's local components is obtained, the contour information of the components can further be obtained.
The invention has the following beneficial effects: the detection method can detect the category information, bounding-box information and mask information of local components of a target spacecraft with only a small number of samples. First, the feature extraction layers of the anchor-free detector FCOS are reduced and the correlation between the centerness prediction branch and the bounding-box prediction branch is strengthened, improving component detection accuracy. Then, a spatial-channel attention mechanism is designed and introduced into the mask generation branch of CenterMask, improving component segmentation accuracy. Experimental results show that the proposed local component detection and segmentation method outperforms the original network model in both detection accuracy and speed.
Drawings
In order to more clearly illustrate the embodiments or technical solutions of the present invention, the drawings used in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without inventive effort, wherein:
FIG. 1 shows the prior-art CenterMask detection model;
FIG. 2 shows the spacecraft multi-local-component detection method based on an instance segmentation network according to the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The invention provides a spacecraft multi-local-component detection method based on an instance segmentation network, which comprises the following steps:
Step 1: perform feature extraction on the spacecraft input image to obtain feature maps S3 to S5;
Step 2: input S3 to S5 into the FPN structure of the backbone network to obtain multi-scale feature maps P3 to P6; this process is defined as follows:
P5 = Conv1×1(S5)
P6 = Maxpooling(P5)
Pi = Upsample(Si+1) + Conv1×1(Si), i = 3, 4
where Conv1×1 denotes a convolution layer with a 1×1 kernel and Upsample denotes a non-linear up-sampling layer;
Step 3: input the multi-scale feature maps P3 to P6 into the object detection head to detect local components of the spacecraft, thereby obtaining predictions of the category, bounding box and centerness of the target;
Step 4: optimize the bounding box using the category and centerness information to obtain an optimized bounding box;
Step 5: input the optimized bounding box into the mask generation branch for instance segmentation to obtain mask information of the spacecraft's local components, from which the contour information of the components can further be obtained.
In a specific embodiment, the invention first uses VoVNetV2 to perform five-stage feature extraction on the input image, outputting feature maps S2 to S5. To reduce the computational load of the model and improve detection efficiency, only S3 to S5 are fed into the FPN structure. The high-resolution feature map S3 enables the detector to better detect small components such as antennas, while the feature map S5, with its larger receptive field, enables the detector to better detect large components such as the solar wing.
In a specific embodiment, the invention constructs a CNN-based detection model SCD (Satellite Components Detection). Model parameters shared with CenterMask are initialized with the parameters of a CenterMask pre-trained on the MS-COCO dataset, and the parameters that differ between CenterMask and SCD are initialized from a standard normal distribution. Model training, i.e. transfer learning, is then performed on the constructed small-sample training set, and the trained and optimized SCD model is finally obtained. With this detection model, local components of a target spacecraft can be detected automatically and accurately, yielding the category, bounding box and mask information of the target components.
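The initialization strategy just described can be sketched as follows, assuming a PyTorch model whose checkpoint is a flat state dict. The checkpoint file name and the model object are placeholders; only the "copy matching CenterMask parameters, draw the rest from a standard normal distribution" logic comes from the description.

```python
import torch
import torch.nn as nn

def init_scd(model: nn.Module, pretrained_path: str = "centermask_coco.pth"):
    # Load a CenterMask checkpoint pre-trained on MS-COCO (assumed flat state dict).
    pretrained = torch.load(pretrained_path, map_location="cpu")
    own_state = model.state_dict()
    # Copy every tensor whose name and shape match the pre-trained CenterMask.
    matched = {k: v for k, v in pretrained.items()
               if k in own_state and v.shape == own_state[k].shape}
    own_state.update(matched)
    model.load_state_dict(own_state)
    # Re-initialize the remaining (SCD-specific) parameters from N(0, 1).
    for name, param in model.named_parameters():
        if name not in matched:
            nn.init.normal_(param, mean=0.0, std=1.0)
    return model
```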
In one specific embodiment, the local component detector mainly consists of three parts: a backbone network, a feature pyramid and an object detection head. In this embodiment, VoVNetV2 is used as the backbone network; merging a residual structure and an eSE attention module into VoVNetV2 not only captures multiple receptive fields efficiently but also clarifies the interdependence between feature-map channels, enhancing the representation of feature information. The details of the backbone network are shown in Table 1. The object detection head consists of three branches: a classification prediction branch, a bounding-box regression branch and a centerness prediction branch. The classification branch predicts confidence scores for the classes and takes the class with the highest confidence as the prediction; the bounding-box regression branch predicts the four offsets of the bounding box's boundaries (left, right, top, bottom) relative to a given location. The centerness is correlated with the bounding-box offsets, since an accurate centerness can be derived from offsets that are close to their true values, while it has essentially no correlation with the classification task. In this embodiment the centerness prediction head is therefore placed in parallel with the bounding-box regression branch, which not only strengthens the correlation between the two but also reduces the number of model parameters by sharing convolution layers.
Table 1: Backbone network architecture (table provided as an image in the original publication)
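A minimal sketch of the detection head layout described above, written in PyTorch. The tower depth, GroupNorm choice and channel counts are illustrative assumptions; the point shown is that the centerness prediction shares its convolution tower with the bounding-box regression branch.

```python
import torch.nn as nn

class DetectionHead(nn.Module):
    def __init__(self, in_channels=256, num_classes=2, num_convs=4):
        super().__init__()
        def tower():
            layers = []
            for _ in range(num_convs):
                layers += [nn.Conv2d(in_channels, in_channels, 3, padding=1),
                           nn.GroupNorm(32, in_channels),
                           nn.ReLU(inplace=True)]
            return nn.Sequential(*layers)
        self.cls_tower = tower()
        self.box_tower = tower()   # shared by box regression and centerness
        self.cls_pred = nn.Conv2d(in_channels, num_classes, 3, padding=1)
        self.box_pred = nn.Conv2d(in_channels, 4, 3, padding=1)  # left, right, top, bottom offsets
        self.ctr_pred = nn.Conv2d(in_channels, 1, 3, padding=1)  # centerness

    def forward(self, feature):                 # feature: one FPN level, (N, C, H, W)
        cls_feat = self.cls_tower(feature)
        box_feat = self.box_tower(feature)
        # Centerness is predicted from the same tower as the box offsets,
        # strengthening their correlation and sharing convolution weights.
        return (self.cls_pred(cls_feat),
                self.box_pred(box_feat),
                self.ctr_pred(box_feat))
```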
In one specific embodiment, the optimized bounding-box detection flow for target components is as follows. During prediction, the detected class scores, centerness and bounding-box information are obtained through the feature extraction structure and the object detection head. The classification score of each bounding box is then multiplied by its predicted centerness to obtain a final evaluation score; boxes far from the object centre receive lower scores and boxes near the object centre receive higher scores. The bounding boxes are ranked by these scores and filtered with non-maximum suppression (NMS) to obtain the optimized bounding boxes. This method significantly improves the object detection performance of the model; the optimized bounding boxes are fed into the mask generation branch for instance segmentation, which further improves segmentation accuracy.
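The box scoring and filtering step can be sketched as follows using torchvision's NMS operator. The IoU threshold and top-k limit are assumptions chosen for illustration.

```python
from torchvision.ops import nms

def select_boxes(boxes, cls_scores, centerness, iou_thresh=0.5, top_k=100):
    # boxes: (N, 4) tensor in (x1, y1, x2, y2); cls_scores, centerness: (N,) tensors.
    scores = cls_scores * centerness      # boxes far from the object centre score low
    keep = nms(boxes, scores, iou_thresh) # suppress overlapping lower-scoring boxes
    keep = keep[:top_k]                   # keep the highest-ranked boxes
    return boxes[keep], scores[keep]
```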
In one specific embodiment, the spatial-channel attention mechanism enhances or suppresses attention with different strengths along the channel dimension and retains more channel feature information, with the aim of guiding the mask generation branch to focus on objects with salient features across different channels.
In a specific embodiment, the input feature map of the spatial-channel attention module (SCAM) has dimensions W×H×C. A single convolution layer generates an attention map of dimensions W×H×C; after Sigmoid activation, the elements of the attention map are mapped to [0, 1]; finally, the activated attention map is multiplied element-wise with the original input feature map, enhancing information-rich features and suppressing features without valid information.
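A minimal sketch of the SCAM described above. Only the W×H×C to W×H×C mapping, the Sigmoid activation and the element-wise multiplication are taken from the description; the 3×3 kernel size is an assumption, since the text only specifies a single convolution layer.

```python
import torch
import torch.nn as nn

class SCAM(nn.Module):
    """Spatial-channel attention: re-weight every position of every channel."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, x):                   # x: (N, C, H, W)
        attn = torch.sigmoid(self.conv(x))  # attention map with the same shape as x
        return x * attn                     # enhance informative features, suppress the rest
```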
In a specific embodiment, in the prediction stage, the feature map with high-quality bounding boxes is first fed into the SCAM mask branch to obtain an information-enhanced feature map; then a 1×1 convolution predicts the class of each pixel and generates a class-specific mask of dimension 28×28×2, completing the spacecraft local component segmentation task.
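The final 1×1-convolution mask prediction can be sketched as follows. The input channel count and the assumption that the SCAM-refined RoI features already have 28×28 spatial resolution are illustrative; the two output channels correspond to the component classes reported in the description.

```python
import torch.nn as nn

class MaskPredictor(nn.Module):
    def __init__(self, in_channels=256, num_classes=2):
        super().__init__()
        # 1x1 convolution predicts a class score for every pixel of the RoI feature.
        self.classify = nn.Conv2d(in_channels, num_classes, kernel_size=1)

    def forward(self, roi_feat):          # roi_feat: (N, C, 28, 28), e.g. SCAM output
        return self.classify(roi_feat)    # (N, num_classes, 28, 28) mask logits
```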
Table 2 compares the detection and segmentation performance of several models. In terms of detection accuracy, the proposed SCD achieves the best APbox (average precision of bounding-box detection) and APmask (average precision of mask detection), which are 2.5% and 1.5% higher than CM (CenterMask) and 4.9% and 2.5% higher than MR (Mask R-CNN), respectively, while its detection speed is essentially the same as both methods. The proposed SU-SCD (Speed-up SCD) exceeds CM-Lite (CenterMask-Lite) in both detection speed and accuracy for the solar wing and the antenna: compared with CM-Lite, SU-SCD improves APbox and APmask by 1.8% and 0.8%, respectively, while running 0.5 FPS faster; compared with CM and MR, SU-SCD is 5 FPS faster.
Table 2: Detection and segmentation performance comparison of multiple models (table provided as an image in the original publication)
The invention has the following beneficial effects: the detection method can detect the category information, bounding-box information and mask information of local components of a target spacecraft with only a small number of samples. First, the feature extraction layers of the anchor-free detector FCOS are reduced and the correlation between the centerness prediction branch and the bounding-box prediction branch is strengthened, improving component detection accuracy. Then, a spatial-channel attention mechanism is designed and introduced into the mask generation branch of CenterMask, improving component segmentation accuracy. Experimental results show that the proposed local component detection and segmentation method outperforms the original network model in both detection accuracy and speed.
The above examples are only intended to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (5)

1. A spacecraft multi-local-component detection method based on an instance segmentation network, characterized by comprising the following steps:
Step 1: perform feature extraction on the spacecraft input image to obtain feature maps S3 to S5;
Step 2: input S3 to S5 into the FPN structure of the backbone network to obtain multi-scale feature maps P3 to P6; this process is defined as follows:
P5 = Conv1×1(S5)
P6 = Maxpooling(P5)
Pi = Upsample(Si+1) + Conv1×1(Si), i = 3, 4
where Conv1×1 denotes a convolution layer with a 1×1 kernel and Upsample denotes a non-linear up-sampling layer;
Step 3: input the multi-scale feature maps P3 to P6 into the object detection head to detect local components of the spacecraft, thereby obtaining predictions of the category, bounding box and centerness of the target;
Step 4: optimize the bounding box using the category and centerness information to obtain an optimized bounding box;
Step 5: input the optimized bounding box into the mask generation branch for instance segmentation to obtain mask information of the spacecraft's local components, from which the contour information of the components can further be obtained.
2. The spacecraft multi-local-component detection method based on an instance segmentation network according to claim 1, characterized in that step 1 specifically comprises: extracting features from the input image using VoVNet v2 to obtain the feature maps S3 to S5.
3. The spacecraft multi-local-component detection method based on an instance segmentation network according to claim 1, characterized in that step 4 specifically comprises: multiplying the classification score of each bounding box obtained in step 3 by its predicted centerness to obtain a final evaluation score for each bounding box, wherein bounding boxes located far from the object centre receive lower scores and bounding boxes located near the object centre receive higher scores; the bounding boxes are ranked and filtered using these scores to obtain the optimized bounding boxes.
4. The spacecraft multi-local-component detection method based on an instance segmentation network according to claim 3, characterized in that, in step 4, the bounding boxes are filtered by means of non-maximum suppression.
5. The spacecraft multi-local-component detection method based on an instance segmentation network according to claim 1, characterized in that step 5 specifically comprises: inputting the optimized bounding-box feature map obtained in step 4 into the spatial-channel attention mask branch to obtain an information-enhanced feature map; then predicting the class of each pixel with a 1×1 convolution to generate a class-specific mask of dimension 28×28×N; after the mask information of the spacecraft's local components is obtained, the contour information of the components can further be obtained.
CN202110850338.6A 2021-07-27 2021-07-27 Spacecraft multi-local component detection method based on instance segmentation network Pending CN113469287A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110850338.6A CN113469287A (en) 2021-07-27 2021-07-27 Spacecraft multi-local component detection method based on instance segmentation network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110850338.6A CN113469287A (en) 2021-07-27 2021-07-27 Spacecraft multi-local component detection method based on instance segmentation network

Publications (1)

Publication Number Publication Date
CN113469287A true CN113469287A (en) 2021-10-01

Family

ID=77882738

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110850338.6A Pending CN113469287A (en) 2021-07-27 2021-07-27 Spacecraft multi-local component detection method based on instance segmentation network

Country Status (1)

Country Link
CN (1) CN113469287A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111462126A (en) * 2020-04-08 2020-07-28 武汉大学 Semantic image segmentation method and system based on edge enhancement
CN111598030A (en) * 2020-05-21 2020-08-28 山东大学 Method and system for detecting and segmenting vehicle in aerial image
CN111738110A (en) * 2020-06-10 2020-10-02 杭州电子科技大学 Vehicle target detection method in remote sensing images based on multi-scale attention mechanism
CN112070713A (en) * 2020-07-03 2020-12-11 中山大学 A Multi-scale Object Detection Method Introducing Attention Mechanism
CN112581430A (en) * 2020-12-03 2021-03-30 厦门大学 Deep learning-based aeroengine nondestructive testing method, device, equipment and storage medium
CN112837330A (en) * 2021-03-02 2021-05-25 中国农业大学 Leaf segmentation method based on multi-scale dual attention mechanism and fully convolutional neural network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
YULANG CHEN et al.: "Satellite Components Detection from Optical Images Based on Instance Segmentation Networks", ARC, pages 355-365 *
YU-LANG CHEN et al.: "SURF-Based Image Matching Method for Landing on Small Celestial Bodies", International Conference on Modeling, Analysis, Simulation Technologies and Applications (MASTA 2019), pages 1-7 *
CHEN Yulang: "Satellite multi-local-component detection and low-illumination image enhancement method", China Master's Theses Full-text Database, Engineering Science and Technology II, pages 031-140 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023083231A1 (en) * 2021-11-12 2023-05-19 Huawei Technologies Co., Ltd. System and methods for multiple instance segmentation and tracking
US12033307B2 (en) 2021-11-12 2024-07-09 Huawei Technologies Co., Ltd. System and methods for multiple instance segmentation and tracking
CN114419405A (en) * 2021-12-13 2022-04-29 上海悠络客电子科技股份有限公司 A lightweight target detection network and detection method based on OSA block
CN114332506A (en) * 2021-12-30 2022-04-12 四川沐迪圣科技有限公司 A multi-scale space joint model and its visual detection method
CN114549543A (en) * 2021-12-30 2022-05-27 浙江大华技术股份有限公司 Construction method, device, terminal and storage medium of three-dimensional model of building
CN114549543B (en) * 2021-12-30 2025-03-25 浙江大华技术股份有限公司 Method, device, terminal and storage medium for constructing three-dimensional model of building
CN114332506B (en) * 2021-12-30 2025-04-01 四川成电多物理智能感知科技有限公司 A multi-scale spatial joint model and its visual detection method
CN114549833A (en) * 2022-01-25 2022-05-27 北京交通大学 Instance partitioning method and device, electronic equipment and storage medium
CN115439689A (en) * 2022-09-03 2022-12-06 哈尔滨工业大学(威海) Near-shore visual ship target detection method based on direct guide mask detection network
CN115439689B (en) * 2022-09-03 2025-08-08 哈尔滨工业大学(威海) Nearshore visual ship target detection method based on directly guided mask detection network
CN115578569A (en) * 2022-09-28 2023-01-06 深圳市华汉伟业科技有限公司 Training method, segmentation method and device for small target object instance segmentation model

Similar Documents

Publication Publication Date Title
CN113469287A (en) Spacecraft multi-local component detection method based on instance segmentation network
CN109949317B (en) Semi-supervised image example segmentation method based on gradual confrontation learning
CN109902715B (en) Infrared dim target detection method based on context aggregation network
Yan et al. Combining the best of convolutional layers and recurrent layers: A hybrid network for semantic segmentation
CN112686304A (en) Target detection method and device based on attention mechanism and multi-scale feature fusion and storage medium
CN113743505A (en) An improved SSD object detection method based on self-attention and feature fusion
CN111680705B (en) MB-SSD method and MB-SSD feature extraction network suitable for target detection
CN112183649A (en) An Algorithm for Predicting Pyramid Feature Maps
CN113569881A (en) Self-adaptive semantic segmentation method based on chain residual error and attention mechanism
Guo et al. D3-Net: Integrated multi-task convolutional neural network for water surface deblurring, dehazing and object detection
CN117392640A (en) A traffic sign detection method based on improved YOLOv8s
CN117994573A (en) Infrared dim target detection method based on superpixel and deformable convolution
CN114863199B (en) An object detection method based on optimized anchor box mechanism
Liu et al. A generative adversarial network for infrared and visible image fusion using adaptive dense generator and Markovian discriminator
Hong et al. Study on lightweight strategies for L-YOLO algorithm in road object detection
Ji et al. EFR-ACENet: Small object detection for remote sensing images based on explicit feature reconstruction and adaptive context enhancement
CN110503090A (en) Character Detection Network Training Method, Character Detection Method and Character Detector Based on Restricted Attention Model
CN113469286A (en) Spacecraft multi-local component detection method based on regional convolutional neural network
CN114494827A (en) A small target detection method for detecting aerial pictures
CN117853582B (en) Star sensor rapid star image extraction method based on improved Faster R-CNN
Yang et al. 3DF-FCOS: Small object detection with 3D features based on FCOS
Wu et al. STD-YOLOv8: A lightweight small target detection algorithm for UAV perspectives.
CN115578721A (en) Streetscape text real-time detection method based on attention feature fusion
CN115035390A (en) Aerial photography image detection method based on GAN and feature enhancement
Jiang et al. FPGA-based accurate star segmentation with moon interference

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20211001)