
CN109523569A - Optical remote sensing image segmentation method and device based on multi-granularity network fusion - Google Patents


Info

Publication number
CN109523569A
CN109523569A (application CN201811215642.8A; granted publication CN109523569B)
Authority
CN
China
Prior art keywords: layer, network, image, sub, convolutional layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811215642.8A
Other languages
Chinese (zh)
Other versions
CN109523569B (en)
Inventor
李叶
王先锋
许乐乐
郭丽丽
阎镇
饶骏
金山
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Technology and Engineering Center for Space Utilization of CAS
Original Assignee
Technology and Engineering Center for Space Utilization of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Technology and Engineering Center for Space Utilization of CAS
Priority to CN201811215642.8A
Publication of CN109523569A
Application granted
Publication of CN109523569B
Legal status: Active
Anticipated expiration

Classifications

    • G06T7/194 — Image analysis; Segmentation; Edge detection involving foreground-background segmentation
    • G06F18/213 — Pattern recognition; Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/25 — Pattern recognition; Fusion techniques
    • G06N3/045 — Neural networks; Architecture; Combinations of networks
    • G06N3/084 — Neural networks; Learning methods; Backpropagation, e.g. using gradient descent
    • G06T2207/10032 — Image acquisition modality; Satellite or aerial image; Remote sensing
    • G06T2207/20081 — Special algorithmic details; Training; Learning
    • G06T2207/20084 — Special algorithmic details; Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an optical remote sensing image segmentation method and device based on multi-granularity network fusion. The method comprises: step 1, collecting at least one optical remote sensing image as a training image, setting at least one image category, and labeling the training image with the image categories; step 2, based on the labeled training images, training a pre-built multi-granularity network fusion model using the backpropagation algorithm, wherein the multi-granularity network fusion model comprises four sub-neural networks; step 3, inputting an image to be processed into the trained multi-granularity network fusion model, determining the image category of each pixel of the image to be processed according to the four sub-neural networks, and outputting the image category of each pixel as the segmentation result. The technical scheme of the invention can effectively achieve fine segmentation of optical remote sensing images while suppressing background interference.

Description

Optical remote sensing image segmentation method and device based on multi-granularity network fusion
Technical field
The present invention relates to the technical field of image processing, and in particular to an optical remote sensing image segmentation method and device based on multi-granularity network fusion.
Background technique
With the rapid development of remote sensing technology, the spatial resolution of remote sensing images has become higher and higher, and high-resolution optical remote sensing images have now entered the commercial stage. Benefiting from this high spatial resolution, many details in an image can be clearly presented, providing strong data support for fine segmentation of remote sensing images. Deep learning methods are widely used for remote sensing image segmentation; the common methods are convolutional neural networks, including networks such as VGG and FCN. For complex optical remote sensing images, a VGG network can achieve fine segmentation, but at the same time it is strongly disturbed by complex backgrounds and produces many false detections. Meanwhile, although an FCN network can largely suppress complex-background interference, its segmentation of remote sensing images is not fine enough, so fine segmentation cannot be achieved.
Summary of the invention
In view of the deficiencies of the prior art, the present invention provides an optical remote sensing image segmentation method and device based on multi-granularity network fusion.
In a first aspect, the present invention provides an optical remote sensing image segmentation method based on multi-granularity network fusion, the method comprising:
Step 1: collect at least one optical remote sensing image as a training image, set at least one image category, and label the training image with the image categories.
Step 2: based on the labeled training images, train a pre-built multi-granularity network fusion model using the backpropagation algorithm, wherein the multi-granularity network fusion model comprises four sub-neural networks.
Step 3: input an image to be processed into the trained multi-granularity network fusion model, determine the image category of each pixel of the image to be processed according to the four sub-neural networks, and output the image category of each pixel as the segmentation result.
In a second aspect, the present invention provides an optical remote sensing image segmentation device based on multi-granularity network fusion, the device comprising:
a first processing module, configured to collect at least one optical remote sensing image as a training image, set at least one image category, and label the training image with the image categories;
a second processing module, configured to train a pre-built multi-granularity network fusion model with the backpropagation algorithm based on the labeled training images, wherein the multi-granularity network fusion model comprises four sub-neural networks;
a third processing module, configured to input an image to be processed into the trained multi-granularity network fusion model, determine the image category of each pixel of the image to be processed according to the four sub-neural networks, and output the image category of each pixel as the segmentation result.
In a third aspect, the present invention provides an optical remote sensing image segmentation device based on multi-granularity network fusion, the device comprising a memory and a processor; the memory stores a computer program, and the processor, when executing the computer program, implements the optical remote sensing image segmentation method based on multi-granularity network fusion described above.
In a fourth aspect, the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the optical remote sensing image segmentation method based on multi-granularity network fusion described above.
The beneficial effect of the optical remote sensing image segmentation method and device based on multi-granularity network fusion provided by the invention is that training and recognition use a multi-granularity network fusion model whose sub-neural networks have different design characteristics. Because the fused model combines the ability to finely segment remote sensing images, enhanced stability, and effective suppression of complex-background interference, both the fine-segmentation capability and the background-suppression capability are improved: for complex optical remote sensing images, fine segmentation and effective suppression of background interference can be achieved at the same time.
Detailed description of the invention
To explain the technical solutions of the embodiments of the present invention or of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow diagram of an optical remote sensing image segmentation method based on multi-granularity network fusion according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the multi-granularity network fusion model of the embodiment of the present invention;
Fig. 3 is a structural block diagram of an optical remote sensing image segmentation device based on multi-granularity network fusion according to an embodiment of the present invention.
Specific embodiment
The principles and features of the present invention are described below with reference to the drawings; the examples given serve only to explain the invention and are not intended to limit its scope.
As shown in Fig. 1, an optical remote sensing image segmentation method based on multi-granularity network fusion according to an embodiment of the present invention comprises:
Step 1: collect at least one optical remote sensing image as a training image, set at least one image category, and label the training image with the image categories. The image categories may be, for example, building, road, grass or trees; each pixel of the training image may be labeled with a category before training.
Step 2: based on the labeled training images, train a pre-built multi-granularity network fusion model using the backpropagation algorithm to obtain the corresponding model parameters, wherein the multi-granularity network fusion model comprises four sub-neural networks.
Step 3: input an image to be processed into the trained multi-granularity network fusion model, determine the image category of each pixel of the image to be processed according to the four sub-neural networks, and output the image category of each pixel as the segmentation result.
In this embodiment, training and recognition use a multi-granularity network fusion model whose sub-neural networks have different design characteristics. Because the fused model combines fine segmentation of remote sensing images, enhanced stability and effective suppression of complex-background interference, both the fine-segmentation capability and the background-suppression capability are improved: for complex optical remote sensing images, fine segmentation and effective suppression of background interference can be achieved at the same time.
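The training in step 2 relies on standard backpropagation with per-pixel category labels. The following is a minimal, framework-free sketch of one such gradient step; the "network" here is a single 1×1 convolution (a linear map per pixel), and all shapes, names and the learning rate are illustrative assumptions rather than the patent's actual model.

```python
import numpy as np

# Minimal sketch of step 2: per-pixel softmax classification trained by
# backpropagation. The "network" is a single 1x1 convolution -- an
# illustrative stand-in, not the patent's four-sub-network model.
rng = np.random.default_rng(0)
H, W, F, K = 4, 4, 3, 2               # image size, feature channels, categories
x = rng.normal(size=(H, W, F))        # features of one training image
y = rng.integers(0, K, size=(H, W))   # per-pixel category labels
Wt = np.zeros((F, K))                 # weights of the 1x1 convolution

def forward(x, Wt):
    logits = x @ Wt                   # (H, W, K)
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)   # per-pixel softmax

def loss(p, y):
    # mean cross-entropy over all pixels
    return -np.mean(np.log(p[np.arange(H)[:, None], np.arange(W), y]))

p = forward(x, Wt)
l0 = loss(p, y)                       # equals log(2) for zero weights, K = 2
onehot = np.eye(K)[y]                 # (H, W, K)
# backpropagation: gradient of the cross-entropy w.r.t. the weights
grad = np.einsum('hwf,hwk->fk', x, p - onehot) / (H * W)
Wt -= 0.5 * grad                      # one gradient-descent step
l1 = loss(forward(x, Wt), y)
assert l1 < l0                        # the step reduces the training loss
```

The same loss and gradient flow would, in the patent's setting, be propagated through all layers of the four sub-networks rather than a single linear map.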
Preferably, the four sub-neural networks comprise a first sub-network N1, a second sub-network N2, a third sub-network N3 and a fourth sub-network N4. Step 3 is then implemented as follows: N1 processes the image to be processed layer by layer to obtain a fine-granularity segmentation score map; N2 processes it layer by layer to obtain a medium-granularity segmentation score map; N3 processes it layer by layer to obtain a coarse-granularity segmentation score map; and N4 fuses the fine-, medium- and coarse-granularity segmentation score maps.
The first sub-network N1 can finely segment a remote sensing image; the second sub-network N2 can enhance the model's stability when handling fine segmentation and background interference at the same time; the third sub-network N3 can effectively suppress complex-background interference; and the fourth sub-network N4 fuses the multi-granularity networks N1, N2 and N3, combining the advantages of networks of each granularity, so that the image segmentation method based on the multi-granularity network fusion model can simultaneously achieve fine segmentation of the remote sensing image and effective suppression of the complex background.
Preferably, the first sub-network N1 comprises, arranged in order, at least one convolutional layer C1, down-sampling layer P1 and fully connected layer F1; the second sub-network N2 comprises, arranged in order, at least one convolutional layer C2, down-sampling layer P2, up-sampling layer D2 and fusion layer A2; the third sub-network N3 comprises, arranged in order, at least one convolutional layer C3, down-sampling layer P3 and up-sampling layer D3; the fourth sub-network N4 comprises, arranged in order, at least one fusion layer A4 and convolutional layer C4.
In the first sub-network N1, the layers may be written as convolutional layers {C_i^1, i = 1, …, n_c^1, n_c^1 ≥ 1}, down-sampling layers {P_i^1, i = 1, …, n_p^1, n_p^1 ≥ 1} and fully connected layers {F_i^1, i = 1, …, n_f^1, n_f^1 ≥ 1}, where i is an index, n_c^1 is the number of convolutional layers, n_p^1 the number of down-sampling layers and n_f^1 the number of fully connected layers.
In the second sub-network N2, the layers may be written as convolutional layers {C_i^2, i = 1, …, n_c^2, n_c^2 ≥ 1}, down-sampling layers {P_i^2, i = 1, …, n_p^2, n_p^2 ≥ 1}, up-sampling layers {D_i^2, i = 1, …, n_d^2, n_d^2 ≥ 1} and fusion layers {A_i^2, i = 1, …, n_a^2, n_a^2 ≥ 1}, where i is an index and n_c^2, n_p^2, n_d^2 and n_a^2 are the numbers of convolutional, down-sampling, up-sampling and fusion layers respectively.
In the third sub-network N3, the layers may be written as convolutional layers {C_i^3, i = 1, …, n_c^3, n_c^3 ≥ 1}, down-sampling layers {P_i^3, i = 1, …, n_p^3, n_p^3 ≥ 1} and up-sampling layers {D_i^3, i = 1, …, n_d^3, n_d^3 ≥ 1}, where i is an index and n_c^3, n_p^3 and n_d^3 are the numbers of convolutional, down-sampling and up-sampling layers respectively.
In the fourth sub-network N4, the layers may be written as fusion layers {A_i^4, i = 1, …, n_a^4, n_a^4 ≥ 1} and convolutional layers {C_i^4, i = 1, …, n_c^4, n_c^4 ≥ 1}, where i is an index and n_a^4 and n_c^4 are the numbers of fusion and convolutional layers respectively.
Preferably, in the second sub-network N2, the fusion layer A2 concatenates or adds the outputs of any at least two layers preceding A2, and passes the result to the next layer after A2. In the fourth sub-network N4, the fusion layer A4 concatenates or adds the outputs of the last layers of the first sub-network N1, the second sub-network N2 and the third sub-network N3, and passes the result to the next layer after A4.
Preferably, as shown in Fig. 2, the first sub-network N1 comprises, arranged in order: convolutional layers C_1^1, C_2^1, down-sampling layer P_1^1, convolutional layers C_3^1, C_4^1, down-sampling layer P_2^1, convolutional layers C_5^1, C_6^1, C_7^1, down-sampling layer P_3^1, convolutional layers C_8^1, C_9^1, C_10^1, down-sampling layer P_4^1, convolutional layers C_11^1, C_12^1, C_13^1, down-sampling layer P_5^1, and fully connected layers F_1^1, F_2^1 and F_3^1.
That is, the first sub-network N1 contains n_c^1 = 13 convolutional layers, n_p^1 = 5 down-sampling layers and n_f^1 = 3 fully connected layers in total.
The second sub-network N2 comprises, arranged in order: convolutional layers C_1^2, C_2^2, down-sampling layer P_1^2, convolutional layers C_3^2, C_4^2, down-sampling layer P_2^2, convolutional layers C_5^2, C_6^2, down-sampling layer P_3^2, convolutional layers C_7^2, C_8^2, down-sampling layer P_4^2, convolutional layers C_9^2, C_10^2, up-sampling layer D_1^2, fusion layer A_1^2, convolutional layers C_11^2, C_12^2, up-sampling layer D_2^2, fusion layer A_2^2, convolutional layers C_13^2, C_14^2, up-sampling layer D_3^2, fusion layer A_3^2, convolutional layers C_15^2, C_16^2, up-sampling layer D_4^2, fusion layer A_4^2, and convolutional layers C_17^2, C_18^2 and C_19^2.
That is, the second sub-network N2 contains n_c^2 = 19 convolutional layers, n_p^2 = 4 down-sampling layers, n_d^2 = 4 up-sampling layers and n_a^2 = 4 fusion layers in total.
The third sub-network N3 comprises, arranged in order: convolutional layers C_1^3, C_2^3, down-sampling layer P_1^3, convolutional layers C_3^3, C_4^3, down-sampling layer P_2^3, convolutional layers C_5^3, C_6^3, C_7^3, down-sampling layer P_3^3, convolutional layers C_8^3, C_9^3, C_10^3, down-sampling layer P_4^3, convolutional layers C_11^3, C_12^3, C_13^3, down-sampling layer P_5^3, convolutional layers C_14^3, C_15^3, C_16^3, and up-sampling layers D_1^3, D_2^3, D_3^3, D_4^3 and D_5^3.
That is, the third sub-network N3 contains n_c^3 = 16 convolutional layers, n_p^3 = 5 down-sampling layers and n_d^3 = 5 up-sampling layers in total.
The fourth sub-network N4 comprises, arranged in order: fusion layer A_1^4 and convolutional layers C_1^4, C_2^4, C_3^4, C_4^4 and C_5^4.
That is, the fourth sub-network N4 contains n_a^4 = 1 fusion layer and n_c^4 = 5 convolutional layers in total.
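The layer sequences above can be written down as data and checked against the stated per-network counts. This is only a bookkeeping sketch transcribed from the description (C = convolution, P = down-sampling, D = up-sampling, F = fully connected, A = fusion); it builds no actual network.

```python
from collections import Counter

# Layer sequences of the four sub-networks, transcribed from the text.
N1 = list("CCPCCPCCCPCCCPCCCP") + ["F"] * 3          # VGG-style backbone
N2 = list("CCPCCPCCPCCPCC") + list("DACCDACCDACCDACCC")  # encoder-decoder with fusion
N3 = list("CCPCCPCCCPCCCPCCCP") + list("CCC") + ["D"] * 5
N4 = ["A"] + ["C"] * 5                                # fusion head

counts = {name: Counter(seq) for name, seq in
          [("N1", N1), ("N2", N2), ("N3", N3), ("N4", N4)]}

# The stated totals: 13/5/3 for N1, 19/4/4/4 for N2, 16/5/5 for N3, 1/5 for N4.
assert counts["N1"] == Counter({"C": 13, "P": 5, "F": 3})
assert counts["N2"] == Counter({"C": 19, "P": 4, "D": 4, "A": 4})
assert counts["N3"] == Counter({"C": 16, "P": 5, "D": 5})
assert counts["N4"] == Counter({"A": 1, "C": 5})
```

The N1 counts (13 convolutional + 3 fully connected layers) match a VGG-16-style layout, consistent with the background section's mention of VGG.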
The down-sampling layers in N1, N2 and N3 may perform down-sampling either by a pooling operation or by a convolution with stride greater than 1, thereby reducing the dimensionality of the image features. The up-sampling layers in N2 and N3 may perform up-sampling either by a deconvolution (transposed convolution) operation or by an unpooling operation, thereby raising the dimensionality of the image features. The fusion layers in N2 and N4 may perform fusion either by concatenation or by element-wise addition, thereby fusing the image features.
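The interchangeable operations just listed can be sketched on a single-channel feature map; shapes and kernels below are illustrative, not the patent's parameters.

```python
import numpy as np

# Sketch of the layer operations: down-sampling by 2x2 max pooling or a
# stride-2 convolution, up-sampling by nearest-neighbour "unpooling",
# and fusion by concatenation or element-wise addition.

def max_pool_2x2(x):                      # (H, W) -> (H/2, W/2)
    H, W = x.shape
    return x.reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

def strided_conv_2(x, k):                 # stride-2 convolution, 2x2 kernel
    H, W = x.shape
    out = np.empty((H // 2, W // 2))
    for i in range(H // 2):
        for j in range(W // 2):
            out[i, j] = np.sum(x[2*i:2*i+2, 2*j:2*j+2] * k)
    return out

def upsample_2x2(x):                      # nearest-neighbour up-sampling
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def fuse_concat(a, b):                    # fusion by channel concatenation
    return np.stack([a, b], axis=0)

def fuse_add(a, b):                       # fusion by element-wise addition
    return a + b

x = np.arange(16.0).reshape(4, 4)
p = max_pool_2x2(x)                       # 2x2 map of local maxima
assert p.shape == (2, 2) and p[0, 0] == 5.0
s = strided_conv_2(x, np.full((2, 2), 0.25))  # averaging kernel
assert s[0, 0] == 2.5                     # mean of the top-left 2x2 block
u = upsample_2x2(p)
assert u.shape == x.shape                 # up-sampling restores the size
assert fuse_concat(p, p).shape == (2, 2, 2)
assert np.allclose(fuse_add(p, p), 2 * p)
```

Either choice in each pair preserves the stated purpose (dimensionality reduction, dimensionality raising, or feature fusion), which is why the patent treats them as interchangeable.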
Preferably, step 3 specifically comprises:
Step 3.1: crop the image to be processed into at least two image blocks, for example a block of 9 × 9 pixels centered on each of a plurality of pixels of the image. Input each image block into the first sub-network N1; each block passes successively through every layer of N1, the last layer outputs a category score for each block, and the category scores corresponding to all blocks are assembled into the fine-granularity segmentation score map.
Step 3.2: input the image to be processed into the second sub-network N2; the image passes successively through every layer of N2, and the last layer outputs the medium-granularity segmentation score map.
Step 3.3: input the image to be processed into the third sub-network N3; the image passes successively through every layer of N3, and the last layer outputs the coarse-granularity segmentation score map.
Step 3.4: input the fine-granularity, medium-granularity and coarse-granularity segmentation score maps into the fourth sub-network N4; the first fusion layer of N4 concatenates or adds the three score maps, the result passes through every layer after that fusion layer, and the last layer outputs the image category of each pixel of the image to be processed.
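Step 3.4 can be sketched end to end: fuse the three score maps (here by concatenation, one of the two options the text allows), apply a stand-in for N4's convolutions, and take the per-pixel argmax as the final category. The 1×1 averaging weights are an illustrative assumption, not the patent's learned parameters.

```python
import numpy as np

# Sketch of step 3.4: fuse three score maps and classify each pixel.
rng = np.random.default_rng(2)
H, W, K = 8, 8, 4                       # image size and category count
fine = rng.random((H, W, K))            # score map from N1
medium = rng.random((H, W, K))          # score map from N2
coarse = rng.random((H, W, K))          # score map from N3

fused = np.concatenate([fine, medium, coarse], axis=-1)   # (H, W, 3K)
assert fused.shape == (H, W, 3 * K)

# Stand-in for N4's convolutions: averaging the three maps is the
# special case of a 1x1 convolution with weight 1/3 on matching channels.
Wt = np.vstack([np.eye(K)] * 3) / 3.0   # (3K, K)
scores = fused @ Wt
assert np.allclose(scores, (fine + medium + coarse) / 3.0)

labels = scores.argmax(axis=-1)         # per-pixel category, as in step 3
assert labels.shape == (H, W)
assert labels.min() >= 0 and labels.max() < K
```

In the patent, the five convolutional layers after A_1^4 would learn how to weight the three granularities rather than averaging them uniformly.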
In the first sub-network N1, the convolutional layers extract image features, the down-sampling layers reduce the feature dimensionality, and the fully connected layers compute a segmentation score for each image block; the scores corresponding to all blocks are then assembled into the segmentation score map of the whole image. As this computation is performed for every pixel of the image to be processed, N1 achieves fine segmentation of the remote sensing image and outputs the fine-granularity segmentation score map.
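N1's block-wise computation described above can be sketched as follows: extract one 9 × 9 block per pixel (the border-padding strategy is an assumption, since the text does not specify one), score each block, and assemble the scores into a map. The `toy_classifier` is a hypothetical stand-in for N1, not its actual layers.

```python
import numpy as np

# Sketch of N1's per-pixel block scoring (step 3.1).
def extract_blocks(img, size=9):
    r = size // 2
    padded = np.pad(img, r, mode="reflect")   # border handling is assumed
    H, W = img.shape
    return np.array([[padded[i:i + size, j:j + size]
                      for j in range(W)] for i in range(H)])  # (H, W, 9, 9)

def toy_classifier(block, n_classes=3):
    # hypothetical stand-in for N1: one-hot score from mean intensity
    scores = np.zeros(n_classes)
    scores[int(block.mean() * n_classes) % n_classes] = 1.0
    return scores

img = np.random.default_rng(1).random((6, 6))
blocks = extract_blocks(img)
assert blocks.shape == (6, 6, 9, 9)           # one 9x9 block per pixel
score_map = np.array([[toy_classifier(b) for b in row] for row in blocks])
assert score_map.shape == (6, 6, 3)           # fine-granularity score map
```

Scoring one block per pixel is what makes N1's output fine-grained, and also why it is sensitive to background clutter inside each small window.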
In the second sub-network N2, the convolutional layers extract image features, the down-sampling layers reduce the feature dimensionality, and the up-sampling layers raise it. Fusion layer A_1^2 concatenates the image features output by layers C_8^2 and D_1^2; A_2^2 concatenates those of C_6^2 and D_2^2; A_3^2 concatenates those of C_4^2 and D_3^2; and A_4^2 concatenates those of C_2^2 and D_4^2. The fusion layers merge detail information from the low layers with semantic information from the high layers, so N2 outputs the medium-granularity segmentation score map. The detail information benefits fine segmentation, while the semantic information helps handle complex-background interference; N2 therefore achieves a performance balance between fine segmentation and complex-background suppression and enhances the model's stability.
In the third sub-network N3, the convolutional layers extract image features, the down-sampling layers reduce the feature dimensionality, and the up-sampling layers raise it. As the structure of N3 shows, the down-sampling layers discard detail information while reducing the feature dimensionality, and the up-sampling layers at the back end add no extra detail while raising it. N3 therefore outputs the coarse-granularity segmentation score map, and the reduced detail lets N3 avoid complex-background interference as far as possible.
The fourth sub-network N4 uses fusion layer A_1^4 to concatenate the fine-granularity, medium-granularity and coarse-granularity segmentation score maps output by N1, N2 and N3; its convolutional layers then extract image features and output the final image segmentation result. Combining the advantages of N1, N2 and N3, N4 achieves fine segmentation of the remote sensing image while effectively suppressing complex-background interference.
As shown in Fig. 3, an optical remote sensing image segmentation device based on multi-granularity network fusion according to an embodiment of the present invention comprises:
a first processing module, configured to collect at least one optical remote sensing image as a training image, set at least one image category, and label the training image with the image categories;
a second processing module, configured to train a pre-built multi-granularity network fusion model with the backpropagation algorithm based on the labeled training images, wherein the multi-granularity network fusion model comprises four sub-neural networks;
a third processing module, configured to input an image to be processed into the trained multi-granularity network fusion model, determine the image category of each pixel of the image to be processed according to the four sub-neural networks, and output the image category of each pixel as the segmentation result.
Preferably, the four sub-neural networks comprise a first sub-network N1, a second sub-network N2, a third sub-network N3 and a fourth sub-network N4, and the third processing module is specifically configured to: have N1 process the image to be processed layer by layer to obtain a fine-granularity segmentation score map; have N2 process it layer by layer to obtain a medium-granularity segmentation score map; have N3 process it layer by layer to obtain a coarse-granularity segmentation score map; and have N4 fuse the fine-, medium- and coarse-granularity segmentation score maps.
In another embodiment of the present invention, an optical remote sensing image segmentation device based on multi-granularity network fusion comprises a memory and a processor; the memory stores a computer program, and the processor, when executing the computer program, implements the optical remote sensing image segmentation method based on multi-granularity network fusion described above.
In another embodiment of the present invention, a computer-readable storage medium stores a computer program which, when executed by a processor, implements the optical remote sensing image segmentation method based on multi-granularity network fusion described above.
Reader should be understood that in the description of this specification reference term " one embodiment ", " is shown " some embodiments " The description of example ", specific examples or " some examples " etc. mean specific features described in conjunction with this embodiment or example, structure, Material or feature are included at least one embodiment or example of the invention.In the present specification, above-mentioned term is shown The statement of meaning property need not be directed to identical embodiment or example.Moreover, particular features, structures, materials, or characteristics described It may be combined in any suitable manner in any one or more of the embodiments or examples.In addition, without conflicting with each other, this The technical staff in field can be by the spy of different embodiments or examples described in this specification and different embodiments or examples Sign is combined.
Although the embodiments of the present invention has been shown and described above, it is to be understood that above-described embodiment is example Property, it is not considered as limiting the invention, those skilled in the art within the scope of the invention can be to above-mentioned Embodiment is changed, modifies, replacement and variant.

Claims (10)

1. An optical remote sensing image segmentation method based on multi-granularity network fusion, characterized in that the method comprises:

Step 1: collecting at least one optical remote sensing image as a training image, setting at least one image category, and labeling the training image with the image category;

Step 2: training a pre-built multi-granularity network fusion model with the backpropagation algorithm on the basis of the labeled training image, wherein the multi-granularity network fusion model comprises four sub-neural-networks;

Step 3: inputting an image to be processed into the trained multi-granularity network fusion model, determining the image category of each pixel in the image to be processed according to the four sub-neural-networks, and outputting the image category of each pixel in the image to be processed as the segmentation result.

2. The optical remote sensing image segmentation method based on multi-granularity network fusion according to claim 1, characterized in that the four sub-neural-networks comprise a first sub-network N₁, a second sub-network N₂, a third sub-network N₃ and a fourth sub-network N₄;

the specific implementation of Step 3 comprises:

processing the image to be processed layer by layer with the first sub-network N₁ to obtain a fine-grained segmentation score map;

processing the image to be processed layer by layer with the second sub-network N₂ to obtain a medium-grained segmentation score map;

processing the image to be processed layer by layer with the third sub-network N₃ to obtain a coarse-grained segmentation score map;

fusing the fine-grained segmentation score map, the medium-grained segmentation score map and the coarse-grained segmentation score map with the fourth sub-network N₄.

3. The optical remote sensing image segmentation method based on multi-granularity network fusion according to claim 2, characterized in that the first sub-network N₁ comprises, arranged in sequence, at least one convolutional layer C1, one downsampling layer P1 and one fully connected layer F1; the second sub-network N₂ comprises, arranged in sequence, at least one convolutional layer C2, one downsampling layer P2, one upsampling layer D2 and one fusion layer A2; the third sub-network N₃ comprises, arranged in sequence, at least one convolutional layer C3, one downsampling layer P3 and one upsampling layer D3; and the fourth sub-network N₄ comprises, arranged in sequence, at least one fusion layer A4 and one convolutional layer C4.

4. The optical remote sensing image segmentation method based on multi-granularity network fusion according to claim 3, characterized in that, in the second sub-network N₂, the fusion layer A2 concatenates or adds the outputs of any at least two layers preceding the fusion layer A2 and passes the result to the layer following the fusion layer A2; and in the fourth sub-network N₄, the fusion layer A4 concatenates or adds the outputs of the last layer of the first sub-network N₁, the last layer of the second sub-network N₂ and the last layer of the third sub-network N₃, and passes the result to the layer following the fusion layer A4.

5. The optical remote sensing image segmentation method based on multi-granularity network fusion according to claim 3, characterized in that the first sub-network N₁ comprises, arranged in sequence, convolutional layer C₁¹, convolutional layer C₂¹, downsampling layer P₁¹, convolutional layer C₃¹, convolutional layer C₄¹, downsampling layer P₂¹, convolutional layer C₅¹, convolutional layer C₆¹, convolutional layer C₇¹, downsampling layer P₃¹, convolutional layer C₈¹, convolutional layer C₉¹, convolutional layer C₁₀¹, downsampling layer P₄¹, convolutional layer C₁₁¹, convolutional layer C₁₂¹, convolutional layer C₁₃¹, downsampling layer P₅¹, fully connected layer F₁¹, fully connected layer F₂¹ and fully connected layer F₃¹;

the second sub-network N₂ comprises, arranged in sequence, convolutional layer C₁², convolutional layer C₂², downsampling layer P₁², convolutional layer C₃², convolutional layer C₄², downsampling layer P₂², convolutional layer C₅², convolutional layer C₆², downsampling layer P₃², convolutional layer C₇², convolutional layer C₈², downsampling layer P₄², convolutional layer C₉², convolutional layer C₁₀², upsampling layer D₁², fusion layer A₁², convolutional layer C₁₁², convolutional layer C₁₂², upsampling layer D₂², fusion layer A₂², convolutional layer C₁₃², convolutional layer C₁₄², upsampling layer D₃², fusion layer A₃², convolutional layer C₁₅², convolutional layer C₁₆², upsampling layer D₄², fusion layer A₄², convolutional layer C₁₇², convolutional layer C₁₈² and convolutional layer C₁₉²;

the third sub-network N₃ comprises, arranged in sequence, convolutional layer C₁³, convolutional layer C₂³, downsampling layer P₁³, convolutional layer C₃³, convolutional layer C₄³, downsampling layer P₂³, convolutional layer C₅³, convolutional layer C₆³, convolutional layer C₇³, downsampling layer P₃³, convolutional layer C₈³, convolutional layer C₉³, convolutional layer C₁₀³, downsampling layer P₄³, convolutional layer C₁₁³, convolutional layer C₁₂³, convolutional layer C₁₃³, downsampling layer P₅³, convolutional layer C₁₄³, convolutional layer C₁₅³, convolutional layer C₁₆³, upsampling layer D₁³, upsampling layer D₂³, upsampling layer D₃³, upsampling layer D₄³ and upsampling layer D₅³;

the fourth sub-network N₄ comprises, arranged in sequence, fusion layer A₁⁴, convolutional layer C₁⁴, convolutional layer C₂⁴, convolutional layer C₃⁴, convolutional layer C₄⁴ and convolutional layer C₅⁴.

6. The optical remote sensing image segmentation method based on multi-granularity network fusion according to any one of claims 2 to 5, characterized in that Step 3 specifically comprises:

Step 3.1: cropping the image to be processed into at least two image blocks and inputting each image block into the first sub-network N₁, wherein each image block passes in turn through each layer of the first sub-network N₁ and the last layer outputs a category score for that image block; and stitching the category scores of all the image blocks together into the fine-grained segmentation score map;

Step 3.2: inputting the image to be processed into the second sub-network N₂, wherein the image to be processed passes in turn through each layer of the second sub-network N₂ and the last layer outputs the medium-grained segmentation score map;

Step 3.3: inputting the image to be processed into the third sub-network N₃, wherein the image to be processed passes in turn through each layer of the third sub-network N₃ and the last layer outputs the coarse-grained segmentation score map;

Step 3.4: inputting the fine-grained segmentation score map, the medium-grained segmentation score map and the coarse-grained segmentation score map into the fourth sub-network N₄, wherein the first fusion layer of the fourth sub-network N₄ concatenates or adds the fine-grained segmentation score map, the medium-grained segmentation score map and the coarse-grained segmentation score map, the concatenated or added score map passes through each layer following the first fusion layer, and the last layer outputs the image category of each pixel in the image to be processed.

7. An optical remote sensing image segmentation device based on multi-granularity network fusion, characterized in that the device comprises:

a first processing module, configured to collect at least one optical remote sensing image as a training image, set at least one image category, and label the training image with the image category;

a second processing module, configured to train a pre-built multi-granularity network fusion model with the backpropagation algorithm on the basis of the labeled training image, wherein the multi-granularity network fusion model comprises four sub-neural-networks;

a third processing module, configured to input an image to be processed into the trained multi-granularity network fusion model, determine the image category of each pixel in the image to be processed according to the four sub-neural-networks, and output the image category of each pixel in the image to be processed as the segmentation result.

8. The optical remote sensing image segmentation device based on multi-granularity network fusion according to claim 7, characterized in that the four sub-neural-networks comprise a first sub-network N₁, a second sub-network N₂, a third sub-network N₃ and a fourth sub-network N₄, and the third processing module is specifically configured to: process the image to be processed layer by layer with the first sub-network N₁ to obtain a fine-grained segmentation score map, process the image to be processed layer by layer with the second sub-network N₂ to obtain a medium-grained segmentation score map, process the image to be processed layer by layer with the third sub-network N₃ to obtain a coarse-grained segmentation score map, and fuse the fine-grained segmentation score map, the medium-grained segmentation score map and the coarse-grained segmentation score map with the fourth sub-network N₄.

9. An optical remote sensing image segmentation device based on multi-granularity network fusion, characterized in that the device comprises a memory and a processor;

the memory is configured to store a computer program;

the processor is configured to implement, when executing the computer program, the optical remote sensing image segmentation method based on multi-granularity network fusion according to any one of claims 1 to 6.

10. A computer-readable storage medium, characterized in that a computer program is stored on the storage medium, and when the computer program is executed by a processor, the optical remote sensing image segmentation method based on multi-granularity network fusion according to any one of claims 1 to 6 is implemented.
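The fusion operation recited in claims 4 and 6 (step 3.4) — concatenating or adding the three granularity score maps before the trailing convolutions of N₄ produce per-pixel categories — can be sketched in a few lines of NumPy. This is an illustrative sketch, not the patented implementation: the function names, the (C, H, W) score-map layout, and the final argmax step are assumptions of this note; the patent itself specifies only that the fused map passes through further convolutional layers before the image category of each pixel is output.

```python
import numpy as np

def fuse_score_maps(maps, mode="concat"):
    """Sketch of the fusion layer in claims 4 and 6 (step 3.4): combine the
    fine-, medium- and coarse-grained segmentation score maps, each shaped
    (C, H, W), either by channel-wise concatenation or element-wise addition.

    'concat' stacks along the channel axis (channel count grows); 'add'
    requires identical shapes and sums the maps element-wise.
    """
    if mode == "concat":
        return np.concatenate(maps, axis=0)
    if mode == "add":
        out = maps[0].astype(float).copy()
        for m in maps[1:]:
            out += m
        return out
    raise ValueError("mode must be 'concat' or 'add'")

def per_pixel_classes(score_map):
    """Assumed final step: the category of each pixel is the channel with
    the highest score (in the patent this follows the convolutional layers
    C1^4..C5^4 of N4, which are omitted here)."""
    return np.argmax(score_map, axis=0)
```

For example, fusing three (C, H, W) score maps with `mode="concat"` yields a (3C, H, W) tensor for the subsequent convolutions to reduce, while `mode="add"` keeps the channel count at C, which is why the claim allows either concatenation ("拼接") or addition ("相加") as long as the following layer accepts the resulting shape.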
CN201811215642.8A 2018-10-18 2018-10-18 A method and device for optical remote sensing image segmentation based on multi-granularity network fusion Active CN109523569B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811215642.8A CN109523569B (en) 2018-10-18 2018-10-18 A method and device for optical remote sensing image segmentation based on multi-granularity network fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811215642.8A CN109523569B (en) 2018-10-18 2018-10-18 A method and device for optical remote sensing image segmentation based on multi-granularity network fusion

Publications (2)

Publication Number Publication Date
CN109523569A true CN109523569A (en) 2019-03-26
CN109523569B CN109523569B (en) 2020-01-31

Family

ID=65770571

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811215642.8A Active CN109523569B (en) 2018-10-18 2018-10-18 A method and device for optical remote sensing image segmentation based on multi-granularity network fusion

Country Status (1)

Country Link
CN (1) CN109523569B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112288638A (en) * 2019-07-27 2021-01-29 华为技术有限公司 Image enhancement device and system
CN113554655A (en) * 2021-07-13 2021-10-26 中国科学院空间应用工程与技术中心 Optical remote sensing image segmentation method and device based on multi-feature enhancement

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1995006288A2 (en) * 1993-08-26 1995-03-02 The Regents Of The University Of California Cnn bionic eye or other topographic sensory organs or combinations of same
CN102646200A (en) * 2012-03-08 2012-08-22 武汉大学 Image classification method and system based on multi-classifier adaptive weight fusion
CN106570477A (en) * 2016-10-28 2017-04-19 中国科学院自动化研究所 Vehicle model recognition model construction method based on depth learning and vehicle model recognition method based on depth learning
CN106920227A (en) * 2016-12-27 2017-07-04 北京工业大学 Based on the Segmentation Method of Retinal Blood Vessels that deep learning is combined with conventional method
CN107122796A (en) * 2017-04-01 2017-09-01 中国科学院空间应用工程与技术中心 A kind of remote sensing image sorting technique based on multiple-limb network integration model
CN108268870A (en) * 2018-01-29 2018-07-10 重庆理工大学 Multi-scale feature fusion ultrasonoscopy semantic segmentation method based on confrontation study
CN108564029A (en) * 2018-04-12 2018-09-21 厦门大学 Face character recognition methods based on cascade multi-task learning deep neural network

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1995006288A2 (en) * 1993-08-26 1995-03-02 The Regents Of The University Of California Cnn bionic eye or other topographic sensory organs or combinations of same
CN102646200A (en) * 2012-03-08 2012-08-22 武汉大学 Image classification method and system based on multi-classifier adaptive weight fusion
CN106570477A (en) * 2016-10-28 2017-04-19 中国科学院自动化研究所 Vehicle model recognition model construction method based on depth learning and vehicle model recognition method based on depth learning
CN106920227A (en) * 2016-12-27 2017-07-04 北京工业大学 Based on the Segmentation Method of Retinal Blood Vessels that deep learning is combined with conventional method
CN107122796A (en) * 2017-04-01 2017-09-01 中国科学院空间应用工程与技术中心 A kind of remote sensing image sorting technique based on multiple-limb network integration model
CN108268870A (en) * 2018-01-29 2018-07-10 重庆理工大学 Multi-scale feature fusion ultrasonoscopy semantic segmentation method based on confrontation study
CN108564029A (en) * 2018-04-12 2018-09-21 厦门大学 Face character recognition methods based on cascade multi-task learning deep neural network

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112288638A (en) * 2019-07-27 2021-01-29 华为技术有限公司 Image enhancement device and system
CN113554655A (en) * 2021-07-13 2021-10-26 中国科学院空间应用工程与技术中心 Optical remote sensing image segmentation method and device based on multi-feature enhancement
CN113554655B (en) * 2021-07-13 2021-12-31 中国科学院空间应用工程与技术中心 Optical remote sensing image segmentation method and device based on multi-feature enhancement

Also Published As

Publication number Publication date
CN109523569B (en) 2020-01-31

Similar Documents

Publication Publication Date Title
CN115187820B (en) Lightweight target detection method, device, equipment, and storage medium
CN108062754B (en) Segmentation and recognition method and device based on dense network image
US9940539B2 (en) Object recognition apparatus and method
CN111860138B (en) Three-dimensional point cloud semantic segmentation method and system based on full fusion network
US20220130141A1 (en) Image processing method and apparatus, electronic device, and computer readable storage medium
CN110298361A (en) A kind of semantic segmentation method and system of RGB-D image
CN107122796B (en) An Optical Remote Sensing Image Classification Method Based on Multi-branch Network Fusion Model
CN109145730B (en) An automatic semantic segmentation method for mining areas in remote sensing images
CN113160062A (en) Infrared image target detection method, device, equipment and storage medium
CN111062964B (en) Image segmentation method and related device
CN109816671A (en) A kind of object detection method, device and storage medium
CN110298841B (en) A fusion network-based image multi-scale semantic segmentation method and device
CN113658189B (en) A real-time semantic segmentation method and system for cross-scale feature fusion
CN116403127A (en) A method, device, and storage medium for object detection in aerial images taken by drones
CN109034198A (en) The Scene Segmentation and system restored based on characteristic pattern
CN118799563A (en) Foreign object detection method for transmission lines based on YOLOv9 and diffusion model
CN114155510A (en) Road element detection method and device based on double-branch semantic segmentation network
CN109523569A (en) A kind of remote sensing image dividing method and device based on more granularity network integrations
CN115497140A (en) Real-time expression recognition method based on YOLOv5l and attention mechanism
CN107886533A (en) Vision significance detection method, device, equipment and the storage medium of stereo-picture
CN119271756A (en) Route generation method, system and storage medium based on multimodal large model
CN112907488A (en) Image restoration method, device, equipment and storage medium
Ai et al. ELUNet: an efficient and lightweight U-shape network for real-time semantic segmentation
CN112801266A (en) Neural network construction method, device, equipment and medium
CN111524090A (en) Depth prediction image-based RGB-D significance detection method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant