Specific Embodiments
The principles and features of the present invention are described below with reference to the accompanying drawings. The examples given serve only to explain the present invention and are not intended to limit its scope.
As shown in Figure 1, a remote sensing image segmentation method based on multi-granularity network fusion according to an embodiment of the present invention includes:
Step 1: acquiring at least one remote sensing image as a training image, setting at least one image category, and annotating the training image with the image categories. The image categories may be, for example, building, road, grassland and trees, and each pixel of the training image may be annotated with a category before training.
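As a minimal sketch of the per-pixel annotation in Step 1 (the class list and the annotation format below are illustrative assumptions, not part of the embodiment):

```python
import numpy as np

# Hypothetical category list; the embodiment names building, road,
# grassland and trees as example categories.
CLASSES = ["building", "road", "grassland", "trees"]
CLASS_TO_ID = {name: i for i, name in enumerate(CLASSES)}

def make_label_mask(annotations, height, width):
    """Build a per-pixel label mask from (row, col, class_name) annotations.

    Unlabelled pixels are marked -1, an illustrative convention."""
    mask = np.full((height, width), -1, dtype=np.int64)
    for r, c, name in annotations:
        mask[r, c] = CLASS_TO_ID[name]
    return mask

mask = make_label_mask([(0, 0, "road"), (1, 2, "trees")], 3, 3)
```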
Step 2: training a pre-constructed multi-granularity network fusion model with a back-propagation algorithm based on the annotated training image, and obtaining the corresponding model parameters, wherein the multi-granularity network fusion model includes four sub-neural-networks.
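A minimal sketch of back-propagation training as in Step 2, with a toy linear softmax pixel classifier standing in for the actual four-sub-network model (the synthetic features, labels, learning rate and iteration count are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_step(W, X, y, lr=0.1):
    """One back-propagation step for a linear softmax pixel classifier.

    X: (n_pixels, n_features) features, y: (n_pixels,) integer labels."""
    probs = softmax(X @ W)                  # forward pass
    probs[np.arange(len(y)), y] -= 1.0      # gradient of cross-entropy w.r.t. logits
    grad = X.T @ probs / len(y)             # backpropagate to the weights
    return W - lr * grad

# Synthetic, linearly realizable per-pixel data.
n_pixels, n_features, n_classes = 200, 8, 4
X = rng.normal(size=(n_pixels, n_features))
true_W = rng.normal(size=(n_features, n_classes))
y = (X @ true_W).argmax(axis=1)

W = np.zeros((n_features, n_classes))
for _ in range(300):
    W = train_step(W, X, y)
acc = ((X @ W).argmax(axis=1) == y).mean()
```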
Step 3: inputting an image to be processed into the trained multi-granularity network fusion model, determining the image category of each pixel in the image to be processed according to the four sub-neural-networks, and outputting the image category of each pixel in the image to be processed as the segmentation result.
In this embodiment, a multi-granularity network fusion model comprising sub-neural-networks with multiple different design characteristics is trained and used for recognition. Because the fused model combines the ability to finely segment remote sensing images, enhanced stability, and effective suppression of complex background interference, both the fine segmentation capability and the resistance to complex background interference are improved, so that fine segmentation and effective background-interference suppression can be achieved even for remote sensing images with complex backgrounds.
Preferably, the four sub-neural-networks include a first sub-network N1, a second sub-network N2, a third sub-network N3 and a fourth sub-network N4, and the specific implementation of Step 3 includes: processing the image to be processed successively through the first sub-network N1 to obtain a fine-granularity segmentation score map; processing the image to be processed successively through the second sub-network N2 to obtain a medium-granularity segmentation score map; processing the image to be processed successively through the third sub-network N3 to obtain a coarse-granularity segmentation score map; and fusing the fine-granularity, medium-granularity and coarse-granularity segmentation score maps through the fourth sub-network N4.
The first sub-network N1 can finely segment the remote sensing image; the second sub-network N2 can enhance the stability of the model when it handles fine segmentation and background interference at the same time; the third sub-network N3 can effectively suppress the interference of a complex background; and the fourth sub-network N4 fuses the multi-granularity networks N1, N2 and N3, combining the advantages of the networks of various granularities, so that the image segmentation method based on the multi-granularity network fusion model can simultaneously achieve fine segmentation of the remote sensing image and effective suppression of the complex background.
Preferably, the first sub-network N1 includes, arranged in sequence, at least one convolutional layer C^1, down-sampling layer P^1 and fully connected layer F^1; the second sub-network N2 includes, arranged in sequence, at least one convolutional layer C^2, down-sampling layer P^2, up-sampling layer D^2 and fusion layer A^2; the third sub-network N3 includes, arranged in sequence, at least one convolutional layer C^3, down-sampling layer P^3 and up-sampling layer D^3; and the fourth sub-network N4 includes, arranged in sequence, at least one fusion layer A^4 and convolutional layer C^4.
In the first sub-network N1, the layers may also be denoted as convolutional layers {C_i^1, i = 1, …, n_c^1, n_c^1 ≥ 1}, down-sampling layers {P_i^1, i = 1, …, n_p^1, n_p^1 ≥ 1} and fully connected layers {F_i^1, i = 1, …, n_f^1, n_f^1 ≥ 1}, where i is an index, n_c^1 is the number of convolutional layers, n_p^1 is the number of down-sampling layers and n_f^1 is the number of fully connected layers.
In the second sub-network N2, the layers may also be denoted as convolutional layers {C_i^2, i = 1, …, n_c^2, n_c^2 ≥ 1}, down-sampling layers {P_i^2, i = 1, …, n_p^2, n_p^2 ≥ 1}, up-sampling layers {D_i^2, i = 1, …, n_d^2, n_d^2 ≥ 1} and fusion layers {A_i^2, i = 1, …, n_a^2, n_a^2 ≥ 1}, where i is an index, n_c^2 is the number of convolutional layers, n_p^2 is the number of down-sampling layers, n_d^2 is the number of up-sampling layers and n_a^2 is the number of fusion layers.
In the third sub-network N3, the layers may also be denoted as convolutional layers {C_i^3, i = 1, …, n_c^3, n_c^3 ≥ 1}, down-sampling layers {P_i^3, i = 1, …, n_p^3, n_p^3 ≥ 1} and up-sampling layers {D_i^3, i = 1, …, n_d^3, n_d^3 ≥ 1}, where i is an index, n_c^3 is the number of convolutional layers, n_p^3 is the number of down-sampling layers and n_d^3 is the number of up-sampling layers.
In the fourth sub-network N4, the layers may also be denoted as fusion layers {A_i^4, i = 1, …, n_a^4, n_a^4 ≥ 1} and convolutional layers {C_i^4, i = 1, …, n_c^4, n_c^4 ≥ 1}, where i is an index, n_a^4 is the number of fusion layers and n_c^4 is the number of convolutional layers.
Preferably, in the second sub-network N2, each fusion layer A^2 splices or adds the outputs of any two or more layers located before it, and passes the result to the layer following that fusion layer; in the fourth sub-network N4, the fusion layer A^4 splices or adds the outputs of the last layer of the first sub-network N1, the last layer of the second sub-network N2 and the last layer of the third sub-network N3, and passes the result to the layer following the fusion layer A^4.
Preferably, as shown in Figure 2, the first sub-network N1 includes, arranged in sequence, convolutional layers C_1^1 and C_2^1, down-sampling layer P_1^1, convolutional layers C_3^1 and C_4^1, down-sampling layer P_2^1, convolutional layers C_5^1, C_6^1 and C_7^1, down-sampling layer P_3^1, convolutional layers C_8^1, C_9^1 and C_10^1, down-sampling layer P_4^1, convolutional layers C_11^1, C_12^1 and C_13^1, down-sampling layer P_5^1, and fully connected layers F_1^1, F_2^1 and F_3^1. That is, the first sub-network N1 contains n_c^1 = 13 convolutional layers, n_p^1 = 5 down-sampling layers and n_f^1 = 3 fully connected layers in total.
The second sub-network N2 includes, arranged in sequence, convolutional layers C_1^2 and C_2^2, down-sampling layer P_1^2, convolutional layers C_3^2 and C_4^2, down-sampling layer P_2^2, convolutional layers C_5^2 and C_6^2, down-sampling layer P_3^2, convolutional layers C_7^2 and C_8^2, down-sampling layer P_4^2, convolutional layers C_9^2 and C_10^2, up-sampling layer D_1^2, fusion layer A_1^2, convolutional layers C_11^2 and C_12^2, up-sampling layer D_2^2, fusion layer A_2^2, convolutional layers C_13^2 and C_14^2, up-sampling layer D_3^2, fusion layer A_3^2, convolutional layers C_15^2 and C_16^2, up-sampling layer D_4^2, fusion layer A_4^2, and convolutional layers C_17^2, C_18^2 and C_19^2. That is, the second sub-network N2 contains n_c^2 = 19 convolutional layers, n_p^2 = 4 down-sampling layers, n_d^2 = 4 up-sampling layers and n_a^2 = 4 fusion layers in total.
The third sub-network N3 includes, arranged in sequence, convolutional layers C_1^3 and C_2^3, down-sampling layer P_1^3, convolutional layers C_3^3 and C_4^3, down-sampling layer P_2^3, convolutional layers C_5^3, C_6^3 and C_7^3, down-sampling layer P_3^3, convolutional layers C_8^3, C_9^3 and C_10^3, down-sampling layer P_4^3, convolutional layers C_11^3, C_12^3 and C_13^3, down-sampling layer P_5^3, convolutional layers C_14^3, C_15^3 and C_16^3, and up-sampling layers D_1^3, D_2^3, D_3^3, D_4^3 and D_5^3. That is, the third sub-network N3 contains n_c^3 = 16 convolutional layers, n_p^3 = 5 down-sampling layers and n_d^3 = 5 up-sampling layers in total.
The fourth sub-network N4 includes, arranged in sequence, fusion layer A_1^4 and convolutional layers C_1^4, C_2^4, C_3^4, C_4^4 and C_5^4. That is, the fourth sub-network N4 contains n_a^4 = 1 fusion layer and n_c^4 = 5 convolutional layers in total.
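The stated layer sequences of the four sub-networks can be tabulated and their counts checked; the encoding below is only a bookkeeping summary of the configuration above, not an executable model:

```python
# Layer sequences of the four sub-networks as stated in the embodiment.
# 'C' = convolutional, 'P' = down-sampling, 'D' = up-sampling,
# 'A' = fusion, 'F' = fully connected.
SUBNETWORKS = {
    "N1": ["C", "C", "P", "C", "C", "P", "C", "C", "C", "P",
           "C", "C", "C", "P", "C", "C", "C", "P", "F", "F", "F"],
    "N2": ["C", "C", "P", "C", "C", "P", "C", "C", "P", "C", "C", "P",
           "C", "C", "D", "A", "C", "C", "D", "A", "C", "C", "D", "A",
           "C", "C", "D", "A", "C", "C", "C"],
    "N3": ["C", "C", "P", "C", "C", "P", "C", "C", "C", "P",
           "C", "C", "C", "P", "C", "C", "C", "P", "C", "C", "C",
           "D", "D", "D", "D", "D"],
    "N4": ["A", "C", "C", "C", "C", "C"],
}

def counts(seq):
    """Count how many layers of each type a sequence contains."""
    return {t: seq.count(t) for t in "CPDAF" if seq.count(t)}
```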
The down-sampling layers in N1, N2 and N3 may implement down-sampling by a pooling operation, or by a convolution operation with a stride greater than 1, so as to reduce the dimensionality of the image features. The up-sampling layers in N2 and N3 may implement up-sampling by a deconvolution operation, or by an unpooling operation, so as to raise the dimensionality of the image features. The fusion layers in N2 and N4 may implement fusion by a splicing operation, or by an element-wise addition operation, so as to fuse the image features.
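The alternative realizations just described can be sketched with NumPy; the 2×2 window and the nearest-neighbour up-sampling are illustrative choices, not fixed parameters of the embodiment:

```python
import numpy as np

def max_pool_2x2(x):
    """Down-sampling via 2x2 max pooling; x has shape (C, H, W), H and W even."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

def upsample_2x(x):
    """Up-sampling via nearest-neighbour repetition (an unpooling-style op)."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def fuse_concat(a, b):
    """Fusion by splicing along the channel axis."""
    return np.concatenate([a, b], axis=0)

def fuse_add(a, b):
    """Fusion by element-wise addition (shapes must match)."""
    return a + b

x = np.arange(16, dtype=float).reshape(1, 4, 4)
p = max_pool_2x2(x)    # down-sampled to shape (1, 2, 2)
u = upsample_2x(p)     # raised back to shape (1, 4, 4)
```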
Preferably, Step 3 specifically includes:
Step 3.1: cropping the image to be processed into at least two image blocks, for example cropping multiple image blocks of 9 × 9 pixels, each centred on one of multiple pixels of the image to be processed; inputting each image block into the first sub-network N1, where the image block is processed successively by each layer of the first sub-network N1 and a category score is output for each image block by the last layer; and piecing the category scores corresponding to all the image blocks together into the fine-granularity segmentation score map.
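The patch cropping of Step 3.1 can be sketched as follows; zero padding at the image border is an assumption, since the embodiment does not specify border handling:

```python
import numpy as np

def extract_patches(image, size=9):
    """Crop one size x size block centred on every pixel of a (H, W) image.

    Border pixels are handled with zero padding, an assumption the text
    does not specify. Returns an array of shape (H * W, size, size)."""
    assert size % 2 == 1
    r = size // 2
    h, w = image.shape
    padded = np.pad(image, r, mode="constant")
    patches = np.empty((h * w, size, size), dtype=image.dtype)
    for i in range(h):
        for j in range(w):
            patches[i * w + j] = padded[i:i + size, j:j + size]
    return patches

img = np.arange(25, dtype=float).reshape(5, 5)
patches = extract_patches(img, size=9)   # one 9x9 block per pixel
```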
Step 3.2: inputting the image to be processed into the second sub-network N2, where the image to be processed is processed successively by each layer of the second sub-network N2 and the medium-granularity segmentation score map is output by the last layer.
Step 3.3: inputting the image to be processed into the third sub-network N3, where the image to be processed is processed successively by each layer of the third sub-network N3 and the coarse-granularity segmentation score map is output by the last layer.
Step 3.4: inputting the fine-granularity, medium-granularity and coarse-granularity segmentation score maps into the fourth sub-network N4, where the first fusion layer of the fourth sub-network N4 splices or adds the fine-granularity, medium-granularity and coarse-granularity segmentation score maps; the spliced or added score map is processed by each layer after the first fusion layer, and the image category of each pixel in the image to be processed is output by the last layer.
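Step 3.4 can be sketched as follows; the single 1×1 convolution standing in for N4's convolutional layers and the random score maps and weights are illustrative, not trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, h, w = 4, 8, 8

# Score maps as produced by N1, N2 and N3, each of shape (n_classes, H, W).
fine   = rng.normal(size=(n_classes, h, w))
medium = rng.normal(size=(n_classes, h, w))
coarse = rng.normal(size=(n_classes, h, w))

# Fusion layer: splice the three maps along the channel axis.
fused = np.concatenate([fine, medium, coarse], axis=0)   # (3*n_classes, H, W)

# Stand-in for N4's convolutional layers: one 1x1 convolution mapping
# 3*n_classes channels back to n_classes channels.
w_conv = rng.normal(size=(n_classes, 3 * n_classes))
scores = np.einsum("oc,chw->ohw", w_conv, fused)         # (n_classes, H, W)

# Per-pixel category output of the segmentation result.
labels = scores.argmax(axis=0)                           # (H, W)
```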
In the first sub-network N1, the convolutional layers extract image features, the down-sampling layers reduce the dimensionality of the image features, and the fully connected layers compute a segmentation score for each image block; finally, the scores corresponding to all the image blocks are spliced together to obtain the segmentation score map of the whole image. It can be seen from the above computation process that N1 performs a segmentation computation for every pixel of the image to be processed, so N1 can achieve fine segmentation of the remote sensing image and outputs the fine-granularity segmentation score map.
In the second sub-network N2, the convolutional layers extract image features, the down-sampling layers reduce the dimensionality of the image features, and the up-sampling layers raise it. Fusion layer A_1^2 splices the image features output by layers C_8^2 and D_1^2; fusion layer A_2^2 splices the image features output by layers C_6^2 and D_2^2; fusion layer A_3^2 splices the image features output by layers C_4^2 and D_3^2; and fusion layer A_4^2 splices the image features output by layers C_2^2 and D_4^2. The fusion layers merge detail information from the low layers with semantic information from the high layers, so N2 outputs the medium-granularity segmentation score map. The detail information is beneficial to fine segmentation, while the semantic information is beneficial to handling complex background interference. Therefore N2 can achieve a balance between fine segmentation and complex-background-interference suppression, enhancing the stability of the model.
In the third sub-network N3, the convolutional layers extract image features, the down-sampling layers reduce the dimensionality of the image features, and the up-sampling layers raise it. It can be seen from the structure of N3 above that the down-sampling layers reduce the detail information of the image features by performing feature dimensionality reduction, and the up-sampling layers at the back end add no additional detail information while raising the dimensionality. Therefore N3 outputs the coarse-granularity segmentation score map, and the reduction of detail information allows N3 to avoid the interference of a complex background as far as possible.
The fourth sub-network N4 uses fusion layer A_1^4 to splice the fine-granularity, medium-granularity and coarse-granularity segmentation score maps output by N1, N2 and N3; its convolutional layers then extract image features, and the final image segmentation result is output at the end. By combining the advantages of N1, N2 and N3, N4 can achieve fine segmentation of the remote sensing image while effectively suppressing complex background interference.
As shown in Figure 3, a remote sensing image segmentation device based on multi-granularity network fusion according to an embodiment of the present invention includes:
a first processing module, configured to acquire at least one remote sensing image as a training image, set at least one image category, and annotate the training image with the image categories;
a second processing module, configured to train a pre-constructed multi-granularity network fusion model with a back-propagation algorithm based on the annotated training image, wherein the multi-granularity network fusion model includes four sub-neural-networks; and
a third processing module, configured to input an image to be processed into the trained multi-granularity network fusion model, determine the image category of each pixel in the image to be processed according to the four sub-neural-networks, and output the image category of each pixel in the image to be processed as the segmentation result.
Preferably, the four sub-neural-networks include a first sub-network N1, a second sub-network N2, a third sub-network N3 and a fourth sub-network N4, and the third processing module is specifically configured to: process the image to be processed successively through the first sub-network N1 to obtain a fine-granularity segmentation score map; process the image to be processed successively through the second sub-network N2 to obtain a medium-granularity segmentation score map; process the image to be processed successively through the third sub-network N3 to obtain a coarse-granularity segmentation score map; and fuse the fine-granularity, medium-granularity and coarse-granularity segmentation score maps through the fourth sub-network N4.
In another embodiment of the invention, a remote sensing image segmentation device based on multi-granularity network fusion includes a memory and a processor. The memory is configured to store a computer program, and the processor is configured to implement, when executing the computer program, the remote sensing image segmentation method based on multi-granularity network fusion described above.
In another embodiment of the invention, a computer-readable storage medium stores a computer program which, when executed by a processor, implements the remote sensing image segmentation method based on multi-granularity network fusion described above.
Reader should be understood that in the description of this specification reference term " one embodiment ", " is shown " some embodiments "
The description of example ", specific examples or " some examples " etc. mean specific features described in conjunction with this embodiment or example, structure,
Material or feature are included at least one embodiment or example of the invention.In the present specification, above-mentioned term is shown
The statement of meaning property need not be directed to identical embodiment or example.Moreover, particular features, structures, materials, or characteristics described
It may be combined in any suitable manner in any one or more of the embodiments or examples.In addition, without conflicting with each other, this
The technical staff in field can be by the spy of different embodiments or examples described in this specification and different embodiments or examples
Sign is combined.
Although the embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and are not to be construed as limiting the invention; those skilled in the art may make changes, modifications, replacements and variants to the above embodiments within the scope of the invention.