
CN110334703B - A method for ship detection and recognition in day and night images - Google Patents

A method for ship detection and recognition in day and night images

Info

Publication number
CN110334703B
CN110334703B (application CN201910514333.9A)
Authority
CN
China
Prior art keywords
ship
image
images
network
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910514333.9A
Other languages
Chinese (zh)
Other versions
CN110334703A (en)
Inventor
袁鑫
徐新
陈姚节
徐进
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University of Science and Technology WHUST
Original Assignee
Wuhan University of Science and Technology WHUST
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University of Science and Technology WHUST filed Critical Wuhan University of Science and Technology WHUST
Priority to CN201910514333.9A priority Critical patent/CN110334703B/en
Publication of CN110334703A publication Critical patent/CN110334703A/en
Application granted granted Critical
Publication of CN110334703B publication Critical patent/CN110334703B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract


The present invention proposes a method for detecting and identifying ships in day and night images, comprising the following steps: S1, detecting the illuminance of ship images at different times of day with a photosensitive element, and dividing the ship images into daytime images and nighttime images according to their illuminance ranges; S2, for daytime images, first detecting all objects that appear in the detection range, then screening out ship-like objects; S3, for nighttime images, first detecting salient targets, then screening out ship-like objects; S4, based on the screened ship objects, obtaining the real-time positions and categories of all ships in the current video frame. The method achieves detection and recognition of ship targets in all-day scenarios and has good robustness.


Description

Ship detection and identification method in day and night images
Technical Field
The invention relates to the field of computer vision and digital image processing, in particular to a ship detection and identification method in day and night images based on statistical learning and regional covariance.
Background
Target detection aims to find all objects of interest in an image and comprises two subtasks, object localization and object classification; that is, the category and position of each object are determined simultaneously. Target detection is a major research direction in computer vision and image processing, widely applied in robot navigation, intelligent video surveillance, industrial inspection, and other fields; by reducing labor costs through computer vision, it has important practical significance. It has therefore become a theoretical and applied research hotspot in recent years, an important branch of image processing and computer vision, and a core component of intelligent surveillance systems. Target detection is also a basic algorithm in the broader field of identity recognition, playing an important role in downstream tasks such as face recognition, gait recognition, crowd counting, and instance segmentation.
With the wide application of deep learning, target detection algorithms have developed rapidly. Since 2006, a large number of deep neural networks have been published under the guidance of Hinton, Bengio, LeCun, and others. In 2012 in particular, Hinton's group entered the ImageNet image recognition competition for the first time and won the championship with AlexNet, a convolutional neural network they constructed; neural networks have received extensive attention ever since. Deep learning uses multi-layer computational models to learn abstract data representations, allowing complex structure in big data to be discovered, and the technique has been successfully applied to various pattern classification problems, including computer vision.
Computer vision analysis of target motion can be roughly divided into three levels: motion segmentation and target detection; target tracking; and action recognition and behavior description. Target detection is one of the basic tasks in computer vision and a fundamental task of video surveillance technology. Targets in video vary in pose, are often occluded, and move irregularly; in addition, the depth of field, resolution, weather, illumination, and scene diversity of surveillance video must be considered, and the output of the detection algorithm directly affects subsequent tracking, action recognition, and behavior description. Even with today's technology, target detection remains a very challenging task with great potential and room for improvement.
At present, deep-learning-based detection and recognition methods applied to ships perform well in daytime scenes, but in nighttime scenes the illumination, contrast, and signal-to-noise ratio of the images differ greatly, so detection and recognition performance drops sharply at night. To intelligently detect ship positions in around-the-clock video surveillance and automatically identify the type of a target ship, the key is extracting the ship's image features. In practice, however, characteristics such as signal-to-noise ratio and contrast vary widely across different times of day and night, posing great challenges for ship image feature extraction.
The mainstream deep-learning-based target detection algorithms currently fall into two categories: one-stage and two-stage. A one-stage detector needs no region proposal stage and directly produces class probabilities and position coordinates, so it is relatively fast. A two-stage detector splits the problem into two stages, first generating candidate regions (region proposals) and then classifying and refining them; it is considerably more accurate than the one-stage approach but slower.
Disclosure of Invention
In order to realize the detection and identification of a water surface target ship in the whole time period, the invention provides a ship detection and identification method in a day and night image, which comprises the following steps:
S1, detecting the illuminance of ship images at different times with a photosensitive element, and dividing the ship images into daytime images and nighttime images according to their illuminance ranges;
S2, for daytime images, first detecting all objects appearing in the detection range, then screening out ship objects;
S3, for nighttime images, first detecting salient targets, then screening out ship objects;
S4, based on the screened ship objects, obtaining the real-time positions and categories of all ships in the current video frame.
Further, step S1 specifically includes:
S11, collecting a large number of scene pictures at different times of day and statistically analyzing the image illuminance range of each period to form an illuminance-range reference table;
S12, detecting the illuminance of the ship image from the camera with the photosensitive element and comparing it against the reference table to decide whether the image is a daytime image or a nighttime image.
Further, in step S2 the daytime image is processed with Faster R-CNN, a target detection algorithm based on a deep convolutional neural network. The network comprises two parts, an RPN and Fast R-CNN: the RPN predicts candidate regions of the input image that may contain targets and outputs proposal boxes that may contain ship targets; Fast R-CNN classifies the candidate regions and refines their bounding boxes.
Further, the Faster R-CNN target detection algorithm based on a deep convolutional neural network is trained as follows:
1) initialize the RPN parameters with a pre-trained network model and fine-tune them with stochastic gradient descent and backpropagation;
2) initialize the Faster R-CNN detection network parameters with a pre-trained model, extract candidate regions with the RPN from step 1, and train the detection network;
3) re-initialize and fine-tune the RPN parameters using the detection network from step 2;
4) extract candidate regions with the RPN from step 3 and fine-tune the detection network parameters;
5) repeat steps 3 and 4 until the maximum number of iterations is reached or the network converges.
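The alternating scheme above can be sketched as a loop. The sketch below is purely schematic: the `make_stub_finetuner` helpers are hypothetical stand-ins that shrink a scalar loss, not real SGD updates on RPN or detection-network weights.

```python
def make_stub_finetuner(shrink=0.3):
    """Return a stub 'fine-tune' step: each call shrinks a scalar loss,
    standing in for SGD + backpropagation on real network weights."""
    def finetune(loss):
        return loss * (1.0 - shrink)
    return finetune

def alternating_training(max_iters=50, tol=1e-2):
    """Schematic of the alternating scheme: steps 1-2 initialize from a
    pre-trained model (modeled here as loss = 1.0), steps 3-4 alternately
    re-tune the RPN and the detection network, and step 5 loops until
    convergence or the maximum number of iterations."""
    finetune_rpn = make_stub_finetuner()
    finetune_detector = make_stub_finetuner()
    rpn_loss = det_loss = 1.0                   # after pre-trained initialization
    for it in range(1, max_iters + 1):
        rpn_loss = finetune_rpn(rpn_loss)       # step 3: re-tune RPN
        det_loss = finetune_detector(det_loss)  # step 4: re-tune detector
        if rpn_loss < tol and det_loss < tol:   # step 5: convergence check
            break
    return it, rpn_loss, det_loss

iters, rpn_loss, det_loss = alternating_training()
```

With the assumed 30% loss reduction per round, both stub losses fall below the tolerance after 13 rounds, illustrating the stopping condition of step 5.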
Further, step S2 specifically includes:
S21, computing the convolutional feature map of the daytime image to be detected;
S22, processing the feature map with the RPN to obtain target proposal boxes;
S23, extracting a feature for each proposal box with RoI Pooling;
S24, classifying with the extracted features.
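Step S23 (RoI Pooling) can be illustrated in isolation: the proposal box is divided into a fixed grid of bins and each bin is max-pooled, so every proposal yields a feature of the same size regardless of its area. A minimal NumPy sketch for a single feature channel (the toy feature map and box coordinates are hypothetical):

```python
import numpy as np

def roi_max_pool(feature_map, box, output_size=(2, 2)):
    """Max-pool the region `box` = (x0, y0, x1, y1) of a 2-D feature map
    into a fixed `output_size` grid, as RoI Pooling does per channel."""
    x0, y0, x1, y1 = box
    region = feature_map[y0:y1, x0:x1]
    out_h, out_w = output_size
    # Bin edges dividing the region into an out_h x out_w grid.
    h_edges = np.linspace(0, region.shape[0], out_h + 1).astype(int)
    w_edges = np.linspace(0, region.shape[1], out_w + 1).astype(int)
    pooled = np.empty(output_size)
    for i in range(out_h):
        for j in range(out_w):
            pooled[i, j] = region[h_edges[i]:h_edges[i + 1],
                                  w_edges[j]:w_edges[j + 1]].max()
    return pooled

fmap = np.arange(36, dtype=float).reshape(6, 6)  # toy 6x6 feature map
pooled = roi_max_pool(fmap, box=(1, 1, 5, 5))    # 4x4 proposal -> 2x2 feature
```

A real Faster R-CNN applies this per channel after projecting each proposal from image coordinates onto the feature map; the fixed-size output is what lets proposals of any size feed the same classifier.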
Further, in step S3 the nighttime image is processed with a convolutional neural network algorithm guided by region covariance.
Further, step S3 specifically includes:
S31, extracting low-level features of the nighttime image pixel by pixel;
S32, constructing the region covariance from the multi-dimensional feature vectors;
S33, constructing a convolutional neural network model with the covariance matrices as training samples;
S34, computing image saliency based on local and global contrast;
S35, boxing the salient ship target and obtaining the ship's position.
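Steps S31–S32 can be sketched directly: per-pixel low-level features are stacked into vectors, and the covariance of those vectors over a region becomes the region descriptor. A minimal NumPy sketch; the chosen feature set (intensity, gradient magnitudes, pixel coordinates) is one common choice for region covariance descriptors, not necessarily the patent's exact feature set:

```python
import numpy as np

def region_covariance(image, box):
    """Covariance descriptor of region `box` = (x0, y0, x1, y1) of a
    grayscale image, built from 5 per-pixel low-level features:
    intensity, |dI/dx|, |dI/dy|, x, y (step S31 then S32)."""
    gy, gx = np.gradient(image.astype(float))   # gradients along y and x
    x0, y0, x1, y1 = box
    ys, xs = np.mgrid[y0:y1, x0:x1]             # pixel coordinate grids
    feats = np.stack([
        image[y0:y1, x0:x1].ravel(),            # intensity
        np.abs(gx[y0:y1, x0:x1]).ravel(),       # horizontal gradient magnitude
        np.abs(gy[y0:y1, x0:x1]).ravel(),       # vertical gradient magnitude
        xs.ravel().astype(float),               # x coordinate
        ys.ravel().astype(float),               # y coordinate
    ])                                          # shape: (5, n_pixels)
    return np.cov(feats)                        # 5x5 region covariance matrix

rng = np.random.default_rng(0)
img = rng.random((32, 32))                      # toy grayscale image
C = region_covariance(img, box=(4, 4, 20, 20))
```

The resulting 5×5 symmetric positive semi-definite matrix is compact and fixed-size for any region, which is what makes it usable as a training sample in step S33.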
Further, the ship detection and identification method of the present invention further comprises:
S5, evaluating the image detection results with the AUC and MAE metrics; the AUC and MAE are computed respectively as follows:
\mathrm{AUC} = \frac{\sum_{i \in \text{positive}} \mathrm{rank}_i - \frac{M(M+1)}{2}}{M \times N}

where \mathrm{rank}_i is the rank of the i-th sample when all samples are sorted by probability score in ascending order, M and N are the numbers of positive and negative samples respectively, and the summation \sum_{i \in \text{positive}} runs over the positive samples only;

\mathrm{MAE} = \frac{1}{W \times H} \sum_{x=1}^{W} \sum_{y=1}^{H} \left| \bar{S}(x, y) - \bar{G}(x, y) \right|

where \bar{S} denotes the saliency map, \bar{G} denotes the reference (ground-truth) map, and W and H are the pixel width and height of the image, respectively.
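Both metrics can be computed directly from the definitions above. A NumPy sketch (the rank-based AUC exactly follows the formula; ties in scores are ignored for simplicity):

```python
import numpy as np

def auc_from_ranks(scores, labels):
    """AUC via the rank formula: sort scores ascending, assign 1-based
    ranks, sum the ranks of the positive samples, subtract M(M+1)/2,
    and normalize by M*N. Ties are not averaged in this sketch."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    m = int(labels.sum())          # number of positive samples (M)
    n = len(labels) - m            # number of negative samples (N)
    return (ranks[labels == 1].sum() - m * (m + 1) / 2) / (m * n)

def mae(saliency, reference):
    """Mean absolute error between saliency map S and reference map G,
    averaged over all W*H pixels."""
    s = np.asarray(saliency, dtype=float)
    g = np.asarray(reference, dtype=float)
    return np.abs(s - g).mean()

print(auc_from_ranks([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1]))  # -> 0.75
```

On the four-sample toy input, the positives receive ranks 2 and 4, giving (6 − 3) / (2 × 2) = 0.75.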
The invention has the following beneficial effects:
According to the ship detection and identification method, images are classified by illumination intensity, and different processing strategies are applied to the resulting daytime and nighttime classes. As a result, most ships can be detected even in nighttime images of poor quality, and ships remain detectable under scale changes, so the method achieves detection and recognition of ship targets in all-day scenarios with good robustness.
Drawings
Fig. 1 is a basic flow diagram of an embodiment of the ship detection and identification method of the present invention.
FIG. 2 is an example of a diurnal ship image in an embodiment of the invention.
FIG. 3 is a flowchart of the Faster R-CNN target detection algorithm used in the embodiments of the present invention.
Fig. 4 shows results of the ship detection and identification method on a test in which model ships simulated real ship motion on a campus lake, where: a is a distant-ship image, b is a near-ship image, c is a multi-obstacle ship image, and d is a ship scale-change image.
FIG. 5 is a block diagram of a convolutional neural network based on regional covariance steering used in an embodiment of the present invention.
FIG. 6 shows results obtained with the ship detection and identification method of the present invention on a nighttime test in which model ships simulated real ship motion on a campus lake.
Detailed Description
For a further understanding of the invention, reference will now be made to the preferred embodiments of the invention by way of example, and it is to be understood that the description is intended to further illustrate features and advantages of the invention, and not to limit the scope of the claims.
The embodiment of the invention provides a method for ship detection and identification in day and night images based on statistical learning and region covariance. As shown in fig. 1, the process comprises:
1. first, classify the video frame images acquired by the photoelectric pan-tilt head into day and night images using a photosensitive element;
2. for the daytime and nighttime images, detect the size and position of ships using Faster R-CNN and a region-covariance-guided CNN respectively; the specific processes are shown in FIG. 3 and FIG. 5;
3. screen the detected ships to determine their types and positions.
In step 1, examples of day and night ship images are shown in fig. 2; the first is a daytime image and the second a nighttime image. Day/night classification with the photosensitive element works as follows. First, a large number of scene pictures from different times of day are collected, and the image illuminance range of each period is analyzed statistically; the parameters are listed in Table 1, with daytime conditions in the left columns and nighttime conditions in the right columns. The photosensitive element then measures the illuminance of the image from the camera, and the image type, daytime or nighttime, is determined by comparison against the reference values in the table.
TABLE 1 Illuminance reference values under various natural conditions

| Natural condition | Illuminance (lx) | Natural condition | Illuminance (lx) |
| --- | --- | --- | --- |
| Direct sunlight | (1–1.3)×10⁵ | Deep dusk | 1 |
| Full daylight | (1–2)×10⁴ | Full moon | 10⁻¹ |
| Overcast day | 10³ | Half moon | 10⁻² |
| Very dark day | 10² | Starlight | 10⁻³ |
| Dusk (dawn) | 10 | Overcast night | 10⁻⁴ |
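The lookup against Table 1 reduces to a threshold check. A minimal sketch; the 10 lx cut-off between dusk and deep dusk is an assumption drawn from the table's gap, not a value stated in the patent:

```python
def classify_frame(illuminance_lx: float) -> str:
    """Classify a video frame as 'day' or 'night' from measured illuminance.

    Table 1 places all daytime conditions (direct sunlight down to dusk)
    at >= 10 lx and all nighttime conditions (deep dusk down to overcast
    night) at <= 1 lx, so any cut-off in between works; 10 lx is assumed.
    """
    DAY_THRESHOLD_LX = 10.0  # assumed boundary between dusk and deep dusk
    return "day" if illuminance_lx >= DAY_THRESHOLD_LX else "night"

# Values taken from Table 1:
print(classify_frame(1.2e5))  # direct sunlight -> day
print(classify_frame(1e-3))   # starlight      -> night
```

The classifier's output then selects which detector runs on the frame: Faster R-CNN for "day", the region-covariance-guided CNN for "night".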
In step 2, the main steps of the Faster R-CNN (Faster Region-based Convolutional Neural Network) target detection algorithm for ship detection are as follows:
1) compute the convolutional feature map of the ship image;
2) process the feature map with the RPN (Region Proposal Network) to obtain target proposal boxes;
3) extract a feature for each proposal box with RoI Pooling (Region of Interest Pooling);
4) classify with the extracted features.
For detection and recognition of ship targets in daytime scenes, the data sets selected in this embodiment are a set of ship pictures actually photographed on the Yangtze River from the Nautilus bridge and a ship-model data set; part of the collected ship images is used as the training set and the rest as the test set. The ships are divided into five types: passenger ships, cargo ships, beacon ships, warships, and sailing ships. The pre-trained model chosen in this embodiment is ResNet-50, and the RPN is trained end-to-end during the training phase. The initial learning rate of the Faster R-CNN network is 0.0003 with 20000 iterations; the specific training steps are as follows:
1) initialize the RPN parameters with a pre-trained network model and fine-tune them with stochastic gradient descent and backpropagation;
2) initialize the Faster R-CNN detection network parameters with a pre-trained model, extract candidate regions with the RPN from step 1, and train the detection network;
3) re-initialize and fine-tune the RPN parameters using the detection network from step 2;
4) extract candidate regions with the RPN from step 3 and fine-tune the detection network parameters;
5) repeat steps 3 and 4 until the maximum number of iterations is reached or the network converges.
Model performance was verified on the test set; the resulting miss and false-alarm rates are shown in Table 2.
TABLE 2 Faster R-CNN miss and false-alarm rates

| Index | Cargo ship | Passenger ship | Beacon boat | Warship | Sailing boat |
| --- | --- | --- | --- | --- | --- |
| Miss rate | 0.221 | 0.117 | 0.667 | 0.212 | 0.006 |
| False alarm rate | 0.051 | 0.072 | 0.015 | 0.103 | 0.077 |
In the experimental results shown in fig. 4, the lower-left multi-ship regression image, fig. 4(c), is described as follows: the picture resolution is 233 × 151, the white foam is a water-surface obstacle acting as interference, and the parameter values produced by the algorithm are listed in Table 3 below. The other three pictures are similar to fig. 4(c).
TABLE 3 Ship position and type information

| Ship | Coordinates (x, y) | Width × Height | Type | Confidence |
| --- | --- | --- | --- | --- |
| Boat 1 (left 1) | (25, 101) | 21 × 12 | Speed-boat | 84% |
| Boat 2 (left 2) | (47, 82) | 21 × 13 | Speed-boat | 76% |
| Boat 3 (left 3) | (121, 29) | 28 × 26 | Pump-ship | 99% |
| Boat 4 (left 4) | (210, 77) | 12 × 12 | Speed-boat | 89% |
In step 2, for nighttime video frames, to address the imbalance of training samples caused by impoverished visual information, the embodiment of the invention provides a convolutional neural network algorithm guided by region covariance for detecting salient targets in nighttime images. Salient object detection mimics the human visual attention mechanism and takes the regions most interesting to the human eye as detection objects; a ship sailing on a water surface with a uniform background is a salient object, and after detection its position is obtained by regressing the bounding box of the salient ship target. As shown in fig. 5, the region-covariance-guided convolutional neural network algorithm mainly comprises the following steps:
1) extract low-level features of the image pixel by pixel;
2) construct the region covariance from the multi-dimensional feature vectors;
3) construct a convolutional neural network model with the covariance matrices as training samples;
4) compute image saliency based on local and global contrast;
5) box the salient ship target and obtain the ship's position.
Model training for the nighttime scene differs from the daytime scene, but the training and test procedure can follow the Faster R-CNN training scheme of step 2 and the steps in FIG. 3. The data set used by this module consists of nighttime images of the same location as the daytime scene. Owing to the particularity of nighttime scenes and the characteristics of the algorithm used, this module is evaluated with the field's mainstream AUC and MAE metrics; running time per image is given in seconds. The specific index values are shown in Table 4.
The AUC and MAE calculation formulas are respectively as follows:
\mathrm{AUC} = \frac{\sum_{i \in \text{positive}} \mathrm{rank}_i - \frac{M(M+1)}{2}}{M \times N}

where \mathrm{rank}_i is the rank of the i-th sample when all samples are sorted by probability score in ascending order, M and N are the numbers of positive and negative samples respectively, and the summation \sum_{i \in \text{positive}} runs over the positive samples only;

\mathrm{MAE} = \frac{1}{W \times H} \sum_{x=1}^{W} \sum_{y=1}^{H} \left| \bar{S}(x, y) - \bar{G}(x, y) \right|

where \bar{S} denotes the saliency map, \bar{G} denotes the reference (ground-truth) map, and W and H are the pixel width and height of the image, respectively.
TABLE 4 Evaluation of the nighttime ship detection algorithm

| Index | MAE | AUC | Time (s) |
| --- | --- | --- | --- |
| Value | 0.1329 | 0.8546 | 1.553 |
As Table 4 shows, at night the saliency-based processing cannot run in real time because ship image information is scarce, but it still performs ship target detection well. MAE is the mean absolute error; smaller values indicate better algorithm performance. AUC is a probability value that directly gauges classifier quality; larger is better.
Results obtained at night with model ships simulating real ship motion on a campus lake are shown in fig. 6. For the multi-ship regression image at the lower right: the picture resolution is 233 × 155, and the ship information output by the algorithm is listed in Table 5. The other three pictures are similar.
TABLE 5 Ship position and type information

| Ship model | Coordinates (x, y) | Width × Height | Confidence |
| --- | --- | --- | --- |
| Boat 1 (left 1) | (75, 97) | 52 × 50 | 94% |
| Boat 2 (left 2) | (128, 41) | 37 × 25 | 86% |
The detection results show that the ship detection model of this embodiment can detect most ships even in nighttime images of poor quality and remains effective under ship scale changes. In summary, the method achieves detection and recognition of ship targets in all-day scenarios with good robustness.
The above description of the embodiments is intended only to aid understanding of the method of the invention and its core idea. It should be noted that those skilled in the art can make various improvements and modifications to the invention without departing from its principle, and such improvements and modifications also fall within the scope of the claims.

Claims (3)

1. A method for ship detection and recognition in day and night images, characterized by comprising the following steps:

S1, detecting the illuminance of ship images at different times with a photosensitive element and dividing them into daytime images and nighttime images according to their illuminance ranges, specifically comprising:
S11, collecting a large number of scene pictures at different times of day and statistically analyzing the image illuminance range of each period to form an illuminance-range reference table;
S12, detecting the illuminance of the ship image from the camera with the photosensitive element and comparing it against the reference table to decide whether the image is a daytime image or a nighttime image;

S2, for daytime images, processing with the Faster R-CNN target detection algorithm based on a deep convolutional neural network: first detecting all objects appearing in the detection range, then screening out ship objects, specifically comprising:
S21, computing the convolutional feature map of the daytime image to be detected;
S22, processing the feature map with an RPN to obtain target proposal boxes;
S23, extracting a feature for each proposal box with RoI Pooling;
S24, classifying with the extracted features;

S3, for nighttime images, processing with a convolutional neural network algorithm guided by region covariance: first detecting salient targets in the nighttime image, then screening out ship objects, specifically comprising:
S31, extracting low-level features of the nighttime image pixel by pixel;
S32, constructing the region covariance from the multi-dimensional feature vectors;
S33, constructing a convolutional neural network model with the covariance matrices as training samples;
S34, computing image saliency based on local and global contrast;
S35, boxing the salient ship target and obtaining the ship's position;

S4, based on the screened ship objects, obtaining the real-time positions and categories of all ships in the current video frame;

S5, evaluating the image detection results with the AUC and MAE metrics, computed respectively as:

\mathrm{AUC} = \frac{\sum_{i \in \text{positive}} \mathrm{rank}_i - \frac{M(M+1)}{2}}{M \times N}

where \mathrm{rank}_i is the rank of the i-th sample when all samples are sorted by probability score in ascending order, M and N are the numbers of positive and negative samples respectively, and the summation runs over the positive samples only;

\mathrm{MAE} = \frac{1}{W \times H} \sum_{x=1}^{W} \sum_{y=1}^{H} \left| \bar{S}(x, y) - \bar{G}(x, y) \right|

where \bar{S} denotes the saliency map, \bar{G} denotes the reference map, and W and H are the pixel width and height of the image, respectively.

2. The method for ship detection and recognition in day and night images according to claim 1, characterized in that in step S2 the Faster R-CNN network structure comprises two parts, an RPN and Fast R-CNN, wherein the RPN predicts candidate regions of the input image that may contain targets and outputs proposal boxes that may contain ship targets, and Fast R-CNN classifies the candidate regions and refines their bounding boxes.

3. The method for ship detection and recognition in day and night images according to claim 2, characterized in that the Faster R-CNN target detection algorithm based on a deep convolutional neural network is trained as follows:
1) initialize the RPN parameters with a pre-trained network model and fine-tune them with stochastic gradient descent and backpropagation;
2) initialize the Faster R-CNN detection network parameters with a pre-trained model, extract candidate regions with the RPN from step 1, and train the detection network;
3) re-initialize and fine-tune the RPN parameters using the detection network from step 2;
4) extract candidate regions with the RPN from step 3 and fine-tune the detection network parameters;
5) repeat steps 3 and 4 until the maximum number of iterations is reached or the network converges.
CN201910514333.9A 2019-06-14 2019-06-14 A method for ship detection and recognition in day and night images Active CN110334703B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910514333.9A CN110334703B (en) 2019-06-14 2019-06-14 A method for ship detection and recognition in day and night images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910514333.9A CN110334703B (en) 2019-06-14 2019-06-14 A method for ship detection and recognition in day and night images

Publications (2)

Publication Number Publication Date
CN110334703A CN110334703A (en) 2019-10-15
CN110334703B true CN110334703B (en) 2021-10-19

Family

ID=68142123

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910514333.9A Active CN110334703B (en) 2019-06-14 2019-06-14 A method for ship detection and recognition in day and night images

Country Status (1)

Country Link
CN (1) CN110334703B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111582182B (en) * 2020-05-11 2023-08-11 广东创亿源智能科技有限公司 Ship name recognition method, system, computer equipment and storage medium
CN112101282B (en) * 2020-09-25 2024-04-26 北京瞰天科技有限公司 Water target identification method and device, electronic equipment and storage medium
CN114881336A (en) * 2022-05-17 2022-08-09 广州海事科技有限公司 Method, system, computer equipment and storage medium for automatically marking virtual navigation aids
CN115294387B (en) * 2022-07-08 2025-07-15 西安电子科技大学广州研究院 Image classification method under complex illumination imaging based on deep learning
CN118071997B (en) * 2024-03-06 2024-09-10 武汉船用电力推进装置研究所(中国船舶集团有限公司第七一二研究所) Water surface target identification method and device based on visual image and electronic equipment
CN118372853B (en) * 2024-05-17 2025-02-14 泉州世纪众创信息科技有限公司 An automatic car driving system

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105469115A (en) * 2015-11-25 2016-04-06 天津大学 Statistical feature-based day and night image recognition method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"Research on Ship Automatic Modeling and Identification System"; Yao-jie Chen et al.; 2017 2nd International Conference on Mechatronics and Information Technology (ICMIT 2017); 2017-12-31; full text *
"A Robust Saliency Object Detection Model for Nighttime Images"; Xu Xin et al.; Journal of Software; 2018-12-31; pp. 2616-2631 *
"Ship Target Detection and Recognition Based on Improved Faster R-CNN (reading notes)"; jin_mumu; https://blog.csdn.net/qq_42521031/article/details/85321098; 2018-12-28; pp. 1-3 *

Also Published As

Publication number Publication date
CN110334703A (en) 2019-10-15

Similar Documents

Publication Publication Date Title
CN110334703B (en) A method for ship detection and recognition in day and night images
CN111310862B (en) Image enhancement-based deep neural network license plate positioning method in complex environment
CN119006469B (en) Automatic detection method and system for surface defects of substrate glass based on machine vision
CN106845487B (en) End-to-end license plate identification method
CN110929593B (en) Real-time significance pedestrian detection method based on detail discrimination
CN111160249A (en) Multi-class target detection method in optical remote sensing images based on cross-scale feature fusion
CN113592911B (en) Apparent enhanced depth target tracking method
CN111046880A (en) Infrared target image segmentation method and system, electronic device and storage medium
CN110059642B (en) Face image screening method and device
CN107133616A (en) A kind of non-division character locating and recognition methods based on deep learning
CN112434599B (en) Pedestrian re-identification method based on random occlusion recovery of noise channel
CN105930822A (en) Human face snapshot method and system
CN106951870B (en) Intelligent detection and early warning method for active visual attention of significant events of surveillance video
CN104200237A (en) High speed automatic multi-target tracking method based on coring relevant filtering
CN106683119A (en) Moving vehicle detecting method based on aerially photographed video images
CN111260687A (en) Aerial video target tracking method based on semantic perception network and related filtering
CN117557784A (en) Target detection method, target detection device, electronic equipment and storage medium
Viraktamath et al. Comparison of YOLOv3 and SSD algorithms
CN110599463A (en) Tongue image detection and positioning algorithm based on lightweight cascade neural network
CN116665015B (en) A method for detecting weak and small targets in infrared sequence images based on YOLOv5
CN113763424A (en) Real-time intelligent target detection method and system based on embedded platform
TWI696958B (en) Image adaptive feature extraction method and its application
CN114998801A (en) Forest fire smoke video detection method based on contrastive self-supervised learning network
CN116844234A (en) A moving target detection method based on visual brain network background modeling
CN114463619B (en) Infrared dim target detection method based on integrated fusion features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant