CN112183579B - Method, medium and system for detecting micro target - Google Patents
- Publication number
- CN112183579B · Application CN202010905792.2A
- Authority
- CN
- China
- Prior art keywords
- target
- tiny
- network
- tiny target
- deep learning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Molecular Biology (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a tiny-target detection method, medium, and system. The method comprises: inputting an image containing a tiny target into a deep learning object detection network and outputting the tiny target extracted from the image, wherein the area of the tiny target is smaller than a preset area; inputting the extracted tiny target into an upscaling deep learning network and outputting the tiny target enlarged by a preset factor; and inputting the enlarged tiny target into a deep learning classification network and outputting the category of the tiny target. The invention effectively improves the detection accuracy of tiny targets and reduces the false detection rate, thereby improving the performance and stability of the overall computer vision system, with good practical value and economic benefit.
Description
Technical Field
The invention relates to the technical field of object detection, and in particular to a tiny-target detection method, medium, and system.
Background
In recent years, with continued advances in deep learning and computing power, deep learning has achieved results unmatched by traditional algorithms in computer vision tasks such as classification, object detection, and semantic segmentation, and has been applied across many industries.
In the field of object detection, the current mainstream algorithms — SSD, Faster R-CNN, YOLO, and the like — have achieved good results in practice, but they share a common weakness: low accuracy when detecting tiny targets. The main reasons are, first, that a tiny target contains few pixels and therefore carries little information, and second, that as a general rule in deep learning, deeper networks yield higher detection accuracy. When a tiny target passes through a deep network, downsampling causes the little information the target carries to all but vanish in the deeper feature maps, degrading detection accuracy for tiny targets.
Summary of the Invention
Embodiments of the present invention provide a tiny-target detection method, medium, and system to address the low accuracy of tiny-target detection in the prior art.
In a first aspect, a tiny-target detection method is provided, comprising: inputting an image containing a tiny target into a deep learning object detection network and outputting the tiny target extracted from the image, wherein the area of the tiny target is smaller than a preset area; inputting the extracted tiny target into an upscaling deep learning network and outputting the tiny target enlarged by a preset factor; and inputting the enlarged tiny target into a deep learning classification network and outputting the category of the tiny target.
In a second aspect, a computer-readable storage medium is provided, on which computer program instructions are stored; when executed by a processor, the instructions implement the tiny-target detection method of the embodiment of the first aspect.
In a third aspect, a tiny-target detection system is provided, comprising the computer-readable storage medium of the embodiment of the second aspect.
In this way, embodiments of the present invention effectively improve the detection accuracy of tiny targets and reduce the false detection rate, thereby improving the performance and stability of the overall computer vision system, with good practical value and economic benefit.
Brief Description of the Drawings
To illustrate the technical solutions of the embodiments more clearly, the drawings required in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the invention; those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flowchart of the tiny-target detection method of an embodiment of the present invention;
Fig. 2 is a schematic diagram of the structure of the upscaling deep learning network of an embodiment of the present invention;
Fig. 3 is a schematic diagram of part of the residual ResNet network of an embodiment of the present invention;
Fig. 4 is a schematic diagram of the periodic shuffling operator of an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by persons of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
An embodiment of the invention discloses a tiny-target detection method. As shown in Fig. 1, the method comprises the following steps:
Step S1: input an image containing a tiny target into a deep learning object detection network, and output the tiny target extracted from the image.
The deep learning object detection network is a well-known neural network, for example, the YOLOv3, SSD, or Faster R-CNN detection network. It should be understood that images input to a deep learning detection network are generally scaled to a fixed size. Therefore, before step S1, the method of this embodiment further includes scaling the original image to an image of preset pixel dimensions, where the dimensions are determined by the chosen detection network. For example, for the YOLOv3 detection network, the scaled image is 608×608 px. Specifically, the original image can be read with OpenCV's imread and then scaled to 608×608 with OpenCV's resize.
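The preprocessing step above can be sketched as follows. This is an illustrative nearest-neighbour resize over a nested-list "image", showing the effect of the resize call without depending on OpenCV; in practice one would simply use cv2.imread followed by cv2.resize(img, (608, 608)).

```python
def resize_nearest(img, out_w, out_h):
    """Nearest-neighbour resize of a 2-D image stored as a list of rows.

    Illustrates what cv2.resize(img, (out_w, out_h),
    interpolation=cv2.INTER_NEAREST) computes, in miniature.
    """
    in_h, in_w = len(img), len(img[0])
    return [
        [img[y * in_h // out_h][x * in_w // out_w] for x in range(out_w)]
        for y in range(out_h)
    ]

# Upscale a 2x2 "image" to 4x4, just as the method scales originals to 608x608.
img = [[1, 2],
       [3, 4]]
print(resize_nearest(img, 4, 4))  # [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```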
The area of a tiny target in this embodiment is smaller than a preset area; for example, in this embodiment the preset area is 20×20 px. It should be understood that "tiny" refers to the image input to the detection network, not to the original image.
Specifically, the deep learning object detection network outputs the width and height of each detected target; the area is the product of width and height, which determines whether the target counts as tiny.
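This criterion reduces to a one-line check; the following is a minimal sketch using the 20×20 px preset area stated above:

```python
PRESET_AREA = 20 * 20  # px^2, the preset area of this embodiment

def is_tiny(width, height):
    """A detection counts as a tiny target when width * height < preset area."""
    return width * height < PRESET_AREA

print(is_tiny(15, 18))  # True: 270 < 400
print(is_tiny(25, 25))  # False: 625 >= 400
```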
Step S2: input the extracted tiny target into the upscaling deep learning network, and output the tiny target enlarged by a preset factor.
Specifically, as shown in Fig. 2, the upscaling deep learning network consists of a tiny-target ResNet building block and a sub-pixel convolution layer connected in sequence.
The tiny-target ResNet building block is formed by cascading a preset number of residual ResNet blocks, producing a deep structure that can fully extract the deep semantic information of tiny targets. In this embodiment the residual block is cascaded 9 times. As shown in Fig. 3, the two weight layers in the residual block are replaced by a dense DenseNet block. ResNet and DenseNet are well-known network structures and are not described further here.
The output of a residual block is X_l = H_l(X_{l-1}, w_i, b_i) + X_{l-1}, where X_{l-1} is the input to the current residual block, H_l is the function computed by the dense DenseNet block, and w_i and b_i are the DenseNet parameters, obtained by training the model on concrete examples.
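The residual update X_l = H_l(X_{l-1}) + X_{l-1} can be sketched as follows. This is an illustration only: the callable h stands in for the trained dense DenseNet block, and features are flat lists rather than real tensors.

```python
def residual_block(x, h):
    """Compute X_l = H_l(X_{l-1}) + X_{l-1} elementwise.

    x: input feature vector X_{l-1} (flat list of numbers, for illustration)
    h: the transformation H_l -- in the embodiment, a dense DenseNet block
    """
    hx = h(x)
    return [xi + hi for xi, hi in zip(x, hx)]

# With H_l == 0 the block is the identity: the skip connection alone
# guarantees that tiny-target information survives the layer.
print(residual_block([1.0, 2.0, 3.0], lambda v: [0.0] * len(v)))  # [1.0, 2.0, 3.0]
```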
The tiny-target feature map output by the tiny-target ResNet building block is F_{l-1} = R(X_lr, W, B), where X_lr is the tiny target input to the block, R is the block's nonlinearity (typically ReLU), and W and B are the block's parameter weights and biases, obtained by training the model on concrete examples. The feature map has size w×h×c×r², where r is the magnification factor of the feature map and w×h×c is the size of the input tiny target: w its width, h its height, and c the number of image channels, typically 3 (RGB).
Combining the residual ResNet and dense DenseNet structures yields a new building block: the residual connections ensure network depth, while the dense connections ensure that more tiny-target information is passed on to the deep feature maps.
Because a tiny target contains few pixels and therefore carries little information, this embodiment uses sub-pixel convolution to enlarge it, so that the enlarged target carries more feature information. Specifically, the sub-pixel convolution layer computes I_sr = PS(W_l × F_{l-1} + B_l), where I_sr is the layer's output, F_{l-1} is the feature map output by the tiny-target ResNet building block, and PS is the periodic shuffling operator: as shown in Fig. 4, r² convolution kernels produce r² feature maps from the low-resolution feature map, and the sub-pixels of these r² maps are interleaved left-to-right, top-to-bottom to form the final enlarged image. W_l and B_l are the sub-pixel convolution weights and biases, obtained by training. In this embodiment the preset factor is 8, so the output tiny target is an 8·w_i × 8·h_i × 3 image, where 3 denotes the R, G, and B channels.
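The PS operator of the sub-pixel convolution layer can be sketched in pure Python as follows. This is a minimal single-channel version for illustration; real implementations (e.g. PyTorch's PixelShuffle) operate on full c·r² channel tensors.

```python
def periodic_shuffle(maps, r):
    """Rearrange r*r feature maps of size h x w into one (h*r) x (w*r) image.

    Output pixel (Y, X) is taken from map (Y % r) * r + (X % r)
    at position (Y // r, X // r) -- the sub-pixel interleaving of Fig. 4.
    """
    h, w = len(maps[0]), len(maps[0][0])
    return [
        [maps[(Y % r) * r + (X % r)][Y // r][X // r] for X in range(w * r)]
        for Y in range(h * r)
    ]

# Four 1x1 feature maps shuffled into one 2x2 image (r = 2).
maps = [[[0]], [[1]], [[2]], [[3]]]
print(periodic_shuffle(maps, 2))  # [[0, 1], [2, 3]]
```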
Step S3: input the tiny target enlarged by the preset factor into the deep learning classification network, and output the category of the tiny target.
The deep learning classification network is a well-known neural network, for example, the Inception-v4, VGG16, or ResNet-50 classification network.
In one embodiment, the Inception-v4 classification network outputs the result (p_i, c_i), where p_i is the confidence of the i-th tiny target and c_i is its classification result. In general the set of classes is determined by the application and may include, for example, pedestrians and vehicles.
In addition to the classification result above, the method of this embodiment can also detect the position of a tiny target. Preferably, the tiny-target detection method further includes the following step:
Input the image containing the tiny target into the deep learning object detection network, and output the position information of the tiny target.
This is the same detection network as in step S1 and is not described again. The network outputs the position information (x_i, y_i, w_i, h_i) of every tiny target, where x_i and y_i are the horizontal and vertical coordinates of the center of the i-th tiny target, and w_i and h_i are its width and height. w_i and h_i can be used to compute the target's area and thus determine whether it is a tiny target.
Thus the tiny-target detection method of this embodiment detects not only the category of a tiny target but also its position; the combined result, containing both category and position information, can be written (p_i, c_i, x_i, y_i, w_i, h_i).
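The three-stage pipeline — detect, upscale, classify — and the combined result (p_i, c_i, x_i, y_i, w_i, h_i) can be sketched with stand-in stages as follows. The stub functions and their return values are hypothetical placeholders for the trained networks of steps S1–S3, not the patented implementation.

```python
def detect(image):
    """Stub for the object detection network (step S1): crops plus (x, y, w, h).
    The single returned detection is a hypothetical example."""
    return [{"crop": "patch-A", "x": 100, "y": 120, "w": 12, "h": 14}]

def upscale(crop, factor=8):
    """Stub for the upscaling network (step S2): enlarge by the preset factor."""
    return ("upscaled", crop, factor)

def classify(enlarged):
    """Stub for the classification network (step S3): (confidence, class)."""
    return (0.93, "pedestrian")

def detect_tiny_targets(image, preset_area=20 * 20):
    """Run the detect -> upscale -> classify pipeline on tiny targets only."""
    results = []
    for d in detect(image):
        if d["w"] * d["h"] >= preset_area:
            continue  # not a tiny target; handled by the ordinary pipeline
        p, c = classify(upscale(d["crop"]))
        results.append((p, c, d["x"], d["y"], d["w"], d["h"]))
    return results

print(detect_tiny_targets("frame"))  # [(0.93, 'pedestrian', 100, 120, 12, 14)]
```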
An embodiment of the invention further discloses a computer-readable storage medium on which computer program instructions are stored; when executed by a processor, the instructions implement the tiny-target detection method of the embodiments above.
An embodiment of the invention further discloses a tiny-target detection system, comprising the computer-readable storage medium of the embodiment above.
In summary, embodiments of the present invention effectively improve the detection accuracy of tiny targets and reduce the false detection rate, thereby improving the performance and stability of the overall computer vision system, with good practical value and economic benefit.
The above are only specific embodiments of the present invention, but the protection scope of the invention is not limited thereto. Any change or substitution readily conceivable to a person skilled in the art within the technical scope disclosed herein shall fall within the protection scope of the invention. The protection scope of the invention is therefore that of the claims.
Claims (9)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010905792.2A CN112183579B (en) | 2020-09-01 | 2020-09-01 | Method, medium and system for detecting micro target |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN112183579A (en) | 2021-01-05 |
| CN112183579B (en) | 2023-05-30 |
Family
ID=73924095
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010905792.2A Active CN112183579B (en) | 2020-09-01 | 2020-09-01 | Method, medium and system for detecting micro target |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN112183579B (en) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108564097A (en) * | 2017-12-05 | 2018-09-21 | 华南理工大学 | A kind of multiscale target detection method based on depth convolutional neural networks |
| CN109344821A (en) * | 2018-08-30 | 2019-02-15 | 西安电子科技大学 | Small target detection method based on feature fusion and deep learning |
| CN109635666A (en) * | 2018-11-16 | 2019-04-16 | 南京航空航天大学 | A kind of image object rapid detection method based on deep learning |
| CN111179212A (en) * | 2018-11-10 | 2020-05-19 | 杭州凝眸智能科技有限公司 | Method for realizing micro target detection chip integrating distillation strategy and deconvolution |
Non-Patent Citations (2)
| Title |
|---|
| Image super-resolution reconstruction algorithm with a dual-path feedback network; Tao Zhuang et al.; Computer Systems & Applications; 2020-04-05; pp. 181–186 * |
| Single-image super-resolution reconstruction based on multi-scale dense convolutional networks; Tang Jiafu et al.; Packaging Engineering; 2020-07-10 (No. 13); full text * |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |