
CN111967345A - Method for judging shielding state of camera in real time - Google Patents

Method for judging shielding state of camera in real time

Info

Publication number
CN111967345A
CN111967345A
Authority
CN
China
Prior art keywords
camera
image
points
feature points
areas
Prior art date
Legal status
Granted
Application number
CN202010736809.6A
Other languages
Chinese (zh)
Other versions
CN111967345B (en)
Inventor
申富饶
李金桥
姜少魁
陆志浩
金祎
Current Assignee
Nanjing University
State Grid Shanghai Electric Power Co Ltd
Original Assignee
Nanjing University
State Grid Shanghai Electric Power Co Ltd
Priority date
Filing date
Publication date
Application filed by Nanjing University and State Grid Shanghai Electric Power Co Ltd
Priority to CN202010736809.6A
Publication of CN111967345A
Application granted
Publication of CN111967345B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/80: Geometric correction
    • G06T 7/00: Image analysis
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 17/00: Diagnosis, testing or measuring for television systems or their details
    • H04N 17/002: Diagnosis, testing or measuring for television cameras
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence
    • G06T 2207/10024: Color image
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30232: Surveillance


Abstract

The invention discloses a method for determining the occlusion state of a camera in real time, comprising: reading one frame of RGB image captured by the camera in real time; scaling the RGB image to a target size and then correcting it to remove distortion; performing color-space conversion on the corrected, undistorted RGB image obtained in step 2 to obtain a grayscale image, then extracting the feature points in the grayscale image to obtain the feature-point set of the grayscale image; dividing the grayscale image into 4 numbered areas and computing the number of feature points in each area; if the number of feature points in any of the 4 areas is less than a preset number threshold, outputting the determination that the camera is occluded; if the number of feature points in every one of the 4 areas is greater than the preset number threshold, outputting the determination that the camera is not occluded. The method is simple, efficient and fast: a single image frame suffices to accurately determine the occlusion state of the camera, making the method suitable for scenarios with strict real-time requirements and for embedded devices.

Description

A method for real-time determination of the camera occlusion state

Technical Field

The invention relates to the field of computer vision, and in particular to a method for determining the occlusion state of a camera in real time.

Background

In recent years, with the rapid development of vision theory and of computer science and technology, more and more researchers have devoted themselves to the field of computer vision. Computer vision shows good prospects in automatic and assisted driving applications and has attracted much attention. At present, automatic and assisted driving systems mostly obtain image information in front of the vehicle through a camera, but the camera may be partially blocked by mud or other objects, affecting the normal operation of the driving system, and may also be accidentally blocked for other special reasons. If the driving system does not detect and handle such situations in time, very serious safety problems will arise.

The Chinese patent titled "Method for determining the occlusion state of a camera lens based on video image signals" (publication number CN103139547.B) adopts the following occlusion detection method: the background of the image is extracted with the frame-difference method, the foreground is obtained by background subtraction and binarized, and multiple foreground detection units are delimited; foreground detection units whose pixel area is below a threshold are discarded to screen out candidate occlusion areas; the pixels of subsequent frames in the candidate occlusion areas are tracked, and an area is judged a suspected occlusion area if the changes in its grayscale and texture information stay below a threshold; the suspected occlusion areas are then tracked and counted over subsequent frames, and the camera is determined to be occluded if such an area persists stably in the video beyond a preset time threshold. Although this method can identify occlusion areas that remain on the camera for a long time, it must process continuous frames over a period of time, is slow, and is unsuitable for application scenarios with strict real-time requirements. Moreover, it cannot recognize moving occluders, such as malicious human interference.

The Chinese patent titled "A method for detecting video occlusion in network video surveillance" (publication number CN200710145468.X) discloses an occlusion detection method that must first determine a reference frame and then detect occlusion from the motion area. The method has some effect, but it is limited by the selection of the reference frame: occlusion detection can only proceed once a qualifying reference frame appears. It also needs to process continuous frames over a period of time, so it is slow and of limited practicality.

In summary, how to provide a camera occlusion-state determination method with high determination accuracy and high determination speed that can be used in application scenarios with strict real-time requirements is a current problem.

Summary of the Invention

The present invention provides a method for determining the occlusion state of a camera in real time, so as to solve the problem that existing camera occlusion determination methods have low accuracy and low speed, which leads to low system safety.

A method for determining the occlusion state of a camera in real time comprises the following steps:

Step 1: read one frame of RGB image captured by the camera in real time;

Step 2: scale the RGB image to the target size, then correct it to remove distortion;

Step 3: perform color-space conversion on the corrected, undistorted RGB image obtained in step 2 to obtain a grayscale image, then extract the feature points in the grayscale image to obtain the feature-point set of the grayscale image;

Step 4: divide the grayscale image into 4 numbered areas and compute the number of feature points in each area;

Step 5: if the number of feature points in any of the 4 areas is less than a preset number threshold t, output the determination that the camera is occluded; if the number of feature points in every one of the 4 areas is greater than the preset number threshold t, output the determination that the camera is not occluded.
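The step-5 decision rule can be sketched as follows (a minimal illustration; the function name is hypothetical, and treating counts exactly equal to t as "not occluded" is an assumption, since the claim leaves the boundary case unspecified):

```python
def camera_occluded(region_counts, t):
    """Step-5 rule: occluded if any of the 4 areas has fewer feature
    points than the preset threshold t (a count equal to t is treated
    as not occluded in this sketch)."""
    return any(c < t for c in region_counts)
```

For example, `camera_occluded([3, 52, 61, 48], 10)` reports occlusion because area 1 falls below the threshold.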

Further, in one implementation, step 1 comprises: before using the camera, calibrating it to obtain its intrinsic parameter matrix and distortion coefficients.

Step 2 comprises: correcting and undistorting the RGB image scaled to the target size according to the camera's intrinsic parameter matrix and distortion coefficients, in combination with the opencv_undistort algorithm.
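As a sketch of what calibration provides, the snippet below illustrates the usual pinhole-plus-radial-distortion camera model that undistortion inverts; all numeric values (focal lengths, principal point, k1, k2) are illustrative assumptions, not parameters from the patent. In practice OpenCV's `cv2.undistort(image, K, dist_coeffs)` performs the correction of step 2.

```python
import numpy as np

# Illustrative intrinsics (fx, fy, cx, cy) and radial coefficients; assumed values.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
k1, k2 = -0.1, 0.01

def distort(x, y):
    """Apply radial distortion to normalized camera coordinates (x, y)."""
    r2 = x * x + y * y
    s = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * s, y * s

def project(x, y):
    """Project distorted normalized coordinates to pixel coordinates using K."""
    xd, yd = distort(x, y)
    return K[0, 0] * xd + K[0, 2], K[1, 1] * yd + K[1, 2]
```

Undistortion is the inverse of `distort`, so a calibrated system maps each output pixel back through this model to sample the raw image.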

Further, in one implementation, step 3 comprises extracting image feature points based on a corner detection method, the corner detection method comprising:

if the differences between the gray value of a pixel of the grayscale image and the gray values of a certain number of pixels in the neighborhood around it are all greater than or equal to a preset difference threshold t_p, determining that the pixel is a corner point, i.e. the pixel is an image feature point.

Further, in one implementation, extracting image feature points based on the fast corner detection method comprises:

Step 3-1: select a pixel P from the grayscale image, the gray value of pixel P being I_P;

Step 3-2: with pixel P as the center and a radius of 3 pixels, set a discretized Bresenham circle on which there are 16 pixels;

Step 3-3: if there exist n consecutive pixels on the discretized Bresenham circle such that the absolute values of the differences between the gray values of these n consecutive pixels and the gray value of the circle center are all greater than the preset difference threshold t_p, i.e.:

|I_i − I_P| > t_p, i = 1, 2, …, n

where I_i is the gray value of the i-th of the n consecutive pixels, i = 1, 2, …, n is the index of the pixel, I_P is the gray value of the circle center and t_p is the preset difference threshold,

then extract the center of the discretized Bresenham circle as an image feature point based on the fast corner detection method.
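The segment test of steps 3-1 to 3-3 can be sketched as below. The 16 ring offsets and the wrap-around run handling follow the standard FAST formulation for a radius-3 Bresenham circle and are assumptions beyond the patent text:

```python
import numpy as np

# Standard 16 offsets of the radius-3 Bresenham circle around a candidate pixel.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_corner(gray, x, y, t_p=50, n=9):
    """Step 3-3: (x, y) is a feature point if n consecutive ring pixels
    all differ from the center gray value I_P by more than t_p."""
    i_p = int(gray[y, x])
    exceeds = [abs(int(gray[y + dy, x + dx]) - i_p) > t_p for dx, dy in CIRCLE]
    run = 0
    for e in exceeds * 2:  # scan the ring twice so wrap-around runs are found
        run = run + 1 if e else 0
        if run >= n:
            return True
    return False
```

OpenCV exposes the same test through `cv2.FastFeatureDetector_create(threshold=50)`, which also adds non-maximum suppression.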

Further, in one implementation, step 4 comprises: dividing the grayscale image into 4 equal areas located at the top-left, top-right, bottom-left and bottom-right of the grayscale image, the specific position of each area being given by the coordinates of its top-left corner together with its width and height, namely, writing each area as (x, y, width, height):

area 1: (0, 0, w/2, h/2), area 2: (w/2, 0, w/2, h/2), area 3: (0, h/2, w/2, h/2) and area 4: (w/2, h/2, w/2, h/2),

where w is the width of the grayscale image and h is the height of the grayscale image.
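The four (x, y, width, height) areas can be generated as follows (a small sketch; the function name is illustrative and integer division is an assumption for odd image sizes):

```python
def make_regions(w, h):
    """Areas 1-4 of step 4 as (x, y, width, height), numbered
    top-left, top-right, bottom-left, bottom-right."""
    hw, hh = w // 2, h // 2
    return {1: (0, 0, hw, hh),
            2: (hw, 0, hw, hh),
            3: (0, hh, hw, hh),
            4: (hw, hh, hw, hh)}
```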

Further, in one implementation, step 4 further comprises:

Step 4-1: after the feature-point set of the grayscale image is obtained through step 3, extract the feature-point information of the grayscale image, the feature-point information comprising the coordinates of each feature point on the grayscale image;

Step 4-2: according to the coordinates of the feature points on the grayscale image, determine the number r of the area of the grayscale image in which each feature point lies, and count the feature points in each of the 4 areas separately:

if x_j < w/2 and y_j < h/2, then r = 1: the feature point P_j lies in area 1, and the feature-point count of area 1 is increased by 1;

if x_j < w/2 and y_j ≥ h/2, then r = 3: the feature point P_j lies in area 3, and the feature-point count of area 3 is increased by 1;

if x_j ≥ w/2 and y_j < h/2, then r = 2: the feature point P_j lies in area 2, and the feature-point count of area 2 is increased by 1;

if x_j ≥ w/2 and y_j ≥ h/2, then r = 4: the feature point P_j lies in area 4, and the feature-point count of area 4 is increased by 1;

where (x_j, y_j) are the coordinates of feature point P_j on the grayscale image.
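Step 4-2 amounts to two coordinate comparisons per feature point; a sketch (function name illustrative):

```python
def count_by_region(points, w, h):
    """Count feature points per area number r in {1, 2, 3, 4} following step 4-2."""
    counts = {1: 0, 2: 0, 3: 0, 4: 0}
    for x, y in points:
        if x < w / 2:
            r = 1 if y < h / 2 else 3   # left half: area 1 on top, area 3 below
        else:
            r = 2 if y < h / 2 else 4   # right half: area 2 on top, area 4 below
        counts[r] += 1
    return counts
```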

Further, in one implementation, the preset number threshold t needs to be determined before step 5, comprising:

Step 5-1: shoot a video while the camera is not occluded;

Step 5-2: from the video captured while the camera is not occluded, use random sampling to determine the set of sampled image-frame indices, namely:

S_k = S_{k−1} ∪ {y_k}

y_k = ⌈θ_k × F⌉

k = 1, …, γ

where S_k is the set of image-frame indices after the k-th sample, with S_0 = ∅; F is the number of frames of the video captured while the camera is not occluded; θ is a random variable uniformly distributed on [0, 1), i.e. θ_k is a real number randomly generated on [0, 1) at the k-th sample; y_k is the image-frame index obtained by the k-th sample; γ is the number of samples; and S_γ is the final sampling result, i.e. the set of sampled image-frame indices to be used;
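The sampling of step 5-2 can be sketched as follows; the ceiling in y_k = ⌈θ_k × F⌉ and the fixed seed are assumptions for the sketch. Because S is a set, duplicate draws collapse, so the result may contain fewer than γ indices:

```python
import math
import random

def sample_frame_indices(F, gamma, seed=0):
    """Step 5-2: draw gamma frame indices y_k = ceil(theta_k * F),
    with theta_k uniform on [0, 1), accumulating them into the set S."""
    rng = random.Random(seed)
    S = set()
    for _ in range(gamma):
        theta = rng.random()
        S.add(math.ceil(theta * F))
    return S
```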

Step 5-3: in sampling order, read each frame of the sampled image set and perform steps 1 to 4 on it in turn, obtaining the number of feature points in each of the 4 areas of the grayscale image corresponding to each frame;

Step 5-4: sort the feature-point counts of all areas of all the grayscale images in ascending order to obtain the sequence of feature-point counts a = (a_1, a_2, …, a_{4×γ}), where a_l is the l-th smallest feature-point count;

Step 5-5: compute the preset number threshold t based on box-plot analysis:

t = Q_1 − 1.5 × IQR

IQR = |Q_1 − Q_2|

where t is the preset number threshold, i.e. the lower bound of normal values in box-plot analysis, Q_1 is the lower quartile, Q_2 is the upper quartile, and IQR is the interquartile range, the absolute value of the difference between the upper quartile Q_2 and the lower quartile Q_1.

The lower quartile Q_1 and the upper quartile Q_2 are computed as:

Q_o = a_{c_o} + d_o × (a_{c_o + 1} − a_{c_o})

d_o = μ_o − c_o

c_o = ⌊μ_o⌋

μ_o = II(o = 1) × 0.25 × (4 × γ + 1) + II(o = 2) × 0.75 × (4 × γ + 1)

o = 1, 2

where μ_o is the position of the lower quartile Q_1 (o = 1) or the upper quartile Q_2 (o = 2) in the feature-point-count sequence a = (a_1, a_2, …, a_{4×γ}) of step 5-4; II is an indicator function distinguishing whether the upper quartile Q_2 or the lower quartile Q_1 is currently being computed; c_o is the integer part of μ_o; and d_o is the fractional part of μ_o.
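Steps 5-4 and 5-5 can be sketched as follows; the 1-based interpolation positions μ_1 = 0.25 × (4γ + 1) and μ_2 = 0.75 × (4γ + 1) come straight from the text, while the function name is illustrative:

```python
def occlusion_threshold(counts, gamma):
    """Steps 5-4/5-5: sort the 4*gamma per-area counts, interpolate Q1 and Q2
    at 1-based positions 0.25*(4*gamma+1) and 0.75*(4*gamma+1),
    and return t = Q1 - 1.5 * |Q1 - Q2|."""
    a = sorted(counts)
    assert len(a) == 4 * gamma

    def quartile(mu):
        c = int(mu)   # integer part of the 1-based position
        d = mu - c    # fractional part
        return a[c - 1] + d * (a[c] - a[c - 1])

    n = len(a)
    q1 = quartile(0.25 * (n + 1))
    q2 = quartile(0.75 * (n + 1))
    return q1 - 1.5 * abs(q1 - q2)
```

With 8 sorted counts (γ = 2), Q_1 interpolates at position 2.25 and Q_2 at position 6.75, matching the usual box-plot convention.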

As can be seen from the above technical solutions, an embodiment of the present invention provides a method for determining the occlusion state of a camera in real time, the method comprising: step 1, reading one frame of RGB image captured by the camera in real time; step 2, scaling the RGB image to the target size, then correcting it to remove distortion; step 3, performing color-space conversion on the corrected, undistorted RGB image obtained in step 2 to obtain a grayscale image, then extracting the feature points in the grayscale image to obtain the feature-point set of the grayscale image; step 4, dividing the grayscale image into 4 numbered areas and computing the number of feature points in each area; step 5, if the number of feature points in any of the 4 areas is less than the preset number threshold t, outputting the determination that the camera is occluded; if the number of feature points in every one of the 4 areas is greater than the preset number threshold t, outputting the determination that the camera is not occluded.

In the prior art, camera occlusion determination methods have low accuracy and low speed. The foregoing method or apparatus can be applied to any device equipped with a camera, and occlusion detection can be achieved with only a single camera. The method extracts feature points based on corner detection, so feature-point extraction is fast; it can perform detection with a single image frame, depending neither on information from continuous video frames nor on pre-stored information; and by dividing the image into regions it adapts well to dynamic scenes, full occlusion and partial occlusion. In summary, the method is a single-camera occlusion detection method suitable for application scenarios with strict real-time requirements, with the advantages of high speed, high accuracy and independence from continuous frames.

Brief Description of the Drawings

In order to illustrate the technical solutions of the present invention more clearly, the drawings needed in the embodiments are briefly introduced below. Obviously, those of ordinary skill in the art can derive other drawings from these drawings without creative work.

Fig. 1 is a schematic workflow diagram of a method for determining the occlusion state of a camera in real time provided by an embodiment of the present invention;

Fig. 2 is a schematic diagram of the principle of the method provided by an embodiment of the present invention;

Fig. 3 is a schematic diagram of the image partitioning in the method provided by an embodiment of the present invention;

Fig. 4a, Fig. 4b and Fig. 4c are schematic diagrams of the first, second and third effects of feature extraction in the method provided by an embodiment of the present invention;

Fig. 5a to Fig. 5f are schematic diagrams of the first to sixth effects of occlusion detection in the method provided by an embodiment of the present invention.

Detailed Description

In order to make the above objects, features and advantages of the present invention more clearly understood, the present invention is described in further detail below with reference to the drawings and specific embodiments.

The embodiment of the present invention discloses a method for determining the occlusion state of a camera in real time. The method is applied to scenarios with strict real-time requirements, and occlusion detection can be achieved with only a single camera. Because the method extracts feature points based on the fast corner detection method, feature-point extraction is fast; in particular, the method can perform detection with a single image frame, depending neither on information from continuous video frames nor on pre-stored information; and by dividing the image into regions, the method adapts well to dynamic scenes, full occlusion and partial occlusion. The method is therefore a single-camera occlusion detection method suitable for scenarios with strict real-time requirements, with the advantages of high speed, high accuracy and independence from continuous frames.

Fig. 1 is a schematic flowchart of the occlusion detection of the present invention, a method for determining the camera state in real time based on image feature points, comprising 5 steps:

Step 1: read one frame of RGB image captured by the camera in real time;

Step 2: scale the RGB image to the target size, then correct it to remove distortion;

Step 3: perform color-space conversion on the corrected, undistorted RGB image obtained in step 2 to obtain a grayscale image, then extract the feature points in the grayscale image to obtain the feature-point set of the grayscale image;

Step 4: divide the grayscale image into 4 numbered areas and compute the number of feature points in each area.

In this step, the image is divided into regions in order to detect local occlusion better. However, dividing it into too many regions makes the area of each region too small, and if some small regions of the image contain too few feature points they will be misidentified as occluded. Four regions therefore guarantee local occlusion detection while keeping false occlusion alarms as rare as possible.

Step 5: if the number of feature points in any of the 4 areas is less than the preset number threshold t, output the determination that the camera is occluded; if the number of feature points in every one of the 4 areas is greater than the preset number threshold t, output the determination that the camera is not occluded.

In the method for determining the occlusion state of a camera in real time described in this embodiment, step 1 comprises: before using the camera, calibrating it to obtain its intrinsic parameter matrix and distortion coefficients.

Step 2 comprises: correcting and undistorting the RGB image scaled to the target size according to the camera's intrinsic parameter matrix and distortion coefficients, in combination with the opencv_undistort algorithm.

As shown in Fig. 2, in the method described in this embodiment, step 3 comprises extracting image feature points based on the fast corner detection method, the fast corner detection method comprising:

if the differences between the gray value of a pixel of the grayscale image and the gray values of a certain number of pixels in the neighborhood around it are all greater than or equal to the preset difference threshold t_p, determining that the pixel is a corner point, i.e. the pixel is an image feature point.

In this embodiment, the image feature points are specifically image feature points based on corner detection. A corner point is a pixel containing key information, and a feature point is an extension of the corner-point concept; here the detected corner points are taken as the feature points of the image.

The basic idea of detecting feature points based on corners is: if a pixel lies in a different image region from a certain number of pixels in its neighborhood, the pixel may be a corner point. In particular, for a grayscale image, if the gray value of the point is larger or smaller than the gray values of a certain number of pixels in its neighborhood, the point may be a corner point.

本实施例所述的所述一种实时判定摄像头遮挡状态的方法中,所述基于角点检测法提取图像特征点包括:In the method for determining the occlusion state of a camera in real time according to this embodiment, the extraction of image feature points based on the corner detection method includes:

Step 3-1: select a pixel P from the grayscale image; the gray value of P is I_P;

Step 3-2: construct a discretized Bresenham circle centered on pixel P with a radius of 3 pixels; there are 16 pixels on the discretized Bresenham circle;

Step 3-3: if there exist n consecutive pixels on the discretized Bresenham circle such that the absolute differences between their gray values and the gray value of the circle center are all greater than the preset difference threshold t_p, i.e.:

|I_i - I_P| > t_p, i = 1, 2, …, n

where I_i is the gray value of the i-th of the n consecutive pixels, i = 1, 2, …, n is the index of those pixels, I_P is the gray value of the circle center, and t_p is the preset difference threshold;

then the center of the discretized Bresenham circle is extracted as an image feature point of the corner detection method. In general, if the Bresenham circle contains N pixels, n must satisfy

n > N/2

The Bresenham circle above contains 16 pixels; specifically, in this embodiment the value of n can be set to 12 or 9, and n = 9 is usually the more suitable choice. In most cases, to avoid detecting spurious feature points, the preset difference threshold t_p should be set to a relatively large value; t_p can generally be set to 50.

As shown in FIG. 3, in the method for judging the shielding state of a camera in real time according to this embodiment, step 4 comprises: equally dividing the grayscale image into 4 areas located at the upper-left, upper-right, lower-left and lower-right of the grayscale image respectively, the specific position of each area being represented by the coordinates of its upper-left corner together with its width and height, namely:

Area 1: (0, 0, w/2, h/2),
Area 2: (w/2, 0, w/2, h/2),
Area 3: (0, h/2, w/2, h/2),
and Area 4: (w/2, h/2, w/2, h/2),

where w is the width of the grayscale image and h is the height of the grayscale image.

In the method for judging the shielding state of a camera in real time according to this embodiment, step 4 further comprises:

Step 4-1: after the feature point set of the grayscale image is obtained in step 3, extract the feature point information of the grayscale image, the feature point information comprising the coordinates of each feature point on the grayscale image;

Step 4-2: according to the coordinates of the feature points on the grayscale image, determine the number r of the area of the grayscale image in which each feature point lies, and count the feature points in each of the 4 areas:

if x_j < w/2 and y_j < h/2, then r = 1 is determined: the feature point P_j lies in Area 1, and the feature point count of Area 1 is increased by 1;

if x_j < w/2 and y_j ≥ h/2, then r = 3 is determined: the feature point P_j lies in Area 3, and the feature point count of Area 3 is increased by 1;

if x_j ≥ w/2 and y_j < h/2, then r = 2 is determined: the feature point P_j lies in Area 2, and the feature point count of Area 2 is increased by 1;

if x_j ≥ w/2 and y_j ≥ h/2, then r = 4 is determined: the feature point P_j lies in Area 4, and the feature point count of Area 4 is increased by 1;

where (x_j, y_j) are the coordinates of feature point P_j on the grayscale image.
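The region assignment of step 4-2 can be sketched as follows (a minimal Python illustration; the convention that points exactly on the dividing lines fall into the right/bottom areas is our assumption, since the exact inequalities appear in an unrendered formula in the source):

```python
def count_by_area(points, w, h):
    """Step 4-2: assign each feature point (x, y) to Area 1 (top-left),
    2 (top-right), 3 (bottom-left) or 4 (bottom-right) and count per area."""
    counts = {1: 0, 2: 0, 3: 0, 4: 0}
    for x, y in points:
        if x < w / 2:
            r = 1 if y < h / 2 else 3
        else:
            r = 2 if y < h / 2 else 4
        counts[r] += 1
    return counts

counts = count_by_area([(10, 10), (90, 10), (10, 90), (90, 90), (95, 95)],
                       w=100, h=100)
```

Each of the five toy points lands in exactly one quadrant, so the four counts sum to the number of feature points.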

In the method for judging the shielding state of a camera in real time according to this embodiment, since the number threshold is the key factor in judging occlusion and the number of feature points in an image depends to some extent on the environment, the threshold must be preset according to the actual environment to guarantee a high occlusion detection capability. Therefore, the preset number threshold t must be determined before step 5, as follows:

Step 5-1: shoot a video with the camera unobstructed;

Step 5-2: from the video shot with the camera unobstructed, determine the set of sampled image frame numbers to be used with a random sampling method. To reduce chance effects, the video should be as long as possible; in this embodiment a video of more than 10 minutes is selected. Namely:

S_k = S_{k-1} ∪ {y_k}

y_k = ⌈θ_k × F⌉

k = 1, …, γ

where S_k is the set of sampled image frame numbers after the k-th sampling, with S_0 = ∅; F is the number of frames of the video shot with the camera unobstructed; θ is a random variable uniformly distributed on the interval [0, 1), i.e., θ_k is a real number on [0, 1) randomly generated at the k-th sampling; y_k is the image frame number obtained at the k-th sampling; γ is the number of samplings; and S_γ is the final sampling result, i.e., the set of sampled image frame numbers to be used;

Step 5-3: read each frame of the sampled image set in sampling order and perform steps 1 to 4 on each frame in turn, obtaining the feature point counts of the 4 areas of the grayscale image corresponding to each frame;

Step 5-4: sort the feature point counts of all areas of all the grayscale images in ascending order to obtain a sequence a = (a_1, a_2, …, a_{4×γ}) of feature point counts, where a_l denotes the l-th smallest feature point count;

Step 5-5: calculate the preset number threshold t based on the box plot analysis method:

t = Q_1 - 1.5 × IQR

IQR = |Q_1 - Q_2|

where t denotes the preset number threshold, i.e., the lower bound of normal values in the box plot analysis; Q_1 denotes the lower quartile; Q_2 denotes the upper quartile; and IQR is the interquartile range, i.e., the absolute value of the difference between the upper quartile Q_2 and the lower quartile Q_1;

The lower quartile Q_1 and the upper quartile Q_2 are calculated as follows:

Q_o = a_{c_o} + d_o × (a_{c_o + 1} - a_{c_o})

d_o = μ_o - c_o

c_o = ⌊μ_o⌋

μ_o = II(o = 1) × 0.25 × (4 × γ + 1) + II(o = 2) × 0.75 × (4 × γ + 1)

o = 1, 2

where μ_o denotes the position of the lower quartile Q_1 or the upper quartile Q_2 in the feature point count sequence a = (a_1, a_2, …, a_{4×γ}) of step 5-4: when o = 1, μ_1 denotes the position of the lower quartile Q_1 in the sequence a, and when o = 2, μ_2 denotes the position of the upper quartile Q_2 in the sequence a; II is the indicator function, used to distinguish whether the upper quartile Q_2 or the lower quartile Q_1 is currently being calculated; c_o is the integer part of μ_o; and d_o is the fractional part of μ_o.

Box plot analysis is commonly used to detect outliers: when an observed value is greater than the upper bound of normal values or smaller than the lower bound of normal values, it can be considered an outlier. Here, based on box plot analysis, the lower bound of normal values is taken as the number threshold: when the feature point count of an area is smaller than this threshold, an anomaly is considered to have occurred, i.e., occlusion is judged.
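Steps 5-4 and 5-5 can be sketched as follows (a minimal Python illustration of the 1-based interpolation scheme μ_o = q × (4γ + 1), c_o = ⌊μ_o⌋, d_o = μ_o − c_o; the clamping when μ_o falls outside the sequence and the toy counts are our assumptions):

```python
def quartile(sorted_vals, q):
    """Interpolated quartile at 1-based position mu = q * (len + 1),
    with c the integer part and d the fractional part of mu."""
    mu = q * (len(sorted_vals) + 1)
    c, d = int(mu), mu - int(mu)
    if c < 1:
        return sorted_vals[0]          # assumed clamping at the low edge
    if c >= len(sorted_vals):
        return sorted_vals[-1]         # assumed clamping at the high edge
    return sorted_vals[c - 1] + d * (sorted_vals[c] - sorted_vals[c - 1])

def occlusion_threshold(counts):
    """Step 5-5: t = Q1 - 1.5 * IQR, with IQR = |Q1 - Q2|."""
    a = sorted(counts)                 # step 5-4: ascending sort
    q1, q2 = quartile(a, 0.25), quartile(a, 0.75)
    return q1 - 1.5 * abs(q1 - q2)

t = occlusion_threshold([40, 42, 45, 47, 50, 52, 55, 60])
```

For these 8 toy counts, μ_1 = 2.25 and μ_2 = 6.75, giving Q_1 = 42.75, Q_2 = 54.25 and t = 25.5.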

To verify the effectiveness of the method, example verification was carried out on actually collected videos, which include completely occluded, partially occluded and unoccluded images. Occlusion detection is performed on every frame of these videos to judge the occlusion state of the camera in real time.

Taking such an actually collected video as an example, for each frame of the video, the occlusion state of the camera is judged by the following steps:

Step 1: read, in real time, one frame of RGB image captured by the camera;

Step 2: scale the RGB image to the target size, then correct and de-distort it;

Step 3: perform color space conversion on the corrected and de-distorted RGB image obtained in step 2 to obtain a grayscale image, then extract the feature points of the grayscale image to obtain its feature point set;

Step 4: divide the grayscale image into 4 numbered areas and calculate the feature point count of each area;

Step 5: if the feature point count of any of the 4 areas is smaller than the preset number threshold t, output the judgment that the camera is occluded; if the feature point counts of all 4 areas are greater than the preset number threshold t, output the judgment that the camera is not occluded.
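The decision rule of step 5 reduces to a small predicate (illustrative Python; the per-area counts and the threshold value below are made-up numbers, not measurements from the patent):

```python
def judge_occlusion(area_counts, t):
    """Step 5: occluded if any of the 4 areas has fewer feature points than t."""
    return any(c < t for c in area_counts)

blocked = judge_occlusion([3, 120, 98, 110], t=25.5)    # one starved area
clear = judge_occlusion([80, 120, 98, 110], t=25.5)     # all areas above t
```

A single starved quadrant is enough to flag occlusion, which is what lets the method catch partial as well as full occlusion.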

Figures 4a to 4c show the effect of image feature point extraction: Fig. 4a shows the full-occlusion case, Fig. 4b the partial-occlusion case, and Fig. 4c the unoccluded case.

Figures 5a to 5f show the occlusion detection effect of the present invention. For ease of illustration, when the camera is judged to be occluded, the frame number of the image and an occlusion message are written onto the figure. Figs. 5a and 5b show full occlusion and Figs. 5c and 5d partial occlusion; the message "Camera is obstructed!" is displayed in all of Figs. 5a to 5d. Figs. 5e and 5f show the unoccluded case, and no occlusion message is displayed in either. Verification on this data shows that the method for judging the shielding state of a camera in real time provided by the present invention achieves satisfactory results in both accuracy and speed.

As can be seen from the above technical solutions, the embodiments of the present invention provide a method for judging the shielding state of a camera in real time, the method comprising: step 1, reading in real time one frame of RGB image captured by the camera; step 2, scaling the RGB image to the target size, then correcting and de-distorting it; step 3, performing color space conversion on the corrected and de-distorted RGB image obtained in step 2 to obtain a grayscale image, then extracting the feature points of the grayscale image to obtain its feature point set; step 4, dividing the grayscale image into 4 numbered areas and calculating the feature point count of each area; step 5, if the feature point count of any of the 4 areas is smaller than a preset number threshold t, outputting the judgment that the camera is occluded, and if the feature point counts of all 4 areas are greater than the preset number threshold t, outputting the judgment that the camera is not occluded.

In the prior art, camera occlusion judgment methods suffer from low accuracy and slow speed. The foregoing method or apparatus is applicable to any device equipped with a camera, and occlusion detection is achieved with only a single camera. The method extracts feature points based on corner detection, so feature point extraction is fast; detection is achieved with a single image frame, relying neither on consecutive video frame information nor on pre-stored information; and by dividing the image into areas, the method adapts well to dynamic scenes, full occlusion and partial occlusion. In summary, the method is a single-camera occlusion detection method suitable for application scenarios with high real-time requirements, with the advantages of high speed, high accuracy and independence from consecutive frames.

In a specific implementation, the present invention further provides a computer storage medium, wherein the computer storage medium may store a program which, when executed, may include some or all of the steps of the embodiments of the method for judging the shielding state of a camera in real time provided by the present invention. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.

Those skilled in the art will clearly understand that the techniques in the embodiments of the present invention may be implemented by software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solutions in the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The computer software product may be stored in a storage medium such as a ROM/RAM, magnetic disk or optical disk, and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments, or in parts of the embodiments, of the present invention.

For identical or similar parts among the embodiments of this specification, reference may be made between the embodiments. The embodiments of the present invention described above do not limit the protection scope of the present invention.

Claims (7)

1. A method for judging the shielding state of a camera in real time, characterized by comprising the following steps:
step 1, reading a frame of RGB image shot by a camera in real time;
step 2, scaling the RGB image to a target size, and then correcting and de-distorting it;
step 3, carrying out chromaticity space conversion on the corrected and undistorted RGB image obtained in the step 2 to obtain a gray level image, and extracting feature points in the gray level image to obtain a feature point set of the gray level image;
step 4, dividing the gray level image into 4 areas, numbering the areas, and respectively calculating the number of characteristic points of each area;
step 5, if the number of feature points in any one of the 4 areas is smaller than a preset number threshold t, outputting the judgment result that the camera is shielded; and if the numbers of feature points of all 4 areas are greater than the preset number threshold t, outputting the judgment result that the camera is not shielded.
2. The method for judging the shielding state of the camera in real time according to claim 1, wherein the step 1 comprises: before the camera is used, calibrating the camera to obtain an internal reference matrix and a distortion coefficient of the camera;
the step 2 comprises: correcting and de-distorting the RGB image after it is scaled to the target size by using the opencv_undistort algorithm in combination with the intrinsic parameter matrix and distortion coefficients of the camera.
3. The method according to claim 1, wherein the step 3 comprises extracting image feature points based on a fast corner detection method, and the fast corner detection method comprises:
if the differences between the gray value of a certain pixel of the gray level image and the gray values of a certain number of pixels in its surrounding neighborhood are all greater than or equal to a preset difference threshold t_p, determining the pixel as a corner point, namely an image feature point.
4. The method according to claim 3, wherein the extracting of the image feature points based on the fast corner detection method comprises:
step 3-1, selecting a pixel point P from the gray level image, wherein the gray value of the pixel point P is I_P;
Step 3-2, setting a discretized Bresenham circle by taking the pixel point P as a circle center and 3 pixels as a radius, wherein the discretized Bresenham circle is provided with 16 pixel points;
step 3-3, if n consecutive pixels exist on the discretized Bresenham circle such that the absolute values of the differences between their gray values and the gray value of the circle center are all larger than a preset difference threshold t_p, namely:

|I_i - I_P| > t_p, i = 1, 2, …, n

wherein I_i is the gray value of the i-th pixel among the n consecutive pixels, i = 1, 2, …, n is the index of the pixels, I_P is the gray value of the circle center, and t_p is the preset difference threshold;

then extracting the center of the discretized Bresenham circle as an image feature point of the fast corner point detection method.
5. The method for judging the shielding state of the camera in real time according to claim 1, wherein the step 4 comprises: equally dividing the gray level image into 4 areas, denoted Area 1, Area 2, Area 3 and Area 4, respectively located at the upper-left, upper-right, lower-left and lower-right of the gray level image, the specific position of each area being represented by the coordinates of its upper-left corner together with its width and height, namely:

Area 1: (0, 0, w/2, h/2),
Area 2: (w/2, 0, w/2, h/2),
Area 3: (0, h/2, w/2, h/2),
and Area 4: (w/2, h/2, w/2, h/2),

wherein w is the width of the gray level image and h is the height of the gray level image.
6. The method for judging the shielding state of the camera in real time according to claim 5, wherein the step 4 further comprises:
step 4-1, after the gray image feature point set is obtained in the step 3, extracting feature point information of the gray image, wherein the feature point information comprises: coordinates of the feature points on the gray level image;
step 4-2, determining the number r of the area where each feature point is located in the gray level image according to the coordinates of the feature points on the gray level image, and counting the number of the feature points in 4 areas respectively:
if x_j < w/2 and y_j < h/2, determining r = 1: the feature point P_j is located in Area 1, and the number of feature points of Area 1 is increased by 1;

if x_j < w/2 and y_j ≥ h/2, determining r = 3: the feature point P_j is located in Area 3, and the number of feature points of Area 3 is increased by 1;

if x_j ≥ w/2 and y_j < h/2, determining r = 2: the feature point P_j is located in Area 2, and the number of feature points of Area 2 is increased by 1;

if x_j ≥ w/2 and y_j ≥ h/2, determining r = 4: the feature point P_j is located in Area 4, and the number of feature points of Area 4 is increased by 1;

wherein (x_j, y_j) are the coordinates of the feature point P_j on the gray level image.
7. The method for judging the shielding state of the camera in real time according to claim 1, wherein before the step 5, a preset number threshold t needs to be determined, and the method comprises the following steps:
step 5-1, shooting a video under the condition that the camera is not shielded;
step 5-2, according to the video under the condition that the camera is not shielded, determining a sampling image frame number set which needs to be used by using a random sampling method, namely:
S_k = S_{k-1} ∪ {y_k}

y_k = ⌈θ_k × F⌉

k = 1, …, γ

wherein S_k is the set of sampled image frame numbers after the k-th sampling, with S_0 = ∅; F is the number of frames of the video shot with the camera unshielded; θ is a random variable uniformly distributed on the interval [0, 1), i.e., θ_k is a real number on [0, 1) randomly generated at the k-th sampling; y_k is the image frame number obtained at the k-th sampling; γ is the number of samplings; and S_γ is the finally obtained sampling result, namely the set of sampled image frame numbers to be used;
step 5-3, reading each frame of the sampling image set according to a sampling sequence, and sequentially executing the steps 1 to 4 on each frame to obtain the number of feature points of 4 areas in the gray level image corresponding to each frame;
step 5-4, sorting the numbers of feature points of all the areas in all the gray level images in ascending order to obtain a sequence a = (a_1, a_2, …, a_{4×γ}) of feature point counts, wherein a_l denotes the l-th smallest feature point count;
step 5-5, calculating the preset number threshold t based on a box plot analysis method:
t = Q_1 - 1.5 × IQR

IQR = |Q_1 - Q_2|

wherein t denotes the preset number threshold, namely the lower bound of normal values in the box plot analysis; Q_1 denotes the lower quartile; Q_2 denotes the upper quartile; and IQR is the interquartile range, namely the absolute value of the difference between the upper quartile Q_2 and the lower quartile Q_1;
the lower quartile Q_1 and the upper quartile Q_2 are calculated as follows:

Q_o = a_{c_o} + d_o × (a_{c_o + 1} - a_{c_o})

d_o = μ_o - c_o

c_o = ⌊μ_o⌋

μ_o = II(o = 1) × 0.25 × (4 × γ + 1) + II(o = 2) × 0.75 × (4 × γ + 1)

o = 1, 2

wherein μ_o denotes the position of the lower quartile Q_1 or the upper quartile Q_2 in the feature point count sequence a = (a_1, a_2, …, a_{4×γ}) of step 5-4: when o = 1, μ_1 denotes the position of the lower quartile Q_1 in the sequence a, and when o = 2, μ_2 denotes the position of the upper quartile Q_2 in the sequence a; II is the indicator function, used to distinguish whether the upper quartile Q_2 or the lower quartile Q_1 is currently being calculated; c_o is the integer part of μ_o, and d_o is the fractional part of μ_o.
CN202010736809.6A 2020-07-28 2020-07-28 A method to determine camera occlusion status in real time Active CN111967345B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010736809.6A CN111967345B (en) 2020-07-28 2020-07-28 A method to determine camera occlusion status in real time


Publications (2)

Publication Number Publication Date
CN111967345A true CN111967345A (en) 2020-11-20
CN111967345B CN111967345B (en) 2023-10-31

Family

ID=73362971

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010736809.6A Active CN111967345B (en) 2020-07-28 2020-07-28 A method to determine camera occlusion status in real time

Country Status (1)

Country Link
CN (1) CN111967345B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112927262A (en) * 2021-03-22 2021-06-08 瓴盛科技有限公司 Camera lens shielding detection method and system based on video
CN113282208A (en) * 2021-05-25 2021-08-20 歌尔科技有限公司 Terminal device control method, terminal device and computer readable storage medium
CN115019221A (en) * 2022-04-20 2022-09-06 平安国际智慧城市科技股份有限公司 Method, device, equipment and storage medium for detecting shielding behavior
CN115604567A * 2022-09-06 2023-01-13 深圳市震有软件科技有限公司 Method and device for detecting camera shading by green leaves and computer equipment
CN116522417A (en) * 2023-07-04 2023-08-01 广州思涵信息科技有限公司 Security detection method, device, equipment and storage medium for display equipment
CN116797777A (en) * 2022-03-18 2023-09-22 中国科学院深圳先进技术研究院 A target detection method and system for underwater in-situ images
CN118205976A (en) * 2024-05-21 2024-06-18 山东博尔特电梯有限公司 Automatic discernment elevator control system

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070118490A1 (en) * 2003-06-30 2007-05-24 Gyros Patent Ab Confidence determination
US20110164832A1 (en) * 2010-01-04 2011-07-07 Samsung Electronics Co., Ltd. Image-based localization feature point registration apparatus, method and computer-readable medium
CN105139016A (en) * 2015-08-11 2015-12-09 豪威科技(上海)有限公司 Interference detection system for surveillance camera and application method of interference detection system
CN105427276A (en) * 2015-10-29 2016-03-23 重庆电信系统集成有限公司 Camera detection method based on image local edge characteristics
CN105744268A (en) * 2016-05-04 2016-07-06 深圳众思科技有限公司 Camera shielding detection method and device
JP2016134804A (en) * 2015-01-20 2016-07-25 富士通株式会社 Imaging range abnormality determination device, imaging range abnormality determination method, and computer program for imaging range abnormality determination
JP2016148956A (en) * 2015-02-10 2016-08-18 株式会社デンソーアイティーラボラトリ Positioning device, positioning method and positioning computer program
CN107710279A (en) * 2015-07-02 2018-02-16 大陆汽车有限责任公司 Static dirty detection and correction
US20180224380A1 (en) * 2017-02-09 2018-08-09 Glasstech, Inc. System and associated method for online measurement of the optical characteristics of a glass sheet
CN108763346A (en) * 2018-05-15 2018-11-06 中南大学 A kind of abnormal point processing method of sliding window box figure medium filtering
CN110008964A (en) * 2019-03-28 2019-07-12 上海交通大学 Corner Feature Extraction and Description of Heterogeneous Image
CN110414385A (en) * 2019-07-12 2019-11-05 淮阴工学院 A Lane Line Detection Method and System Based on Homography Transformation and Feature Window
CN110751371A (en) * 2019-09-20 2020-02-04 苏宁云计算有限公司 Commodity inventory risk early warning method and system based on statistical four-bit distance and computer readable storage medium
CN110913212A (en) * 2019-12-27 2020-03-24 上海智驾汽车科技有限公司 Intelligent vehicle-mounted camera shielding monitoring method and device based on optical flow and auxiliary driving system
CN111275658A (en) * 2018-12-03 2020-06-12 北京嘀嘀无限科技发展有限公司 Camera shielding detection method and system

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070118490A1 (en) * 2003-06-30 2007-05-24 Gyros Patent Ab Confidence determination
US20110164832A1 (en) * 2010-01-04 2011-07-07 Samsung Electronics Co., Ltd. Image-based localization feature point registration apparatus, method and computer-readable medium
JP2016134804A (en) * 2015-01-20 2016-07-25 富士通株式会社 Imaging range abnormality determination device, imaging range abnormality determination method, and computer program for imaging range abnormality determination
JP2016148956A (en) * 2015-02-10 2016-08-18 株式会社デンソーアイティーラボラトリ Positioning device, positioning method and positioning computer program
CN107710279A (en) * 2015-07-02 2018-02-16 大陆汽车有限责任公司 Static dirty detection and correction
CN105139016A (en) * 2015-08-11 2015-12-09 豪威科技(上海)有限公司 Interference detection system for surveillance camera and application method of interference detection system
CN105427276A (en) * 2015-10-29 2016-03-23 重庆电信系统集成有限公司 Camera detection method based on image local edge characteristics
CN105744268A (en) * 2016-05-04 2016-07-06 深圳众思科技有限公司 Camera shielding detection method and device
US20180224380A1 (en) * 2017-02-09 2018-08-09 Glasstech, Inc. System and associated method for online measurement of the optical characteristics of a glass sheet
CN108763346A (en) * 2018-05-15 2018-11-06 中南大学 An outlier processing method for sliding-window box-plot median filtering
CN111275658A (en) * 2018-12-03 2020-06-12 北京嘀嘀无限科技发展有限公司 Camera shielding detection method and system
CN110008964A (en) * 2019-03-28 2019-07-12 上海交通大学 Corner Feature Extraction and Description of Heterogeneous Image
CN110414385A (en) * 2019-07-12 2019-11-05 淮阴工学院 A Lane Line Detection Method and System Based on Homography Transformation and Feature Window
CN110751371A (en) * 2019-09-20 2020-02-04 苏宁云计算有限公司 Commodity inventory risk early warning method and system based on statistical interquartile range, and computer-readable storage medium
CN110913212A (en) * 2019-12-27 2020-03-24 上海智驾汽车科技有限公司 Intelligent vehicle-mounted camera shielding monitoring method and device based on optical flow and auxiliary driving system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Hongyan; Wang Xiaofan; Gao Liang; Li Qiangzi; Zhao Longcai; Du Xin; Zhang Yuan: "Study on Remote Sensing Extraction of Abandoned Cropland Based on Seasonal Phase Change Features", Remote Sensing Technology and Application, no. 03 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112927262A (en) * 2021-03-22 2021-06-08 瓴盛科技有限公司 Camera lens shielding detection method and system based on video
CN112927262B (en) * 2021-03-22 2023-06-20 瓴盛科技有限公司 Camera lens shielding detection method and system based on video
CN113282208A (en) * 2021-05-25 2021-08-20 歌尔科技有限公司 Terminal device control method, terminal device and computer readable storage medium
CN116797777A (en) * 2022-03-18 2023-09-22 中国科学院深圳先进技术研究院 A target detection method and system for underwater in-situ images
CN115019221A (en) * 2022-04-20 2022-09-06 平安国际智慧城市科技股份有限公司 Method, device, equipment and storage medium for detecting shielding behavior
CN115604567A (en) * 2022-09-06 2023-01-13 深圳市震有软件科技有限公司 Method, device and computer equipment for detecting camera occlusion by green leaves
CN116522417A (en) * 2023-07-04 2023-08-01 广州思涵信息科技有限公司 Security detection method, device, equipment and storage medium for display equipment
CN116522417B (en) * 2023-07-04 2023-09-19 广州思涵信息科技有限公司 Security detection method, device, equipment and storage medium for display equipment
CN118205976A (en) * 2024-05-21 2024-06-18 山东博尔特电梯有限公司 Elevator control system with automatic recognition
CN118205976B (en) * 2024-05-21 2024-09-13 山东博尔特电梯有限公司 Elevator control system with automatic recognition

Also Published As

Publication number Publication date
CN111967345B (en) 2023-10-31

Similar Documents

Publication Publication Date Title
CN111967345B (en) A method to determine camera occlusion status in real time
US7742650B2 (en) Object detection in images
CN112669344A (en) Method and device for positioning moving object, electronic equipment and storage medium
CN106056079B (en) Occlusion detection method for image acquisition devices and facial features
CN111723644A (en) A method and system for occlusion detection in surveillance video
CN110599523A (en) ViBe ghost suppression method fused with interframe difference method
US20190156499A1 (en) Detection of humans in images using depth information
CN106157329B (en) Adaptive target tracking method and device
CN111444555B (en) Temperature measurement information display method and device and terminal equipment
CN113516609B (en) Split-screen video detection method and device, computer equipment and storage medium
CN111127358B (en) Image processing method, device and storage medium
CN113810611B (en) Method and device for data simulation of event camera
CN108229281B (en) Neural network generation method, face detection device and electronic equipment
CN112927262A (en) Camera lens shielding detection method and system based on video
CN105678737A (en) Digital image corner point detection method based on Radon transform
CN111160340B (en) A moving target detection method, device, storage medium and terminal equipment
CN106778822B (en) Image straight line detection method based on funnel transformation
CN114943729A (en) Cell counting method and system for high-resolution cell image
CN114998283B (en) Method and device for detecting lens shielding object
CN107507198B (en) Aircraft Image Detection and Tracking Method
Singh et al. Multi-level threshold based edge detector using logical operations
CN115205793A (en) Electric power machine room smoke detection method and device based on deep learning secondary confirmation
CN111027560B (en) Text detection method and related device
EP4096210A1 (en) Image exposure adjustment method and apparatus, device, and storage medium
CN114359183A (en) Image quality assessment method and equipment, determination method of lens occlusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant