
WO2018131729A1 - Method and system for detection of moving object in image using single camera - Google Patents

Method and system for detection of moving object in image using single camera

Info

Publication number
WO2018131729A1
WO2018131729A1 (application PCT/KR2017/000359)
Authority
WO
WIPO (PCT)
Prior art keywords
divided
image
optical flow
region
object detection
Prior art date
Application number
PCT/KR2017/000359
Other languages
French (fr)
Korean (ko)
Inventor
김정호
최병호
황영배
장성준
Original Assignee
전자부품연구원
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 전자부품연구원
Publication of WO2018131729A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/215Motion-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof


Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

Provided are a method and a system for detecting a moving object in an image using a single camera. An object detection method according to an embodiment of the present invention generates an optical flow image from an input image sequence, extracts segmented regions from the optical flow image, and generates motion regions from the extracted segmented regions. The invention can thereby improve object detection performance and can be applied to context-awareness technologies such as intelligent unmanned vehicles.

Description

Method and system for detecting a moving object in an image using a single camera

The present invention relates to an object detection method and, more particularly, to a method and system for determining and detecting a moving object from an image sequence acquired with a single camera.

To detect a moving object, either a stereo camera or a single camera may be used; the camera's own motion is computed and detection is performed from it.

To remain cost-competitive, a single-camera method is used, but techniques that estimate camera motion suffer reduced accuracy when many moving objects are present.

Accordingly, when a single camera is used, a way of detecting moving objects without estimating camera motion is needed.

The present invention has been devised to solve the above problems, and an object of the present invention is to provide an object detection method and system that improve accuracy when detecting moving objects with a single camera.

According to an embodiment of the present invention made to achieve the above object, an object detection method includes: generating an optical flow image from an input image sequence; extracting segmented regions from the optical flow image; and generating motion regions from the extracted segmented regions.

In the motion-region generation step, motion regions may be generated by merging or keeping segmented regions based on a color comparison between each segmented region and its neighboring segmented regions.

In the motion-region generation step, the color difference values between segmented regions may be compared with the difference value along their boundary to merge or keep the segmented regions.

In the motion-region generation step, the smaller a segmented region is, the more likely it is to be merged; the larger it is, the less likely.

In the motion-region generation step, motion regions may be generated by keeping or removing segmented regions based on their size and position in the optical flow image.

The size of a segmented region may be its height, and the position of a segmented region may be the y-coordinate of its lowest point.

The input image sequence may be generated using a single camera.

Meanwhile, an object detection system according to another embodiment of the present invention includes: a camera that generates an input image sequence; and a processor that generates an optical flow image from the input image sequence generated by the camera, extracts segmented regions from the optical flow image, and generates motion regions from the extracted segmented regions.

As described above, according to embodiments of the present invention, object detection performance can be improved with a single camera by combining an optical flow image, an image segmentation technique, and object detection based on the approximate size of moving objects.

In addition, embodiments of the present invention are applicable to intelligent driverless vehicles and to other technologies that must recognize such situations.

FIG. 1 is a flowchart provided to explain an object detection method according to an embodiment of the present invention;

FIG. 2 is a diagram illustrating an input image;

FIG. 3 is a diagram illustrating the optical flow image of FIG. 2;

FIG. 4 is a diagram illustrating the result of extracting segmented regions from an optical flow image;

FIGS. 5 to 8 are diagrams provided for further explanation of the post-processing of motion regions;

FIGS. 9 to 12 are diagrams illustrating results of detecting moving objects from single-camera images by a method according to an embodiment of the present invention; and

FIG. 13 is a block diagram of an object detection system according to another embodiment of the present invention.

Hereinafter, the present invention is described in more detail with reference to the drawings.

FIG. 1 is a flowchart provided to explain an object detection method according to an embodiment of the present invention. The object detection method according to an embodiment of the present invention accurately detects moving objects in an image sequence generated using a single camera.

To this end, the object detection method computes the displacement of pixels as optical flow from the image sequence acquired with a single camera, segments the image by the magnitude and direction of the flow to detect moving objects, and improves detection performance using the approximate size of moving objects.

Specifically, as shown in FIG. 1, the object detection method compares the input image (the current frame) with the previous frame of the image sequence generated using a single camera to produce an optical flow image (S110).
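As an illustration only, the displacement between two consecutive frames can be approximated by exhaustive block matching. The publication does not specify which optical flow algorithm is used in S110 (production systems typically use dense methods such as Farneback or TV-L1), so the function below is a hypothetical minimal sketch:

```python
def block_flow(prev, curr, block=2, search=2):
    """Estimate per-block displacement (dy, dx) by exhaustive block matching.

    prev, curr: 2D lists of grayscale values with identical shape.
    Returns {(row, col) of block top-left: (dy, dx)}.
    Brute force on purpose; illustrative only, not a real flow method.
    """
    h, w = len(prev), len(prev[0])
    flow = {}
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            best_sad, best_d = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    rr, cc = r + dy, c + dx
                    if rr < 0 or cc < 0 or rr + block > h or cc + block > w:
                        continue
                    # sum of absolute differences between the block in prev
                    # and the displaced candidate block in curr
                    sad = sum(
                        abs(prev[r + i][c + j] - curr[rr + i][cc + j])
                        for i in range(block)
                        for j in range(block)
                    )
                    if best_sad is None or sad < best_sad:
                        best_sad, best_d = sad, (dy, dx)
            flow[(r, c)] = best_d
    return flow
```

A real implementation would produce a dense per-pixel field; the per-block version merely shows the matching idea.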

In the optical flow image generated in step S110, the movement direction and movement amount of every pixel are represented as color. FIG. 2 illustrates an input image, and FIG. 3 the corresponding optical flow image.
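Flow fields are conventionally visualized with an HSV encoding in which hue carries the direction and value (or saturation) carries the magnitude. The publication does not spell out its exact encoding, so the mapping below is an assumed, minimal version:

```python
import math

def flow_to_hsv(dx, dy, max_mag):
    """Map one flow vector to an (hue, saturation, value) triple.

    Hue encodes direction in degrees (0..360), value encodes magnitude
    normalized by max_mag; saturation is fixed at 1.0 for simplicity.
    """
    angle = math.atan2(dy, dx)                    # -pi .. pi
    hue = (math.degrees(angle) + 360.0) % 360.0   # 0 .. 360
    mag = math.hypot(dx, dy)
    value = min(mag / max_mag, 1.0) if max_mag > 0 else 0.0
    return hue, 1.0, value
```

Under this mapping, pixels moving the same way get the same color, which is exactly the property the next step (S120) relies on.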

Next, adjacent pixels with the same color (that is, pixels judged to be moving) are grouped in the optical flow image generated in S110 to extract segmented regions (S120). FIG. 4 illustrates the result of extracting segmented regions from the optical flow image.
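Grouping adjacent same-color pixels amounts to connected-component labeling. The publication does not state the connectivity used or how flow colors are quantized, so the sketch below assumes 4-connectivity over already-quantized color labels:

```python
from collections import deque

def label_regions(img):
    """Group 4-connected pixels with identical values into regions.

    img: 2D list of hashable 'colors'. Returns a list of regions,
    each a list of (row, col) coordinates.
    """
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for r in range(h):
        for c in range(w):
            if seen[r][c]:
                continue
            # breadth-first flood fill from this seed pixel
            queue, region = deque([(r, c)]), []
            seen[r][c] = True
            while queue:
                y, x = queue.popleft()
                region.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and not seen[ny][nx] \
                            and img[ny][nx] == img[y][x]:
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            regions.append(region)
    return regions
```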

Then, motion regions are generated from the segmented regions obtained in S120 (S130). Motion-region generation in step S130 is performed by merging the segmented regions obtained in step S120 into groups or by removing/excluding some of them.

An efficient graph-based image segmentation technique is used to generate the motion regions. Specifically, each segmented region is made a node of a graph, neighboring segmented regions are connected by edges, and a color comparison determines whether two segmented regions are merged or kept separate.

To this end, the maximum color value within a segmented region is first computed with Equation 1 below.

[수학식 1][Equation 1]

Figure PCTKR2017000359-appb-I000001

Then, the minimum color difference value along the boundary between two neighboring segmented regions is computed with Equation 2 below.

[수학식 2][Equation 2]

Figure PCTKR2017000359-appb-I000002

Then, according to Equation 3 below, if the maximum color difference value, that is, the difference between the maximum color values inside the two segmented regions, is smaller than the minimum color difference value along their boundary, the two regions are kept separate; otherwise, they are merged into a single segmented region.

[수학식 3][Equation 3]

Figure PCTKR2017000359-appb-I000003

Figure PCTKR2017000359-appb-I000004

Figure PCTKR2017000359-appb-I000005

Here, k denotes a constant and τ denotes the number of pixels belonging to a segmented region.
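Equations 1 to 3 are published only as images (appb-I000001 to I000005) and are not reproduced in this text. The quantities described, namely a maximum color difference inside each region, a minimum color difference along the shared boundary, and a size-dependent term built from k and the pixel count τ, match the well-known graph-based segmentation criterion of Felzenszwalb and Huttenlocher; under that reading, a plausible but unverified reconstruction is:

```latex
% Plausible reconstruction; the original equations are images and may differ.
% Eq. 1: maximum internal color difference of region C (over its minimum spanning tree)
\mathrm{Int}(C) = \max_{e \in \mathrm{MST}(C,E)} w(e)

% Eq. 2: minimum color difference on the boundary between neighboring regions C_1, C_2
\mathrm{Dif}(C_1, C_2) = \min_{v_i \in C_1,\ v_j \in C_2,\ (v_i, v_j) \in E} w(v_i, v_j)

% Eq. 3: keep C_1 and C_2 separate iff the boundary difference exceeds the
% internal difference plus a slack k/\tau that grows as regions shrink
\mathrm{Dif}(C_1, C_2) > \min\Bigl(\mathrm{Int}(C_1) + \tfrac{k}{\tau(C_1)},\ \mathrm{Int}(C_2) + \tfrac{k}{\tau(C_2)}\Bigr)
```

The k/τ slack is what produces the size behavior described next: small regions get a large slack and merge easily.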

Accordingly, the smaller a segmented region is, the larger this term becomes and the higher the probability that the regions are merged; conversely, the larger a segmented region is, the lower the probability of merging.
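The merge decision for two neighboring regions can be sketched as follows, using a k/τ slack term as described above. The function name, the default k, and the argument layout are illustrative, not from the publication:

```python
def should_merge(int1, int2, boundary_min_diff, size1, size2, k=300.0):
    """Decide whether two neighboring regions merge.

    int1, int2: maximum internal color difference of each region (Eq. 1).
    boundary_min_diff: minimum color difference on their shared boundary
    (Eq. 2). size1, size2: pixel counts; k/size is the size-dependent
    slack, so small regions merge more readily.
    """
    m_int = min(int1 + k / size1, int2 + k / size2)
    # Keep separate when the boundary difference exceeds the internal
    # difference plus slack; merge otherwise (Eq. 3).
    return boundary_min_diff <= m_int
```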

Next, as a post-processing step on the motion regions, it is inferred whether the size of each motion region corresponds to the size of a real moving object, and motion regions judged not to be real moving objects are removed.

To do this, the size range of the real moving objects to be detected must first be defined. For example, to detect objects with heights from 1 m to 3 m, a virtual object of the corresponding height is projected into the image as shown in FIG. 5, the object height h in the image and the y-coordinate of its lowest point are computed, and their correlation is modeled.

The correlation between the y coordinate and h is computed from the linear relationship given in Equation 4 below. Real heights from 1 m to 3 m are projected into the image at intervals of about 10 cm (see FIG. 6) to fit this linear function: the matrix A is formed as below, eigenvectors and eigenvalues are computed via singular value decomposition (SVD), and the eigenvector with the largest eigenvalue yields the parameters a and b of the linear equation.
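The publication obtains a and b from an eigen-decomposition via SVD. A closed-form ordinary least-squares fit of the same line h = a·y + b is an equivalent way to illustrate the step; this is a simplification for clarity, not the exact procedure described:

```python
def fit_line(ys, hs):
    """Least-squares fit of h = a*y + b over paired samples (ys, hs)."""
    n = len(ys)
    sy, sh = sum(ys), sum(hs)
    syy = sum(y * y for y in ys)
    syh = sum(y * h for y, h in zip(ys, hs))
    # Normal-equation solution for a 2-parameter linear model
    denom = n * syy - sy * sy
    a = (n * syh - sy * sh) / denom
    b = (sh - a * sy) / n
    return a, b
```

Fitting one line for the 1 m projections and one for the 3 m projections gives the two bounds used in the filtering step.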

[수학식 4][Equation 4]

Figure PCTKR2017000359-appb-I000006

Accordingly, as shown in FIG. 7, the values h_min and h_max corresponding to a given y coordinate can be computed, and it is checked whether the height h of a segmented region falls within this range; if h lies outside the range, the region is judged not to be a real moving object and is removed.
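The size check can then be sketched as a per-region test against two fitted lines, one for the smallest real height (e.g., 1 m) and one for the largest (e.g., 3 m). All names and the example coefficients below are illustrative:

```python
def is_plausible_object(h, y, line_min, line_max):
    """Keep a region only if its image height h lies between the heights
    predicted for the smallest and largest real objects at its bottom
    y-coordinate.

    line_min, line_max: (a, b) pairs of the fitted lines h = a*y + b
    for the minimum and maximum real heights.
    """
    a0, b0 = line_min
    a1, b1 = line_max
    h_min = a0 * y + b0
    h_max = a1 * y + b1
    return h_min <= h <= h_max

def filter_regions(regions, line_min, line_max):
    """regions: list of (h, y) pairs; returns only the plausible ones."""
    return [r for r in regions
            if is_plausible_object(r[0], r[1], line_min, line_max)]
```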

FIG. 8 shows the result of removing, from the motion regions of FIG. 4, the regions that do not correspond to real moving objects.

So far, a method of generating an optical flow image from footage acquired with a single camera to detect moving objects, and of improving detection performance by verifying the sizes of real moving objects, has been described in detail through a preferred embodiment.

FIGS. 9 to 12 illustrate results of detecting moving objects from single-camera images with the method according to an embodiment of the present invention. They confirm that moving objects can be detected robustly despite the limitation of a single camera, namely having to estimate motion without distance information.

FIG. 13 is a block diagram of an object detection system according to another embodiment of the present invention. As shown in FIG. 13, the object detection system includes a camera 110, an image processor 120, and an output unit 130.

The camera 110 generates an image sequence as a single-camera system, and the image processor 120 detects moving objects from the single-camera images using the algorithm shown in FIG. 1.

The output unit 130 can be any of various means for outputting/storing the detection result for moving objects, such as a display, an interface, or a memory.

Meanwhile, the technical idea of the present invention can of course also be applied to a computer-readable recording medium containing a computer program that performs the functions of the apparatus and method according to this embodiment. The technical idea according to various embodiments of the present invention may be implemented as computer-readable code recorded on a computer-readable recording medium, which can be any data storage device that can be read by a computer, for example a ROM, RAM, CD-ROM, magnetic tape, floppy disk, optical disk, or hard disk drive. Computer-readable code or programs stored on such a medium may also be transmitted over a network connecting computers.

While preferred embodiments of the present invention have been shown and described above, the invention is not limited to the specific embodiments described; various modifications may be made by those of ordinary skill in the art without departing from the gist of the invention as claimed, and such modifications should not be understood separately from the technical spirit or outlook of the present invention.

Claims (8)

1. An object detection method comprising: generating an optical flow image from an input image sequence; extracting segmented regions from the optical flow image; and generating motion regions from the extracted segmented regions.

2. The method of claim 1, wherein the motion-region generation step generates motion regions by merging or keeping segmented regions through a color comparison between each segmented region and its neighboring segmented regions.

3. The method of claim 2, wherein the motion-region generation step compares the color difference values between segmented regions with the difference value along their boundary to merge or keep the segmented regions.

4. The method of claim 3, wherein the motion-region generation step applies a higher probability of merging to smaller segmented regions and a lower probability of merging to larger segmented regions.

5. The method of claim 1, wherein the motion-region generation step generates motion regions by keeping or removing segmented regions based on their size and position in the optical flow image.

6. The method of claim 5, wherein the size of a segmented region is its height, and the position of a segmented region is the y-coordinate of its lowest point.

7. The method of claim 1, wherein the input image sequence is generated using a single camera.

8. An object detection system comprising: a camera that generates an input image sequence; and a processor that generates an optical flow image from the input image sequence generated by the camera, extracts segmented regions from the optical flow image, and generates motion regions from the extracted segmented regions.
PCT/KR2017/000359 2017-01-11 2017-01-11 Method and system for detection of moving object in image using single camera WO2018131729A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020170003980A KR102336284B1 (en) 2017-01-11 2017-01-11 Moving Object Detection Method and System with Single Camera
KR10-2017-0003980 2017-01-11

Publications (1)

Publication Number Publication Date
WO2018131729A1 true WO2018131729A1 (en) 2018-07-19

Family

ID=62840539

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2017/000359 WO2018131729A1 (en) 2017-01-11 2017-01-11 Method and system for detection of moving object in image using single camera

Country Status (2)

Country Link
KR (1) KR102336284B1 (en)
WO (1) WO2018131729A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110007107B (en) * 2019-04-02 2021-02-09 上海交通大学 An Optical Flow Sensor Integrating Different Focal Length Cameras
WO2020242179A1 (en) * 2019-05-29 2020-12-03 (주) 애니펜 Method, system and non-transitory computer-readable recording medium for providing content

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110118376A (en) * 2010-04-23 2011-10-31 동명대학교산학협력단 Security vehicle detection system using optical flow
KR20130060274A (en) * 2010-08-02 2013-06-07 페킹 유니버시티 Representative motion flow extraction for effective video classification and retrieval
KR20130075636A (en) * 2011-12-27 2013-07-05 중앙대학교 산학협력단 Apparatus and method for automatic object segmentation for background composition
KR20150089677A (en) * 2014-01-28 2015-08-05 엘지이노텍 주식회사 Camera system, calibration device and calibration method
KR20150113751A (en) * 2014-03-31 2015-10-08 (주)트라이큐빅스 Method and apparatus for acquiring three-dimensional face model using portable camera

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004088599A (en) * 2002-08-28 2004-03-18 Toshiba Corp Image monitoring apparatus and method therefor
JP2008226261A (en) * 2008-04-07 2008-09-25 Toshiba Corp Object detection method
KR101141936B1 (en) * 2010-10-29 2012-05-07 동명대학교산학협력단 Method of tracking the region of a hand based on the optical flow field and recognizing gesture by the tracking method

Also Published As

Publication number Publication date
KR20180082739A (en) 2018-07-19
KR102336284B1 (en) 2021-12-08

Similar Documents

Publication Publication Date Title
WO2022050473A1 (en) Apparatus and method for estimating camera pose
US11928800B2 (en) Image coordinate system transformation method and apparatus, device, and storage medium
CN106952269B (en) Near-neighbor reversible video foreground object sequence detection and segmentation method and system
WO2015126031A1 (en) Person counting method and device for same
WO2019169884A1 (en) Image saliency detection method and device based on depth information
WO2013048160A1 (en) Face recognition method, apparatus, and computer-readable recording medium for executing the method
CN109033972A (en) A kind of object detection method, device, equipment and storage medium
CN111382637B (en) Pedestrian detection tracking method, device, terminal equipment and medium
CN106997478B (en) Salient object detection method in RGB-D images based on saliency center prior
KR101963404B1 (en) Two-step optimized deep learning method, computer-readable medium having a program recorded therein for executing the same and deep learning system
WO2017150899A9 (en) Object reidentification method for global multi-object tracking
WO2019147024A1 (en) Object detection method using two cameras having different focal distances, and apparatus therefor
CN105740751A (en) Object detection and identification method and system
CN114072839A (en) Hierarchical motion representation and extraction in monocular still camera video
CN107610177A (en) A kind of method and apparatus that characteristic point is determined in synchronous superposition
KR20230166840A (en) Method for tracking object movement path based on artificial intelligence
WO2023050810A1 (en) Target detection method and apparatus, electronic device, storage medium, and computer program product
WO2014185691A1 (en) Apparatus and method for extracting high watermark image from continuously photographed images
CN117593548A (en) Visual SLAM method for removing dynamic feature points based on weighted attention mechanism
WO2018131729A1 (en) Method and system for detection of moving object in image using single camera
CN113129249B (en) Depth video-based space plane detection method and system and electronic equipment
WO2016104842A1 (en) Object recognition system and method of taking account of camera distortion
Bikmullina et al. Stand for development of tasks of detection and recognition of objects on image
JP5027201B2 (en) Telop character area detection method, telop character area detection device, and telop character area detection program
CN114399729A (en) Monitoring object movement identification method, system, terminal and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17890997

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17890997

Country of ref document: EP

Kind code of ref document: A1
