
WO2019175620A1 - View based object detection in images

View based object detection in images

Info

Publication number
WO2019175620A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
view
different
version
pixel
Prior art date
Application number
PCT/IB2018/051592
Other languages
French (fr)
Inventor
Pratik Sharma
Original Assignee
Pratik Sharma
Priority date
Filing date
Publication date
Application filed by Pratik Sharma
Priority to PCT/IB2018/051592
Publication of WO2019175620A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454 Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/17 Terrestrial scenes taken from planes or by drones
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/24 Character recognition characterised by the processing or recognition method
    • G06V30/248 Character recognition characterised by the processing or recognition method involving plural approaches, e.g. verification by template match; Resolving confusion among similar patterns, e.g. "O" versus "Q"
    • G06V30/2504 Coarse or fine approaches, e.g. resolution of ambiguities or multiscale approaches

Definitions

  • The first step in producing the sharpened version of an image is to blur it slightly (computing each pixel from its neighbouring pixels); the original image and the blurred version are then compared one pixel at a time. If the original pixel is brighter than the corresponding pixel in the blurred version, it is brightened further; if it is darker, it is darkened further. The resulting image is the sharpened version of the original (see the sketch after this list). Region boundaries and edges are closely related: since there is often a sharp adjustment in intensity at region boundaries, we use these boundaries to segment the image into different objects.
  • An Unmanned Aerial Vehicle, an aircraft with no pilot on board, can be a remote-controlled aircraft (e.g. flown by a pilot at a ground control station) or can fly autonomously based on preprogrammed flight plans or more complex dynamic automation systems. Unmanned Aerial Vehicles are used for detecting various objects and attacking infiltrated ground targets.
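
A minimal sketch of the sharpening step in the first definition above, assuming NumPy and SciPy are available; the `sharpen` name and the `blur_sigma` and `amount` parameters are illustrative choices, not part of the patent text:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpen(image, blur_sigma=1.0, amount=1.0):
    """Sharpen a grayscale image by comparing it to a blurred copy of itself."""
    img = np.asarray(image, dtype=np.float64)
    # Step 1: blur slightly, so each pixel takes its neighbours into account.
    blurred = gaussian_filter(img, sigma=blur_sigma)
    # Step 2: compare pixel by pixel. Where the original is brighter than the
    # blurred copy, the difference is positive and the pixel is brightened
    # further; where it is darker, it is darkened further.
    sharpened = img + amount * (img - blurred)
    return np.clip(sharpened, 0, 255).astype(np.uint8)
```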

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Remote Sensing (AREA)
  • Artificial Intelligence (AREA)
  • Astronomy & Astrophysics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Image Analysis (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The first step in segmenting an image into different objects is to produce a sharpened version of it: the image is blurred slightly, and the original image and the blurred version are then compared one pixel at a time. If the original pixel is brighter than the corresponding pixel in the blurred version, it is brightened further; if it is darker, it is darkened further. The resulting image is the sharpened version of the original, with thick edges along which it can be segmented into different objects. Each object then presents different salient features in different views, so from the salient features detected for an object we can also narrow down the view of the detected object, which helps in performing Object Recognition.
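
A minimal sketch of the segmentation the abstract describes, assuming NumPy and SciPy: the image is sharpened, pixels with a sharp intensity adjustment are marked as edges, and each edge-bounded region is labelled as one object. The `edge_threshold` value and the function name are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def segment_by_edges(image, edge_threshold=30.0):
    """Label each edge-bounded region of a grayscale image as one object."""
    img = np.asarray(image, dtype=np.float64)
    # Sharpen: add back the difference between the image and a slightly
    # blurred copy of itself, thickening the edges.
    sharp = img + (img - ndimage.gaussian_filter(img, sigma=1.0))
    # Intensity adjusts sharply at region boundaries, so a large gradient
    # magnitude marks an edge pixel.
    gy, gx = np.gradient(sharp)
    edges = np.hypot(gx, gy) > edge_threshold
    # Non-edge pixels form object interiors; each connected interior region
    # is treated as a separate object.
    labels, num_objects = ndimage.label(~edges)
    return labels, num_objects
```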

Description

View Based Object Detection in Images
In this invention we have different images, each consisting of different objects. We can perform edge detection and segment an image into its different objects by sharpening the edges of the image. The first step in producing the sharpened version of an image is to blur it slightly (computing each pixel from its neighbouring pixels); the original image and the blurred version are then compared one pixel at a time. If the original pixel is brighter than the corresponding pixel in the blurred version, it is brightened further; if it is darker, it is darkened further. The resulting image is the sharpened version of the original. Region boundaries and edges are closely related: since there is often a sharp adjustment in intensity at region boundaries, we use these boundaries to segment the image into different objects. Each object will present different salient features in different views, such as the top view, left side view, right side view, rear view and bottom view; hence, from the salient features detected for an object we can also narrow down the view of the detected object, which helps us in performing Object Recognition. The above technique could be used in an Unmanned Aerial Vehicle, an aircraft with no pilot on board that can be remote controlled (e.g. flown by a pilot at a ground control station) or can fly autonomously based on preprogrammed flight plans or more complex dynamic automation systems. Unmanned Aerial Vehicles are used for detecting various objects and attacking infiltrated ground targets.
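
A minimal sketch of narrowing down the view of a detected object from its salient features, assuming OpenCV is available and that one grayscale reference image per view (top, left side, right side, rear, bottom) has been collected in advance. ORB features and the majority-match voting rule are illustrative assumptions, not techniques named in the patent text:

```python
import cv2

def classify_view(object_img, reference_views):
    """Return the name of the view whose reference image shares the most
    salient features with the detected object.

    reference_views: dict mapping a view name (e.g. "top", "rear") to a
    grayscale reference image of the object seen from that view.
    """
    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    _, query_desc = orb.detectAndCompute(object_img, None)
    if query_desc is None:
        return None  # no salient features detected
    best_view, best_score = None, -1
    for view_name, ref_img in reference_views.items():
        _, ref_desc = orb.detectAndCompute(ref_img, None)
        if ref_desc is None:
            continue
        # The view whose reference shares the most matched features with the
        # detected object is taken as the object's view.
        score = len(matcher.match(query_desc, ref_desc))
        if score > best_score:
            best_view, best_score = view_name, score
    return best_view
```

For example, `classify_view(crop, {"top": top_ref, "rear": rear_ref})` would return `"top"` or `"rear"`, whichever reference shares more matched salient features with the detected object crop.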

Claims

Following is the claim for this invention:
1. In this invention we have different images, each consisting of different objects. We can perform edge detection and segment an image into its different objects by sharpening the edges of the image. The first step in producing the sharpened version of an image is to blur it slightly (computing each pixel from its neighbouring pixels); the original image and the blurred version are then compared one pixel at a time. If the original pixel is brighter than the corresponding pixel in the blurred version, it is brightened further; if it is darker, it is darkened further. The resulting image is the sharpened version of the original. Region boundaries and edges are closely related: since there is often a sharp adjustment in intensity at region boundaries, we use these boundaries to segment the image into different objects. Each object will present different salient features in different views, such as the top view, left side view, right side view, rear view and bottom view; hence, from the salient features detected for an object we can also narrow down the view of the detected object, which helps us in performing Object Recognition. The above technique could be used in an Unmanned Aerial Vehicle, an aircraft with no pilot on board that can be remote controlled (e.g. flown by a pilot at a ground control station) or can fly autonomously based on preprogrammed flight plans or more complex dynamic automation systems. Unmanned Aerial Vehicles are used for detecting various objects and attacking infiltrated ground targets. The above novel technique of doing View Based Object Detection in images is the claim for this invention.

Priority Applications (1)

Application Number: PCT/IB2018/051592 (WO2019175620A1, en)
Priority Date: 2018-03-11
Filing Date: 2018-03-11
Title: View based object detection in images

Publications (1)

Publication Number: WO2019175620A1 (en)
Publication Date: 2019-09-19

Family

Family ID: 67907466

Family Applications (1)

Application Number: PCT/IB2018/051592 (WO2019175620A1, en)
Priority Date: 2018-03-11
Filing Date: 2018-03-11
Title: View based object detection in images

Country Status (1)

WO: WO2019175620A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
  • CA2302759A1 (en) *, priority date 1997-11-05, published 1999-05-14, British Aerospace Public Limited Company: Automatic target recognition apparatus and process
  • US8391645B2 (en) *, priority date 2003-06-26, published 2013-03-05, DigitalOptics Corporation Europe Limited: Detecting orientation of digital images using face detection information
  • EP1835460A1 (en) *, priority date 2005-01-07, published 2007-09-19, Sony Corporation: Image processing system, learning device and method, and program

Similar Documents

Publication Title
CN112417943B Advanced Driver Assistance System (ADAS) operation with algorithmic skyline detection
CA2960240C (en) Method and system for aligning a taxi-assist camera
EP3224808B1 (en) Method and system for processing a sequence of images to identify, track, and/or target an object on a body of water
US11073389B2 (en) Hover control
US20180122051A1 (en) Method and device for image haze removal
US9892646B2 (en) Context-aware landing zone classification
EP3101502A3 (en) Autonomous unmanned aerial vehicle decision-making
US10303943B2 (en) Cloud feature detection
Nagarani et al. Unmanned Aerial vehicle’s runway landing system with efficient target detection by using morphological fusion for military surveillance system
US10853969B2 (en) Method and system for detecting obstructive object at projected locations within images
CN114842359B (en) Method for detecting autonomous landing runway of fixed-wing unmanned aerial vehicle based on vision
CA3091897A1 (en) Image processing device, flight vehicle, and program
Faheem et al. Uav emergency landing site selection system using machine vision
CN108257179B (en) Image processing method
US10210389B2 (en) Detecting and ranging cloud features
WO2019175620A1 (en) View based object detection in images
Ogawa et al. Automated counting wild birds on UAV image using deep learning
US20200283163A1 (en) Flight vision system and method for presenting images from the surrounding of an airborne vehicle in a flight vision system
Ruf et al. Enhancing automated aerial reconnaissance onboard UAVs using sensor data processing-characteristics and pareto front optimization
US10872398B2 (en) Apparatus and method for removing haze from image using fuzzy membership function, and computer program for performing the method
Dudek et al. Cloud detection system for UAV sense and avoid: discussion of suitable algorithms
CN106506944B (en) Image tracking method and device for unmanned aerial vehicle
Singh et al. Investigating feasibility of target detection by visual servoing using UAV for oceanic applications
US9911059B1 (en) Process for recovering an unmanned vehicle
Sung et al. Onboard pattern recognition for autonomous UAV landing

Legal Events

Date Code Title Description

121 EP: The EPO has been informed by WIPO that EP was designated in this application
Ref document number: 18910171
Country of ref document: EP
Kind code of ref document: A1

NENP: Non-entry into the national phase
Ref country code: DE

122 EP: PCT application non-entry in European phase
Ref document number: 18910171
Country of ref document: EP
Kind code of ref document: A1
