US20030020808A1 - Automatic zone monitoring - Google Patents
- Publication number
- US20030020808A1 (application US09/484,096 / US48409600A)
- Authority
- US
- United States
- Prior art keywords
- moving
- monitored
- intersection
- monitoring system
- camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
- Alarm Systems (AREA)
Abstract
A monitoring system using two or more cameras to monitor a space in 3 dimensions is disclosed. One or more virtual 3-D monitored volumes relating to dangerous or restricted areas are constructed, preferably as CSG objects, within this space. By detecting an object moving within the view of the cameras, the system can determine if a moving object intersects any monitored volumes and take appropriate action.
Description
- The present invention relates to the development of an automatic zone monitoring system for use in the identification of zone infringements in applications such as security, health & safety, working practices analysis and traffic analysis.
- The use of video cameras in monitoring behaviour is increasing. In particular video cameras are now routinely used for security purposes and, whilst the majority of these systems are monitored manually, a number of systems have been developed to automatically analyze the resultant video feed. Much work has been done by the Universities of Leeds and Reading in the United Kingdom on the recognition and extraction of objects in video. The University of Leeds has developed algorithms to detect deformable objects, such as humans, using Hidden Markov Model techniques to help in the identification of pixel clusters as humans, and to predict their behaviour when information loss occurs, e.g. due to occlusion by foreground objects. The University of Reading has done research in the area of identifying rigid structures in video, such as vehicles, in a similar way.
- In any case, both establishments have demonstrated the ability to identify objects by colour coding a bounding box around each object. See, for example, http://www.scs.leeds.ac.uk/imv/index.html.
- Typically these security systems involve static cameras monitoring regions in the view of a single camera. For example, anyone approaching a safe in a bank might trigger an alarm. These systems have no inherent understanding of the 3-D nature of the real world, so they cannot distinguish between a small object close to the camera and a large object far away. This is not a problem for applications which closely monitor high-value or dangerous items that are not moving around.
- In dynamic situations, such as may be found on building sites, a single wide-angle camera may be covering a large zone, and 2-D-aware systems may well trigger false alarms when, for example, a crane lifts a load into the air and the load (which may be close to the camera, and perfectly safe) visually lines up with a more distant object being monitored.
- Surveillance systems which rely on manual analysis face a further problem in that they do not support the determination of more routine behavioral patterns and do not facilitate changes in the state of prohibited areas.
- For example, in a health and safety application there may be regular occurrences of an employee passing through a danger area due to poor site design. Alternatively, the volume of the prohibited region may vary according to the state of dangerous equipment.
- The present invention provides an automatic zone monitoring system comprising: means for capturing live video using a plurality of video cameras; and processing means connected to said video cameras comprising: means for automatically identifying moving objects within the field of view of said video cameras; means for defining one or more 3 dimensional monitored volumes; and means for detecting the intersection between said moving objects and the or each monitored volume.
- Embodiments of the invention will now be described with reference to the accompanying drawings in which:
- FIG. 1 is an aerial map of a building site;
- FIG. 2 is a view from camera 1 of FIG. 1 including a circle designating an object to be monitored;
- FIG. 3 is a view from camera 2 of FIG. 1;
- FIG. 4 illustrates the circle of FIG. 2 mapping to a 3-D infinite cone;
- FIG. 5 illustrates the intersection of 2 danger cones to define a monitored volume;
- FIG. 6 shows a person, bounded by a box, appearing to approach the monitored volume of FIG. 5;
- FIG. 7 shows an aerial view of a moving pyramid defined around the person of FIG. 6;
- FIG. 8 shows an aerial view of a false-alarm due to a single-camera view; and
- FIG. 9 shows a 3-D moving volume formed from the 2 views of cameras 1 and 2 of FIG. 1.
- The present invention provides a monitoring system using two or more cameras to monitor a space in 3 dimensions. One or more virtual 3-D monitored volumes, relating to dangerous or restricted areas, are constructed within this space. By detecting an object moving within the view of the cameras, the system can determine if a moving object intersects any monitored volumes and take appropriate action.
- Referring now to FIG. 1, a site including a house and a cement mixer is being monitored by Camera 1 and Camera 2. Images from the cameras are fed back to a computer system where the images may be simultaneously displayed within respective windows on a computer display in a conventional manner. The connection between the cameras and the computer system may be by any conventional means, ranging from simple direct cabling, through wireless RF or IR connections, to a network connection where the camera and the computer are connected, typically via a TCP/IP link across a LAN, the Internet or an intranet, and in fact may be physically quite remote from one another.
- The field of view for Camera 1, a wide-angle camera with a fixed focal-length lens, is a cone radiating from the camera and appears as a triangle in the figure. Nonetheless, it is possible to use catadioptric lens cameras which capture 360 degree panoramas in a hemispherical space. The advantage of a catadioptric lens is wider coverage of a zone with less camera equipment. For more information on catadioptric lenses, see http://www.eecs.lehigh.edu/~tboult/VSAM/remote-reality.html.
- In a first embodiment, a monitored volume 16 is defined by a user drawing a boundary around an object, eg a cement mixer, in at least two of the camera views in which the object appears. FIG. 2 shows a manually-selected circle 10 superimposed on the image for Camera 1. The 2-D circle 10 corresponds to an infinite cone 14 in 3-D, and an infinite number of possible spheres 12 can be located in the cone 14, FIG. 4. FIG. 3 shows the view from Camera 2 including a second circle 10′ superimposed on the camera image, and this in turn defines a cone 14′ emanating from Camera 2. Finally, FIG. 5 shows the monitored volume 16, which is the intersection of the two cones 14, 14′ corresponding to the circles 10, 10′. The boundaries 10, 10′ in the various views are linked by allocating a common identifier to them, eg “Cement Mixer”. It may be seen that the monitored volume 16, which represents a sphere around the mixer, is completely contained in the intersection-volume of the two cones 14, 14′; this is the best approximation to the desired spherical volume that can be achieved using only two camera views.
- It will be seen that boundaries such as the circles 10, 10′ for the monitored volume 16 may be defined in any number of camera views. It is also not necessary, although it is desirable, for a monitored volume to be defined for every camera within whose field of view it lies. In the first embodiment, if a boundary for a monitored volume is not defined in a camera view, an object moving in that view will never cause a conflict with that volume, although a conflict may arise from camera views in which the monitored volume has been defined.
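- By way of illustration, this cone construction can be expressed as a point-membership test: a world point lies inside cone 14 exactly when its projection falls inside circle 10, and it lies in the monitored volume when this holds in every view with a drawn boundary. The following is a minimal Python sketch, assuming idealized pinhole cameras with known 3x4 projection matrices; the class, the circle format and the numeric values are illustrative, not taken from the specification.

```python
import numpy as np

class Camera:
    """A minimal pinhole camera: [u, v, w]^T = P [X, Y, Z, 1]^T, pixel = (u/w, v/w)."""
    def __init__(self, P):
        self.P = np.asarray(P, dtype=float)  # 3x4 projection matrix

    def project(self, X):
        x = self.P @ np.append(X, 1.0)
        return None if x[2] <= 0 else x[:2] / x[2]  # None: behind the camera

def inside_cone(camera, circle, X):
    """True if world point X lies inside the infinite cone defined by a
    user-drawn circle (cx, cy, r) in this camera's image."""
    uv = camera.project(X)
    if uv is None:
        return False
    cx, cy, r = circle
    return (uv[0] - cx) ** 2 + (uv[1] - cy) ** 2 <= r ** 2

def inside_monitored_volume(views, X):
    """The monitored volume is the intersection of the cones: X must fall
    inside the drawn circle in every view where one has been defined."""
    return all(inside_cone(cam, circle, X) for cam, circle in views)

# Two synthetic cameras looking at a point five metres ahead of Camera 1.
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], float)
cam1 = Camera(np.hstack([K, np.zeros((3, 1))]))              # at the origin
cam2 = Camera(K @ np.hstack([np.eye(3), [[-2], [0], [0]]]))  # 2 m to the side
views = [(cam1, (320, 240, 40)), (cam2, (0, 240, 40))]
print(inside_monitored_volume(views, np.array([0, 0, 5.0])))  # True
```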
- FIG. 6 shows a display where the computer has identified changing pixels as a moving object, in this case a person 60. In the manner disclosed by the Universities of Leeds and Reading, the system constructs a bounding box 18 around the set of moving pixels detected in the camera image, and this corresponds to a rectangular moving pyramid 20 projected from the camera viewpoint out into the 3-D real world, FIG. 7.
- When this rectangular pyramid 20 intersects a 3-D monitored volume 16, i.e. when the bounding box 18 intersects the circle 10 in the camera image, an alarm condition may be set. It should be seen, however, that Camera 1 alone cannot determine where a person is located in the pyramid 20. FIG. 8 shows that if the person 60 moves to the right, the moving pyramid 20 appears to intersect the monitored volume 16 at hatched area 22. A system relying on Camera 1 alone would consider that the person is in danger and, if it were the only decider, would trigger an alarm. In 2-D systems, without employing complex heuristics, this would be a false alarm. Thus, using only a single camera is more likely to generate false positives, as the tracked object could be at any position in the moving pyramid 20.
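- The specification relies on the Leeds/Reading techniques for locating the moving pixels; as a stand-in, the sketch below uses OpenCV's MOG2 background subtractor, a different but readily available method, to produce bounding boxes 18 around clusters of changing pixels. The thresholds and the camera source are illustrative assumptions.

```python
import cv2

# Background subtraction stands in here for the detection methods cited above.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

def moving_bounding_boxes(frame, min_area=500):
    """Return bounding boxes around clusters of changing pixels."""
    mask = subtractor.apply(frame)
    # Drop shadow pixels (MOG2 marks them 127) and speckle noise.
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]

cap = cv2.VideoCapture(0)              # or a file path / camera URL
while True:
    ok, frame = cap.read()
    if not ok:
        break
    for (x, y, w, h) in moving_bounding_boxes(frame):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("camera", frame)
    if cv2.waitKey(1) == 27:           # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```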
- FIG. 9 shows a quadrilateral 24 constrained by the moving pyramids 20 and 20′ of Cameras 1 and 2 respectively. This intersection of two moving pyramids from two different cameras defines a constrained moving volume which locates the person accurately in 3-D. This new, smaller moving volume 24 does NOT intersect the monitored volume 16 of the cement mixer, so a false alarm is avoided.
- It will be seen that a moving pyramid 20, 20′ can be constructed using any 2-D closed shape to define its cross-section. Nonetheless, rectangular cross-sections have the advantage of computational simplicity. It should also be seen that the monitored volume 16 need not be a simple sphere; rather, it can be a more complex shape defined by the intersection of pyramids projected from more complex boundaries 10, 10′ defined in the camera views.
- Nonetheless, it should be seen that in the first embodiment the system checks for apparent boundary infringements using each camera independently of the other, and raises an alarm if both cameras appear to show a moving object infringing the same identified monitored volume, for example the “Cement Mixer”. No correlation is made between the camera views, and it is possible to incorrectly relate boundaries from one view to another, causing the system to produce spurious results. Neither is it possible to relate moving objects from one camera view to another, so again the system may return false or spurious responses.
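- A minimal sketch of this first-embodiment decision logic in Python, assuming per-view circles keyed by a common identifier such as “Cement Mixer”; the data layout and the requirement of at least two corroborating views are illustrative assumptions.

```python
def circle_intersects_box(circle, box):
    """2-D test: does circle (cx, cy, r) overlap axis-aligned box (x, y, w, h)?"""
    cx, cy, r = circle
    x, y, w, h = box
    nx = min(max(cx, x), x + w)   # nearest point of the box to the
    ny = min(max(cy, y), y + h)   # circle centre
    return (cx - nx) ** 2 + (cy - ny) ** 2 <= r ** 2

def first_embodiment_alarm(boundaries, detections, volume_id):
    """boundaries: {camera: {volume_id: circle}} drawn by the user.
    detections: {camera: [bounding boxes]} from motion detection.
    Alarm only if every camera holding a boundary for volume_id sees an
    apparent infringement of it."""
    cameras = [cam for cam, b in boundaries.items() if volume_id in b]
    if len(cameras) < 2:
        return False              # need at least two corroborating views
    return all(
        any(circle_intersects_box(boundaries[cam][volume_id], box)
            for box in detections.get(cam, []))
        for cam in cameras
    )
```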
- To enable simpler definition of even complex monitored volumes which are automatically usable in any camera view in which they are visible and to allow for greater resolution of possible false or spurious responses, a second embodiment is provided. In this embodiment, monitored volumes are constructed using Constructive Solid Geometry (CSG) techniques, where a monitored volume is defined as the CSG union/intersection/difference of primitive CSG objects, so allowing the construction of extremely complex shapes, if desired. CSG descriptions may be found at, for example, http://www.bath.ac.uk/˜ensab/G_mod/FYGM/gm.htm. A monitored volume may thus take many different forms depending on the reason for its definition. For example, it may be defined as a parallelepiped surrounding a dangerous-place, eg a piece of machinery with revolving parts.
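- One common way to realise such CSG combinations without explicit boundary meshes is to treat each solid as a point-membership predicate and compose the predicates with boolean operators. The sketch below, with illustrative primitives and coordinates, follows that approach; it evaluates membership rather than constructing explicit surfaces.

```python
import numpy as np

# CSG by point membership: a solid is a function point -> bool.
def sphere(center, radius):
    c = np.asarray(center, float)
    return lambda p: np.dot(p - c, p - c) <= radius ** 2

def box(lo, hi):  # axis-aligned parallelepiped
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    return lambda p: bool(np.all(p >= lo) and np.all(p <= hi))

def union(a, b):        return lambda p: a(p) or b(p)
def intersection(a, b): return lambda p: a(p) and b(p)
def difference(a, b):   return lambda p: a(p) and not b(p)

# Example: a spherical zone around the mixer with a safe corridor cut out.
zone = difference(sphere((10, 50, 10), 5),
                  box((8, 45, 0), (12, 47, 20)))
print(zone(np.array([10.0, 52.0, 10.0])))   # True: inside the sphere
print(zone(np.array([10.0, 46.0, 10.0])))   # False: inside the corridor
```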
- The second embodiment, however, requires a world coordinate system enabling CSG objects to be defined in such coordinates, rather than 2-D camera coordinates, so that monitored volumes may be defined and moving pyramids, also defined as CSG objects, may be checked for possible intersection with such monitored volumes.
- The characteristics of each camera also need to be defined in world coordinates. Camera characteristics typically involve up to 8 unknowns:
- the X,Y,Z coordinates of the origin of a camera viewpoint;
- its geometry i.e. its focal length (possibly different in transverse directions for an asymmetric lens); and
- its alignment i.e. rotation about the X, Y and Z axes.
- These characteristics can be determined in a conventional manner if the 3-D world coordinates of four non-coplanar reference-points RP1 . . . RP4 which appear within the field-of-view of a camera are known, FIG. 1. Alternatively, knowing the camera origin in 3-D, and the world coordinates of two reference-points in the field-of-view of the camera, similar techniques can be used to calculate the remaining camera geometry and alignment characteristics.
- It should be seen that there is no need for the 3-D coordinates to be ultra-precise, because the system can be corrected to cope with small errors by manually adjusting the camera-characteristics/CSG coordinates until the predicted screen-position of the reference-points matches the actual on-screen position to pixel-accuracy.
- The reference points can be beacons of a known shape or colour which are placed at locations of known coordinates and appear in each of the camera views to be automatically identified by conventional image processing techniques. For example, U.S. Pat. No. 5,579,471 discloses a system for querying images by content which could usefully be incorporated in the present invention. The beacons may even be adapted to emit infra-red light, perhaps even modulated, which can be detected by band-pass filtering the camera image to locate the camera image X,Y coordinates of the discrete infra-red sources.
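- Assuming a camera fitted with an IR-pass filter, so that the beacons appear as near-saturated blobs, the band-pass idea can be approximated by a simple brightness threshold followed by connected-component centroids; the threshold and size values in this sketch are illustrative.

```python
import cv2
import numpy as np

def locate_beacons(ir_image, min_brightness=240, min_area=4):
    """Find image (x, y) centroids of bright infra-red beacon blobs in a
    single-channel image captured through an IR-pass filter."""
    _, binary = cv2.threshold(ir_image, min_brightness, 255, cv2.THRESH_BINARY)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    # Label 0 is the background; keep blobs above a minimal size.
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]

frame = np.zeros((240, 320), np.uint8)
cv2.circle(frame, (100, 80), 3, 255, -1)   # a synthetic beacon
print(locate_beacons(frame))               # approximately [(100.0, 80.0)]
```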
- Alternatively, the computer system can be adapted to allow a user to manually define the reference points simply by identifying a location of known coordinates in each of the camera views in which the location appears. Thus, the user could pick out one or more corners of the house as well as any other features at known locations in each of the camera views. In any case, it should be seen that once the camera has been characterised/calibrated, it doesn't matter if the real-world feature moves from the reference point, because the system should have already used the information to calibrate the camera.
- It should be seen that the world coordinates need not be universal, where locations are taken for example from GPS systems, and in fact may be site based, where for example, the entrance to the site is the origin of the coordinate system and the units of the system are in metres rather than degrees of latitude and longitude. Nonetheless, any system based on this second embodiment will work as long as the camera characteristics, the moving pyramids and the monitored volumes are defined in the same coordinates.
- In the present example, the reference points have world coordinates, as follows:
- (XRP1, YRP1, ZRP1) . . . (XRP4, YRP4, ZRP4)
- and corresponding camera image coordinates:
- Camera nn ((XRP1, YRP1) . . . (XRP4, YRP4))
- This defines a mapping between world coordinates and the screen coordinates for each Camera nn.
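- The specification leaves the fitting procedure to conventional techniques; one standard choice is the direct linear transform (DLT), which recovers the full 3x4 world-to-screen projection matrix. Note that the general DLT needs six or more correspondences, whereas the 8-unknown camera model described above can make do with the four reference points; the sketch below is therefore an illustration of the mapping, not the patent's exact calibration.

```python
import numpy as np

def dlt_projection_matrix(world_pts, image_pts):
    """Estimate the 3x4 matrix P with [u, v, 1]^T ~ P [X, Y, Z, 1]^T from
    six or more world/image correspondences (direct linear transform)."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    return vt[-1].reshape(3, 4)        # null vector = flattened P

def project(P, point):
    """World coordinates -> screen coordinates for this camera."""
    x = P @ np.append(point, 1.0)
    return x[:2] / x[2]

# Round-trip check with a synthetic camera and non-coplanar points.
P_true = np.array([[800, 0, 320, 0], [0, 800, 240, 0], [0, 0, 1, 0]], float)
world = [(1, 1, 5), (-1, 2, 6), (2, -1, 7), (0, 1, 4), (1, -2, 8), (-2, 0, 5)]
image = [project(P_true, np.array(w)) for w in world]
P_est = dlt_projection_matrix(world, image)
print(project(P_est, np.array([0.5, 0.5, 6.0])))  # matches the true camera
```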
- Thus, it is possible to define 3-D virtual monitored volumes using the world coordinates, for example, Sphere (X=10, Y=50, Z=10) Radius=5; and knowing the 3-D coordinates of the monitored volumes, it is conventional to use the above mapping to project an image of a monitored volume onto the 2-D view of a camera, and so display the monitored volumes over a camera video image. This technique of overlaying virtual objects on a camera image is demonstrated by the Timeframe system deployed at the Ename archaeological site in Flanders, Belgium, where virtual drawings of the historical buildings are superimposed on images of the current site. It will be seen, however, that the Timeframe system was not designed for real-time detection of a moving object and assessment of the interaction of such an object with the virtual objects displayed on the single camera image.
- Turning now to the definition of the moving pyramid objects 20, 20′: knowing the screen coordinates of the rectangular bounding box 18 surrounding a moving object, the system is able to use the above mapping to define the world coordinates of a CSG rectangular pyramid object comprising four triangles emanating from the camera location through respective adjacent vertices of the bounding box 18, FIG. 6.
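- Because the moving pyramid is exactly the set of world points that project inside bounding box 18 while lying in front of the camera, membership can be tested without constructing the four triangles explicitly. A minimal sketch, assuming the projection matrix is scaled so that points in front of the camera receive a positive depth coordinate:

```python
import numpy as np

def in_moving_pyramid(P, box, point):
    """True if a world point lies inside the moving pyramid projected
    through bounding box (x, y, w, h) from the camera with matrix P."""
    x = P @ np.append(point, 1.0)
    if x[2] <= 0:                      # behind the camera plane
        return False
    u, v = x[:2] / x[2]
    bx, by, bw, bh = box
    return bx <= u <= bx + bw and by <= v <= by + bh
```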
- Having constructed the moving pyramids 20, 20′ and the monitored volumes as CSG objects, the system is now able to detect conflicts between the objects, provided each monitored volume 16 is in view of two (or more) cameras, by executing a CSG intersection calculation on the moving pyramids and monitored volumes. For non-null intersections, the system then assesses the significance of those conflicts.
- In one variation, the CSG intersection of the monitored volume 16 and the union of all the moving pyramids 20 from the first camera is used to determine Zone 1. Similarly, Zone 2 is constructed from the intersection of the same monitored volume 16 and all the moving pyramids 20′ from Camera 2. When the intersection of two or more of these Zones is non-empty, the system has detected one (or more) instances of moving objects intersecting the monitored volume 16, hence a real conflict is assumed and an alarm condition is set.
- Either technique eliminates the false-alarms that might be generated by 2-D systems.
- In any case, a major advantage of the second embodiment is that it is not even necessary for a monitored volume visible in multiple views to be defined for all those views; the monitored volume can be constructed interactively using as little as two views and all the cameras that can see it will automatically monitor it.
- The invention enables operators to configure a system for actions to be taken in response to different conflict scenarios, particularly where object state information is available either because the operator has input the information or it has been derived through some knowledge of the domain. For example:
- if a conflict exists between a moving “vehicle” object and a “warning area” monitored volume: record the conflict in a log file.
- if a conflict exists between a moving “person” object and a “danger area” monitored volume: record the conflict in a log file and inform the operator not to delete the camera tape (i.e. retain it for investigation).
- if a conflict exists between a moving “person” object and a “prohibited area” monitored volume: alert security.
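- A minimal sketch of such an operator-configured response table, keyed by moving-object type and monitored-volume type; the rule names, logging choices and handler functions are illustrative assumptions.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("zone-monitor")

def log_only(conflict):
    log.info("conflict logged: %s", conflict)

def log_and_retain_tape(conflict):
    log.info("conflict logged: %s", conflict)
    log.warning("operator: retain camera tape for %s", conflict)

def alert_security(conflict):
    log.critical("ALERT SECURITY: %s", conflict)

# Operator-configured responses, keyed by (moving object type, volume type).
RULES = {
    ("vehicle", "warning area"):   log_only,
    ("person", "danger area"):     log_and_retain_tape,
    ("person", "prohibited area"): alert_security,
}

def handle_conflict(object_type, volume_type, detail):
    RULES.get((object_type, volume_type), lambda c: None)(detail)

handle_conflict("person", "prohibited area", "person 60 in mixer zone")
```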
- It is important to appreciate that the significance of a conflict, and therefore the required detection accuracy, will vary between applications. For example in a security application any infringement may require the immediate real-time notification of an operator. Conversely in a health and safety application the system could be looking for repeated behavioral patterns which necessitate additional staff training or site redesign.
- Naturally video surveillance systems are subject to physical constraints. For example, there may be regions in the surveillance area which are obscured from the view of the cameras. Furthermore the size of the surveillance area needs to be constrained such that a change in the scene, for example the movement of a person, registers as a significant pixel change.
- It will also be seen that best results will be obtained if the various cameras of the system are not placed in substantially co-planar alignment with the objects being tracked/monitored. Referring to FIG. 9 in more detail, it will be seen that if Cameras 1 and 2 lie in substantially the same plane as two people 60 and 90 moving on the site, the moving pyramids bounding the two people intersect to define four candidate volumes, bounded by quadrilaterals 24, 26, 28 and 30, even though only two of those volumes actually contain a person.
- If the cameras and the people are co-planar in this way, the system cannot distinguish the genuine volumes from the spurious ones. This is because, without employing heuristics, no correlation is made between the moving objects in one camera view and those of another camera. Thus, the system has no idea whether the object 60 lies within the volume bounded by quadrilateral 24 or 26, or whether the object 90 lies within the volume bounded by quadrilateral 28 or 30. The volume bounded by quadrilateral 26 may therefore also intersect the monitored volume 16 and so create a false positive.
other hand Camera 2 is placed above (higher in the Z-axis than)Camera 1, it will be seen that the moving pyramid emanating fromCamera 2 and bounding theperson 90 is less likely to intersect the movingpyramid 20 emanating fromCamera 1 as it will pass under thepyramid 20. Similarly, the moving pyramid emanating fromCamera 1 and bounding theperson 90 will be less likely to intersect the movingpyramid 20′ emanating fromCamera 2 as it will pass under thepyramid 20′. - It will also be seen that best results will be obtained if the fields of view of the cameras are largely transverse. (This of course has no meaning in the case of catadioptic lens cameras.) This largely prevents two moving objects occluding each other in more than one camera view and thus causing the system to lose the benefits of the 3-D viewing provided by two or more cameras.
- Even with cameras in optimal positions, it is conceded that some ambiguities may still arise, especially where a number of objects are moving simultaneously within camera views.
- Object tracking from frame to frame can help to mitigate such problems. Referring to FIG. 9, it will be seen that the moving pyramids 20, 20′ will always intersect, as they in fact relate to the same moving object. However, it is possible that from time to time one of the pyramids 20, 20′ will intersect a pyramid associated with another moving object. By tracking each moving object from frame to frame, the system can relate its boundaries 18 across frames and associate these with the same moving object. The system can therefore discount an intersection volume associated with another object and eliminate what may have been a false positive.
- Because in the second embodiment the system knows the 3-D location of the intersection of related moving pyramids relating to the same moving object, object tracking can also be used to prevent accidents or security breaches: the system can attempt to predict the possible destination of a moving object or draw an inference from its pattern of movement. For example, if an object seems to be moving from car to car in a car park, where cars are defined as the monitored objects, the system may decide this could be a thief and alert security; or, if an object is moving towards a monitored volume, an audible warning may sound, deterring the object from moving closer.
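- One simple, illustrative way to relate boundaries 18 from frame to frame is greedy intersection-over-union matching of each track's last box against the new detections; the threshold is an assumption, and the specification does not prescribe a particular tracker.

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    return inter / float(aw * ah + bw * bh - inter) if inter else 0.0

def associate(tracks, detections, threshold=0.3):
    """Greedily match last-known track boxes to new detections so the same
    object identity follows its bounding box from frame to frame."""
    matches, unmatched = {}, list(range(len(detections)))
    for tid, box in tracks.items():
        best = max(unmatched, key=lambda j: iou(box, detections[j]),
                   default=None)
        if best is not None and iou(box, detections[best]) >= threshold:
            matches[tid] = best
            unmatched.remove(best)
    return matches, unmatched   # unmatched detections become new tracks
```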
- Now that the preferred embodiments have been described, it will be seen that many possible enhancements to the system are possible, including:
- the use of object state information in the construction of the 3-D objects. As well as the examples above, the position and operating parameters of a crane could be automatically fed to the system and used to determine the size and position of a monitored volume. In the same way that reference points were used to calculate the camera geometry earlier, the same method may be used to determine the position and size of moving objects, by triangulation using two or more views of the object, to which may be attached a self-identifying beacon (a triangulation sketch follows this list). Alternatively, in a security application, monitored volumes designated as prohibited zones could vary depending on the time of day or working hours.
- the inclusion of a predictive capability to enable the construction of “what if” scenarios. For example, in developing a security plan an operator could use the system to consider where a moving subject could move to in a specified time at a specified speed.
- improvement in the conflict determination algorithms through the inclusion of background knowledge in the system. The provision of a 3-D model of the zone being monitored, constructed either interactively on-site or generated automatically from site schematics, enables an improvement in the conflict determination algorithms. For example, by including information on the location of site boundaries it is possible to constrain the geometric volume of moving pyramids using CSG techniques. For instance, a solid wall in the field of view of a camera restricts the view beyond it; in CSG terms this means a new bounding volume can be constructed to extend beyond the wall, restricting the operating volume of the moving pyramid to lie in front of the wall, or to one side or above it, but not behind it. The 3-D volumes to be monitored by the cameras are effectively clipped, which will again help to reduce false positives.
- the use of “intelligent” determination, for example identifying whether a moving object is a person or a vehicle, enables more appropriate alert generation. For example, a bulldozer approaching a cement mixer might be considered “safe”, whereas a person in the same place might be considered “at risk”.
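- For the beacon triangulation mentioned in the first enhancement above, a standard linear least-squares triangulation from two calibrated views can be used; the following is a self-contained sketch with synthetic cameras and an illustrative beacon position.

```python
import numpy as np

def triangulate(P1, uv1, P2, uv2):
    """Linear triangulation: recover the 3-D world position of a point
    (e.g. a self-identifying beacon) from its pixel coordinates in two
    calibrated views with projection matrices P1, P2."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.array([u1 * P1[2] - P1[0],
                  v1 * P1[2] - P1[1],
                  u2 * P2[2] - P2[0],
                  v2 * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

P1 = np.array([[800, 0, 320, 0], [0, 800, 240, 0], [0, 0, 1, 0]], float)
# Second camera translated 2 m along X.
P2 = np.array([[800, 0, 320, -1600], [0, 800, 240, 0], [0, 0, 1, 0]], float)
beacon = np.array([0.5, 0.2, 5.0, 1.0])
uv = lambda P: (P @ beacon)[:2] / (P @ beacon)[2]
print(triangulate(P1, uv(P1), P2, uv(P2)))   # approximately [0.5, 0.2, 5.0]
```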
- It will be seen from the foregoing description that the computer system need not be a single computer and its processing capabilities can be distributed or implemented according to the economics of the zone to be monitored. For example, it may be possible to incorporate the processing means required to locate and create a boundary around a moving object within the video camera and to only transmit the video image to the computer when there is a moving object, thus reducing the required bandwidth and processing overhead on the computer. The means for defining and displaying monitored volumes may also be separated from the means for monitoring a site from day to day, so enabling the monitoring means to be implemented as a dedicated component without needing peripherals such as a display or keyboard.
- Finally, the embodiment has been described using object oriented terms. It should be seen, however, that the invention is not limited to a strict implementation in object oriented languages and may be implemented using any suitable programming techniques.
- It should be seen that in the present specification, the term “pyramid” has been used to define an object having a base of any shape, with edges extending from an apex. The shape of the base may include any combination of lines and curves; such shapes include triangles, quadrilaterals, circles, ellipses and combinations of these, and are not limited to simple closed areas. As such, the term pyramid should be construed to include a conical shape. Complex areas including holes are permitted, so, for example, the mixer may be protected by an annular cone (formed by the projection into 3-D of a doughnut or ring), such that the rotating portion of the mixer in the very middle may itself be identified as a “moving object” but will not trigger an alarm, as it does not intersect the monitored pyramid extending around (but not touching) it.
Claims (13)
1. An automatic zone monitoring system comprising:
means for capturing live video using a plurality of video cameras; and
processing means connected to said video cameras comprising:
means for automatically identifying moving objects within the field of view of said video cameras;
means for defining one or more 3 dimensional monitored volumes; and
means for detecting the intersection between said moving objects and the or each monitored volume.
2. An automatic zone monitoring system according to claim 1 wherein said defining means comprises means for defining a plurality of monitored boundaries within the field of view of two or more of said video cameras, and means for relating monitored boundaries associated with the same monitored volume, said monitored volume being the intersection of respective pyramids projected through related monitored boundaries from the viewpoint of a camera associated with said monitored boundary.
3. An automatic zone monitoring system according to claim 2 wherein said identifying means comprises means for defining a boundary around said objects moving within the field of view of said video cameras; and said detecting means comprises means for detecting the intersection of said moving object boundaries and any monitored boundaries defined for said video cameras.
4. An automatic zone monitoring system according to claim 3 comprising warning means responsive to moving object boundaries intersecting at least two related monitored boundaries to raise an alarm.
5. An automatic zone monitoring system according to claim 1 wherein said processing means comprises:
means for generating a correlation between the respective fields of view of said video cameras with the 3-dimensional coordinates of the zone being monitored; and wherein
said defining means comprises means for defining the or each monitored volume in terms of said 3-dimensional coordinates;
said identifying means comprises means for defining a boundary around said objects moving within the field of view of said video cameras; and
said detecting means comprises means for defining respective moving pyramids projected through moving object boundaries from the viewpoint of a camera associated with said boundary in terms of said 3-dimensional coordinates; and means for generating the intersection of the or each monitored volume with the intersection of said moving pyramids.
6. An automatic zone monitoring system as claimed in claim 5 wherein said detecting means comprises means for tracking the intersection of respective moving pyramids and means for relating moving pyramids associated with the same moving object according to the track of the intersection of said moving pyramids, said generating means being adapted to generate an intersection of the set of intersections of related moving pyramids and the or each monitored volume.
7. An automatic zone monitoring system according to claim 6 comprising warning means responsive to said intersection being non-null to raise an alarm.
8. An automatic zone monitoring system as claimed in claim 7 comprising means for associating attributes with each moving object, said warning means being responsive to the attributes of a moving object within said intersection to determine the type of alarm to be raised.
9. An automatic zone monitoring system as claimed in claim 1 wherein at least two of said cameras are placed in transverse alignment.
10. An automatic zone monitoring system as claimed in claim 1 wherein at least two of said cameras are placed on planes non-aligned with said moving objects.
11. An automatic zone monitoring system as claimed in claim 1 comprising display means connected to said processor means, said processor means being adapted to display live video from said video cameras, and to superimpose on said respective live video displays, said monitored volumes.
12. A method of automatically monitoring a zone comprising the steps of:
capturing live video using a plurality of video cameras;
automatically identifying moving objects within the field of view of said video cameras;
defining one or more 3 dimensional monitored volumes; and
detecting the intersection between said moving objects and the or each monitored volume.
13. A computer program product comprising computer program code stored on a computer readable storage medium for, when executed on a computing device, automatically monitoring a zone, the program code comprising means for performing the steps of claim 12.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB9917900A GB2352859A (en) | 1999-07-31 | 1999-07-31 | Automatic zone monitoring using two or more cameras |
GB9917900 | 1999-07-31 |
Publications (2)
Publication Number | Publication Date |
---|---|
US20030020808A1 | 2003-01-30 |
US6816186B2 (en) | 2004-11-09 |
Family
ID=33397595
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/484,096 Expired - Fee Related US6816186B2 (en) | 1999-07-31 | 2000-01-18 | Automatic zone monitoring |
Country Status (2)
Country | Link |
---|---|
US (1) | US6816186B2 (en) |
GB (1) | GB2352859A (en) |
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030215141A1 (en) * | 2002-05-20 | 2003-11-20 | Zakrzewski Radoslaw Romuald | Video detection/verification system |
US20050162515A1 (en) * | 2000-10-24 | 2005-07-28 | Objectvideo, Inc. | Video surveillance system |
US20060125922A1 (en) * | 2004-12-10 | 2006-06-15 | Microsoft Corporation | System and method for processing raw image files |
US20080088857A1 (en) * | 2006-10-13 | 2008-04-17 | Apple Inc. | System and Method for RAW Image Processing |
US20080089580A1 (en) * | 2006-10-13 | 2008-04-17 | Marcu Gabriel G | System and method for raw image processing using conversion matrix interpolated from predetermined camera characterization matrices |
US20080088858A1 (en) * | 2006-10-13 | 2008-04-17 | Apple Inc. | System and Method for Processing Images Using Predetermined Tone Reproduction Curves |
US20080100704A1 (en) * | 2000-10-24 | 2008-05-01 | Objectvideo, Inc. | Video surveillance system employing video primitives |
US20080231705A1 (en) * | 2007-03-23 | 2008-09-25 | Keller Todd I | System and Method for Detecting Motion and Providing an Audible Message or Response |
US20090208054A1 (en) * | 2008-02-20 | 2009-08-20 | Robert Lee Angell | Measuring a cohort's velocity, acceleration and direction using digital video |
US20090297023A1 (en) * | 2001-03-23 | 2009-12-03 | Objectvideo Inc. | Video segmentation using statistical pixel modeling |
US20100026802A1 (en) * | 2000-10-24 | 2010-02-04 | Object Video, Inc. | Video analytic rule detection system and method |
US20100128138A1 (en) * | 2007-06-08 | 2010-05-27 | Nikon Corporation | Imaging device, image display device, and program |
US8059153B1 (en) | 2004-06-21 | 2011-11-15 | Wyse Technology Inc. | Three-dimensional object tracking using distributed thin-client cameras |
US20120086780A1 (en) * | 2010-10-12 | 2012-04-12 | Vinay Sharma | Utilizing Depth Information to Create 3D Tripwires in Video |
US20120206486A1 (en) * | 2011-02-14 | 2012-08-16 | Yuuichi Kageyama | Information processing apparatus and imaging region sharing determination method |
US20120320201A1 (en) * | 2007-05-15 | 2012-12-20 | Ipsotek Ltd | Data processing apparatus |
US20130297151A1 (en) * | 2009-08-18 | 2013-11-07 | Crown Equipment Corporation | Object tracking and steer maneuvers for materials handling vehicles |
US20140111653A1 (en) * | 2011-05-27 | 2014-04-24 | Movee'n See | Method and system for the tracking of a moving object by a tracking device |
US9020261B2 (en) | 2001-03-23 | 2015-04-28 | Avigilon Fortress Corporation | Video segmentation using statistical pixel modeling |
WO2016202143A1 (en) | 2015-06-17 | 2016-12-22 | Zhejiang Dahua Technology Co., Ltd | Methods and systems for video surveillance |
WO2018026589A1 (en) * | 2016-08-01 | 2018-02-08 | Eduardo Recavarren | Identifications of patterns of life through analysis of devices within monitored volumes |
US9892606B2 (en) | 2001-11-15 | 2018-02-13 | Avigilon Fortress Corporation | Video surveillance system employing video primitives |
US10999556B2 (en) * | 2012-07-03 | 2021-05-04 | Verint Americas Inc. | System and method of video capture and search optimization |
US11126857B1 (en) * | 2014-09-30 | 2021-09-21 | PureTech Systems Inc. | System and method for object falling and overboarding incident detection |
US11288517B2 (en) * | 2014-09-30 | 2022-03-29 | PureTech Systems Inc. | System and method for deep learning enhanced object incident detection |
US11429095B2 (en) | 2019-02-01 | 2022-08-30 | Crown Equipment Corporation | Pairing a remote control device to a vehicle |
US11641121B2 (en) | 2019-02-01 | 2023-05-02 | Crown Equipment Corporation | On-board charging station for a remote control device |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7319479B1 (en) * | 2000-09-22 | 2008-01-15 | Brickstream Corporation | System and method for multi-camera linking and analysis |
GB2375251B (en) * | 2001-04-30 | 2003-03-05 | Infrared Integrated Syst Ltd | The location of events in a three dimensional space under surveillance |
US20020163577A1 (en) * | 2001-05-07 | 2002-11-07 | Comtrak Technologies, Inc. | Event detection in a video recording system |
JP3996428B2 (en) * | 2001-12-25 | 2007-10-24 | 松下電器産業株式会社 | Abnormality detection device and abnormality detection system |
US7289138B2 (en) * | 2002-07-02 | 2007-10-30 | Fuji Xerox Co., Ltd. | Intersection detection in panoramic video |
JP4195991B2 (en) * | 2003-06-18 | 2008-12-17 | パナソニック株式会社 | Surveillance video monitoring system, surveillance video generation method, and surveillance video monitoring server |
US7171024B2 (en) * | 2003-12-01 | 2007-01-30 | Brickstream Corporation | Systems and methods for determining if objects are in a queue |
US7602944B2 (en) * | 2005-04-06 | 2009-10-13 | March Networks Corporation | Method and system for counting moving objects in a digital video stream |
US8744546B2 (en) | 2005-05-05 | 2014-06-03 | Dexcom, Inc. | Cellulosic-based resistance domain for an analyte sensor |
JP4970195B2 (en) * | 2007-08-23 | 2012-07-04 | 株式会社日立国際電気 | Person tracking system, person tracking apparatus, and person tracking program |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5063603A (en) * | 1989-11-06 | 1991-11-05 | David Sarnoff Research Center, Inc. | Dynamic method for recognizing objects and image processing system therefor |
US5724493A (en) * | 1994-12-13 | 1998-03-03 | Nippon Telegraph & Telephone Corporation | Method and apparatus for extracting 3D information of feature points |
US5831669A (en) * | 1996-07-09 | 1998-11-03 | Ericsson Inc | Facility monitoring system with image memory and correlation |
US6396535B1 (en) * | 1999-02-16 | 2002-05-28 | Mitsubishi Electric Research Laboratories, Inc. | Situation awareness system |
US6570608B1 (en) * | 1998-09-30 | 2003-05-27 | Texas Instruments Incorporated | System and method for detecting interactions of people and vehicles |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3811010A (en) * | 1972-08-16 | 1974-05-14 | Us Navy | Intrusion detection apparatus |
JPS633590A (en) * | 1986-06-23 | 1988-01-08 | Sony Corp | Monitor device |
EP0356734A3 (en) * | 1988-08-02 | 1990-03-14 | Siemens Aktiengesellschaft | Intruder detection device with television cameras |
US5903454A (en) * | 1991-12-23 | 1999-05-11 | Hoffberg; Linda Irene | Human-factored interface corporating adaptive pattern recognition based controller apparatus |
FR2693011B1 (en) * | 1992-06-29 | 1994-09-23 | Matra Sep Imagerie Inf | Method and device for monitoring a three-dimensional scene, using imagery sensors. |
EP0662600A4 (en) * | 1993-06-10 | 1997-02-12 | Oh Yoh Keisoku Kenkyusho Kk | Apparatus for measuring position of moving object. |
US5627915A (en) * | 1995-01-31 | 1997-05-06 | Princeton Video Image, Inc. | Pattern recognition system employing unlike templates to detect objects having distinctive features in a video field |
US5729471A (en) * | 1995-03-31 | 1998-03-17 | The Regents Of The University Of California | Machine dynamic selection of one video camera/image of a scene from multiple video cameras/images of the scene in accordance with a particular perspective on the scene, an object in the scene, or an event in the scene |
US6081606A (en) * | 1996-06-17 | 2000-06-27 | Sarnoff Corporation | Apparatus and a method for detecting motion within an image sequence |
DE19709799A1 (en) * | 1997-03-10 | 1998-09-17 | Bosch Gmbh Robert | Device for video surveillance of an area |
US6256046B1 (en) * | 1997-04-18 | 2001-07-03 | Compaq Computer Corporation | Method and apparatus for visual sensing of humans for active public interfaces |
JPH11266487A (en) * | 1998-03-18 | 1999-09-28 | Toshiba Corp | Intelligent remote supervisory system and recording medium |
- 1999
  - 1999-07-31 GB GB9917900A patent/GB2352859A/en not_active Withdrawn
- 2000
  - 2000-01-18 US US09/484,096 patent/US6816186B2/en not_active Expired - Fee Related
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5063603A (en) * | 1989-11-06 | 1991-11-05 | David Sarnoff Research Center, Inc. | Dynamic method for recognizing objects and image processing system therefor |
US5724493A (en) * | 1994-12-13 | 1998-03-03 | Nippon Telegraph & Telephone Corporation | Method and apparatus for extracting 3D information of feature points |
US5831669A (en) * | 1996-07-09 | 1998-11-03 | Ericsson Inc | Facility monitoring system with image memory and correlation |
US6570608B1 (en) * | 1998-09-30 | 2003-05-27 | Texas Instruments Incorporated | System and method for detecting interactions of people and vehicles |
US6396535B1 (en) * | 1999-02-16 | 2002-05-28 | Mitsubishi Electric Research Laboratories, Inc. | Situation awareness system |
Cited By (51)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9378632B2 (en) | 2000-10-24 | 2016-06-28 | Avigilon Fortress Corporation | Video surveillance system employing video primitives |
US20050162515A1 (en) * | 2000-10-24 | 2005-07-28 | Objectvideo, Inc. | Video surveillance system |
US8564661B2 (en) | 2000-10-24 | 2013-10-22 | Objectvideo, Inc. | Video analytic rule detection system and method |
US8711217B2 (en) | 2000-10-24 | 2014-04-29 | Objectvideo, Inc. | Video surveillance system employing video primitives |
US10645350B2 (en) | 2000-10-24 | 2020-05-05 | Avigilon Fortress Corporation | Video analytic rule detection system and method |
US10347101B2 (en) | 2000-10-24 | 2019-07-09 | Avigilon Fortress Corporation | Video surveillance system employing video primitives |
US10026285B2 (en) | 2000-10-24 | 2018-07-17 | Avigilon Fortress Corporation | Video surveillance system employing video primitives |
US20080100704A1 (en) * | 2000-10-24 | 2008-05-01 | Objectvideo, Inc. | Video surveillance system employing video primitives |
US20100026802A1 (en) * | 2000-10-24 | 2010-02-04 | Object Video, Inc. | Video analytic rule detection system and method |
US9020261B2 (en) | 2001-03-23 | 2015-04-28 | Avigilon Fortress Corporation | Video segmentation using statistical pixel modeling |
US20090297023A1 (en) * | 2001-03-23 | 2009-12-03 | Objectvideo Inc. | Video segmentation using statistical pixel modeling |
US8457401B2 (en) | 2001-03-23 | 2013-06-04 | Objectvideo, Inc. | Video segmentation using statistical pixel modeling |
US9892606B2 (en) | 2001-11-15 | 2018-02-13 | Avigilon Fortress Corporation | Video surveillance system employing video primitives |
US7280696B2 (en) * | 2002-05-20 | 2007-10-09 | Simmonds Precision Products, Inc. | Video detection/verification system |
US20030215141A1 (en) * | 2002-05-20 | 2003-11-20 | Zakrzewski Radoslaw Romuald | Video detection/verification system |
US8059153B1 (en) | 2004-06-21 | 2011-11-15 | Wyse Technology Inc. | Three-dimensional object tracking using distributed thin-client cameras |
US20060125922A1 (en) * | 2004-12-10 | 2006-06-15 | Microsoft Corporation | System and method for processing raw image files |
US8493473B2 (en) | 2006-10-13 | 2013-07-23 | Apple Inc. | System and method for RAW image processing |
US20080088857A1 (en) * | 2006-10-13 | 2008-04-17 | Apple Inc. | System and Method for RAW Image Processing |
US20080089580A1 (en) * | 2006-10-13 | 2008-04-17 | Marcu Gabriel G | System and method for raw image processing using conversion matrix interpolated from predetermined camera characterization matrices |
US7893975B2 (en) | 2006-10-13 | 2011-02-22 | Apple Inc. | System and method for processing images using predetermined tone reproduction curves |
US7835569B2 (en) | 2006-10-13 | 2010-11-16 | Apple Inc. | System and method for raw image processing using conversion matrix interpolated from predetermined camera characterization matrices |
US20100271505A1 (en) * | 2006-10-13 | 2010-10-28 | Apple Inc. | System and Method for RAW Image Processing |
US20080088858A1 (en) * | 2006-10-13 | 2008-04-17 | Apple Inc. | System and Method for Processing Images Using Predetermined Tone Reproduction Curves |
US7773127B2 (en) | 2006-10-13 | 2010-08-10 | Apple Inc. | System and method for RAW image processing |
US8810656B2 (en) * | 2007-03-23 | 2014-08-19 | Speco Technologies | System and method for detecting motion and providing an audible message or response |
US20080231705A1 (en) * | 2007-03-23 | 2008-09-25 | Keller Todd I | System and Method for Detecting Motion and Providing an Audible Message or Response |
US20120320201A1 (en) * | 2007-05-15 | 2012-12-20 | Ipsotek Ltd | Data processing apparatus |
US9836933B2 (en) * | 2007-05-15 | 2017-12-05 | Ipsotek Ltd. | Data processing apparatus to generate an alarm |
US8587658B2 (en) * | 2007-06-08 | 2013-11-19 | Nikon Corporation | Imaging device, image display device, and program with intruding object detection |
US20100128138A1 (en) * | 2007-06-08 | 2010-05-27 | Nikon Corporation | Imaging device, image display device, and program |
US8107677B2 (en) * | 2008-02-20 | 2012-01-31 | International Business Machines Corporation | Measuring a cohort's velocity, acceleration and direction using digital video |
US20090208054A1 (en) * | 2008-02-20 | 2009-08-20 | Robert Lee Angell | Measuring a cohort's velocity, acceleration and direction using digital video |
US9002581B2 (en) * | 2009-08-18 | 2015-04-07 | Crown Equipment Corporation | Object tracking and steer maneuvers for materials handling vehicles |
US20130297151A1 (en) * | 2009-08-18 | 2013-11-07 | Crown Equipment Corporation | Object tracking and steer maneuvers for materials handling vehicles |
US20120086780A1 (en) * | 2010-10-12 | 2012-04-12 | Vinay Sharma | Utilizing Depth Information to Create 3D Tripwires in Video |
US8890936B2 (en) * | 2010-10-12 | 2014-11-18 | Texas Instruments Incorporated | Utilizing depth information to create 3D tripwires in video |
US9621747B2 (en) * | 2011-02-14 | 2017-04-11 | Sony Corporation | Information processing apparatus and imaging region sharing determination method |
US20120206486A1 (en) * | 2011-02-14 | 2012-08-16 | Yuuichi Kageyama | Information processing apparatus and imaging region sharing determination method |
US20140111653A1 (en) * | 2011-05-27 | 2014-04-24 | Movee'n See | Method and system for the tracking of a moving object by a tracking device |
US10999556B2 (en) * | 2012-07-03 | 2021-05-04 | Verint Americas Inc. | System and method of video capture and search optimization |
US11126857B1 (en) * | 2014-09-30 | 2021-09-21 | PureTech Systems Inc. | System and method for object falling and overboarding incident detection |
US11288517B2 (en) * | 2014-09-30 | 2022-03-29 | PureTech Systems Inc. | System and method for deep learning enhanced object incident detection |
EP3311562A4 (en) * | 2015-06-17 | 2018-06-20 | Zhejiang Dahua Technology Co., Ltd | Methods and systems for video surveillance |
WO2016202143A1 (en) | 2015-06-17 | 2016-12-22 | Zhejiang Dahua Technology Co., Ltd | Methods and systems for video surveillance |
US10671857B2 (en) * | 2015-06-17 | 2020-06-02 | Zhejiang Dahua Technology Co., Ltd. | Methods and systems for video surveillance |
US11367287B2 (en) * | 2015-06-17 | 2022-06-21 | Zhejiang Dahua Technology Co., Ltd. | Methods and systems for video surveillance |
WO2018026589A1 (en) * | 2016-08-01 | 2018-02-08 | Eduardo Recavarren | Identifications of patterns of life through analysis of devices within monitored volumes |
US11429095B2 (en) | 2019-02-01 | 2022-08-30 | Crown Equipment Corporation | Pairing a remote control device to a vehicle |
US11500373B2 (en) | 2019-02-01 | 2022-11-15 | Crown Equipment Corporation | On-board charging station for a remote control device |
US11641121B2 (en) | 2019-02-01 | 2023-05-02 | Crown Equipment Corporation | On-board charging station for a remote control device |
Also Published As
Publication number | Publication date |
---|---|
GB2352859A (en) | 2001-02-07 |
GB9917900D0 (en) | 1999-09-29 |
US6816186B2 (en) | 2004-11-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6816186B2 (en) | Automatic zone monitoring | |
RU2251739C2 (en) | Objects recognition and tracking system | |
CN108550234B (en) | Label matching and fence boundary management method and device for double base stations and storage medium | |
US8442306B2 (en) | Volume-based coverage analysis for sensor placement in 3D environments | |
US20210185279A1 (en) | Systems and methods for personnel location at a drilling site | |
US12045432B2 (en) | Interactive virtual interface | |
US20120033083A1 (en) | Method for video analysis | |
WO2011060385A1 (en) | Method for tracking an object through an environment across multiple cameras | |
CN107645652A (en) | Geofence system for violation detection based on video monitoring | |
CN103270752A (en) | Method and system for converting privacy zone planar images to their corresponding pan/tilt coordinates | |
Wang et al. | An intelligent surveillance system based on an omnidirectional vision sensor | |
CN104159067A (en) | Intelligent monitoring system and method based on combination of 3DGIS with real scene video | |
CN104618688A (en) | Visual monitor protection method | |
CN113557713A (en) | Situational awareness monitoring | |
Lee et al. | Design policy of intelligent space | |
CN113068000B (en) | Video target monitoring method, device, equipment, system and storage medium | |
CN113869231B (en) | Method and equipment for acquiring real-time image information of target object | |
TW202125444A (en) | Warning area configuration system and method thereof | |
Conci et al. | Camera placement using particle swarm optimization in visual surveillance applications | |
CN112818780A (en) | Defense area setting method and device for aircraft monitoring and identifying system | |
CN112800918A (en) | Identity recognition method and device for illegal moving target | |
US11830126B2 (en) | Accurate representation of camera field of view in two-dimensional mapping applications | |
CN108234932A (en) | Method and device for extracting human figures from video surveillance images | |
CN203968263U (en) | Intelligent monitoring system based on 3DGIS combined with real-scene video | |
JP4754283B2 (en) | Monitoring system and setting device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW Y Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LUKE, JAMES STEVEN;SHARP, CHRISTOPHER EDWARD;WALTER, ANDREW GORDON NEIL;REEL/FRAME:010516/0647 Effective date: 19991012 |
| FEPP | Fee payment procedure | Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| REMI | Maintenance fee reminder mailed | |
| LAPS | Lapse for failure to pay maintenance fees | |
| STCH | Information on status: patent discontinuation | Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
| FP | Lapsed due to failure to pay maintenance fee | Effective date: 20081109 |