
US20120057640A1 - Video Analytics for Security Systems and Methods - Google Patents

Video Analytics for Security Systems and Methods

Info

Publication number
US20120057640A1
US20120057640A1
Authority
US
United States
Prior art keywords
video
video analytics
analytics
messages
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/225,238
Inventor
Fang Shi
Changsong Qi
Jin Ming
Keqiang Dai
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intersil Americas LLC
Original Assignee
Intersil Americas LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Priority claimed from PCT/CN2010/076555 external-priority patent/WO2012027891A1/en
Priority claimed from PCT/CN2010/076564 external-priority patent/WO2012027892A1/en
Priority claimed from PCT/CN2010/076569 external-priority patent/WO2012027894A1/en
Priority claimed from PCT/CN2010/076567 external-priority patent/WO2012027893A1/en
Application filed by Intersil Americas LLC filed Critical Intersil Americas LLC
Assigned to INTERSIL AMERICAS INC. reassignment INTERSIL AMERICAS INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: DAI, KEQIANG, MING, JIN, QI, CHANGSONG, SHI, FANG
Publication of US20120057640A1 publication Critical patent/US20120057640A1/en
Assigned to Intersil Americas LLC reassignment Intersil Americas LLC CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: INTERSIL AMERICAS INC.

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/513 Processing of motion vectors
    • H04N19/517 Processing of motion vectors by encoding
    • H04N19/52 Processing of motion vectors by encoding by predictive encoding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/115 Selection of the code volume for a coding unit prior to coding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124 Quantisation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/164 Feedback from the receiver or from the transmission channel
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/189 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
    • H04N19/196 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding being specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters
    • H04N19/198 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding being specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters including smoothing of a sequence of encoding parameters, e.g. by averaging, by choice of the maximum, minimum or median value
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/14 Picture signal circuitry for video frequency region
    • H04N5/144 Movement detection
    • H04N5/145 Movement estimation

Definitions

  • Electronic image stabilization (“EIS”) 220 finds wide application in video security systems.
  • In certain embodiments, a current captured video frame is processed with reference to one or more previously reconstructed reference frames to generate a global motion vector 202 for the current frame; the global motion vector is then used on the client side to compensate the reconstructed image and thereby reduce or eliminate image instability or shaking.
  • In a conventional pixel domain EIS algorithm, the current and previous reference frames are fetched, a block-based or grey-level-histogram-based matching algorithm is applied to obtain local motion vectors, and the local motion vectors are processed to generate a pixel domain global motion vector.
  • The drawbacks of the conventional approach include the high computational cost of the matching algorithm used to generate local motion vectors and the very high memory bandwidth required to fetch both the current reconstructed frame and the previous reference frames.
  • the video encoding engine 102 can generate VAMD 103 including block-based motion vectors, MB-type, etc., as a byproduct of video compression processing.
  • VAMD 103 is fed into VAE 104 , which can be configured to process the VAMD 103 information in order to generate global motion vector 202 as a VAM.
  • the VAM is then embedded into the network bitstream 106 to transmit to the client side 12 , typically over a network.
  • a client side 12 processor can parse the network bitstream 106 , extract the global motion information for each frame and apply global motion compensation to accomplish EIS 220 .
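  • As a rough illustration of this compressed-domain approach, the following Python sketch (hypothetical; the function names are ours, not defined by the patent) estimates a frame's global motion vector from the MB-level motion vectors carried in the VAMD by taking the component-wise median, which is robust to locally moving foreground objects, and integrates it into a smoothed compensation offset of the kind a client could apply to the reconstructed frame.

    from statistics import median

    def estimate_global_motion(mb_motion_vectors):
        """Frame-level global motion from MB-level motion vectors.

        mb_motion_vectors: list of (dx, dy) tuples taken from the VAMD.
        The component-wise median suppresses outliers caused by locally
        moving foreground objects, leaving the camera-induced motion.
        """
        if not mb_motion_vectors:
            return (0.0, 0.0)
        return (median(dx for dx, _ in mb_motion_vectors),
                median(dy for _, dy in mb_motion_vectors))

    def compensation_offset(prev_offset, global_mv, damping=0.95):
        """Integrate per-frame global motion into a smoothed offset that
        the client applies when rendering the reconstructed frame."""
        return (damping * (prev_offset[0] - global_mv[0]),
                damping * (prev_offset[1] - global_mv[1]))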
  • Certain embodiments of the invention comprise a video background modeling feature that can construct or reconstruct a background image 222, which can provide highly desirable information for use in a wide variety of video surveillance applications, including motion detection, object segmentation, abandoned object detection, etc.
  • Conventional pixel domain background extraction algorithms operate on a statistical model of co-located pixel values across multiple frames. For example, a Gaussian model is used to model the co-located pixels of N consecutive frames and to select the statistically most likely pixel value as the background pixel. If a video frame's height is denoted H, its width W, and N consecutive frames are needed to satisfy the statistical model requirement, then a total of W*H*N pixels must be processed to generate a background frame.
  • In certain embodiments, the background information is generated from MB-based VAMD 103 rather than from pixel-level data.
  • The volume of information generated from VAMD 103 is typically only 1/256 of the volume of pixel-based information, since a single MB summarizes a 16×16 block of 256 pixels.
  • MB-based motion vector and non-zero-count information can be used to distinguish the background from foreground moving objects, as sketched below.
  • FIG. 4A shows an original image with background and foreground objects
  • FIG. 4B shows a typical background extracted by processing VAMD.
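  • A minimal sketch of this idea follows, with illustrative thresholds of our own choosing (the patent does not specify them): an MB whose motion vector is near zero and whose residual has few non-zero coefficients over most of a sliding window of frames is declared background.

    def update_background_mask(history, frame_vamd, window=30, ratio=0.8):
        """Classify macroblocks as background from MB-level VAMD.

        history:    dict mb_index -> list of recent 'static' booleans
        frame_vamd: dict mb_index -> {"mv": (dx, dy), "nz": int}
        Returns the set of MB indices currently judged to be background.
        """
        background = set()
        for mb, info in frame_vamd.items():
            dx, dy = info["mv"]
            static = abs(dx) <= 1 and abs(dy) <= 1 and info["nz"] < 4
            h = history.setdefault(mb, [])
            h.append(static)
            if len(h) > window:
                h.pop(0)
            if len(h) == window and sum(h) / window >= ratio:
                background.add(mb)
        return background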
  • a motion detector 200 can be used to automatically detect motion of objects including humans, animals and/or vehicles entering predefined regions of interest.
  • Virtual line detection and counting module 201 can detect a moving object that crosses an invisible line defined by user configuration and can count the number of objects crossing the line, as illustrated in FIGS. 5A and 5B .
  • the virtual line can be based on actual lines in the image and can be a delineation of an area defined by a polygon, circle, ellipse or irregular area.
  • the number of objects crossing one or more lines can be recorded as an absolute number and/or as a statistical frequency and an alarm may be generated to indicate any line crossing, a threshold frequency or absolute number of crossings and/or an absence of crossings within a predetermined time.
  • Motion detection 200 and virtual line detection and counting 201 can be achieved by processing one or more MB-based VAMDs.
  • Information such as motion alarms and object counts across virtual lines can be packed as VAM and transmitted to the client side 12 .
  • Motion indexing, object counting and similar customized applications can then be easily achieved by extracting the VAM with simple processing, as sketched below. It will be appreciated that configuration information may be provided from the client side to the server side as a form of feedback, using the packed information as a basis for resetting lines, areas of interest and so on.
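  • The line-crossing test itself can be expressed compactly. The sketch below is illustrative only, and is simplified to an infinite line rather than a bounded segment (a full implementation would also check that the crossing point lies within the segment); it counts sign changes of an object centroid's position relative to the user-defined line.

    def side_of_line(line, point):
        """Sign (-1, 0, +1) of the cross product: which side of the
        virtual line (a, b) the point falls on."""
        (ax, ay), (bx, by) = line
        px, py = point
        d = (bx - ax) * (py - ay) - (by - ay) * (px - ax)
        return (d > 0) - (d < 0)

    def count_crossings(line, trajectory):
        """Count crossings of the virtual line by a centroid trajectory
        (a sequence of (x, y) points); each sign change is one crossing."""
        crossings, prev = 0, None
        for point in trajectory:
            s = side_of_line(line, point)
            if prev is not None and s != 0 and s != prev:
                crossings += 1
            if s != 0:
                prev = s
        return crossings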
  • Certain embodiments of the invention provide improved object tracking within a sequence of video frames using VAMD 103 . Certain embodiments can facilitate client side measurement of speed of motion of objects and can assist in identifying directions of movement. Furthermore, VAMD 103 can provide useful information related to video mosaics 221 , including motion indexing and object counting.
  • computing system 60 may be a commercially available system that executes commercially available operating systems such as Microsoft Windows®, UNIX or a variant thereof, Linux, a real-time operating system and/or a proprietary operating system.
  • the architecture of the computing system may be adapted, configured and/or designed for integration in the processing system, for embedding in one or more of an image capture system, communications device and/or graphics processing systems.
  • computing system 60 comprises a bus 602 and/or other mechanisms for communicating between processors, whether those processors are integral to the computing system 60 (e.g. processors 604 and 605 ) or external to it.
  • processor 604 and/or 605 comprises a CISC or RISC computing processor and/or one or more digital signal processors.
  • processor 604 and/or 605 may be embodied in a custom device and/or may perform as a configurable sequencer.
  • Device drivers 603 may provide output signals used to control internal and external components and to communicate between processors 604 and 605 .
  • Computing system 60 also typically comprises memory 606 that may include one or more of random access memory (“RAM”), static memory, cache, flash memory and any other suitable type of storage device that can be coupled to bus 602 .
  • Memory 606 can be used for storing instructions and data that can cause one or more of processors 604 and 605 to perform a desired process.
  • Main memory 606 may be used for storing transient and/or temporary data such as variables and intermediate information generated and/or used during execution of the instructions by processor 604 or 605 .
  • Computing system 60 also typically comprises non-volatile storage such as read only memory (“ROM”) 608 , flash memory, memory cards or the like; non-volatile storage may be connected to the bus 602 , but may equally be connected using a high-speed universal serial bus (USB), Firewire or other such bus that is coupled to bus 602 .
  • Non-volatile storage can be used for storing configuration, and other information, including instructions executed by processors 604 and/or 605 .
  • Non-volatile storage may also include mass storage device 610 , such as a magnetic disk, optical disk, flash disk that may be directly or indirectly coupled to bus 602 and used for storing instructions to be executed by processors 604 and/or 605 , as well as other information.
  • computing system 60 may be communicatively coupled to a display system 612 , such as an LCD flat panel display, including touch panel displays, electroluminescent display, plasma display, cathode ray tube or other display device that can be configured and adapted to receive and display information to a user of computing system 60 .
  • device drivers 603 can include a display driver, graphics adapter and/or other modules that maintain a digital representation of a display and convert the digital representation to a signal for driving a display system 612 .
  • Display system 612 may also include logic and software to generate a display from a signal provided by computing system 60 . In that regard, display 612 may be provided as a remote terminal or in a session on a different computing system 60 .
  • An input device 614 is generally provided locally or through a remote system and typically provides for alphanumeric input as well as cursor control 616 input, such as a mouse, a trackball, etc. It will be appreciated that input and output can be provided to a wireless device such as a PDA, a tablet computer or other system suitably equipped to display the images and provide user input.
  • computing system 60 may be embedded in a system that captures and/or processes images, including video images.
  • computing system may include a video processor or accelerator 617 , which may have its own processor, non-transitory storage and input/output interfaces.
  • video processor or accelerator 617 may be implemented as a combination of hardware and software operated by the one or more processors 604 , 605 .
  • computing system 60 functions as a video encoder, although other functions may be performed by computing system 60 .
  • a video encoder that comprises computing system 60 may be embedded in another device such as a camera, a communications device, a mixing panel, a monitor, a computer peripheral, and so on.
  • portions of the described invention may be performed by computing system 60 .
  • Processor 604 executes one or more sequences of instructions.
  • such instructions may be stored in main memory 606 , having been received from a computer-readable medium such as storage device 610 .
  • Execution of the sequences of instructions contained in main memory 606 causes processor 604 to perform process steps according to certain aspects of the invention.
  • functionality may be provided by embedded computing systems that perform specific functions wherein the embedded systems employ a customized combination of hardware and software to perform a set of predefined tasks.
  • embodiments of the invention are not limited to any specific combination of hardware circuitry and software.
  • Non-volatile storage may be embodied on media such as optical or magnetic disks, including DVD, CD-ROM and BluRay. Storage may be provided locally and in physical proximity to processors 604 and 605 or remotely, typically by use of a network connection. Non-volatile storage may be removable from computing system 60 , as in the example of BluRay, DVD or CD storage or memory cards or sticks that can be easily connected or disconnected from a computer using a standard interface, including USB, etc.
  • computer-readable media can include floppy disks, flexible disks, hard disks, magnetic tape, any other magnetic medium, CD-ROMs, DVDs, BluRay, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH/EEPROM, any other memory chip or cartridge, or any other medium from which a computer can read.
  • Transmission media can be used to connect elements of the processing system and/or components of computing system 60 .
  • Such media can include twisted pair wiring, coaxial cables, copper wire and fiber optics.
  • Transmission media can also include wireless media such as radio, acoustic and light waves. In particular radio frequency (RF), fiber optic and infrared (IR) data communications may be used.
  • Various forms of computer readable media may participate in providing instructions and data for execution by processor 604 and/or 605 .
  • the instructions may initially be retrieved from a magnetic disk of a remote computer and transmitted over a network or modem to computing system 60 .
  • the instructions may optionally be stored in a different storage or a different part of storage prior to or during execution.
  • Computing system 60 may include a communication interface 618 that provides two-way data communication over a network 620 that can include a local network 622 , a wide area network or some combination of the two.
  • Network link 620 typically provides data communication through one or more networks to other data devices.
  • network link 620 may provide a connection through local network 622 to a host computer 624 or to a wide area network such as the Internet 628 .
  • Local network 622 and Internet 628 may both use electrical, electromagnetic or optical signals that carry digital data streams.
  • Computing system 60 can use one or more networks to send messages and data, including program code and other information.
  • a server 630 might transmit a requested code for an application program through Internet 628 and may receive in response a downloaded application that provides or augments functional modules such as those described in the examples above.
  • the received code may be executed by processor 604 and/or 605 .
  • Certain embodiments of the invention provide video processing systems and methods. Some of these embodiments comprise a processor configured to receive video frames representative of a sequence of images captured by a video sensor. Some of these embodiments comprise a video encoder operative to encode the video frames according to a desired video encoding standard. Some of these embodiments comprise a video analytics processor that receives video analytics metadata generated by the video encoder from the sequence of images. In some of these embodiments, the video analytics processor is configurable to produce video analytics messages for transmission to a client device. In some of these embodiments, the video analytics messages are used for client side video analytics processing.
  • the video analytics metadata comprise pixel domain video analytics information.
  • the pixel domain video analytics information includes information received directly from an analog-to-digital front end.
  • the pixel domain video analytics information includes information received directly from an encoding engine as the engine is performing compression.
  • the video analytics messages include information related to one or more of a background model, a motion alarm, a virtual line detection and electronic image stabilization parameters.
  • the video analytics messages comprise video analytics messages related to a group of images, including messages related to one or more of a background frame, a foreground object segmentation descriptor, a camera parameter, a virtual line and a predefined motion alarm region.
  • the video analytics messages comprise video analytics messages related to an individual video frame, including messages related to one or more of a global motion vector, a motion alarm region alarm status, a virtual line count, an object tracking parameter and a camera motion parameter.
  • the video analytics messages are transmitted to the client device in a layered-structure network bitstream comprising an encoder generated video bitstream and a portion of the video analytics metadata.
  • the video analytics messages and the portion of the video analytics metadata are transmitted in a supplemental enhancement information network abstraction layer package unit of an H.264 bitstream.
  • Certain embodiments of the invention provide video decoding systems and methods. Some of these embodiments comprise a decoder configured to extract a video frame and one or more video analytics messages from a network bitstream. In some of these embodiments, the video analytics messages provide information related to characteristics of the video frame. Some of these embodiments comprise one or more video processors configured to produce video analytics metadata related to the video frame based on content of the video frame and the video analytics messages.
  • the video analytics metadata comprise pixel domain video analytics information received directly from an analog-to-digital front end. In some of these embodiments, the video analytics metadata comprise pixel domain video analytics information received directly from an encoding engine as the engine was performing compression. In some of these embodiments, the video analytics messages comprise video analytics messages related to a plurality of video frames, including messages related to one or more of a background frame, a foreground object segmentation descriptor, a camera parameter, a virtual line and a predefined motion alarm region. In some of these embodiments, the video analytics messages comprise video analytics messages related to an individual video frame, including messages related to one or more of a global motion vector, a motion alarm region alarm status, a virtual line count, an object tracking parameter and a camera motion parameter.
  • the video analytics messages are received in a supplemental enhancement information network abstraction layer package unit of an H.264 bitstream. In some of these embodiments, the video analytics messages are received in a supplemental enhancement information network abstraction layer package unit of an H.264 bitstream and together with a portion of the pixel domain video analytics information.
  • the one or more video processors are configured to produce a global motion vector. In some of these embodiments, the one or more video processors provide electronic image stabilization based on the video analytics messages. In some of these embodiments, the one or more video processors extract a background image for a plurality of video frames based on the video analytics messages. In some of these embodiments, the one or more video processors use the video analytics messages to monitor objects crossing a virtual line in a plurality of video frames.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Video processing, encoding and decoding systems are described. A processor receives video frames representative of a sequence of images captured by a video sensor, and the video frames are encoded according to a desired video encoding standard. A video analytics processor receives video analytics metadata generated by the video encoder from the sequence of images and produces video analytics messages for transmission to a client device, which performs client side video analytics processing. The video analytics metadata may comprise pixel domain video analytics information obtained directly from an analog-to-digital front end or directly from an encoding engine as the engine is performing compression.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • The present application claims priority from PCT/CN2010/076555 (title: “Video Analytics for Security Systems and Methods”) which was filed in the Chinese Receiving Office on Sep. 2, 2010, from PCT/CN2010/076569 (title: “Video Classification Systems and Methods”) which was filed in the Chinese Receiving Office on Sep. 2, 2010, from PCT/CN2010/076564 (title: “Rho-Domain Metrics”) which was filed in the Chinese Receiving Office on Sep. 2, 2010, and from PCT/CN2010/076567 (title: “Systems And Methods for Video Content Analysis”) which was filed in the Chinese Receiving Office on Sep. 2, 2010, each of these applications being hereby incorporated herein by reference. The present Application is also related to concurrently filed U.S. Patent non-provisional applications entitled “Video Classification Systems and Methods” (attorney docket no. 043497-0393274), “Rho-Domain Metrics” (attorney docket no. 043497-0393276) and “Systems And Methods for Video Content Analysis” (attorney docket no. 043497-0393278), which are expressly incorporated by reference herein.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block schematic illustrating a simplified example of a video security surveillance analytics architecture according to certain aspects of the invention.
  • FIG. 2 is a block schematic depicting an example of a video analytics engine according to certain aspects of the invention.
  • FIG. 3 depicts an example of H.264 standards-defined bitstream syntax.
  • FIG. 4A is an image that includes both foreground and background objects.
  • FIG. 4B is the image of 4A from which foreground objects have been extracted using techniques according to certain aspects of the invention.
  • FIGS. 5A and 5B are images illustrating virtual line counting according to certain aspects of the invention.
  • FIG. 6 is a simplified block schematic illustrating a processing system employed in certain embodiments of the invention.
  • DETAILED DESCRIPTION
  • Embodiments of the present invention will now be described in detail with reference to the drawings, which are provided as illustrative examples so as to enable those skilled in the art to practice the invention. Notably, the figures and examples below are not meant to limit the scope of the present invention to a single embodiment, but other embodiments are possible by way of interchange of some or all of the described or illustrated elements. Wherever convenient, the same reference numbers will be used throughout the drawings to refer to same or like parts. Where certain elements of these embodiments can be partially or fully implemented using known components, only those portions of such known components that are necessary for an understanding of the disclosed embodiments will be described, and detailed descriptions of other portions of such known components will be omitted so as not to obscure the disclosed embodiments. In the present specification, an embodiment showing a singular component should not be considered limiting; rather, the invention is intended to encompass other embodiments including a plurality of the same component, and vice-versa, unless explicitly stated otherwise herein. Moreover, applicants do not intend for any term in the specification or claims to be ascribed an uncommon or special meaning unless explicitly set forth as such. Further, certain embodiments of the present invention encompass present and future known equivalents to the components referred to herein by way of illustration.
  • Certain embodiments of the invention comprise systems having an architecture that is operable to perform video analytics for security applications. Video analytics may also be referred to as video content analysis. In a video security surveillance analytics architecture where the server encodes captured video images, certain embodiments provide greatly improved video analytics efficiency for client side processing applications and systems. By improving and/or optimizing client side video analytics efficiency, client-side performance can be greatly improved, consequently enabling processing of an increased number of video channels. Moreover, video analytics metadata (“VAMD”) created on the server side according to certain aspects of the invention can enable high accuracy video analytics on the server side and for the video security surveillance system as a whole. According to certain aspects of the invention, the advantages of a layered video analytics system architecture can include facilitating and/or enabling a balanced partition of video analytics at multiple layers. These layers may include server and client layers, pixel domain layers and motion domain layers. For example, global analytics, defined to include information related to background frames, segmented object descriptors and camera parameters, can enable cost-efficient yet complex video analytics on the receiver side for many advanced video intelligence applications and can enable an otherwise difficult or impossible level of video analytics efficiency in terms of computational complexity and analytic accuracy.
  • A simplified example of a video security surveillance analytics architecture is shown in FIG. 1. In the example, the system is partitioned into server side 10 and client side 12 elements. The terms server and client are used here to include hardware and software systems, apparatus and other components that perform types of functions that can be attributed to server side 10 and client side 12 operations. It will be appreciated that certain elements may be provided on either or both server side 10 and client side 12, and that at least some client and server functionality may be committed to hardware components such as application specific integrated circuits, sequencers, custom logic devices as needed, typically to improve one or more of efficiency, reliability, processing speed and security. Server side 10 components may be embodied in a security surveillance or other camera.
  • On server side 10, a video sensor 100 can be configured to capture information representative of a sequence of images, including video data, and to pass the information to a video encoder module 102 adapted for use in embodiments of the invention. One example of such a video encoder module 102 is the TW5864 from Intersil Techwell Inc., which can be adapted and/or configured to generate VAMD 103 related to video bitstream 105. In certain embodiments, video encoder 102 can be configured to generate one or more compressed video bitstreams 105 that comply with industry standards and/or that are generated according to a proprietary specification. The video encoder 102 is typically configurable to produce VAMD 103 that can comprise pixel domain video analytics information, such as information obtained directly from an analog-to-digital (“A/D”) front end (e.g. at the video sensor 100) and/or from an encoding engine 102 as the encoding engine 102 is performing video compression to obtain video bitstream 105. VAMD 103 may comprise block-based video analytics information including, for example, macroblock (“MB”) level information such as motion vectors, MB-type and/or the number of non-zero coefficients, etc. An MB typically comprises a 16×16 pixel block.
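  • For concreteness, the MB-level metadata described above might be organized as follows. This is a non-normative Python sketch; the patent does not prescribe these field names or any particular layout.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class MacroblockVAMD:
        """Metadata for one 16x16 macroblock, emitted by the encoder as
        a byproduct of compression (field names are illustrative)."""
        mb_type: str                    # e.g. "I", "P-skip", "P-inter"
        motion_vector: Tuple[int, int]  # (dx, dy) for the MB
        non_zero_coeffs: int            # count of non-zero transform coeffs
        quant_param: int                # quantization parameter for the MB
        sad: int                        # motion-estimation SAD metric

    @dataclass
    class FrameVAMD:
        frame_number: int
        motion_flag: bool               # A/D front-end motion hint
        macroblocks: List[MacroblockVAMD] = field(default_factory=list)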
  • In certain embodiments, VAMD 123 can comprise any video encoding intermediate data such as MB-type, motion vectors, non-zero coefficient counts (as per the H.264 standard), quantization parameters, DC or AC information, the motion estimation metric sum of absolute differences (“SAD”), etc. VAMD 123 can also comprise useful information such as motionFlag information generated in an analog-to-digital front end module, such a module being found, for example, in the TW5864 device referenced above. VAMD is typically processed in the video analytics engine (“VAE”) 104 to generate more advanced video intelligence information that may include, for example, motion indexing, background extraction, object segmentation, motion detection, virtual line detection, object counting, motion tracking and speed estimation.
  • Video analytics engine 104 can be configured to receive the VAMD 103 from the encoder 102 and to process the VAMD 103 using one or more video analytics algorithms based on application requirements. Video analytics engine 104 can generate useful video analytics results, such as a background model, motion alarms, virtual line detections, electronic image stabilization parameters, etc. A more detailed example of a video analytics engine 104 is shown in FIG. 2. Video analytics results can comprise video analytics messages (“VAM”) that may be categorized into a global VAM class and a local VAM class. Global VAM includes video analytics messages applicable to a group of pictures, such as background frames, foreground object segmentation descriptors, camera parameters, predefined motion alarm region coordinates and indices, virtual lines, etc. Local VAM can be defined as localized VAM applied to a specific individual video frame, and can include global motion vectors of the current frame, motion alarm region alarm status of the current frame, virtual line counting results, object tracking parameters, camera moving parameters, and so on.
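  • The global/local split might be modeled as two message classes, one keyed to a group of pictures and one keyed to an individual frame. Again this is a hypothetical sketch; the field names are ours, not the patent's.

    from dataclasses import dataclass, field
    from typing import List, Optional, Tuple

    Point = Tuple[int, int]

    @dataclass
    class GlobalVAM:
        """Messages applicable to a whole group of pictures."""
        background_frame: Optional[bytes] = None
        object_descriptors: List[dict] = field(default_factory=list)
        camera_params: dict = field(default_factory=dict)
        alarm_regions: List[List[Point]] = field(default_factory=list)
        virtual_lines: List[Tuple[Point, Point]] = field(default_factory=list)

    @dataclass
    class LocalVAM:
        """Messages tied to one specific video frame."""
        frame_number: int
        global_motion: Point = (0, 0)
        alarm_status: List[bool] = field(default_factory=list)
        line_counts: List[int] = field(default_factory=list)
        tracking_params: List[dict] = field(default_factory=list)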
  • In certain embodiments, an encoder generated video bitstream 105, VAMD 103 and VAM generated by video analytics engine 104 are packed together as a layered structure into a network bitstream 106 following a predefined packaging format. The network bitstream 106 can be sent through a network to the client side of the system. The network bitstream 106 may be stored locally, on a server and/or on a remote storage device for future playback and/or dissemination.
  • FIG. 3 depicts an example of an H.264 standards-defined bitstream syntax, in which VAM and VAMD 103 can be packed into a supplemental enhancement information (“SEI”) network abstraction layer package unit. Following SPS, PPS and IDR network abstraction layer units, a global video analytics (“GVA”) SEI network abstraction layer unit can be inserted into network bitstream 106. The GVA network abstraction layer unit may include the global video analytics messages for a corresponding group of pictures, a pointer to the first local video analytics SEI network abstraction layer location within the group of pictures, and a pointer to the next GVA network abstraction layer unit, and may include an indication of the span of frames to which the GVA is applicable. For each individual frame that is associated with VAM or VAMD elements, a local video analytics (“LVA”) SEI network abstraction layer unit is inserted right after the frame's payload network abstraction layer unit. The LVA can comprise local VAM, VAMD information and a pointer to the location of the next frame that has an LVA SEI network abstraction layer unit. The amount of VAMD packed into an LVA network abstraction layer unit depends on the network bandwidth condition and the complexity of the user's video analytics requirements. For example, if sufficient network bandwidth is available, additional VAMD can be packed. The VAMD can be used by client side video analytics systems and may simplify and/or optimize performance of certain functions. When network bandwidth is limited, less VAMD may be sent to meet the network bandwidth constraints. While FIG. 3 illustrates a bitstream format for H.264 standards, the principles involved may be applied in other video standards and formats.
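  • The GVA/LVA units are private structures, but one standard-conformant carrier for such opaque payloads is the H.264 user_data_unregistered SEI message (payloadType 5) inside an SEI NAL unit (type 6). The sketch below shows the byte-level packing, including emulation prevention; treating the GVA/LVA payload as user data, and the 16-byte UUID identifying the private format, are assumptions of the example rather than statements of the patent's format.

    def sei_user_data_nal(payload: bytes, uuid: bytes) -> bytes:
        """Wrap an opaque VAM/VAMD payload in an H.264
        user_data_unregistered SEI NAL unit (Annex B framing)."""
        assert len(uuid) == 16          # identifies the private format
        body = uuid + payload
        sei = bytearray([5])            # payloadType 5 = user_data_unregistered
        size = len(body)
        while size >= 255:              # payloadSize coded in 0xFF chunks
            sei.append(255)
            size -= 255
        sei.append(size)
        sei += body
        sei.append(0x80)                # rbsp_stop_one_bit + alignment
        rbsp = bytearray()
        zeros = 0
        for b in sei:                   # emulation prevention: insert 0x03
            if zeros == 2 and b <= 3:   # after two zero bytes when needed
                rbsp.append(3)
                zeros = 0
            rbsp.append(b)
            zeros = zeros + 1 if b == 0 else 0
        # 4-byte start code + NAL header (nal_ref_idc=0, nal_unit_type=6)
        return bytes([0, 0, 0, 1, 0x06]) + bytes(rbsp)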
  • In certain embodiments of the invention, a client side system 12 receives and decodes the network bitstream 106 sent from a server side system 10. The advantages of a layered video analytics system architecture, which can include facilitating and/or enabling a balanced partition of video analytics at multiple layers, become apparent at the client side 12. Layers can include server and client layers, pixel domain layers and motion domain layers. Global video analytics messages such as background frames, segmented object descriptors and camera parameters can enable cost-efficient yet complex video analytics on the receiver side for many advanced video intelligence applications. The VAM enables an otherwise difficult or impossible level of video analytics efficiency in terms of computational complexity and analytic accuracy.
  • In certain embodiments of the invention, the client side system 12 separates the compressed video bitstream 125, the VAMD 123 and the VAM from the network bitstream 106. The video bitstream can be decoded using decoder 124 and provided with VAMD 123 and associated VAM to client application 122. The client application typically employs video analytics techniques appropriate for the application at hand. For example, analytics may include background extraction, motion tracking, object detection, and other functions. Known analytics can be selected and adapted to use the VAMD 103 and VAM that were derived from the encoder 102 and video analytics engine 104 at the server side 10 to obtain richer and more accurate results 120. Adaptations of the analytics may be based on speed requirements, efficiency, and the enhanced information available through the VAM and VAMD 123.
  • Certain advantages accrue from the video analytics system architecture and the layered video analytics information embedded in network bitstreams according to certain aspects of the invention. For example, greatly improved video analytics efficiency can be obtained on the client side 12. In one example, video analytics engine 104 receives and processes encoder feedback VAMD to produce the video analytics information that may be embedded in the network bitstream 106. The use of embedded layered VAM provides users with direct access to a video analytics message of interest, and permits use of VAM with limited or no additional processing. In one example, additional processing would be unnecessary to access the motion frame, the number of objects passing a virtual line, object moving speed and classification, etc. In certain embodiments, information related to object tracking may be generated using additional, albeit limited, processing related to the motion of the identified object. Information related to electronic image stabilization may be obtained by additional processing based on the global motion information provided in VAM. Accordingly, in certain embodiments, client side 12 video analytics efficiency can be optimized and performance can be greatly improved, consequently enabling processing of an increased number of channels.
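  • On the client, this direct access amounts to demultiplexing: SEI NAL units carrying VAM/VAMD can be routed to the analytics application without invoking the video decoder at all. A simplified Annex-B sketch follows (the handler names are hypothetical, and a full parser would also strip emulation-prevention 0x03 bytes from the SEI payload).

    import re

    def iter_nal_units(bitstream: bytes):
        """Split an Annex-B byte stream on 00 00 01 start codes (this
        also covers 4-byte start codes) and yield raw NAL units."""
        starts = [m.end() for m in re.finditer(b"\x00\x00\x01", bitstream)]
        for begin, end in zip(starts, starts[1:] + [len(bitstream)]):
            unit = bitstream[begin:end].rstrip(b"\x00")
            if unit:
                yield unit

    def route(bitstream: bytes, decode_frame, handle_vam):
        """SEI NAL units (type 6) carry the embedded VAM/VAMD and go to
        the analytics handler; everything else goes to the decoder."""
        for nal in iter_nal_units(bitstream):
            if nal[0] & 0x1F == 6:   # nal_unit_type is the low 5 bits
                handle_vam(nal[1:])  # strip the NAL header byte
            else:
                decode_frame(nal)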
  • Certain embodiments enable operation of high-accuracy video analytics applications on the client side 12. According to certain aspects of the invention, client side 12 video analytics may be performed using information generated on the server side 10. Without VAM embedded in the network bitstream 106, client side video analytics processing would have to rely on video reconstructed from the decoded video bitstream 125. The decoded bitstream 125 typically lacks some of the detailed information of the original video content (e.g. content provided by video sensor 100), which may be discarded or lost in the video compression process. Consequently, video analytics performed solely on the client side 12 cannot generally preserve the accuracy that could be obtained if the processing were performed at the server side 10, or at the client side 12 using VAMD 123 derived from the original video content on the server side 10. Loss of accuracy due to analytics processing that is limited to the client side 12 can manifest as errors in, for example, the geometric center of an object, object segmentation, and so on. Therefore, embedded VAM can enable improved system-level accuracy.
  • Certain embodiments of the invention enable fast video indexing, searching and other applications. In particular, embedded, layered VAM in the network bitstream enables fast video indexing, video searching, video classification and other applications on the client side. For instance, motion detection information, object indexing, foreground and background partitioning, human detection and human behavior classification information in the VAM can simplify client-side and/or downstream tasks such as video indexing, classification and fast searching at the client. Without VAM, a client generally needs vast computational power to process the video data and to rebuild the required video analytics information for such applications. It will be appreciated that not all VAM can be accurately reconstructed at the client side 12 using video bitstream 125, and certain applications, such as human behavioral analysis, may not be possible at all if VAM created at server side 10 is not available.
  • Certain embodiments of the invention permit the use of more complex server/client algorithms, partitioning of computational load and balancing of network bandwidth. In certain embodiments, the video analytics system architecture allows video analytics to be partitioned between the server and client sides based on network bandwidth availability, server and client computational capability and the complexity of the video analytics. In one example, in response to low network bandwidth conditions, the system can embed more condensed VAM in the network bitstream 106 after processing by the VAE 104. The VAM can include a motion frame index, an object index, and so on. After extracting the VAM from the bitstream, the client side 12 system can utilize the VAM to assist further video analytics processing. Conversely, more VAMD 103 can be directly embedded into the network bitstream 106 and processing by the VAE 104 can be limited or halted when computational power is limited on the server side 10. Computational power on the server side 10 may be limited when, for example, the server side 10 system is embodied in a camera, a digital video recorder (“DVR”) or network video recorder (“NVR”). Certain embodiments may use client side 12 systems to process the embedded VAMD 123 in order to accomplish the desired video analytics functions. In some embodiments, more video analytics functions can be partitioned and/or assigned to the server side 10 when, for example, the client side is required to monitor and/or process multiple channels simultaneously. It will be appreciated, therefore, that a balanced video analytics system can be achieved for a variety of system configurations.
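  • A minimal sketch of such a partitioning policy appears below. The thresholds, the three-level VAM detail scale and the notion of a forwarded "VAMD fraction" are invented for illustration; the embodiments above deliberately leave the policy open.

```python
def plan_embedding(bandwidth_kbps: float, server_cpu_headroom: float):
    """Return (vam_detail, vamd_fraction): how much analytics the server
    side computes into VAM versus how much raw VAMD it simply forwards."""
    if server_cpu_headroom < 0.2:
        # Camera/DVR-class server: skip heavy VAE processing and forward
        # as much VAMD as the link allows for client-side analytics.
        return "minimal", min(1.0, bandwidth_kbps / 2000.0)
    if bandwidth_kbps < 500:
        # Constrained link: send only condensed VAM (indices, counts).
        return "condensed", 0.0
    return "full", 0.5   # ample resources: rich VAM plus some VAMD

for bw, cpu in [(256, 0.8), (4000, 0.8), (4000, 0.1)]:
    print(bw, cpu, "->", plan_embedding(bw, cpu))
```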
  • EXAMPLES
  • With reference to FIG. 2, certain embodiments provide electronic image stabilization (“EIS”) capabilities 220. EIS 220 finds wide application in video security systems. A currently captured video frame is processed with reference to one or more previously reconstructed reference frames to generate a global motion vector 202 for the current frame; the global motion vector is then used to compensate the reconstructed image on the client side, reducing or eliminating image instability or shaking.
  • In a conventional pixel domain EIS algorithm, the current and previous reference frames are fetched, a block-based or grey-level-histogram-based matching algorithm is applied to obtain local motion vectors, and the local motion vectors are processed to generate a pixel domain global motion vector. The drawbacks of the conventional approach include the high computational cost of the matching algorithm used to generate local motion vectors and the very high memory bandwidth required to fetch both the current reconstructed frame and the previous reference frames.
  • In certain embodiments of the invention, the video encoding engine 102 can generate VAMD 103, including block-based motion vectors, MB-type, etc., as a byproduct of video compression processing. VAMD 103 is fed into VAE 104, which can be configured to process the VAMD 103 information to generate global motion vector 202 as a VAM. The VAM is then embedded into the network bitstream 106 for transmission to the client side 12, typically over a network. A client side 12 processor can parse the network bitstream 106, extract the global motion information for each frame and apply global motion compensation to accomplish EIS 220.
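  • The following sketch illustrates the approach, assuming the component-wise median as the robust estimator of the global motion vector (one common choice; the embodiments above do not mandate a particular estimator) and a single-frame shift for compensation, whereas a full EIS would smooth the motion trajectory over time.

```python
from statistics import median

def global_motion(mb_motion_vectors):
    """Estimate a global motion vector as the component-wise median of the
    per-macroblock (dx, dy) vectors; the median resists foreground outliers."""
    xs = [v[0] for v in mb_motion_vectors]
    ys = [v[1] for v in mb_motion_vectors]
    return (round(median(xs)), round(median(ys)))

def compensate(frame, gmv):
    """Shift a frame (list of pixel rows) against the global motion vector;
    uncovered border pixels are left at zero in this sketch."""
    dx, dy = gmv
    h, w = len(frame), len(frame[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = y + dy, x + dx   # sample the pixel displaced into (x, y)
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = frame[sy][sx]
    return out

mvs = [(2, 1), (2, 1), (3, 1), (2, 0), (40, -7)]  # one foreground outlier MB
print(global_motion(mvs))  # -> (2, 1)
```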
  • Video Background Modeling
  • Certain embodiments of the invention comprise a video background modeling feature that can construct or reconstruct a background image 222, which can provide highly desired information for use in a wide variety of video surveillance applications, including motion detection, object segmentation, abandoned object detection, etc. Conventional pixel domain background extraction algorithms operate on a statistical model of co-located pixel values across multiple frames. For example, a Gaussian model is used to model the co-located pixels of N continuous frames and to select the mathematically most likely pixel value as the background pixel. If a video frame's height is denoted H, its width W, and N continuous frames are required to satisfy the statistical model, then a total of W*H*N pixels must be processed to generate a background frame.
  • In certain embodiments, MB-based VAMD 103 is used to generate the background information rather than pixel-based information. According to certain aspects of the invention, the volume of information processed from VAMD 103 is typically only 1/256 of the volume of pixel-based information, since each 16×16 macroblock summarizes 256 pixels. In one example, MB-based motion vector and non-zero-count information can be used to distinguish the background from foreground moving objects. FIG. 4A shows an original image with background and foreground objects, and FIG. 4B shows a typical background extracted by processing VAMD.
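  • A minimal sketch of such MB-domain background detection follows. The classification rule (near-zero motion vector magnitude and low non-zero-coefficient count across most of N frames) and all thresholds are illustrative assumptions rather than the patented method's exact criteria.

```python
def background_mask(frames, mv_thresh=1, nzc_thresh=4, ratio=0.8):
    """frames: per-frame MB grids of (|motion vector|, non-zero count).
    An MB is background if it stays 'still' in at least ratio*N frames."""
    n = len(frames)
    rows, cols = len(frames[0]), len(frames[0][0])
    mask = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            still = sum(1 for f in frames
                        if f[r][c][0] <= mv_thresh and f[r][c][1] <= nzc_thresh)
            mask[r][c] = still >= ratio * n
    return mask

# Toy 1x2 MB grid over three frames: left MB static, right MB moving.
frames = [[[(0, 1), (6, 20)]], [[(1, 2), (5, 18)]], [[(0, 0), (7, 25)]]]
print(background_mask(frames))  # -> [[True, False]]
```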
  • Certain embodiments of the invention provide systems and methods for motion detection 200 and virtual line counting 201. A motion detector 200 can be used to automatically detect motion of objects, including humans, animals and/or vehicles, entering predefined regions of interest. Virtual line detection and counting module 201 can detect a moving object that crosses an invisible line defined by user configuration and can count the number of objects crossing the line, as illustrated in FIGS. 5A and 5B. The virtual line can be based on actual lines in the image and can be a delineation of an area defined by a polygon, circle, ellipse or irregular area. In some embodiments, the number of objects crossing one or more lines can be recorded as an absolute number and/or as a statistical frequency, and an alarm may be generated to indicate any line crossing, a threshold frequency or absolute number of crossings, and/or an absence of crossings within a predetermined time. In certain embodiments, motion detection 200 and virtual line detection and counting 201 can be achieved by processing one or more MB-based VAMDs. Information such as motion alarms and virtual line object counts can be packed as VAM and transmitted to the client side 12. Motion indexing, object counting and similar customized applications can then easily be achieved by extracting the VAM with simple processing. It will be appreciated that configuration information may be provided from the client side to the server side as a form of feedback, using the packed information as a basis for resetting lines, areas of interest and so on.
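  • Counting crossings of a straight virtual line can be sketched with a simple side-of-line test, as below. The line, the trajectory of tracked object centroids and the omission of a segment-extent check are illustrative simplifications.

```python
def side(line, p):
    """Signed side of point p relative to the directed line a->b
    (the sign of the 2-D cross product)."""
    (ax, ay), (bx, by) = line
    return (bx - ax) * (p[1] - ay) - (by - ay) * (p[0] - ax)

def count_crossings(line, trajectory):
    """Count sign changes between consecutive centroid positions; a production
    version would also verify the crossing lies within the segment extent."""
    return sum(1 for p0, p1 in zip(trajectory, trajectory[1:])
               if side(line, p0) * side(line, p1) < 0)

line = ((0, 5), (10, 5))                          # horizontal virtual line
track = [(2, 1), (3, 3), (4, 6), (5, 7), (6, 4)]  # centroid per frame
print(count_crossings(line, track))               # -> 2 (crosses, then returns)
```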
  • Certain embodiments of the invention provide improved object tracking within a sequence of video frames using VAMD 103. Certain embodiments can facilitate client-side measurement of the speed of moving objects and can assist in identifying directions of movement. Furthermore, VAMD 103 can provide useful information related to video mosaics 221, including motion indexing and object counting.
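  • As one hypothetical example of such client-side processing, the sketch below converts per-frame object motion vectors (quarter-pel units, as motion vectors are carried in H.264) into a speed estimate. The frame rate and the pixels-per-metre calibration are assumed values, not parameters defined by the embodiments above.

```python
def object_speed(object_mvs, fps=25.0, pixels_per_metre=40.0):
    """object_mvs: per-frame mean (dx, dy) of an object's MB motion vectors
    in quarter-pel units; returns an average speed in metres per second."""
    speeds = []
    for dx, dy in object_mvs:
        pels = ((dx / 4.0) ** 2 + (dy / 4.0) ** 2) ** 0.5  # quarter-pel -> pel
        speeds.append(pels * fps / pixels_per_metre)
    return sum(speeds) / len(speeds)

print(round(object_speed([(8, 4), (9, 3), (7, 5)]), 2))  # ~1.41 m/s
```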
  • System Description
  • Turning now to FIG. 6, certain embodiments of the invention employ a processing system that includes at least one computing system 60 deployed to perform certain of the steps described above. Computing system 60 may be a commercially available system that executes commercially available operating systems such as Microsoft Windows®, UNIX or a variant thereof, Linux, a real time operating system and/or a proprietary operating system. The architecture of the computing system may be adapted, configured and/or designed for integration in the processing system, for embedding in one or more of an image capture system, a communications device and/or a graphics processing system. In one example, computing system 60 comprises a bus 602 and/or other mechanisms for communicating between processors, whether those processors are integral to the computing system 60 (e.g. 604, 605) or located in different, perhaps physically separated computing systems 60. Typically, processor 604 and/or 605 comprises a CISC or RISC computing processor and/or one or more digital signal processors. In some embodiments, processor 604 and/or 605 may be embodied in a custom device and/or may perform as a configurable sequencer. Device drivers 603 may provide output signals used to control internal and external components and to communicate between processors 604 and 605.
  • Computing system 60 also typically comprises memory 606 that may include one or more of random access memory (“RAM”), static memory, cache, flash memory and any other suitable type of storage device that can be coupled to bus 602. Memory 606 can be used for storing instructions and data that can cause one or more of processors 604 and 605 to perform a desired process. Main memory 606 may be used for storing transient and/or temporary data such as variables and intermediate information generated and/or used during execution of the instructions by processor 604 or 605. Computing system 60 also typically comprises non-volatile storage such as read only memory (“ROM”) 608, flash memory, memory cards or the like; non-volatile storage may be connected to the bus 602, but may equally be connected using a high-speed universal serial bus (USB), Firewire or other such bus that is coupled to bus 602. Non-volatile storage can be used for storing configuration and other information, including instructions executed by processors 604 and/or 605. Non-volatile storage may also include mass storage device 610, such as a magnetic disk, optical disk or flash disk, that may be directly or indirectly coupled to bus 602 and used for storing instructions to be executed by processors 604 and/or 605, as well as other information.
  • In some embodiments, computing system 60 may be communicatively coupled to a display system 612, such as an LCD flat panel display, including touch panel displays, an electroluminescent display, a plasma display, a cathode ray tube or other display device that can be configured and adapted to receive and display information to a user of computing system 60. Typically, device drivers 603 can include a display driver, graphics adapter and/or other modules that maintain a digital representation of a display and convert the digital representation to a signal for driving a display system 612. Display system 612 may also include logic and software to generate a display from a signal provided by computing system 60. In that regard, display 612 may be provided as a remote terminal or in a session on a different computing system 60. An input device 614 is generally provided locally or through a remote system and typically provides for alphanumeric input as well as cursor control 616 input, such as a mouse, a trackball, etc. It will be appreciated that input and output can be provided to a wireless device such as a PDA, a tablet computer or other system suitably equipped to display the images and provide user input.
  • In certain embodiments, computing system 60 may be embedded in a system that captures and/or processes images, including video images. In one example, computing system 60 may include a video processor or accelerator 617, which may have its own processor, non-transitory storage and input/output interfaces. In another example, video processor or accelerator 617 may be implemented as a combination of hardware and software operated by the one or more processors 604, 605. In another example, computing system 60 functions as a video encoder, although other functions may be performed by computing system 60. In particular, a video encoder that comprises computing system 60 may be embedded in another device such as a camera, a communications device, a mixing panel, a monitor, a computer peripheral, and so on.
  • According to one embodiment of the invention, portions of the described invention may be performed by computing system 60. Processor 604 executes one or more sequences of instructions. For example, such instructions may be stored in main memory 606, having been received from a computer-readable medium such as storage device 610. Execution of the sequences of instructions contained in main memory 606 causes processor 604 to perform process steps according to certain aspects of the invention. In certain embodiments, functionality may be provided by embedded computing systems that perform specific functions wherein the embedded systems employ a customized combination of hardware and software to perform a set of predefined tasks. Thus, embodiments of the invention are not limited to any specific combination of hardware circuitry and software.
  • The term “computer-readable medium” is used to define any medium that can store and provide instructions and other data to processor 604 and/or 605, particularly where the instructions are to be executed by processor 604 and/or 605 and/or other peripheral of the processing system. Such medium can include non-volatile storage, volatile storage and transmission media. Non-volatile storage may be embodied on media such as optical or magnetic disks, including DVD, CD-ROM and BluRay. Storage may be provided locally and in physical proximity to processors 604 and 605 or remotely, typically by use of a network connection. Non-volatile storage may be removable from computing system 60, as in the example of BluRay, DVD or CD storage or memory cards or sticks that can be easily connected or disconnected from a computer using a standard interface, including USB, etc. Thus, computer-readable media can include floppy disks, flexible disks, hard disks, magnetic tape, any other magnetic medium, CD-ROMs, DVDs, BluRay, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH/EEPROM, any other memory chip or cartridge, or any other medium from which a computer can read.
  • Transmission media can be used to connect elements of the processing system and/or components of computing system 60. Such media can include twisted pair wiring, coaxial cables, copper wire and fiber optics. Transmission media can also include wireless media such as radio, acoustic and light waves. In particular radio frequency (RF), fiber optic and infrared (IR) data communications may be used.
  • Various forms of computer readable media may participate in providing instructions and data for execution by processor 604 and/or 605. For example, the instructions may initially be retrieved from a magnetic disk of a remote computer and transmitted over a network or modem to computing system 60. The instructions may optionally be stored in a different storage or a different part of storage prior to or during execution.
  • Computing system 60 may include a communication interface 618 that provides two-way data communication over a network link 620 that can connect to a local network 622, a wide area network or some combination of the two. For example, an integrated services digital network (ISDN) may be used in combination with a local area network (LAN). In another example, a LAN may include a wireless link. Network link 620 typically provides data communication through one or more networks to other data devices. For example, network link 620 may provide a connection through local network 622 to a host computer 624 or to a wide area network such as the Internet 628. Local network 622 and Internet 628 may both use electrical, electromagnetic or optical signals that carry digital data streams.
  • Computing system 60 can use one or more networks to send messages and data, including program code and other information. In the Internet example, a server 630 might transmit a requested code for an application program through Internet 628 and may receive in response a downloaded application that provides or augments functional modules such as those described in the examples above. The received code may be executed by processor 604 and/or 605.
  • Additional Descriptions of Certain Aspects of the Invention
  • The foregoing descriptions of the invention are intended to be illustrative and not limiting. For example, those skilled in the art will appreciate that the invention can be practiced with various combinations of the functionalities and capabilities described above, and can include fewer or additional components than described above. Certain additional aspects and features of the invention are further set forth below, and can be obtained using the functionalities and components described in more detail above, as will be appreciated by those skilled in the art after being taught by the present disclosure.
  • Certain embodiments of the invention provide video processing systems and methods. Some of these embodiments comprise a processor configured to receive video frames representative of a sequence of images captured by a video sensor. Some of these embodiments comprise a video encoder operative to encode the video frames according to a desired video encoding standard. Some of these embodiments comprise a video analytics processor that receives video analytics metadata generated by the video encoder from the sequence of images. In some of these embodiments, the video analytics processor is configurable to produce video analytics messages for transmission to a client device. In some of these embodiments, the video analytics messages are used for client side video analytics processing.
  • In some of these embodiments, the video analytics metadata comprise pixel domain video analytics information. In some of these embodiments, the pixel domain video analytics information includes information received directly from an analog-to-digital front end. In some of these embodiments, the pixel domain video analytics information includes information received directly from an encoding engine as the engine is performing compression. In some of these embodiments, the video analytics messages include information related to one or more of a background model, a motion alarm, a virtual line detection and electronic image stabilization parameters. In some of these embodiments, the video analytics messages comprise video analytics messages related to a group of images, including messages related to one or more of a background frame, a foreground object segmentation descriptor, a camera parameter, a virtual line and a predefined motion alarm region.
  • In some of these embodiments, the video analytics messages comprise video analytics messages related to an individual video frame, including messages related to one or more of a global motion vector, a motion alarm region alarm status, a virtual line count, an object tracking parameter and a camera motion parameter. In some of these embodiments, the video analytics messages are transmitted to the client device in a layered structure network bitstream comprising an encoder-generated video bitstream and a portion of the video analytics metadata. In some of these embodiments, the video analytics messages and the portion of the video analytics metadata are transmitted in a supplemental enhancement information network abstraction layer package unit of an H.264 bitstream.
  • Certain embodiments of the invention provide video decoding systems and methods. Some of these embodiments comprise a decoder configured to extract a video frame and one or more video analytics messages from a network bitstream. In some of these embodiments, the video analytics messages provide information related to characteristics of the video frame. Some of these embodiments comprise one or more video processors configured to produce video analytics metadata related to the video frame based on content of the video frame and the video analytics messages.
  • In some of these embodiments, the video analytics metadata comprise pixel domain video analytics information received directly from an analog-to-digital front end. In some of these embodiments, the video analytics metadata comprise pixel domain video analytics information received directly from an encoding engine as the engine was performing compression. In some of these embodiments, the video analytics messages comprise video analytics messages related to a plurality of video frames, including messages related to one or more of a background frame, a foreground object segmentation descriptor, a camera parameter, a virtual line and a predefined motion alarm region. In some of these embodiments, the video analytics messages comprise video analytics messages related to an individual video frame, including messages related to one or more of a global motion vector, a motion alarm region alarm status, a virtual line count, an object tracking parameter and a camera motion parameter.
  • In some of these embodiments, the video analytics messages are received in a supplemental enhancement information network abstraction layer package unit of an H.264 bitstream. In some of these embodiments, the video analytics messages are received in a supplemental enhancement information network abstraction layer package unit of an H.264 bitstream together with a portion of the pixel domain video analytics information. In some of these embodiments, the one or more video processors are configured to produce a global motion vector. In some of these embodiments, the one or more video processors provide electronic image stabilization based on the video analytics messages. In some of these embodiments, the one or more video processors extract a background image for a plurality of video frames based on the video analytics messages. In some of these embodiments, the one or more video processors use the video analytics messages to monitor objects crossing a virtual line in a plurality of video frames.
  • Although the present invention has been described with reference to specific exemplary embodiments, it will be evident to one of ordinary skill in the art that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of the invention. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.

Claims (20)

What is claimed is:
1. A video processing system comprising:
a video encoder operative to encode a sequence of images captured by a video sensor into video frames according to a desired video encoding standard and to generate video analytics metadata based on information in the sequence of images; and
a video analytics processor configured to receive and process the video analytics metadata to produce video analytics messages suitable for transmission to a client device and that are useable for client-side video analytics processing.
2. The video processing system of claim 1, wherein the video analytics metadata comprise pixel domain video analytics information received directly from an analog-to-digital front end.
3. The video processing system of claim 1, wherein the video encoder comprises an encoding engine, and wherein the video analytics metadata comprise pixel domain video analytics information received directly from the encoding engine and generated as the encoding engine is performing compression on the sequence of images.
4. The video processing system of claim 3, wherein the video analytics messages include information related to one or more of a background model, a motion alarm, a virtual line detection and electronic image stabilization parameters.
5. The video processing system of claim 2, wherein the video analytics messages comprise video analytics messages related to a group of images and include messages related to one or more of a background frame, a foreground object segmentation descriptor, a camera parameter, a virtual line and a predefined motion alarm region.
6. The video processing system of claim 1, wherein the video analytics messages comprise video analytics messages related to an individual video frame and include messages related to one or more of a global motion vector, a motion alarm region alarm status, a virtual line count, an object tracking parameter and a camera motion parameter.
7. The video processing system of claim 1, wherein the video processing system is configured to transmit video analytics messages to the client device in a layered structured network bitstream comprising an encoder-generated video bitstream and at least a portion of the video analytics metadata.
8. The video processing system of claim 7, wherein the video analytics messages and the portion of the video analytics metadata are transmitted in a supplemental enhancement information network abstraction layer package unit of an H.264 bitstream.
9. A video decoding system comprising:
a decoder configured to extract video frames and one or more video analytics messages from a network bitstream, wherein the video analytics messages comprise information derived from pixel domain video analytics information which identifies characteristics of a sequence of images represented in the video frames; and
one or more video processors configured to produce video analytics metadata related to the video frame based on the extracted video frames and the information in the video analytics messages.
10. The video decoding system of claim 9, wherein the video analytics metadata comprise pixel domain video analytics information generated directly by an analog-to-digital front end.
11. The video decoding system of claim 9, wherein the video analytics metadata comprise pixel domain video analytics information generated directly by an encoding engine as the engine performed compression on the sequence of images.
12. The video decoding system of claim 11, wherein the video analytics messages are received with a portion of the pixel domain video analytics information in a supplemental enhancement information network abstraction layer package unit of an H.264 bitstream.
13. The video decoding system of claim 9, wherein one or more video processors extract a background image for a plurality of the video frames based on the information in the video analytics messages.
14. The video decoding system of claim 9, wherein one or more video processors use the information in the video analytics messages to monitor objects crossing a virtual line observed in a plurality of the video frames.
15. The video decoding system of claim 9, wherein the one or more video processors are configured to produce a global motion vector using the information in the video analytics messages.
16. The video decoding system of claim 9, wherein one or more video processors provide electronic image stabilization based on the information in the video analytics messages.
17. The video decoding system of claim 9, wherein the video analytics messages include information concerning one or more of a background frame, a foreground object segmentation descriptor, a camera parameter, a virtual line and a predefined motion alarm region.
18. The video decoding system of claim 9, wherein the video analytics messages comprise video analytics messages concerning an individual video frame and including information related to one or more of a global motion vector, a motion alarm region alarm status, a virtual line count, an object tracking parameter and a camera motion parameter.
19. A non-transitory computer-readable medium encoded with data and instructions wherein the data and instructions, when executed by a processor of a video processing system, cause the video processing system to perform a method comprising:
encoding a sequence of images captured by a video sensor into video frames according to a desired video encoding standard;
generating pixel domain video analytics information from the sequence of images while encoding the sequence of images;
producing video analytics messages using the pixel domain video analytics information; and
transmitting the video analytics messages concurrently with the video frames, wherein the video analytics messages are configured to facilitate client-side video analytics processing of the video frames.
20. The non-transitory computer-readable medium of claim 19, wherein certain video analytics messages correspond to an individual video frame and relate to one or more of a global motion vector, a motion alarm region, a virtual line, object tracking and camera motion.
US13/225,238 2010-09-02 2011-09-02 Video Analytics for Security Systems and Methods Abandoned US20120057640A1 (en)

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
CNPCT/CN2010/076555 2010-09-02
PCT/CN2010/076555 WO2012027891A1 (en) 2010-09-02 2010-09-02 Video analytics for security systems and methods
PCT/CN2010/076564 WO2012027892A1 (en) 2010-09-02 2010-09-02 Rho-domain metrics
CNPCT/CN2010/076564 2010-09-02
PCT/CN2010/076569 WO2012027894A1 (en) 2010-09-02 2010-09-02 Video classification systems and methods
CNPCT/CN2010/076567 2010-09-02
CNPCT/CN2010/076569 2010-09-02
PCT/CN2010/076567 WO2012027893A1 (en) 2010-09-02 2010-09-02 Systems and methods for video content analysis

Publications (1)

Publication Number Publication Date
US20120057640A1 true US20120057640A1 (en) 2012-03-08

Family

ID=45770713

Family Applications (5)

Application Number Title Priority Date Filing Date
US13/225,238 Abandoned US20120057640A1 (en) 2010-09-02 2011-09-02 Video Analytics for Security Systems and Methods
US13/225,222 Abandoned US20120057629A1 (en) 2010-09-02 2011-09-02 Rho-domain Metrics
US13/225,269 Expired - Fee Related US8824554B2 (en) 2010-09-02 2011-09-02 Systems and methods for video content analysis
US13/225,202 Abandoned US20120057633A1 (en) 2010-09-02 2011-09-02 Video Classification Systems and Methods
US14/472,313 Expired - Fee Related US9609348B2 (en) 2010-09-02 2014-08-28 Systems and methods for video content analysis

Family Applications After (4)

Application Number Title Priority Date Filing Date
US13/225,222 Abandoned US20120057629A1 (en) 2010-09-02 2011-09-02 Rho-domain Metrics
US13/225,269 Expired - Fee Related US8824554B2 (en) 2010-09-02 2011-09-02 Systems and methods for video content analysis
US13/225,202 Abandoned US20120057633A1 (en) 2010-09-02 2011-09-02 Video Classification Systems and Methods
US14/472,313 Expired - Fee Related US9609348B2 (en) 2010-09-02 2014-08-28 Systems and methods for video content analysis

Country Status (1)

Country Link
US (5) US20120057640A1 (en)

Cited By (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140096014A1 (en) * 2012-09-29 2014-04-03 Oracle International Corporation Method for enabling dynamic client user interfaces on multiple platforms from a common server application via metadata
US20140355829A1 (en) * 2013-05-31 2014-12-04 Samsung Sds Co., Ltd. People detection apparatus and method and people counting apparatus and method
US20140369417A1 (en) * 2010-09-02 2014-12-18 Intersil Americas LLC Systems and methods for video content analysis
US20150085111A1 (en) * 2013-09-25 2015-03-26 Symbol Technologies, Inc. Identification using video analytics together with inertial sensor data
US9076212B2 (en) 2006-05-19 2015-07-07 The Queen's Medical Center Motion tracking system for real time adaptive imaging and spectroscopy
US20150278631A1 (en) * 2014-03-28 2015-10-01 International Business Machines Corporation Filtering methods for visual object detection
US20150350608A1 (en) * 2014-05-30 2015-12-03 Placemeter Inc. System and method for activity monitoring using video data
US9305365B2 (en) 2013-01-24 2016-04-05 Kineticor, Inc. Systems, devices, and methods for tracking moving targets
US20160315987A1 (en) * 2014-01-17 2016-10-27 Sony Corporation Communication devices, communication data generation method, and communication data processing method
US9517417B2 (en) 2013-06-06 2016-12-13 Zih Corp. Method, apparatus, and computer program product for performance analytics determining participant statistical data and game status data
US9531415B2 (en) 2013-06-06 2016-12-27 Zih Corp. Systems and methods for activity determination based on human frame
US9595124B2 (en) 2013-02-08 2017-03-14 Robert Bosch Gmbh Adding user-selected mark-ups to a video stream
US9606209B2 (en) 2011-08-26 2017-03-28 Kineticor, Inc. Methods, systems, and devices for intra-scan motion correction
US9626616B2 (en) 2014-06-05 2017-04-18 Zih Corp. Low-profile real-time location system tag
US9661455B2 (en) 2014-06-05 2017-05-23 Zih Corp. Method, apparatus, and computer program product for real time location system referencing in physically and radio frequency challenged environments
US9668164B2 (en) 2014-06-05 2017-05-30 Zih Corp. Receiver processor for bandwidth management of a multiple receiver real-time location system (RTLS)
US9699278B2 (en) 2013-06-06 2017-07-04 Zih Corp. Modular location tag for a real time location system network
US9712828B2 (en) * 2015-05-27 2017-07-18 Indian Statistical Institute Foreground motion detection in compressed video data
US9715005B2 (en) 2013-06-06 2017-07-25 Zih Corp. Method, apparatus, and computer program product improving real time location systems with multiple location technologies
US9717461B2 (en) 2013-01-24 2017-08-01 Kineticor, Inc. Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
US9734589B2 (en) 2014-07-23 2017-08-15 Kineticor, Inc. Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
US9759803B2 (en) 2014-06-06 2017-09-12 Zih Corp. Method, apparatus, and computer program product for employing a spatial association model in a real time location system
US9782141B2 (en) 2013-02-01 2017-10-10 Kineticor, Inc. Motion tracking system for real time adaptive motion compensation in biomedical imaging
US9854558B2 (en) 2014-06-05 2017-12-26 Zih Corp. Receiver processor for adaptive windowing and high-resolution TOA determination in a multiple receiver target location system
US9943247B2 (en) 2015-07-28 2018-04-17 The University Of Hawai'i Systems, devices, and methods for detecting false movements for motion correction during a medical imaging scan
US9953196B2 (en) 2014-06-05 2018-04-24 Zih Corp. System, apparatus and methods for variable rate ultra-wideband communications
US10004462B2 (en) 2014-03-24 2018-06-26 Kineticor, Inc. Systems, methods, and devices for removing prospective motion correction from medical imaging scans
US10043078B2 (en) * 2015-04-21 2018-08-07 Placemeter LLC Virtual turnstile system and method
US10261169B2 (en) 2014-06-05 2019-04-16 Zebra Technologies Corporation Method for iterative target location in a multiple receiver target location system
US10327708B2 (en) 2013-01-24 2019-06-25 Kineticor, Inc. Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
US10380431B2 (en) 2015-06-01 2019-08-13 Placemeter LLC Systems and methods for processing video streams
US10437658B2 (en) 2013-06-06 2019-10-08 Zebra Technologies Corporation Method, apparatus, and computer program product for collecting and displaying sporting event data based on real time data for proximity and movement of objects
US10509099B2 (en) 2013-06-06 2019-12-17 Zebra Technologies Corporation Method, apparatus and computer program product improving real time location systems with multiple location technologies
US10609762B2 (en) 2013-06-06 2020-03-31 Zebra Technologies Corporation Method, apparatus, and computer program product improving backhaul of sensor and other data to real time location system network
US10716515B2 (en) 2015-11-23 2020-07-21 Kineticor, Inc. Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
US10902282B2 (en) 2012-09-19 2021-01-26 Placemeter Inc. System and method for processing image data
US20220044385A1 (en) * 2020-08-10 2022-02-10 Tencent America LLC Methods of video quality assessment using parametric and pixel level models
US11334751B2 (en) 2015-04-21 2022-05-17 Placemeter Inc. Systems and methods for processing video data for activity monitoring
US11391571B2 (en) 2014-06-05 2022-07-19 Zebra Technologies Corporation Method, apparatus, and computer program for enhancement of event visualizations based on location data
US11423464B2 (en) 2013-06-06 2022-08-23 Zebra Technologies Corporation Method, apparatus, and computer program product for enhancement of fan experience based on location data
US12100276B2 (en) * 2021-11-17 2024-09-24 SimpliSafe, Inc. Identifying regions of interest in an imaging field of view

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8850182B1 (en) * 2012-09-28 2014-09-30 Shoretel, Inc. Data capture for secure protocols
US9177245B2 (en) 2013-02-08 2015-11-03 Qualcomm Technologies Inc. Spiking network apparatus and method with bimodal spike-timing dependent plasticity
US20140328406A1 (en) 2013-05-01 2014-11-06 Raymond John Westwater Method and Apparatus to Perform Optimal Visually-Weighed Quantization of Time-Varying Visual Sequences in Transform Space
US9589363B2 (en) * 2014-03-25 2017-03-07 Intel Corporation Object tracking in encoded video streams
US10194163B2 (en) * 2014-05-22 2019-01-29 Brain Corporation Apparatus and methods for real time estimation of differential motion in live video
US9713982B2 (en) 2014-05-22 2017-07-25 Brain Corporation Apparatus and methods for robotic operation using video imagery
US9939253B2 (en) 2014-05-22 2018-04-10 Brain Corporation Apparatus and methods for distance estimation using multiple image sensors
US9848112B2 (en) 2014-07-01 2017-12-19 Brain Corporation Optical detection apparatus and methods
US10057593B2 (en) 2014-07-08 2018-08-21 Brain Corporation Apparatus and methods for distance estimation using stereo imagery
US10032280B2 (en) 2014-09-19 2018-07-24 Brain Corporation Apparatus and methods for tracking salient features
CN104539890A (en) * 2014-12-18 2015-04-22 苏州阔地网络科技有限公司 Target tracking method and system
US10091504B2 (en) 2015-01-08 2018-10-02 Microsoft Technology Licensing, Llc Variations of rho-domain rate control
US10043146B2 (en) * 2015-02-12 2018-08-07 Wipro Limited Method and device for estimating efficiency of an employee of an organization
US10037504B2 (en) * 2015-02-12 2018-07-31 Wipro Limited Methods for determining manufacturing waste to optimize productivity and devices thereof
US10298942B1 (en) * 2015-04-06 2019-05-21 Zpeg, Inc. Method and apparatus to process video sequences in transform space
US10197664B2 (en) 2015-07-20 2019-02-05 Brain Corporation Apparatus and methods for detection of objects using broadband signals
US10075640B2 (en) * 2015-12-31 2018-09-11 Sony Corporation Motion compensation for image sensor with a block based analog-to-digital converter
CN105809136A (en) 2016-03-14 2016-07-27 中磊电子(苏州)有限公司 Image data processing method and image data processing system
CA3106617C (en) * 2017-04-21 2023-11-07 Zenimax Media Inc. Systems and methods for rendering & pre-encoded load estimation based encoder hinting
US10694205B2 (en) * 2017-12-18 2020-06-23 Google Llc Entropy coding of motion vectors using categories of transform blocks
TWI720830B (en) * 2019-06-27 2021-03-01 多方科技股份有限公司 Image processing device and method thereof
CN111901597B (en) * 2020-08-05 2022-03-25 杭州当虹科技股份有限公司 CU (CU) level QP (quantization parameter) allocation algorithm based on video complexity
US11425412B1 (en) * 2020-11-10 2022-08-23 Amazon Technologies, Inc. Motion cues for video encoding

Family Cites Families (62)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE3689796T2 (en) * 1985-01-16 1994-08-04 Mitsubishi Denki K.K., Tokio/Tokyo Video coding device.
US5128754A (en) * 1990-03-30 1992-07-07 New York Institute Of Technology Apparatus and method for encoding and decoding video
US6400996B1 (en) * 1999-02-01 2002-06-04 Steven M. Hoffberg Adaptive pattern recognition based control system and method
JPH10164581A (en) * 1996-12-03 1998-06-19 Sony Corp Method and device for coding image signal and signal-recording medium
US6782132B1 (en) * 1998-08-12 2004-08-24 Pixonics, Inc. Video coding and reconstruction apparatus and methods
US6795504B1 (en) * 2000-06-21 2004-09-21 Microsoft Corporation Memory efficient 3-D wavelet transform for video coding without boundary effects
US7868912B2 (en) 2000-10-24 2011-01-11 Objectvideo, Inc. Video surveillance system employing video primitives
US6662564B2 (en) * 2001-09-27 2003-12-16 Siemens Westinghouse Power Corporation Catalytic combustor cooling tube vibration dampening device
US20030159152A1 (en) * 2001-10-23 2003-08-21 Shu Lin Fast motion trick mode using dummy bidirectional predictive pictures
JP4099973B2 (en) 2001-10-30 2008-06-11 松下電器産業株式会社 Video data transmission method, video data reception method, and video surveillance system
US20030163477A1 (en) 2002-02-25 2003-08-28 Visharam Mohammed Zubair Method and apparatus for supporting advanced coding formats in media files
EP1486065B1 (en) 2002-03-15 2016-01-06 Nokia Technologies Oy Method for coding motion in a video sequence
GB0227566D0 (en) * 2002-11-26 2002-12-31 British Telecomm Method and system for estimating global motion in video sequences
GB0227565D0 (en) * 2002-11-26 2002-12-31 British Telecomm Method and system for generating panoramic images from video sequences
GB0227570D0 (en) * 2002-11-26 2002-12-31 British Telecomm Method and system for estimating global motion in video sequences
US7474355B2 (en) * 2003-08-06 2009-01-06 Zoran Corporation Chroma upsampling method and apparatus therefor
EP1513350A1 (en) * 2003-09-03 2005-03-09 Thomson Licensing S.A. Process and arrangement for encoding video pictures
US20050047504A1 (en) * 2003-09-03 2005-03-03 Sung Chih-Ta Star Data stream encoding method and apparatus for digital video compression
US7317839B2 (en) * 2003-09-07 2008-01-08 Microsoft Corporation Chroma motion vector derivation for interlaced forward-predicted fields
US7667732B1 (en) * 2004-03-16 2010-02-23 3Vr Security, Inc. Event generation and camera cluster analysis of multiple video streams in a pipeline architecture
US20060062478A1 (en) * 2004-08-16 2006-03-23 Grandeye, Ltd., Region-sensitive compression of digital video
US20060056511A1 (en) * 2004-08-27 2006-03-16 University Of Victoria Innovation And Development Corporation Flexible polygon motion estimating method and system
US8243820B2 (en) * 2004-10-06 2012-08-14 Microsoft Corporation Decoding variable coded resolution video with native range/resolution post-processing operation
US8948266B2 (en) * 2004-10-12 2015-02-03 Qualcomm Incorporated Adaptive intra-refresh for digital video encoding
US8340172B2 (en) * 2004-11-29 2012-12-25 Qualcomm Incorporated Rate control techniques for video encoding using parametric equations
CN101112101A (en) 2004-11-29 2008-01-23 高通股份有限公司 Rate control techniques for video encoding using parametric equations
WO2006110890A2 (en) 2005-04-08 2006-10-19 Sarnoff Corporation Macro-block based mixed resolution video compression system
US8879635B2 (en) * 2005-09-27 2014-11-04 Qualcomm Incorporated Methods and device for data alignment with time domain boundary
CN100551072C (en) 2006-06-05 2009-10-14 华为技术有限公司 Quantization matrix system of selection in a kind of coding, device and decoding method and system
US20070291118A1 (en) * 2006-06-16 2007-12-20 Shu Chiao-Fe Intelligent surveillance system and method for integrated event based surveillance
JP4363421B2 (en) 2006-06-30 2009-11-11 ソニー株式会社 Monitoring system, monitoring system server and monitoring method
US20080074496A1 (en) * 2006-09-22 2008-03-27 Object Video, Inc. Video analytics for banking business process monitoring
WO2008046243A1 (en) 2006-10-16 2008-04-24 Thomson Licensing Method and device for encoding a data stream, method and device for decoding a data stream, video indexing system and image retrieval system
WO2008072249A2 (en) * 2006-12-15 2008-06-19 Mango D.S.P. Ltd System, apparatus and method for flexible modular programming for video processors
CN100508610C (en) 2007-02-02 2009-07-01 清华大学 A Fast Estimation Method of Rate and Distortion in H.264/AVC Video Coding
US7595815B2 (en) * 2007-05-08 2009-09-29 Kd Secure, Llc Apparatus, methods, and systems for intelligent security and safety
CN101325689A (en) 2007-06-16 2008-12-17 翰华信息科技(厦门)有限公司 System and method for monitoring mobile phone remote video
US10116904B2 (en) * 2007-07-13 2018-10-30 Honeywell International Inc. Features in video analytics
CN101090498B (en) 2007-07-19 2010-06-02 华为技术有限公司 Device and method for motion detection of image
US20090031381A1 (en) * 2007-07-24 2009-01-29 Honeywell International, Inc. Proxy video server for video surveillance
US9734464B2 (en) * 2007-09-11 2017-08-15 International Business Machines Corporation Automatically generating labor standards from video data
US8624733B2 (en) * 2007-11-05 2014-01-07 Francis John Cusack, JR. Device for electronic access control with integrated surveillance
CN101179729A (en) 2007-12-20 2008-05-14 清华大学 A H.264 Macroblock Mode Selection Method Based on Statistical Classification of Inter Modes
WO2009079754A1 (en) * 2007-12-20 2009-07-02 Ati Technologies Ulc Adjusting video processing in a system having a video source device and a video sink device
WO2009094591A2 (en) * 2008-01-24 2009-07-30 Micropower Appliance Video delivery systems using wireless cameras
US9584710B2 (en) * 2008-02-28 2017-02-28 Avigilon Analytics Corporation Intelligent high resolution video system
DE112009000485T5 (en) * 2008-03-03 2011-03-17 VideoIQ, Inc., Bedford Object comparison for tracking, indexing and searching
CN101389029B (en) 2008-10-21 2012-01-11 北京中星微电子有限公司 Method and apparatus for video image encoding and retrieval
CN101389023B (en) 2008-10-21 2011-10-12 镇江唐桥微电子有限公司 Adaptive movement estimation method
US8301792B2 (en) * 2008-10-28 2012-10-30 Panzura, Inc Network-attached media plug-in
CN101448145A (en) 2008-12-26 2009-06-03 北京中星微电子有限公司 IP camera, video monitor system and signal processing method of IP camera
US8675736B2 (en) * 2009-05-14 2014-03-18 Qualcomm Incorporated Motion vector processing
WO2011041904A1 (en) * 2009-10-07 2011-04-14 Telewatch Inc. Video analytics method and system
US8780978B2 (en) * 2009-11-04 2014-07-15 Qualcomm Incorporated Controlling video encoding using audio information
CN102741830B (en) * 2009-12-08 2016-07-13 思杰系统有限公司 Systems and methods for client-side telepresence of multimedia streams
CN101778260B (en) 2009-12-29 2012-01-04 公安部第三研究所 Method and system for monitoring and managing videos on basis of structured description
US8503539B2 (en) * 2010-02-26 2013-08-06 Bao Tran High definition personal computer (PC) cam
US20110221895A1 (en) * 2010-03-10 2011-09-15 Vinay Sharma Detection of Movement of a Stationary Video Camera
US9143739B2 (en) * 2010-05-07 2015-09-22 Iwatchlife, Inc. Video analytics with burst-like transmission of video data
US20120057640A1 (en) * 2010-09-02 2012-03-08 Fang Shi Video Analytics for Security Systems and Methods
US8890936B2 (en) * 2010-10-12 2014-11-18 Texas Instruments Incorporated Utilizing depth information to create 3D tripwires in video
WO2012142508A1 (en) * 2011-04-15 2012-10-18 Skyfire Labs, Inc. Real-time video optimizer

Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5815604A (en) * 1995-05-18 1998-09-29 U.S. Philips Corporation Interactive image manipulation
US5854856A (en) * 1995-07-19 1998-12-29 Carnegie Mellon University Content based video compression system
US20050203927A1 (en) * 2000-07-24 2005-09-15 Vivcom, Inc. Fast metadata generation and delivery
US20020181745A1 (en) * 2001-06-05 2002-12-05 Hu Shane Ching-Feng Multi-modal motion estimation for video sequences
US20080049834A1 (en) * 2001-12-17 2008-02-28 Microsoft Corporation Sub-block transform coding of prediction residuals
US20060232673A1 (en) * 2005-04-19 2006-10-19 Objectvideo, Inc. Video-based human verification system and method
US20070127774A1 (en) * 2005-06-24 2007-06-07 Objectvideo, Inc. Target detection and tracking from video streams
US20080192646A1 (en) * 2005-10-17 2008-08-14 Huawei Technologies Co., Ltd. Method for Monitoring Quality of Service in Multimedia Communications
US20070237221A1 (en) * 2006-04-07 2007-10-11 Microsoft Corporation Adjusting quantization to preserve non-zero AC coefficients
US20080069211A1 (en) * 2006-09-14 2008-03-20 Kim Byung Gyu Apparatus and method for encoding moving picture
US20080184245A1 (en) * 2007-01-30 2008-07-31 March Networks Corporation Method and system for task-based video analytics processing
US20090219639A1 (en) * 2008-03-03 2009-09-03 Videoiq, Inc. Extending the operational lifetime of a hard-disk drive used in video data storage applications
US8128503B1 (en) * 2008-05-29 2012-03-06 Livestream LLC Systems, methods and computer software for live video/audio broadcasting
US20090296808A1 (en) * 2008-06-03 2009-12-03 Microsoft Corporation Adaptive quantization for enhancement layer video coding
US20100020172A1 (en) * 2008-07-25 2010-01-28 International Business Machines Corporation Performing real-time analytics using a network processing solution able to directly ingest ip camera video streams
US8325228B2 (en) * 2008-07-25 2012-12-04 International Business Machines Corporation Performing real-time analytics using a network processing solution able to directly ingest IP camera video streams
JP2010128727A (en) * 2008-11-27 2010-06-10 Hitachi Kokusai Electric Inc Image processor
US20100150233A1 (en) * 2008-12-15 2010-06-17 Seunghwan Kim Fast mode decision apparatus and method
US20100215104A1 (en) * 2009-02-26 2010-08-26 Akira Osamoto Method and System for Motion Estimation
US20110157178A1 (en) * 2009-12-28 2011-06-30 Cuneyt Oncel Tuzel Method and System for Determining Poses of Objects

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
He et al, Video Compression and Data Flow for Video Surveillance, 2007-09 *
JP2010128727MT, Fuji Miyuki, 10-06-2010, (note that JP2010128727MT is machine translation for the Japanese application JP2010128727A, it is downloaded from JPO website) *
TW5864B1, 5D1 H264 Encoder with 4-channel A/V Decoder and 12 channels external VD inputs for security applications, 2011-12-04 *

Cited By (89)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9867549B2 (en) 2006-05-19 2018-01-16 The Queen's Medical Center Motion tracking system for real time adaptive imaging and spectroscopy
US9076212B2 (en) 2006-05-19 2015-07-07 The Queen's Medical Center Motion tracking system for real time adaptive imaging and spectroscopy
US10869611B2 (en) 2006-05-19 2020-12-22 The Queen's Medical Center Motion tracking system for real time adaptive imaging and spectroscopy
US9138175B2 (en) 2006-05-19 2015-09-22 The Queen's Medical Center Motion tracking system for real time adaptive imaging and spectroscopy
US9609348B2 (en) * 2010-09-02 2017-03-28 Intersil Americas LLC Systems and methods for video content analysis
US20140369417A1 (en) * 2010-09-02 2014-12-18 Intersil Americas LLC Systems and methods for video content analysis
US9606209B2 (en) 2011-08-26 2017-03-28 Kineticor, Inc. Methods, systems, and devices for intra-scan motion correction
US10663553B2 (en) 2011-08-26 2020-05-26 Kineticor, Inc. Methods, systems, and devices for intra-scan motion correction
US10902282B2 (en) 2012-09-19 2021-01-26 Placemeter Inc. System and method for processing image data
US20140096014A1 (en) * 2012-09-29 2014-04-03 Oracle International Corporation Method for enabling dynamic client user interfaces on multiple platforms from a common server application via metadata
US10327708B2 (en) 2013-01-24 2019-06-25 Kineticor, Inc. Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
US10339654B2 (en) 2013-01-24 2019-07-02 Kineticor, Inc. Systems, devices, and methods for tracking moving targets
US9717461B2 (en) 2013-01-24 2017-08-01 Kineticor, Inc. Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
US9305365B2 (en) 2013-01-24 2016-04-05 Kineticor, Inc. Systems, devices, and methods for tracking moving targets
US9607377B2 (en) 2013-01-24 2017-03-28 Kineticor, Inc. Systems, devices, and methods for tracking moving targets
US9779502B1 (en) 2013-01-24 2017-10-03 Kineticor, Inc. Systems, devices, and methods for tracking moving targets
US10653381B2 (en) 2013-02-01 2020-05-19 Kineticor, Inc. Motion tracking system for real time adaptive motion compensation in biomedical imaging
US9782141B2 (en) 2013-02-01 2017-10-10 Kineticor, Inc. Motion tracking system for real time adaptive motion compensation in biomedical imaging
US9595124B2 (en) 2013-02-08 2017-03-14 Robert Bosch Gmbh Adding user-selected mark-ups to a video stream
US20140355829A1 (en) * 2013-05-31 2014-12-04 Samsung Sds Co., Ltd. People detection apparatus and method and people counting apparatus and method
US9495600B2 (en) * 2013-05-31 2016-11-15 Samsung Sds Co., Ltd. People detection apparatus and method and people counting apparatus and method
US10707908B2 (en) 2013-06-06 2020-07-07 Zebra Technologies Corporation Method, apparatus, and computer program product for evaluating performance based on real-time data for proximity and movement of objects
US9602152B2 (en) 2013-06-06 2017-03-21 Zih Corp. Method, apparatus, and computer program product for determining play events and outputting events based on real-time data for proximity, movement of objects, and audio data
US9699278B2 (en) 2013-06-06 2017-07-04 Zih Corp. Modular location tag for a real time location system network
US9698841B2 (en) 2013-06-06 2017-07-04 Zih Corp. Method and apparatus for associating radio frequency identification tags with participants
US11423464B2 (en) 2013-06-06 2022-08-23 Zebra Technologies Corporation Method, apparatus, and computer program product for enhancement of fan experience based on location data
US9715005B2 (en) 2013-06-06 2017-07-25 Zih Corp. Method, apparatus, and computer program product improving real time location systems with multiple location technologies
US11287511B2 (en) 2013-06-06 2022-03-29 Zebra Technologies Corporation Method, apparatus, and computer program product improving real time location systems with multiple location technologies
US11023303B2 (en) 2013-06-06 2021-06-01 Zebra Technologies Corporation Methods and apparatus to correlate unique identifiers and tag-individual correlators based on status change indications
US9742450B2 (en) 2013-06-06 2017-08-22 Zih Corp. Method, apparatus, and computer program product improving registration with real time location services
US9571143B2 (en) 2013-06-06 2017-02-14 Zih Corp. Interference rejection in ultra-wideband real time locating systems
US9531415B2 (en) 2013-06-06 2016-12-27 Zih Corp. Systems and methods for activity determination based on human frame
US9667287B2 (en) 2013-06-06 2017-05-30 Zih Corp. Multiple antenna interference rejection in ultra-wideband real time locating systems
US9839809B2 (en) 2013-06-06 2017-12-12 Zih Corp. Method, apparatus, and computer program product for determining play events and outputting events based on real-time data for proximity, movement of objects, and audio data
US10778268B2 (en) 2013-06-06 2020-09-15 Zebra Technologies Corporation Method, apparatus, and computer program product for performance analytics determining play models and outputting events based on real-time data for proximity and movement of objects
US10218399B2 (en) 2013-06-06 2019-02-26 Zebra Technologies Corporation Systems and methods for activity determination based on human frame
US10333568B2 (en) 2013-06-06 2019-06-25 Zebra Technologies Corporation Method and apparatus for associating radio frequency identification tags with participants
US9882592B2 (en) 2013-06-06 2018-01-30 Zih Corp. Method, apparatus, and computer program product for tag and individual correlation
US9517417B2 (en) 2013-06-06 2016-12-13 Zih Corp. Method, apparatus, and computer program product for performance analytics determining participant statistical data and game status data
US10609762B2 (en) 2013-06-06 2020-03-31 Zebra Technologies Corporation Method, apparatus, and computer program product improving backhaul of sensor and other data to real time location system network
US10509099B2 (en) 2013-06-06 2019-12-17 Zebra Technologies Corporation Method, apparatus and computer program product improving real time location systems with multiple location technologies
US9985672B2 (en) 2013-06-06 2018-05-29 Zih Corp. Method, apparatus, and computer program product for evaluating performance based on real-time data for proximity and movement of objects
US10212262B2 (en) 2013-06-06 2019-02-19 Zebra Technologies Corporation Modular location tag for a real time location system network
US10437658B2 (en) 2013-06-06 2019-10-08 Zebra Technologies Corporation Method, apparatus, and computer program product for collecting and displaying sporting event data based on real time data for proximity and movement of objects
US10050650B2 (en) 2013-06-06 2018-08-14 Zih Corp. Method, apparatus, and computer program product improving registration with real time location services
US10421020B2 (en) 2013-06-06 2019-09-24 Zebra Technologies Corporation Method, apparatus, and computer program product for performance analytics determining participant statistical data and game status data
US20150085111A1 (en) * 2013-09-25 2015-03-26 Symbol Technologies, Inc. Identification using video analytics together with inertial sensor data
US20160315987A1 (en) * 2014-01-17 2016-10-27 Sony Corporation Communication devices, communication data generation method, and communication data processing method
US10924524B2 (en) * 2014-01-17 2021-02-16 Saturn Licensing Llc Communication devices, communication data generation method, and communication data processing method
US10004462B2 (en) 2014-03-24 2018-06-26 Kineticor, Inc. Systems, methods, and devices for removing prospective motion correction from medical imaging scans
US20150278631A1 (en) * 2014-03-28 2015-10-01 International Business Machines Corporation Filtering methods for visual object detection
US10169661B2 (en) * 2014-03-28 2019-01-01 International Business Machines Corporation Filtering methods for visual object detection
US10432896B2 (en) * 2014-05-30 2019-10-01 Placemeter Inc. System and method for activity monitoring using video data
US10880524B2 (en) 2014-05-30 2020-12-29 Placemeter Inc. System and method for activity monitoring using video data
US10735694B2 (en) 2014-05-30 2020-08-04 Placemeter Inc. System and method for activity monitoring using video data
US20150350608A1 (en) * 2014-05-30 2015-12-03 Placemeter Inc. System and method for activity monitoring using video data
US10520582B2 (en) 2014-06-05 2019-12-31 Zebra Technologies Corporation Method for iterative target location in a multiple receiver target location system
US9661455B2 (en) 2014-06-05 2017-05-23 Zih Corp. Method, apparatus, and computer program product for real time location system referencing in physically and radio frequency challenged environments
US11391571B2 (en) 2014-06-05 2022-07-19 Zebra Technologies Corporation Method, apparatus, and computer program for enhancement of event visualizations based on location data
US9953195B2 (en) 2014-06-05 2018-04-24 Zih Corp. Systems, apparatus and methods for variable rate ultra-wideband communications
US10310052B2 (en) 2014-06-05 2019-06-04 Zebra Technologies Corporation Method, apparatus, and computer program product for real time location system referencing in physically and radio frequency challenged environments
US9668164B2 (en) 2014-06-05 2017-05-30 Zih Corp. Receiver processor for bandwidth management of a multiple receiver real-time location system (RTLS)
US9953196B2 (en) 2014-06-05 2018-04-24 Zih Corp. System, apparatus and methods for variable rate ultra-wideband communications
US10942248B2 (en) 2014-06-05 2021-03-09 Zebra Technologies Corporation Method, apparatus, and computer program product for real time location system referencing in physically and radio frequency challenged environments
US10285157B2 (en) 2014-06-05 2019-05-07 Zebra Technologies Corporation Receiver processor for adaptive windowing and high-resolution TOA determination in a multiple receiver target location system
US9864946B2 (en) 2014-06-05 2018-01-09 Zih Corp. Low-profile real-time location system tag
US9626616B2 (en) 2014-06-05 2017-04-18 Zih Corp. Low-profile real-time location system tag
US9854558B2 (en) 2014-06-05 2017-12-26 Zih Corp. Receiver processor for adaptive windowing and high-resolution TOA determination in a multiple receiver target location system
US10261169B2 (en) 2014-06-05 2019-04-16 Zebra Technologies Corporation Method for iterative target location in a multiple receiver target location system
US10591578B2 (en) 2014-06-06 2020-03-17 Zebra Technologies Corporation Method, apparatus, and computer program product for employing a spatial association model in a real time location system
US11156693B2 (en) 2014-06-06 2021-10-26 Zebra Technologies Corporation Method, apparatus, and computer program product for employing a spatial association model in a real time location system
US9759803B2 (en) 2014-06-06 2017-09-12 Zih Corp. Method, apparatus, and computer program product for employing a spatial association model in a real time location system
US9734589B2 (en) 2014-07-23 2017-08-15 Kineticor, Inc. Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
US10438349B2 (en) 2014-07-23 2019-10-08 Kineticor, Inc. Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
US11100636B2 (en) 2014-07-23 2021-08-24 Kineticor, Inc. Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
US11334751B2 (en) 2015-04-21 2022-05-17 Placemeter Inc. Systems and methods for processing video data for activity monitoring
US10043078B2 (en) * 2015-04-21 2018-08-07 Placemeter LLC Virtual turnstile system and method
US10726271B2 (en) 2015-04-21 2020-07-28 Placemeter, Inc. Virtual turnstile system and method
US9712828B2 (en) * 2015-05-27 2017-07-18 Indian Statistical Institute Foreground motion detection in compressed video data
US10997428B2 (en) 2015-06-01 2021-05-04 Placemeter Inc. Automated detection of building entrances
US11138442B2 (en) 2015-06-01 2021-10-05 Placemeter, Inc. Robust, adaptive and efficient object detection, classification and tracking
US10380431B2 (en) 2015-06-01 2019-08-13 Placemeter LLC Systems and methods for processing video streams
US9943247B2 (en) 2015-07-28 2018-04-17 The University Of Hawai'i Systems, devices, and methods for detecting false movements for motion correction during a medical imaging scan
US10660541B2 (en) 2015-07-28 2020-05-26 The University Of Hawai'i Systems, devices, and methods for detecting false movements for motion correction during a medical imaging scan
US10716515B2 (en) 2015-11-23 2020-07-21 Kineticor, Inc. Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
US11100335B2 (en) 2016-03-23 2021-08-24 Placemeter, Inc. Method for queue time estimation
US20220044385A1 (en) * 2020-08-10 2022-02-10 Tencent America LLC Methods of video quality assessment using parametric and pixel level models
US11875495B2 (en) * 2020-08-10 2024-01-16 Tencent America LLC Methods of video quality assessment using parametric and pixel level models
US12100276B2 (en) * 2021-11-17 2024-09-24 SimpliSafe, Inc. Identifying regions of interest in an imaging field of view

Also Published As

Publication number Publication date
US20120057633A1 (en) 2012-03-08
US20120057629A1 (en) 2012-03-08
US20140369417A1 (en) 2014-12-18
US8824554B2 (en) 2014-09-02
US20120057634A1 (en) 2012-03-08
US9609348B2 (en) 2017-03-28

Similar Documents

Publication Title
US20120057640A1 (en) Video Analytics for Security Systems and Methods
Ding et al. Advances in video compression system using deep neural network: A review and case studies
US20210337217A1 (en) Video analytics encoding for improved efficiency of video processing and compression
US20210203997A1 (en) Hybrid video and feature coding and decoding
US20130216135A1 (en) Visual search system architectures based on compressed or compact descriptors
Wang et al. Towards analysis-friendly face representation with scalable feature and texture compression
US8923640B1 (en) Coherence groups: region descriptors for low bit rate encoding
CN115298710A (en) Video conference frame based on face restoration
WO2012027891A1 (en) Video analytics for security systems and methods
CN111131825A (en) Video processing method and related device
CN101389029A (en) Method and apparatus for video image encoding and retrieval
Safin et al. Hardware and software video encoding comparison
CN103051891B Method and apparatus for determining the saliency value of a block of a video frame that is block-predictively encoded in a data stream
US20130235931A1 (en) Masking video artifacts with comfort noise
US10445613B2 (en) Method, apparatus, and computer readable device for encoding and decoding of images using pairs of descriptors and orientation histograms representing their respective points of interest
US10051281B2 (en) Video coding system with efficient processing of zooming transitions in video
US10536726B2 (en) Pixel patch collection for prediction in video coding system
CN112383778B (en) Video coding method and device and decoding method and device
KR20220061032A (en) Method and image-processing device for video processing
CN104767998B Video-oriented visual signature coding method and device
CN111542858B (en) Dynamic image analysis device, system, method, and storage medium
WO2012027893A1 (en) Systems and methods for video content analysis
CN114097008A System and method for automatic identification of hand activity defined in the Unified Parkinson's Disease Rating Scale
US20240244229A1 (en) Systems and methods for predictive coding
US20230142015A1 (en) Video surveillance system, computer-implemented video management process, and non-transitory computer readable storage medium

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERSIL AMERICAS INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SHI, FANG;QI, CHANGSONG;MING, JIN;AND OTHERS;REEL/FRAME:027002/0766

Effective date: 20110913

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: INTERSIL AMERICAS LLC, CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:INTERSIL AMERICAS INC.;REEL/FRAME:033119/0484

Effective date: 20111223
