
CN105744223A - Video data processing method and apparatus - Google Patents

Video data processing method and apparatus

Info

Publication number
CN105744223A
CN105744223A (application CN201610079944.1A)
Authority
CN
China
Prior art keywords
acquisition device
video acquisition
video data
video
tracked
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610079944.1A
Other languages
Chinese (zh)
Other versions
CN105744223B (en)
Inventor
毛慧子
张弛
印奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Megvii Technology Co Ltd
Beijing Aperture Science and Technology Ltd
Original Assignee
Beijing Megvii Technology Co Ltd
Beijing Aperture Science and Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Megvii Technology Co Ltd, Beijing Aperture Science and Technology Ltd filed Critical Beijing Megvii Technology Co Ltd
Priority to CN201610079944.1A priority Critical patent/CN105744223B/en
Publication of CN105744223A publication Critical patent/CN105744223A/en
Application granted granted Critical
Publication of CN105744223B publication Critical patent/CN105744223B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

Embodiments of the present invention provide a video data processing method and apparatus. The video data processing method includes the steps of acquiring sample video data respectively collected by a plurality of video capture devices, wherein the sample video data collected by one of the plurality of video capture devices and the sample video data collected by at least one other of the plurality of video capture devices record the motion trajectory of the same sample target; and training with the acquired sample video data to determine the positional relationship among the plurality of video capture devices. According to the video data processing method and apparatus provided by the embodiments of the present invention, the positional relationship among the plurality of video capture devices can be determined by training with samples. The method is simple to implement and places low requirements on the deployment of the video capture devices and on prior information such as the background scene, so the deployment overhead of a target tracking system can be reduced.

Description

Video data processing method and apparatus
Technical field
The present invention relates to the technical field of target tracking, and more specifically to a video data processing method and apparatus.
Background art
Positioning and tracking technology plays an important role in fields such as enterprise security. Traditional tracking methods are based primarily on a single camera. The advantage of such methods is that the requirements on the shooting scene and camera setup are relatively low, making the deployment phase of the tracking system convenient. The disadvantage is that tracking a specific target (such as a particular person) consumes a great deal of manpower and time, since all video data must be examined manually, one recording at a time. Recently proposed multi-camera tracking methods can use the positional relationship among cameras and background information to automatically track a specific target across multiple cameras. However, such methods often require rather harsh initialization conditions, for instance three-dimensional modeling of the tracking area in advance and careful arrangement of camera positions.
Summary of the invention
The present invention has been proposed in view of the above problems. The invention provides a video data processing method and apparatus.
According to one aspect of the present invention, a video data processing method is provided. The video data processing method includes: acquiring sample video data respectively collected by a plurality of video capture devices, wherein the sample video data collected by one of the plurality of video capture devices and the sample video data collected by at least one other of the plurality of video capture devices record the motion trajectory of the same sample target; and training with the acquired sample video data to determine the positional relationship among the plurality of video capture devices.
Exemplarily, training with the acquired sample video data to determine the positional relationship among the plurality of video capture devices includes: determining, from the acquired sample video data, the times at which sample targets appear in the coverage areas of the plurality of video capture devices; and determining the positional relationship among the plurality of video capture devices from the times at which the sample targets appear in the coverage areas of the plurality of video capture devices.
Exemplarily, determining the positional relationship among the plurality of video capture devices from the times at which the sample targets appear in the coverage areas of the plurality of video capture devices includes: if, for each of one or more sample targets, that sample target appears simultaneously in the coverage areas of a first video capture device and a second video capture device of the plurality of video capture devices during a corresponding first time period, determining the overlap region between the first video capture device and the second video capture device from the motion trajectories of the one or more sample targets during their respective first time periods.
Exemplarily, determining the positional relationship among the plurality of video capture devices from the times at which the sample targets appear in the coverage areas of the plurality of video capture devices includes: if, for each of one or more sample targets, that sample target leaves the coverage area of a first video capture device of the plurality of video capture devices at the start of a corresponding second time period, enters the coverage area of a second video capture device of the plurality of video capture devices at the end of the corresponding second time period, and does not appear in the coverage area of any of the plurality of video capture devices during the corresponding second time period, estimating the possible motion trajectories of the one or more sample targets during their respective second time periods and determining the disappearance region between the first video capture device and the second video capture device from those possible motion trajectories.
Exemplarily, the video data processing method further includes: if, for each of one or more sample targets, that sample target leaves the coverage area of a first video capture device of the plurality of video capture devices at the start of a corresponding second time period, enters the coverage area of a second video capture device of the plurality of video capture devices at the end of the corresponding second time period, and does not appear in the coverage area of any of the plurality of video capture devices during the corresponding second time period, determining, from the respective second time periods of the one or more sample targets, the disappearance time threshold associated with the disappearance region between the first video capture device and the second video capture device.
Exemplarily, the video data processing method further includes: for each of the plurality of video capture devices, acquiring an image collected by that video capture device for its corresponding key points, the geographic positions of the key points having been labeled; and computing the mapping between the image space of that video capture device and geographic space based on the pixel positions of the key points in the image and the geographic positions of the key points.
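When the labeled key points lie on the ground plane, the image-space-to-geographic-space mapping described above can be modeled as a planar homography fitted from point correspondences. The following is a minimal sketch under that assumption; the direct linear transform (DLT) formulation and function names are illustrative, not the patent's implementation:

```python
import numpy as np

def fit_homography(pixel_pts, geo_pts):
    """Estimate a 3x3 homography H mapping pixel coordinates to
    ground-plane coordinates via the direct linear transform (DLT).
    Requires at least 4 non-degenerate point correspondences."""
    A = []
    for (x, y), (X, Y) in zip(pixel_pts, geo_pts):
        A.append([-x, -y, -1, 0, 0, 0, X * x, X * y, X])
        A.append([0, 0, 0, -x, -y, -1, Y * x, Y * y, Y])
    # The homography is the null vector of A (smallest singular vector).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def pixel_to_geo(H, x, y):
    """Map one pixel location to geographic coordinates."""
    v = H @ np.array([x, y, 1.0])
    return v[0] / v[2], v[1] / v[2]
```

Once H is fitted per camera, any detected target's foot point in the image can be projected into the shared geographic space, which makes coverage areas of different cameras directly comparable.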
Exemplarily, the motion trajectories of all sample targets recorded in the acquired sample video data do not overlap on the time axis.
Exemplarily, the video data processing method further includes: acquiring actual video data collected for a target to be tracked by at least some of the plurality of video capture devices; and tracking the target to be tracked using the acquired actual video data and the positional relationship among the at least some video capture devices.
Exemplarily, tracking the target to be tracked using the acquired actual video data and the positional relationship among the at least some video capture devices includes: when it is found, from the actual video data collected by a first video capture device among the at least some video capture devices, that the target to be tracked moves from the coverage area of the first video capture device into the overlap region between the first video capture device and a second video capture device among the at least some video capture devices, comparing the similarity between the target to be tracked and all targets in the overlap region collected by the second video capture device, determining the target whose similarity exceeds a similarity threshold to be the target to be tracked, and following the target to be tracked using the actual video data collected by the second video capture device.
Exemplarily, tracking the target to be tracked using the acquired actual video data and the positional relationship among the at least some video capture devices includes: when it is found, from the actual video data collected by a first video capture device among the at least some video capture devices, that the target to be tracked moves from the coverage area of the first video capture device into the disappearance region between the first video capture device and a second video capture device among the at least some video capture devices, comparing the similarity between the target to be tracked and all targets that, within a specific time period, move from the disappearance region into the coverage area of the second video capture device as collected by the second video capture device, determining the target whose similarity exceeds a similarity threshold to be the target to be tracked, and following the target to be tracked using the actual video data collected by the second video capture device.
Exemplarily, the specific time period is less than or equal to the disappearance time threshold associated with the disappearance region between the first video capture device and the second video capture device.
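The disappearance-region hand-off described above can be sketched as a small filter: only targets re-appearing within the learned disappearance-time threshold are considered, and the match must clear the similarity threshold. The record layout, feature representation, and function names here are illustrative assumptions, not the patent's implementation:

```python
def match_after_disappearance(vanish_t, candidates, query_feat,
                              similarity, sim_thresh, vanish_thresh):
    """Among targets entering the second camera's coverage area within
    the learned disappearance-time window after vanish_t, return the
    candidate most similar to the tracked target, or None if no
    candidate's similarity exceeds sim_thresh."""
    best_score, best = sim_thresh, None
    for cand in candidates:
        if not 0 <= cand["t"] - vanish_t <= vanish_thresh:
            continue  # re-appeared outside the learned time window
        score = similarity(query_feat, cand["feat"])
        if score > best_score:
            best_score, best = score, cand
    return best
```

Returning None when nothing clears the threshold lets the tracker keep the target in a "lost" state rather than latching onto a wrong identity.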
Exemplarily, the video data processing method further includes: transmitting tracking information related to the target to be tracked for storage, wherein the tracking information includes the image positions of the target to be tracked in the actual video data collected by each of the at least some video capture devices and target information of the target to be tracked.
According to another aspect of the present invention, a video data processing apparatus is provided, including: a first acquisition module for acquiring sample video data respectively collected by a plurality of video capture devices, wherein the sample video data collected by one of the plurality of video capture devices and the sample video data collected by at least one other of the plurality of video capture devices record the motion trajectory of the same sample target; and a training module for training with the acquired sample video data to determine the positional relationship among the plurality of video capture devices.
Exemplarily, the training module includes: a time determination submodule for determining, from the acquired sample video data, the times at which sample targets appear in the coverage areas of the plurality of video capture devices; and a position determination submodule for determining the positional relationship among the plurality of video capture devices from those times.
Exemplarily, the position determination submodule includes an overlap region determination unit for, if, for each of one or more sample targets, that sample target appears simultaneously in the coverage areas of a first video capture device and a second video capture device of the plurality of video capture devices during a corresponding first time period, determining the overlap region between the first video capture device and the second video capture device from the motion trajectories of the one or more sample targets during their respective first time periods.
Exemplarily, the position determination submodule includes a disappearance region determination unit for, if, for each of one or more sample targets, that sample target leaves the coverage area of a first video capture device of the plurality of video capture devices at the start of a corresponding second time period, enters the coverage area of a second video capture device of the plurality of video capture devices at the end of the corresponding second time period, and does not appear in the coverage area of any of the plurality of video capture devices during the corresponding second time period, estimating the possible motion trajectories of the one or more sample targets during their respective second time periods and determining the disappearance region between the first video capture device and the second video capture device from those possible motion trajectories.
Exemplarily, the video data processing apparatus further includes a time threshold determination module for, if, for each of one or more sample targets, that sample target leaves the coverage area of a first video capture device of the plurality of video capture devices at the start of a corresponding second time period, enters the coverage area of a second video capture device of the plurality of video capture devices at the end of the corresponding second time period, and does not appear in the coverage area of any of the plurality of video capture devices during the corresponding second time period, determining, from the respective second time periods of the one or more sample targets, the disappearance time threshold associated with the disappearance region between the first video capture device and the second video capture device.
Exemplarily, the video data processing apparatus further includes: a second acquisition module for, for each of the plurality of video capture devices, acquiring an image collected by that video capture device for its corresponding key points, the geographic positions of the key points having been labeled; and a computation module for, for each of the plurality of video capture devices, computing the mapping between the image space of that video capture device and geographic space based on the pixel positions of the key points in the image and the geographic positions of the key points.
Exemplarily, the motion trajectories of all sample targets recorded in the acquired sample video data do not overlap on the time axis.
Exemplarily, the video data processing apparatus further includes: a third acquisition module for acquiring actual video data collected for a target to be tracked by at least some of the plurality of video capture devices; and a tracking module for tracking the target to be tracked using the acquired actual video data and the positional relationship among the at least some video capture devices.
Exemplarily, the tracking module includes a first tracking submodule for, when it is found from the actual video data collected by a first video capture device among the at least some video capture devices that the target to be tracked moves from the coverage area of the first video capture device into the overlap region between the first video capture device and a second video capture device among the at least some video capture devices, comparing the similarity between the target to be tracked and all targets in the overlap region collected by the second video capture device, determining the target whose similarity exceeds a similarity threshold to be the target to be tracked, and following the target to be tracked using the actual video data collected by the second video capture device.
Exemplarily, the tracking module includes a second tracking submodule for, when it is found from the actual video data collected by a first video capture device among the at least some video capture devices that the target to be tracked moves from the coverage area of the first video capture device into the disappearance region between the first video capture device and a second video capture device among the at least some video capture devices, comparing the similarity between the target to be tracked and all targets that, within a specific time period, move from the disappearance region into the coverage area of the second video capture device as collected by the second video capture device, determining the target whose similarity exceeds a similarity threshold to be the target to be tracked, and following the target to be tracked using the actual video data collected by the second video capture device.
Exemplarily, the specific time period is less than or equal to the disappearance time threshold associated with the disappearance region between the first video capture device and the second video capture device.
Exemplarily, the video data processing apparatus further includes a transmission module for transmitting tracking information related to the target to be tracked for storage, wherein the tracking information includes the image positions of the target to be tracked in the actual video data collected by each of the at least some video capture devices and target information of the target to be tracked.
According to the video data processing method and apparatus of the embodiments of the present invention, the positional relationship among a plurality of video capture devices can be determined by sample training. The determined positional relationship can be used to achieve automatic tracking of a desired target across the video capture devices. The method is simple to implement, places few requirements on the deployment of the video capture devices or on prior information such as the background scene, and can therefore reduce the deployment overhead of a target tracking system.
Brief description of the drawings
The above and other objects, features, and advantages of the present invention will become more apparent from the following detailed description of embodiments of the present invention taken in conjunction with the accompanying drawings. The drawings are provided for a further understanding of the embodiments of the present invention, constitute a part of the specification, and serve, together with the embodiments, to explain the present invention; they do not limit the present invention. In the drawings, the same reference numbers generally denote the same components or steps.
Fig. 1 shows a schematic block diagram of an exemplary electronic device for implementing the video data processing method and apparatus according to embodiments of the present invention;
Fig. 2 shows a schematic flowchart of a video data processing method according to an embodiment of the present invention;
Fig. 3 shows a schematic diagram of the motion trajectory of an exemplary sample target according to an embodiment of the present invention;
Fig. 4 shows a schematic flowchart of a video data processing method according to another embodiment of the present invention;
Fig. 5 shows a schematic diagram of the motion trajectory of an exemplary target to be tracked according to an embodiment of the present invention;
Fig. 6 shows a schematic block diagram of a video data processing apparatus according to an embodiment of the present invention; and
Fig. 7 shows a schematic block diagram of a video data processing system according to an embodiment of the present invention.
Detailed description of embodiments
In order to make the objects, technical solutions, and advantages of the present invention more apparent, example embodiments of the present invention are described in detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention, and it should be understood that the present invention is not limited by the example embodiments described herein. All other embodiments obtained by those skilled in the art based on the embodiments described herein, without creative work, shall fall within the protection scope of the present invention.
First, an exemplary electronic device 100 for implementing the video data processing method and apparatus according to embodiments of the present invention is described with reference to Fig. 1.
As shown in Fig. 1, the electronic device 100 includes one or more processors 102, one or more storage devices 104, an input device 106, an output device 108, and at least two video capture devices 110, which are interconnected via a bus system 112 and/or another form of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in Fig. 1 are illustrative rather than restrictive; the electronic device may have other components and structures as needed.
The processor 102 may be a central processing unit (CPU) or another form of processing unit with data processing capability and/or instruction execution capability, and may control other components in the electronic device 100 to perform desired functions.
The storage device 104 may include one or more computer program products, which may include various forms of computer-readable storage media, for instance volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 102 may run the program instructions to implement the client functions (as implemented by the processor) and/or other desired functions in the embodiments of the invention described below. Various application programs and various data, such as data used and/or produced by the application programs, may also be stored on the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions, and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device 108 may output various information (such as images and/or sound) to the outside (such as a user), and may include one or more of a display, a speaker, and the like.
The video capture device 110 may collect desired video data (for example, video containing a moving target) and store the collected video data in the storage device 104 for use by other components. The video capture device 110 may be implemented by any suitable equipment, for instance a camera.
Exemplarily, the exemplary electronic device for implementing the video data processing method and apparatus according to embodiments of the present invention may be implemented on a device such as a personal computer or a remote server.
Below, a video data processing method according to an embodiment of the present invention is described with reference to Fig. 2. Fig. 2 shows a schematic flowchart of a video data processing method 200 according to an embodiment of the present invention. As shown in Fig. 2, the video data processing method 200 includes the following steps.
In step S210, sample video data respectively collected by a plurality of video capture devices are acquired, wherein the sample video data collected by one of the plurality of video capture devices and the sample video data collected by at least one other of the plurality of video capture devices record the motion trajectory of the same sample target.
Exemplarily, the video capture device may be a camera, which may be connected to a back-end computing device with computing capability or to a remote server and may send the video data it collects to the back-end computing device or the remote server for processing. Hereinafter, the present invention is described using a camera as an example of the video capture device; it will be appreciated that this is not a limitation of the present invention.
In one embodiment, the video capture device is an ordinary camera. In this case, assuming the sample target is a pedestrian, in order to obtain sample video data suitable for training, the pedestrian serving as the sample target may wear clothing that is easy to separate from the background region, such as a brightly colored coat. By recording the times and positions at which this pedestrian successively appears in each camera, sample video data related to this pedestrian can be obtained. These sample video data can later be used to determine the positional relationship among the cameras.
In another embodiment, the video capture device is a depth camera. In this case, the background and the pedestrian can be distinguished by changes in the depth information in the video data, so the dress of the pedestrian serving as the sample target is unrestricted. Similarly, by recording the times and positions at which this pedestrian successively appears in each camera, sample video data related to this pedestrian can be obtained and later used to determine the positional relationship among the cameras.
The sample target may be any suitable moving target, for instance a pedestrian, a vehicle, etc. In order to determine the positional relationship among the cameras, a moving target may be used as a sample for training. Before training, the motion trajectories of these sample targets need to be recorded. For example, the motion trajectory of a pedestrian in a scene containing multiple cameras may be recorded. It is understood that, when there are multiple cameras in the scene, a pedestrian may pass through the coverage areas (i.e., shooting areas) of all of the cameras, or only through the coverage areas of some of them. In the latter case, the motion trajectory of this pedestrian reveals the positional relationship among only some of the cameras; however, the positional relationship among the remaining cameras can be judged from the motion trajectories of other pedestrians who pass through their coverage areas. Therefore, not every pedestrian needs to pass through the coverage areas of all of the cameras, but if the positional relationship among all cameras in a scene is to be known, each camera needs to share a recorded motion trajectory of the same pedestrian with at least one other camera.
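One illustrative way to record such trajectories before training is a per-target log of which camera saw the target and when. The record layout below is an assumption for illustration only, not the patent's data format:

```python
from collections import defaultdict

def build_visibility_log(observations):
    """Group raw (target_id, camera_id, enter_t, leave_t) observations
    into a per-target, time-ordered trajectory across cameras."""
    log = defaultdict(list)
    for target_id, cam_id, enter_t, leave_t in observations:
        log[target_id].append((enter_t, leave_t, cam_id))
    for track in log.values():
        track.sort()  # chronological order within each target
    return dict(log)
```

A log of this shape is all the later training step needs: it exposes, per target, the time order in which coverage areas were entered and left.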
The coverage area described herein refers to the region in geographic space within which a video capture device can collect video data (or images); it corresponds to the image space of the video frames the device collects. Simply put, if a target appears in a video frame collected by a video capture device, the target is in the coverage area of that video capture device.
Sample object can be the moving target in daily life, it is also possible to be used exclusively for the moving target of sample training.Monitoring camera for parking lot, if it is desired to utilize multiple photographic head to realize pedestrian tracking under the scene of whole parking lot, it is necessary to first the deployment scenario of photographic head to be trained.The actual pedestrian that can directly utilize in daily life is trained.The movement locus of the actual pedestrian of monitoring camera record.In such situation, the movement locus ratio of pedestrian is more random, it is possible to through the posting field of all photographic head, it is also possible to merely through the posting field of a part of photographic head.Less the initial pedestrian through parking lot, the possible reference value of video data gathered is less, can only train the approximate location relation of those photographic head of pedestrian's process.Elapsing over time, the pedestrian through parking lot gets more and more, and the amount of the video data gathered is also increasing, finally likely trains the position relationship of all photographic head, and the accuracy that position relationship is determined can improve.
Of course, it is preferable to have testers act as pedestrians passing through the shooting areas of the cameras. During initial training, it is best that only one tester walks in the parking lot during each test; this avoids misrecognition and tracking errors and thus effectively improves the accuracy of the training result. After the basic spatial structure of the cameras has been established, the movement trajectories of multiple people can be used for training, so that the tracking accuracy in multi-person situations improves. Furthermore, extraneous information can be used for calibration; for example, calibration points can be added on the movement trajectories to make localization more accurate.
In step S220, the acquired sample video data are used for training to determine the position relationship between the multiple video acquisition devices.
The position relationship between video acquisition devices can be judged from the temporal order of the sample target's motion. The position relationship may be understood as the relationship between the shooting areas of the video acquisition devices. If the shooting areas of two video acquisition devices are relatively close or overlap, the two devices can be considered adjacent. For example, when a pedestrian leaves the shooting area of one camera and immediately enters the shooting area of another camera, the two cameras can be considered adjacent. It can be understood that, for multiple video acquisition devices, the more sample targets they record, the higher the accuracy of the judged position relationship.
An advantage of training with sample video data is that adaptive initialization is achieved with a simple method. When deploying the target tracking system, the user neither needs to draw a background map of the whole scene in advance nor to plan the distribution of the cameras elaborately; the position relationship between the cameras only needs to be modeled through some "training" after the cameras have been deployed.
By way of example, the video data processing method according to the embodiment of the present invention may be implemented in a unit or system having a memory and a processor.
The video data processing method according to the embodiment of the present invention may be deployed at the video acquisition end; for instance, in the surveillance field it may be deployed at the video acquisition end of a surveillance system. Alternatively, the method may be deployed at the server end (or cloud). For example, video data may be collected at the client, which sends the collected video data to the server end (or cloud), where the video data are processed.
According to the video data processing method provided by the present invention, sample training can be used to determine the position relationship between multiple video acquisition devices, and the determined position relationship can be used to realize automatic tracking of a desired target across the multiple video acquisition devices. This method is simple to implement, places few requirements on prior information such as the deployment of the video acquisition devices and the background scene, and can therefore reduce the deployment cost of the target tracking system.
According to an embodiment of the present invention, step S220 may include: determining, from the acquired sample video data, the access times of the sample targets in the shooting areas of the multiple video acquisition devices; and determining the position relationship between the multiple video acquisition devices according to those access times.
As described above, the position relationship between video acquisition devices can be judged from the temporal order of the sample target's motion. By way of example, if a pedestrian moves continuously from the shooting area of one camera into the shooting area of another camera, the two cameras can be considered adjacent. "Continuous motion" can be understood as the pedestrian proceeding directly from the shooting area of one camera into the shooting area of the other, without entering the shooting area of any other camera during the transition. The access time of a pedestrian in the shooting area of each camera can be determined from the times at which the pedestrian appears in the video data collected by that camera.
For example, if the sample video data collected by camera 1 show that a certain pedestrian appeared in the shooting area of camera 1 from 9:00 to 9:10, the sample video data collected by camera 2 show that the same pedestrian appeared in the shooting area of camera 2 from 9:11 to 9:15, and the pedestrian did not appear in the shooting area of any other camera from 9:10 to 9:11, then camera 1 and camera 2 can be considered adjacent cameras.
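The adjacency judgment above can be sketched in code. The patent specifies no implementation, so the following is a minimal, hypothetical illustration: two cameras are taken as adjacent when a sample target leaves one camera's shooting area and the next recorded event is its entry into the other camera's area within a small time gap. All names and the `max_gap` parameter are assumptions made for illustration.

```python
def adjacent_cameras(appearances, max_gap=60.0):
    """appearances maps camera id -> list of (enter_time, leave_time)
    intervals (seconds) during which a sample target was in that
    camera's shooting area. Two cameras are judged adjacent when the
    target leaves one and the very next event is entering the other
    within max_gap seconds (no third camera in between)."""
    events = []  # (time, kind, camera)
    for cam, intervals in appearances.items():
        for t_in, t_out in intervals:
            events.append((t_in, "enter", cam))
            events.append((t_out, "leave", cam))
    events.sort()
    pairs = set()
    for (t1, kind1, cam1), (t2, kind2, cam2) in zip(events, events[1:]):
        if kind1 == "leave" and kind2 == "enter" and cam1 != cam2 \
                and t2 - t1 <= max_gap:
            pairs.add(frozenset((cam1, cam2)))
    return pairs
```

With the 9:00–9:15 example above (times in minutes past 9:00), `adjacent_cameras({"cam1": [(0, 10)], "cam2": [(11, 15)]}, max_gap=2)` reports cam1 and cam2 as adjacent.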
Using the access times of the sample targets in the shooting areas of the video acquisition devices, the position relationship between the video acquisition devices can be determined quickly and conveniently.
According to an embodiment of the present invention, determining the position relationship between the multiple video acquisition devices according to the access times of the sample targets in their shooting areas may include: if, for each of one or more sample targets, the sample target appears simultaneously in the shooting areas of a first video acquisition device and a second video acquisition device among the multiple video acquisition devices within a first period corresponding to that sample target, determining the overlapping region between the first video acquisition device and the second video acquisition device according to the movement trajectories of the one or more sample targets within their respective first periods.
Fig. 3 illustrates a schematic diagram of the movement trajectory of an exemplary sample target according to an embodiment of the present invention. Although a shooting area is three-dimensional, for convenience Fig. 3 shows the projection of the shooting areas onto the ground plane, and each projected area represents the corresponding shooting area. In Fig. 3, sample target A moves out of the shooting area x of camera 1 into the shooting area y of camera 2; assume its movement trajectory is trajectory 310. It can be seen from trajectory 310 that sample target A appears in both shooting area x and shooting area y for a period of time, that is, it is simultaneously within the view of camera 1 and camera 2 and is recorded in the video data they collect. Thus, from the video data collected by camera 1 and camera 2, it is known that shooting area x of camera 1 and shooting area y of camera 2 overlap, and their overlapping region should contain the trajectory of sample target A during the time it appears in both x and y (that is, appears simultaneously in the video data collected by camera 1 and camera 2). Therefore, the overlapping region can be roughly estimated from the trajectory of sample target A within this period. It can be understood that more than one sample target may pass through the overlapping region between camera 1 and camera 2; the more sample targets pass through it, the more accurate the determination of the overlapping region. The sample targets may differ in the times at which they pass through the overlapping region, and each sample target corresponds to its own first period. The movement trajectories of all sample targets that pass through the overlapping region between camera 1 and camera 2 can be considered in determining the overlapping region between the two cameras.
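As a rough illustration of estimating the overlapping region from co-visible trajectory points, the sketch below (an assumption of this description, not a method prescribed by the patent) collects every ground-plane point at which a sample target was recorded by both cameras at once and returns the bounding box of those points as a crude estimate of the overlap:

```python
def estimate_overlap_bbox(tracks, cam_a, cam_b):
    """Each track is a list of (t, x, y, seen) samples, where seen is
    the set of camera ids whose shooting area contains the target at
    time t (i.e. the target appears in their video frames). Points
    seen by both cam_a and cam_b lie in the overlapping region; their
    bounding box (xmin, ymin, xmax, ymax) is a rough estimate of it."""
    pts = [(x, y)
           for track in tracks
           for (_, x, y, seen) in track
           if cam_a in seen and cam_b in seen]
    if not pts:
        return None  # no sample target crossed this overlap
    xs, ys = zip(*pts)
    return (min(xs), min(ys), max(xs), max(ys))
```

The more sample trajectories are fed in, the tighter the estimate becomes, matching the observation that more sample targets yield a more accurate overlapping region.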
According to an embodiment of the present invention, determining the position relationship between the multiple video acquisition devices according to the access times of the sample targets in their shooting areas may include: if, for each of one or more sample targets, the sample target leaves the shooting area of a first video acquisition device among the multiple video acquisition devices at the start of a second period corresponding to that sample target, enters the shooting area of a second video acquisition device among the multiple video acquisition devices at the end of that second period, and does not appear in the shooting area of any of the multiple video acquisition devices within that second period, estimating the possible movement trajectories of the one or more sample targets within their respective second periods and determining the disappearance region between the first video acquisition device and the second video acquisition device according to those possible movement trajectories.
With continued reference to Fig. 3, assume that sample target B moves out of the shooting area x of camera 1 into the shooting area y of camera 2, and that its movement trajectory is trajectory 320. On trajectory 320, sample target B does not appear in the shooting area of any camera (that is, does not appear in the video data collected by any camera) for some time. In such a case, sample target B can be considered to have "disappeared" during this period, and its movement trajectory is not recorded. The possible trajectory of sample target B during this period can then be estimated from this disappearance time. For example, if the disappearance time is very short, say only a few seconds, sample target B can be assumed to have moved in a straight line, and its trajectory can be roughly estimated from the coordinates of the point where it left shooting area x and the point where it entered shooting area y. Further, the disappearance region (also called the non-coverable region) between the two cameras can be roughly estimated from the possible trajectory of sample target B.
Similarly, more than one sample target may pass through the disappearance region between camera 1 and camera 2; the more sample targets pass through it, the more accurate the determination of the disappearance region. The sample targets may differ in the times at which they pass through the disappearance region, and each sample target corresponds to its own second period. The possible movement trajectories of all sample targets that pass through the disappearance region between camera 1 and camera 2 can be considered in determining the disappearance region between the two cameras.
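The straight-line assumption for short disappearance times can be illustrated as follows; this is a sketch under that stated assumption, with the exit point from shooting area x and the entry point into shooting area y taken from the recorded trajectories:

```python
def interpolate_gap(exit_pt, entry_pt, n=5):
    """Linearly interpolate n points along the assumed straight-line
    path between the point where the target left camera 1's shooting
    area and the point where it entered camera 2's. The union of such
    segments over many sample targets sketches the disappearance
    (non-coverable) region between the two cameras."""
    (x0, y0), (x1, y1) = exit_pt, entry_pt
    return [(x0 + (x1 - x0) * i / (n - 1),
             y0 + (y1 - y0) * i / (n - 1)) for i in range(n)]
```

For example, `interpolate_gap((0, 0), (4, 2))` yields five evenly spaced points from the exit point to the entry point.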
According to an embodiment of the present invention, the video data processing method 200 may further include: if, for each of one or more sample targets, the sample target leaves the shooting area of a first video acquisition device among the multiple video acquisition devices at the start of a second period corresponding to that sample target, enters the shooting area of a second video acquisition device among the multiple video acquisition devices at the end of that second period, and does not appear in the shooting area of any of the multiple video acquisition devices within that second period, determining, according to the respective second periods of the one or more sample targets, a disappearance-time threshold associated with the disappearance region between the first video acquisition device and the second video acquisition device.
When a target is being tracked, the disappearance-time threshold can be used to judge whether the target to be tracked has temporarily entered the disappearance region between cameras or has truly disappeared. According to an example of the present invention, the disappearance-time threshold is determined from the time sample targets take to cross the disappearance region; that is, the threshold associated with a disappearance region is determined from the second periods corresponding to all sample targets that passed through it. The calculation method of the threshold can be set as required. For example, the longest of the second periods corresponding to all sample targets that passed through the disappearance region may be selected as the threshold. Alternatively, the mean of those second periods may be calculated as the threshold.
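Both threshold choices described above (the longest crossing time, or the mean of the crossing times) can be written in a few lines; the function name and `mode` parameter are illustrative assumptions:

```python
def disappearance_time_threshold(second_periods, mode="max"):
    """second_periods: durations (seconds) that sample targets spent
    crossing the disappearance region. Returns the longest crossing
    time by default, or the mean when mode == "mean" - the two
    calculation methods suggested in the text."""
    if mode == "max":
        return max(second_periods)
    return sum(second_periods) / len(second_periods)
```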
As set forth above, the disappearance-time threshold can be used to judge whether a target to be tracked has temporarily entered the disappearance region between cameras or has truly disappeared, which helps improve the tracking performance of the target tracking system.
According to an embodiment of the present invention, the video data processing method 200 may further include: for each of the multiple video acquisition devices, obtaining an image collected by that video acquisition device for its corresponding key points, the geographic positions of the key points having been marked; and calculating the mapping relationship between the image space of that video acquisition device and geographic space based on the pixel positions of the key points in the image and the geographic positions of the key points.
Geographical space can indicate with such as outdoor map or indoor plane figure.Pixel coordinate in image space is mapped with the geographical coordinate in map or plane graph, finds out the corresponding relation of the two.Such as, the key point that geographical position is known can be marked on map or plane graph, utilize the image that video acquisition device collection comprises key point, from the video data that image collecting device gathers, so may determine that each key point location of pixels in image space, and then the mapping relations between the geographical position of key point and corresponding location of pixels can be set up.It is of course also possible to make tester through specific key point, determine key point location of pixels in image space by tester is tracked.
The mapping relationship between image space and geographic space can be used to calibrate the trained position relationship of the video acquisition devices. Moreover, once this mapping relationship is known, the movement trajectory of a target to be tracked can be mapped to the target's position in actual geographic space during tracking, which facilitates checking and following the target's position.
According to an embodiment of the present invention, the movement trajectories of all sample targets recorded in the acquired sample video data do not overlap with one another on the time axis.
As described above, during initial training it is best that only one tester walks in the scene covered by the cameras during each test. In the sample video data obtained in this case, the movement trajectory of each sample target does not overlap with the trajectories of the other sample targets on the time axis. This avoids misrecognition and tracking errors, and can thus effectively improve the accuracy of the training result.
Fig. 4 illustrates a schematic flowchart of a video data processing method 400 according to another embodiment of the present invention. Steps S410 and S420 of the video data processing method 400 shown in Fig. 4 are similar to steps S210 and S220 of the above-described video data processing method 200; for brevity, they are not repeated here.
According to the present embodiment, the video data processing method 400 may further include the following steps.
In step S430, actual video data collected for a target to be tracked by at least some of the multiple video acquisition devices are obtained. In step S440, the target to be tracked is tracked using the acquired actual video data and the position relationship between the at least some video acquisition devices.
Target detection and tracking in an actual scene may be carried out in the following manner.
Video acquisition devices whose position relationship is known are used to track the target to be tracked. When the target enters the shooting area of a video acquisition device, that device collects video containing the target, and the actual video data it collects can be obtained in real time. For the acquired actual video data, the target to be tracked can be detected in each video frame with an existing object detection method, so as to mark out its movement trajectory. The trajectory of the target can be marked with, for example, the target's position in each video frame and target information. The target information can be obtained with many algorithms, including but not limited to histogram of oriented gradients (HOG) features, convolutional neural network features, and the like. When the target to be tracked is a pedestrian, the facial features of the pedestrian are typically used as the target information.
When the target to be tracked is moving within the shooting area of a first video acquisition device, the video data collected by the first video acquisition device can be used to track it in real time. When the target leaves the shooting area of the first video acquisition device and enters the shooting area of an adjacent second video acquisition device, the video data collected by the second video acquisition device can be used to continue tracking it. Therefore, once the position relationship between the video acquisition devices is known, which video acquisition device should take over can be judged from the current motion state of the target, thereby achieving continuous tracking of the target to be tracked.
It can be understood that the tracking of the target to be tracked may also be carried out after all the video acquisition devices the target passed have collected video data containing the target. In that case, detection and tracking of the target can be carried out across all the video data containing it.
According to an embodiment of the present invention, tracking the target to be tracked using the acquired actual video data and the position relationship between the at least some video acquisition devices includes: when it is found, from the actual video data collected by a first video acquisition device among the at least some video acquisition devices, that the target to be tracked moves from the shooting area of the first video acquisition device into the overlapping region between the first video acquisition device and a second video acquisition device among the at least some video acquisition devices, comparing the similarity between the target to be tracked and all targets in the overlapping region collected by the second video acquisition device, determining the target whose similarity exceeds a similarity threshold to be the target to be tracked, and tracking the target to be tracked using the actual video data collected by the second video acquisition device.
For example, when it is found from the data of camera 1 that a pedestrian to be tracked enters the shooting area of camera 1, the video data collected by camera 1 can be used to track the pedestrian. Assume that camera 1 is adjacent to camera 2 and that there is an overlapping region between them. As the pedestrian moves, he may enter the overlapping region between camera 1 and camera 2. In that case, the pedestrian is likely to enter the shooting area of camera 2 next, so camera 2 can be used to continue tracking him.
It can be understood that multiple pedestrians may appear in the overlapping region between camera 1 and camera 2 at the same time, so camera 2 will collect video data containing multiple pedestrians, and the pedestrian to be tracked needs to be identified among them. The facial features of the pedestrian to be tracked can be extracted from the video data collected by camera 1. These facial features are compared with the facial features of all pedestrians located in the overlapping region collected by camera 2, a similarity is calculated, and the pedestrian whose similarity meets the requirement is taken as the pedestrian to be tracked and tracking continues. In another example, the facial features of the pedestrian to be tracked may also be compared with the facial features of all pedestrians collected by camera 2, a similarity calculated, and the pedestrian whose similarity exceeds the similarity threshold taken as the pedestrian to be tracked for continued tracking.
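The similarity comparison can be sketched as below, with each target represented by a feature vector (e.g. facial or convolutional-network features) and cosine similarity used as an assumed metric; the function names and the 0.9 threshold (the "90%" example given in the text) are illustrative assumptions:

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_target(query_feat, candidates, threshold=0.9):
    """Compare the tracked target's feature against every target the
    second camera sees in the overlapping region; return the id of the
    best candidate whose similarity exceeds the threshold, or None if
    no candidate qualifies."""
    best_id, best_sim = None, threshold
    for cid, feat in candidates.items():
        s = cosine_sim(query_feat, feat)
        if s > best_sim:
            best_id, best_sim = cid, s
    return best_id
```

Returning the single best candidate above the threshold, rather than any qualifying one, is a design choice that avoids ambiguity when several pedestrians resemble the tracked target.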
The similarity threshold according to the present embodiment can be determined as desired and can be any suitable value, for instance 90%; the present invention does not limit this.
According to an embodiment of the present invention, tracking the target to be tracked using the acquired actual video data and the position relationship between the at least some video acquisition devices includes: when it is found, from the actual video data collected by a first video acquisition device among the at least some video acquisition devices, that the target to be tracked moves from the shooting area of the first video acquisition device into the disappearance region between the first video acquisition device and a second video acquisition device among the at least some video acquisition devices, comparing the similarity between the target to be tracked and all targets, collected by the second video acquisition device within a specific time period, that move from the disappearance region into the shooting area of the second video acquisition device, determining the target whose similarity exceeds a similarity threshold to be the target to be tracked, and tracking the target to be tracked using the actual video data collected by the second video acquisition device.
For example, when it is found from the data of camera 1 that a pedestrian to be tracked enters the shooting area of camera 1, the video data collected by camera 1 can be used to track the pedestrian. Assume that camera 1 is adjacent to camera 2 and that there is a disappearance region between them. As the pedestrian moves, he may enter the disappearance region between camera 1 and camera 2. In that case, the pedestrian is likely to enter the shooting area of camera 2 next.
It can be understood that multiple pedestrians may move from the disappearance region between camera 1 and camera 2 into the shooting area of camera 2 at the same time, so camera 2 will collect video data containing multiple pedestrians, and the pedestrian to be tracked needs to be identified among them. The facial features of the pedestrian to be tracked can be extracted from the video data collected by camera 1. These facial features are compared with the facial features of all pedestrians, collected by camera 2 within the specific time period, that move from the disappearance region into the shooting area of camera 2; a similarity is calculated, and the pedestrian whose similarity exceeds the similarity threshold is taken as the pedestrian to be tracked and tracking continues. In another example, the facial features of the pedestrian to be tracked may also be compared with the facial features of all pedestrians collected by camera 2 within the specific time period, a similarity calculated, and the pedestrian whose similarity meets the requirement taken as the pedestrian to be tracked for continued tracking.
The similarity threshold according to the present embodiment can likewise be determined as desired and can be any suitable value, for instance 90%; the present invention does not limit this.
The specific time period mentioned above can be any suitable value, for instance ten seconds, thirty seconds, one minute, and so on. In one example, the specific time period can be determined according to the disappearance-time threshold obtained in the sample training process. For example, the specific time period can be less than or equal to the disappearance-time threshold associated with the disappearance region between the first video acquisition device and the second video acquisition device among the at least some video acquisition devices.
As described above, the disappearance-time threshold can be used when tracking a target to judge whether the target to be tracked has temporarily entered the disappearance region between cameras or has truly disappeared. Assume the disappearance-time threshold is thirty seconds. If, more than thirty seconds after entering the disappearance region between camera 1 and camera 2, the pedestrian to be tracked still has not appeared in the shooting area of camera 2, that is, no pedestrian whose similarity meets the requirement can be found in the video data collected by camera 2, the pedestrian can be considered to have disappeared and his whereabouts lost.
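The thirty-second example can be expressed as a small check: after the target leaves camera 1's shooting area, only targets entering camera 2's area within the threshold window are candidates for re-identification, and an empty result means the target is considered lost. Names below are assumptions for illustration.

```python
def candidates_within_threshold(leave_time, entries, threshold):
    """entries: (time, target_id) pairs for targets entering camera 2's
    shooting area from the disappearance region. Targets seen within
    the disappearance-time threshold after leave_time are candidates
    for the similarity comparison; an empty list means the tracked
    target has stayed unseen too long and is considered lost."""
    return [tid for t, tid in entries
            if leave_time < t <= leave_time + threshold]
```

For instance, with a 30-second threshold, a pedestrian appearing 10 seconds after the tracked target vanished is a candidate, while one appearing 50 seconds later is not.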
Referring to Fig. 5, a schematic diagram of the movement trajectory of an exemplary target to be tracked according to an embodiment of the present invention is shown. As in Fig. 3, the projection of each shooting area onto the ground plane in Fig. 5 represents the shooting area. As shown in Fig. 5, assume that target Z to be tracked moves from left to right along trajectory 510. It can be seen that after leaving the shooting area x of camera 1, target Z enters the disappearance region between shooting area x of camera 1 and shooting area y of camera 2. It is then necessary to judge whether target Z appears in the shooting area of camera 2 within the specific time period. Horizontal line 520 is the representation of trajectory 510 on the time axis: black dots on line 520 indicate that the target appears in the shooting area of a camera, and light gray dots indicate that the target has disappeared. The threshold marked on line 520 refers to the specific time period. It can be seen from line 520 that the target appears in shooting area y of camera 2 before the specific time period ends, so camera 2 can be used to continue tracking it.
According to an embodiment of the present invention, the video data processing method further includes: transmitting tracking information related to the target to be tracked for storage, wherein the tracking information includes the image positions of the target to be tracked in the actual video data collected by each of the at least some video acquisition devices and the target information of the target to be tracked.
From the video data collected by each video acquisition device, the tracking information of the target to be tracked can be obtained. The tracking information may refer to information indicating the movement trajectory of the target; that is, it may contain the above-mentioned position of the target in each video frame (i.e., the image position) and the target information. The target information may include, but is not limited to, HOG features, convolutional neural network features, and the like. When the target to be tracked is a pedestrian, the facial features of the pedestrian can generally be used as the target information.
The tracking information of the target to be tracked can be sent to a storage device for storage. When the video data processing method is implemented at the video acquisition end, the video data collected by the camera can be received by a processor, which processes the video data to obtain the tracking information of the target to be tracked. The processor can then send the tracking information to a connected storage device for storage; the processor can also upload the tracking information to a remote server for storage. When the video data processing method is implemented at a remote server, the video data collected by the camera can be received by the remote server, which processes the video data to obtain the tracking information of the target to be tracked and then stores it.
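A minimal example of serializing one piece of tracking information (the target's image position in a frame plus its feature vector) for storage or upload; the field names are assumptions for illustration, not part of the patent:

```python
import json

def tracking_record(target_id, camera_id, frame_idx, bbox, feature):
    """Serialize one tracking-information entry: the target's image
    position (bounding box) in a given frame of a given camera, plus
    its feature vector, as a JSON string ready to send to a storage
    device or upload to a remote server."""
    return json.dumps({"target": target_id, "camera": camera_id,
                       "frame": frame_idx, "bbox": bbox,
                       "feature": feature})
```

A sequence of such records over consecutive frames constitutes the stored movement trajectory that can later be retrieved and checked.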
Storing the tracking information makes it convenient to later retrieve and check the motion of the target to be tracked.
It can be understood that the training process for the sample targets and the tracking process for the target to be tracked can be implemented on the same or different back-end computing devices or remote servers. Of course, the video acquisition devices that collect video of the target to be tracked need to be among the video acquisition devices covered by the sample training.
In one embodiment, the tracking of the target to be tracked is completed at a remote server. When the track of a pedestrian leaves the shooting area of a certain camera, the server can, according to the position relationship between the cameras, access the video data of the cameras the pedestrian is likely to pass next, perform matching, and continue tracking.
In one embodiment, the tracking of the target to be tracked is completed on terminal devices with computing capability. For example, each camera can be connected to a respective terminal device. When the track of a pedestrian leaves the shooting area of a certain camera, the terminal device connected to that camera can send the pedestrian's image position and facial features over a network (such as Ethernet or a wireless network) to the terminal devices connected to the adjacent cameras, and the adjacent cameras continue to track the pedestrian. The terminal device connected to each camera can upload the tracking information it computes to the server.
According to a further aspect of the present invention, a video data processing apparatus is provided. Fig. 6 illustrates a schematic block diagram of a video data processing apparatus 600 according to an embodiment of the present invention.
As shown in Fig. 6, the video data processing apparatus 600 according to the embodiment of the present invention includes a first acquisition module 610 and a training module 620.
The first acquisition module 610 is configured to obtain sample video data collected respectively by multiple video acquisition devices, wherein the sample video data collected by one of the multiple video acquisition devices and the sample video data collected by at least one other of the multiple video acquisition devices record the movement trajectory of the same sample target. The first acquisition module 610 can be implemented by the processor 102 of the electronic device shown in Fig. 1 running program instructions stored in the storage device 104.
The training module 620 is configured to perform training using the acquired sample video data to determine the position relationship between the plurality of video acquisition devices. The training module 620 may be implemented by the processor 102 in the electronic device shown in Fig. 1 running the program instructions stored in the storage device 104.
According to an embodiment of the present invention, the training module 620 may include: a time determination submodule configured to determine, according to the acquired sample video data, the times at which the sample objects appear in the recording areas of the plurality of video acquisition devices; and a position determination submodule configured to determine the position relationship between the plurality of video acquisition devices according to the times at which the sample objects appear in the recording areas of the plurality of video acquisition devices.
According to an embodiment of the present invention, the position determination submodule includes an overlapping region determination unit configured to: if, for each of one or more sample objects, the sample object simultaneously appears in the recording areas of a first video acquisition device and a second video acquisition device among the plurality of video acquisition devices within a first period corresponding to that sample object, determine the overlapping region between the first video acquisition device and the second video acquisition device according to the movement tracks of the one or more sample objects within their respective first periods.
nullAccording to embodiments of the present invention,Position determines that submodule includes disappearance area determination unit,If for for each in one or more sample object,This sample object is left the posting field of the first video acquisition device in multiple video acquisition device in the start time of the second period of its correspondence and enters the posting field of the second video acquisition device in multiple video acquisition devices in the finish time of the second period of its correspondence,And this sample object does not appear in any one posting field of multiple video acquisition device within the second period of its correspondence,Then estimate one or more sample object possible movement locus within each self-corresponding second period the disappearance region determining between the first video acquisition device and the second video acquisition device according to one or more sample object possible movement locus within each self-corresponding second period.
According to embodiments of the present invention, video data processing apparatus 600 farther includes time threshold and determines module, if for for each in one or more sample object, this sample object is left the posting field of the first video acquisition device in multiple video acquisition device in the start time of the second period of its correspondence and enters the posting field of the second video acquisition device in multiple video acquisition devices in the finish time of the second period of its correspondence, and this sample object does not appear in any one posting field of multiple video acquisition device within the second period of its correspondence, then determine, according to each self-corresponding second period of one or more sample object, the extinction time threshold value that the disappearance region between the first video acquisition device and the second video acquisition device is relevant.
According to embodiments of the present invention, video data processing apparatus 600 farther includes: the second acquisition module, for for each in multiple video acquisition devices, obtaining the image gathered by this video acquisition device for the key point of its correspondence, the geographical position of key point is to have marked;And computing module, for for each in multiple video acquisition devices, the geographical position based on key point location of pixels in the picture and key point calculates the mapping relations between the image space of this video acquisition device and geographical space.
According to embodiments of the present invention, in acquired Sample video data, the movement locus of all sample object of record does not occur simultaneously on a timeline.
According to embodiments of the present invention, video data processing apparatus 600 farther includes: the 3rd acquisition module, for obtaining the actual video data gathered by least part of video acquisition device in multiple video acquisition devices for target to be tracked;And tracking module, for utilizing the position relationship between acquired actual video data and at least part of video acquisition device that target to be tracked is tracked.
According to embodiments of the present invention, tracking module includes the first tracking submodule, for when according to when finding that by the actual video data of the first video acquisition device collection at least part of video acquisition device target to be tracked moves to the overlapping region between the second video acquisition device the first video acquisition device and at least part of video acquisition device from the posting field of the first video acquisition device, similarity between all targets in overlapping region and the target to be tracked that are relatively gathered by the second video acquisition device, determine that similarity is target to be tracked more than the target of similarity threshold and utilizes the actual video data gathered by the second video acquisition device to follow the tracks of target to be tracked.
According to embodiments of the present invention, tracking module includes the second tracking submodule, for when according to when finding that by the actual video data of the first video acquisition device collection at least part of video acquisition device target to be tracked moves to the disappearance region between the second video acquisition device the first video acquisition device and at least part of video acquisition device from the posting field of the first video acquisition device, similarity between all targets and the target to be tracked of the posting field moving to the second video acquisition device from disappearance region relatively gathered in specific time period by the second video acquisition device, determine that similarity is target to be tracked more than the target of similarity threshold and utilizes the actual video data gathered by the second video acquisition device to follow the tracks of target to be tracked.
According to embodiments of the present invention, specific time period less than or equal to and the first video acquisition device and the second video acquisition device between the relevant extinction time threshold value in disappearance region.
According to embodiments of the present invention, video data processing apparatus 600 farther includes delivery module, for transmitting the tracking information relevant to target to be tracked for storing, wherein, tracking information includes the target information of the target to be tracked picture position in the actual video data by each collection at least part of video acquisition device and target to be tracked.
Those of ordinary skill in the art are it is to be appreciated that the unit of each example that describes in conjunction with the embodiments described herein and algorithm steps, it is possible to being implemented in combination in of electronic hardware or computer software and electronic hardware.These functions perform with hardware or software mode actually, depend on application-specific and the design constraint of technical scheme.Professional and technical personnel specifically can should be used for using different methods to realize described function to each, but this realization is it is not considered that beyond the scope of this invention.
Fig. 7 illustrates the schematic block diagram of video data processing system 700 according to an embodiment of the invention.Video data processing system 700 includes video acquisition device 710, storage device 720 and processor 730.
Video acquisition device 710 is for gathering the video data of needs.Video acquisition device 710 is optional, and video data processing system 700 can not include video acquisition device 710.
Described storage device 720 stores the program code for realizing the corresponding steps in video data handling procedure according to embodiments of the present invention.
Described processor 730 is for running the program code of storage in described storage device 720, to perform the corresponding steps of video data handling procedure according to embodiments of the present invention, and it is used for realizing the first acquisition module 610 in video data processing apparatus according to embodiments of the present invention and training module 620.
In one embodiment, described program code makes described video data processing system 700 perform following steps when being run by described processor 730: obtain the Sample video data gathered respectively by multiple video acquisition devices, wherein, one of multiple video acquisition devices the Sample video data gathered and the movement locus being recorded same sample object by the Sample video data of other video acquisition device collections of at least one in multiple video acquisition devices;And utilize acquired Sample video data to be trained determining the position relationship between multiple video acquisition device.
In one embodiment, described program code makes the acquired Sample video data that utilize performed by described video data processing system 700 be trained determining that the step of the position relationship between multiple video acquisition device includes when being run by described processor 730: determine the sample object access time in the posting field of multiple video acquisition devices according to acquired Sample video data;And determine the position relationship between multiple video acquisition device according to sample object access time in the posting field of multiple video acquisition devices.
In one embodiment, described program code makes when being run by described processor 730 to determine that the step of the position relationship between multiple video acquisition device includes according to sample object access time in the posting field of multiple video acquisition devices performed by described video data processing system 700: if for each in one or more sample object, this sample object simultaneously appears in the posting field of the first video acquisition device in multiple video acquisition device and the second video acquisition device within the first period of its correspondence, then determine the overlapping region between the first video acquisition device and the second video acquisition device according to one or more sample object movement locus within each self-corresponding first period.
nullIn one embodiment,Described program code makes when being run by described processor 730 to determine that the step of the position relationship between multiple video acquisition device includes according to sample object access time in the posting field of multiple video acquisition devices performed by described video data processing system 700: if for each in one or more sample object,This sample object is left the posting field of the first video acquisition device in multiple video acquisition device in the start time of the second period of its correspondence and enters the posting field of the second video acquisition device in multiple video acquisition devices in the finish time of the second period of its correspondence,And this sample object does not appear in any one posting field of multiple video acquisition device within the second period of its correspondence,Then estimate one or more sample object possible movement locus within each self-corresponding second period the disappearance region determining between the first video acquisition device and the second video acquisition device according to one or more sample object possible movement locus within each self-corresponding second period.
In one embodiment, described program code also makes described video data processing system 700 perform when being run by described processor 730: if for each in one or more sample object, this sample object is left the posting field of the first video acquisition device in multiple video acquisition device in the start time of the second period of its correspondence and enters the posting field of the second video acquisition device in multiple video acquisition devices in the finish time of the second period of its correspondence, and this sample object does not appear in any one posting field of multiple video acquisition device within the second period of its correspondence, then determine, according to each self-corresponding second period of one or more sample object, the extinction time threshold value that the disappearance region between the first video acquisition device and the second video acquisition device is relevant.
In one embodiment, described program code also makes described video data processing system 700 perform when being run by described processor 730: for each in multiple video acquisition devices, obtaining the image gathered by this video acquisition device for the key point of its correspondence, the geographical position of key point is to have marked;And the geographical position based on key point location of pixels in the picture and key point calculates the mapping relations between the image space of this video acquisition device and geographical space.
In one embodiment, in acquired Sample video data, the movement locus of all sample object of record does not occur simultaneously on a timeline.
In one embodiment, described program code also makes described video data processing system 700 perform when being run by described processor 730: obtain the actual video data gathered by least part of video acquisition device in multiple video acquisition devices for target to be tracked;And utilize the position relationship between acquired actual video data and at least part of video acquisition device that target to be tracked is tracked.
nullIn one embodiment,Described program code makes the step that target to be tracked is tracked by the position relationship between acquired actual video data and at least part of video acquisition device that utilizes performed by described video data processing system 700 include when being run by described processor 730: when according to when finding that by the actual video data of the first video acquisition device collection at least part of video acquisition device target to be tracked moves to the overlapping region between the second video acquisition device the first video acquisition device and at least part of video acquisition device from the posting field of the first video acquisition device,Similarity between all targets in overlapping region and the target to be tracked that are relatively gathered by the second video acquisition device,Determine that similarity is target to be tracked more than the target of similarity threshold and utilizes the actual video data gathered by the second video acquisition device to follow the tracks of target to be tracked.
nullIn one embodiment,Described program code makes the step that target to be tracked is tracked by the position relationship between acquired actual video data and at least part of video acquisition device that utilizes performed by described video data processing system 700 include when being run by described processor 730: when according to when finding that by the actual video data of the first video acquisition device collection at least part of video acquisition device target to be tracked moves to the disappearance region between the second video acquisition device the first video acquisition device and at least part of video acquisition device from the posting field of the first video acquisition device,Similarity between all targets and the target to be tracked of the posting field moving to the second video acquisition device from disappearance region relatively gathered in specific time period by the second video acquisition device,Determine that similarity is target to be tracked more than the target of similarity threshold and utilizes the actual video data gathered by the second video acquisition device to follow the tracks of target to be tracked.
In one embodiment, described specific time period less than or equal to and the first video acquisition device and the second video acquisition device between the relevant extinction time threshold value in disappearance region.
In one embodiment, described program code also makes described video data processing system 700 perform when being run by described processor 730: transmit the tracking information relevant to target to be tracked and be used for storing, wherein, tracking information includes the target information of the target to be tracked picture position in the actual video data by each collection at least part of video acquisition device and target to be tracked.
In addition, according to embodiments of the present invention, additionally provide a kind of storage medium, store programmed instruction on said storage, when described programmed instruction is run by computer or processor for performing the corresponding steps of the video data handling procedure of the embodiment of the present invention, and for realizing the corresponding module in video data processing apparatus according to embodiments of the present invention.Described storage medium such as can include the combination in any of the storage card of smart phone, the memory unit of panel computer, the hard disk of personal computer, read only memory (ROM), Erasable Programmable Read Only Memory EPROM (EPROM), portable compact disc read only memory (CD-ROM), USB storage or above-mentioned storage medium.
In one embodiment, described computer program instructions so that computer or processor realize each functional module of video data processing apparatus according to embodiments of the present invention, and/or can perform video data handling procedure according to embodiments of the present invention when being run by computer or processor.
In one embodiment, described computer program instructions makes described computer perform following steps when being run by computer: obtain the Sample video data gathered respectively by multiple video acquisition devices, wherein, one of multiple video acquisition devices the Sample video data gathered and the movement locus being recorded same sample object by the Sample video data of other video acquisition device collections of at least one in multiple video acquisition devices;And utilize acquired Sample video data to be trained determining the position relationship between multiple video acquisition device.
In one embodiment, described computer program instructions makes the acquired Sample video data that utilize performed by described computer be trained determining that the step of the position relationship between multiple video acquisition device includes when being run by computer: determine the sample object access time in the posting field of multiple video acquisition devices according to acquired Sample video data;And determine the position relationship between multiple video acquisition device according to sample object access time in the posting field of multiple video acquisition devices.
In one embodiment, described computer program instructions makes when being run by computer to determine that the step of the position relationship between multiple video acquisition device includes according to sample object access time in the posting field of multiple video acquisition devices performed by described computer: if for each in one or more sample object, this sample object simultaneously appears in the posting field of the first video acquisition device in multiple video acquisition device and the second video acquisition device within the first period of its correspondence, then determine the overlapping region between the first video acquisition device and the second video acquisition device according to one or more sample object movement locus within each self-corresponding first period.
nullIn one embodiment,Described computer program instructions makes when being run by computer to determine that the step of the position relationship between multiple video acquisition device includes according to sample object access time in the posting field of multiple video acquisition devices performed by described computer: if for each in one or more sample object,This sample object is left the posting field of the first video acquisition device in multiple video acquisition device in the start time of the second period of its correspondence and enters the posting field of the second video acquisition device in multiple video acquisition devices in the finish time of the second period of its correspondence,And this sample object does not appear in any one posting field of multiple video acquisition device within the second period of its correspondence,Then estimate one or more sample object possible movement locus within each self-corresponding second period the disappearance region determining between the first video acquisition device and the second video acquisition device according to one or more sample object possible movement locus within each self-corresponding second period.
In one embodiment, described computer program instructions also makes described computer perform when being run by computer: if for each in one or more sample object, this sample object is left the posting field of the first video acquisition device in multiple video acquisition device in the start time of the second period of its correspondence and enters the posting field of the second video acquisition device in multiple video acquisition devices in the finish time of the second period of its correspondence, and this sample object does not appear in any one posting field of multiple video acquisition device within the second period of its correspondence, then determine, according to each self-corresponding second period of one or more sample object, the extinction time threshold value that the disappearance region between the first video acquisition device and the second video acquisition device is relevant.
In one embodiment, described computer program instructions also makes described computer perform when being run by computer: for each in multiple video acquisition devices, obtaining the image gathered by this video acquisition device for the key point of its correspondence, the geographical position of key point is to have marked;And the geographical position based on key point location of pixels in the picture and key point calculates the mapping relations between the image space of this video acquisition device and geographical space.
In one embodiment, in acquired Sample video data, the movement locus of all sample object of record does not occur simultaneously on a timeline.
In one embodiment, described computer program instructions also makes described computer perform when being run by computer: obtain the actual video data gathered by least part of video acquisition device in multiple video acquisition devices for target to be tracked;And utilize the position relationship between acquired actual video data and at least part of video acquisition device that target to be tracked is tracked.
nullIn one embodiment,Described computer program instructions makes the step that target to be tracked is tracked by the position relationship between acquired actual video data and at least part of video acquisition device that utilizes performed by described computer include when being run by computer: when according to when finding that by the actual video data of the first video acquisition device collection at least part of video acquisition device target to be tracked moves to the overlapping region between the second video acquisition device the first video acquisition device and at least part of video acquisition device from the posting field of the first video acquisition device,Similarity between all targets in overlapping region and the target to be tracked that are relatively gathered by the second video acquisition device,Determine that similarity is target to be tracked more than the target of similarity threshold and utilizes the actual video data gathered by the second video acquisition device to follow the tracks of target to be tracked.
nullIn one embodiment,Described computer program instructions makes the step that target to be tracked is tracked by the position relationship between acquired actual video data and at least part of video acquisition device that utilizes performed by described computer include when being run by computer: when according to when finding that by the actual video data of the first video acquisition device collection at least part of video acquisition device target to be tracked moves to the disappearance region between the second video acquisition device the first video acquisition device and at least part of video acquisition device from the posting field of the first video acquisition device,Similarity between all targets and the target to be tracked of the posting field moving to the second video acquisition device from disappearance region relatively gathered in specific time period by the second video acquisition device,Determine that similarity is target to be tracked more than the target of similarity threshold and utilizes the actual video data gathered by the second video acquisition device to follow the tracks of target to be tracked.
In one embodiment, described specific time period less than or equal to and the first video acquisition device and the second video acquisition device between the relevant extinction time threshold value in disappearance region.
In one embodiment, described computer program instructions also makes described computer perform when being run by computer: transmits the tracking information relevant to target to be tracked and is used for storing, wherein, tracking information includes the target information of the target to be tracked picture position in the actual video data by each collection at least part of video acquisition device and target to be tracked.
Each module in video data processing system according to embodiments of the present invention can be run, by the processor implementing the electronic equipment that video data processes according to embodiments of the present invention, the computer program instructions stored in memory and realize, or realizes when the computer instruction that can store in the computer-readable recording medium of computer program according to embodiments of the present invention is run by computer.
Video data handling procedure according to embodiments of the present invention and device, video data processing system and storage medium, the method that can utilize sample training determines the position relationship between multiple video acquisition device, this method realizes simple, the prior information such as deployment and background scene for video acquisition device does not do too many requirement, therefore can reduce the deployment expense of Target Tracking System.
Although describing example embodiment by reference to accompanying drawing here, it should be understood that above-mentioned example embodiment is merely exemplary, and it is not intended to limit the scope of the invention to this.Those of ordinary skill in the art can make various changes and modifications wherein, is made without departing from the scope of the present invention and spirit.All such changes and modifications are intended to be included within the scope of the present invention required by claims.
Those of ordinary skill in the art are it is to be appreciated that the unit of each example that describes in conjunction with the embodiments described herein and algorithm steps, it is possible to being implemented in combination in of electronic hardware or computer software and electronic hardware.These functions perform with hardware or software mode actually, depend on application-specific and the design constraint of technical scheme.Professional and technical personnel specifically can should be used for using different methods to realize described function to each, but this realization is it is not considered that beyond the scope of this invention.
In several embodiments provided herein, it should be understood that disclosed equipment and method, it is possible to realize by another way.Such as, apparatus embodiments described above is merely schematic, such as, the division of described unit, being only a kind of logic function to divide, actual can have other dividing mode when realizing, for instance multiple unit or assembly can in conjunction with or be desirably integrated into another equipment, or some features can ignore, or do not perform.
In description mentioned herein, describe a large amount of detail.It is to be appreciated, however, that embodiments of the invention can be put into practice when not having these details.In some instances, known method, structure and technology it are not shown specifically, in order to do not obscure the understanding of this description.
Similarly, it should be understood that, in order to streamline the present disclosure and aid understanding of one or more of the various inventive aspects, in the description of exemplary embodiments of the present invention, various features of the invention are sometimes grouped together into a single embodiment, figure, or description thereof. However, this method of disclosure should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the corresponding claims reflect, the inventive point lies in solving the corresponding technical problem with fewer than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of the present invention.
Those skilled in the art will appreciate that, except where such features are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings), and all processes or units of any method or apparatus so disclosed, may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, equivalent, or similar purpose.
Furthermore, those skilled in the art will appreciate that, although some embodiments described herein include certain features that are included in other embodiments but not other features, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the claims, any one of the claimed embodiments may be used in any combination.
The component embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some modules of the apparatus according to embodiments of the present invention. The present invention may also be implemented as a device program (for example, a computer program and a computer program product) for performing part or all of the methods described herein. Such a program implementing the present invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-described embodiments illustrate rather than limit the invention, and that those skilled in the art may design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprises" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second, and third does not indicate any ordering; these words may be interpreted as names.
The above are merely specific embodiments of the present invention or descriptions of specific embodiments; the protection scope of the present invention is not limited thereto. Any person familiar with the art can readily conceive of changes or substitutions within the technical scope disclosed by the present invention, and all such changes or substitutions shall fall within the protection scope of the present invention. The protection scope of the present invention shall be defined by the claims.

Claims (24)

1. A video data processing method, comprising:
obtaining sample video data respectively collected by a plurality of video acquisition devices, wherein sample video data collected by one of the plurality of video acquisition devices and sample video data collected by at least one other of the plurality of video acquisition devices record a movement trajectory of a same sample object; and
utilizing the obtained sample video data to perform training to determine a positional relationship between the plurality of video acquisition devices.
2. The video data processing method of claim 1, wherein the utilizing the obtained sample video data to perform training to determine the positional relationship between the plurality of video acquisition devices comprises:
determining, according to the obtained sample video data, access times of sample objects in the recording regions of the plurality of video acquisition devices; and
determining the positional relationship between the plurality of video acquisition devices according to the access times of the sample objects in the recording regions of the plurality of video acquisition devices.
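The access-time bookkeeping described in claim 2 can be sketched as follows. This is an illustrative, non-claimed sketch assuming per-frame detections keyed by camera and object identifiers; all function and variable names are hypothetical and not taken from the patent.

```python
from collections import defaultdict

def access_intervals(detections, fps=25.0):
    """Derive each sample object's access interval (entry and exit time,
    in seconds) in each camera's recording region from per-frame detections.

    `detections` maps camera_id -> list of (frame_index, object_id) pairs,
    assumed sorted by frame_index. Names are illustrative only.
    """
    intervals = defaultdict(dict)  # camera_id -> object_id -> (t_in, t_out)
    for cam, frames in detections.items():
        for frame, obj in frames:
            t = frame / fps
            if obj not in intervals[cam]:
                intervals[cam][obj] = (t, t)       # first sighting
            else:
                t_in, _ = intervals[cam][obj]
                intervals[cam][obj] = (t_in, t)    # extend exit time
    return intervals
```

The resulting per-camera intervals are the raw material from which overlapping and disappearance regions (claims 3-5) could then be derived.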
3. The video data processing method of claim 2, wherein the determining the positional relationship between the plurality of video acquisition devices according to the access times of the sample objects in the recording regions of the plurality of video acquisition devices comprises:
if, for each of one or more sample objects, the sample object simultaneously appears, within its corresponding first period, in the recording regions of both a first video acquisition device and a second video acquisition device of the plurality of video acquisition devices, determining an overlapping region between the first video acquisition device and the second video acquisition device according to the movement trajectories of the one or more sample objects within their respective first periods.
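One plausible realization of claim 3 is to collect every position at which a sample object was seen by both cameras at the same instant and summarize those positions as a region. The sketch below (illustrative only, not part of the claims) assumes trajectories expressed in a shared ground-plane coordinate frame and uses an axis-aligned bounding box as the region summary; a convex hull would be an equally valid choice.

```python
def overlap_region(tracks_a, tracks_b):
    """Estimate the overlapping region of two cameras from sample objects
    visible to both at the same instants.

    `tracks_a` / `tracks_b` map object_id -> {t: (x, y)} in a shared
    ground-plane frame (hypothetical format). Returns a bounding box
    (xmin, ymin, xmax, ymax) over jointly observed positions, or None.
    """
    pts = []
    for obj, ta in tracks_a.items():
        tb = tracks_b.get(obj, {})
        for t, p in ta.items():
            if t in tb:          # visible in both recording regions at t
                pts.append(p)
    if not pts:
        return None
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return (min(xs), min(ys), max(xs), max(ys))
```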
4. The video data processing method of claim 2, wherein the determining the positional relationship between the plurality of video acquisition devices according to the access times of the sample objects in the recording regions of the plurality of video acquisition devices comprises:
if, for each of one or more sample objects, the sample object leaves the recording region of a first video acquisition device of the plurality of video acquisition devices at the start time of its corresponding second period, enters the recording region of a second video acquisition device of the plurality of video acquisition devices at the end time of its corresponding second period, and does not appear in the recording region of any of the plurality of video acquisition devices within its corresponding second period, estimating possible movement trajectories of the one or more sample objects within their respective second periods, and determining a disappearance region between the first video acquisition device and the second video acquisition device according to the possible movement trajectories of the one or more sample objects within their respective second periods.
5. The video data processing method of claim 4, further comprising:
if, for each of one or more sample objects, the sample object leaves the recording region of the first video acquisition device at the start time of its corresponding second period, enters the recording region of the second video acquisition device at the end time of its corresponding second period, and does not appear in the recording region of any of the plurality of video acquisition devices within its corresponding second period, determining, according to the respective second periods of the one or more sample objects, a disappearance time threshold associated with the disappearance region between the first video acquisition device and the second video acquisition device.
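Claim 5 leaves open which statistic of the observed second periods becomes the disappearance time threshold. A minimal, non-claimed sketch is to take the longest observed invisible gap with a small slack factor; the `slack` parameter and the choice of the maximum are assumptions, not taken from the patent.

```python
def disappearance_threshold(gaps, slack=1.2):
    """Pick a disappearance time threshold from observed second periods.

    `gaps` is a list of durations (seconds) for which sample objects were
    invisible while crossing from camera A to camera B. The longest gap
    times a slack factor is one plausible statistic; a high percentile
    would be another.
    """
    if not gaps:
        raise ValueError("need at least one observed crossing")
    return max(gaps) * slack
```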
6. The video data processing method of claim 1, further comprising:
for each of the plurality of video acquisition devices,
obtaining an image, collected by the video acquisition device, of its corresponding key points, wherein the geographical positions of the key points have been labeled; and
calculating a mapping relationship between the image space of the video acquisition device and geographical space based on the pixel positions of the key points in the image and the geographical positions of the key points.
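The image-to-geography mapping of claim 6 is commonly modeled as a planar homography fitted to the labeled key points. The sketch below (illustrative only; the patent requires only "a mapping relationship", so the homography model is an assumption) solves the homography by the direct linear transform from at least four point correspondences.

```python
import numpy as np

def fit_homography(pixels, geo):
    """Fit a 3x3 homography mapping pixel coordinates to geographic
    coordinates via the direct linear transform (DLT).

    `pixels` and `geo` are (N, 2) arrays of corresponding key points,
    N >= 4, with the geographic points assumed roughly coplanar.
    """
    A = []
    for (u, v), (x, y) in zip(pixels, geo):
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    # The homography vector is the right singular vector of A with the
    # smallest singular value (the null space of the constraint matrix).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def image_to_geo(H, u, v):
    """Map a pixel (u, v) to geographic coordinates using homography H."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w
```

With more than four key points the same least-squares solve averages out labeling noise, which is why surveying several labeled points per camera is worthwhile.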
7. The video data processing method of claim 1, wherein the movement trajectories of all sample objects recorded in the obtained sample video data do not overlap on the time axis.
8. The video data processing method of any one of claims 1 to 7, further comprising:
obtaining actual video data collected for a target to be tracked by at least some of the plurality of video acquisition devices; and
tracking the target to be tracked using the obtained actual video data and the positional relationship between the at least some video acquisition devices.
9. The video data processing method of claim 8, wherein the tracking the target to be tracked using the obtained actual video data and the positional relationship between the at least some video acquisition devices comprises:
when it is found, according to actual video data collected by a first video acquisition device of the at least some video acquisition devices, that the target to be tracked has moved from the recording region of the first video acquisition device into the overlapping region between the first video acquisition device and a second video acquisition device of the at least some video acquisition devices, comparing the similarities between the target to be tracked and all targets in the overlapping region collected by the second video acquisition device, determining a target whose similarity exceeds a similarity threshold to be the target to be tracked, and tracking the target to be tracked using the actual video data collected by the second video acquisition device.
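The handoff comparison in claim 9 can be sketched with appearance feature vectors and a similarity threshold. This is a non-claimed illustration: the patent does not prescribe the similarity metric, so cosine similarity over re-identification features is an assumed choice, and all names are hypothetical.

```python
import numpy as np

def reacquire_in_overlap(target_feat, candidates, threshold=0.8):
    """Once the tracked target enters the overlap region, compare its
    appearance feature against every target the second camera sees there
    and hand tracking over to the best match above the threshold.

    `candidates` maps candidate_id -> feature vector (nonzero).
    Returns the matched candidate_id, or None if nothing exceeds the
    similarity threshold.
    """
    t = np.asarray(target_feat, dtype=float)
    t = t / np.linalg.norm(t)
    best_id, best_sim = None, threshold
    for cid, feat in candidates.items():
        f = np.asarray(feat, dtype=float)
        sim = float(t @ (f / np.linalg.norm(f)))  # cosine similarity
        if sim > best_sim:
            best_id, best_sim = cid, sim
    return best_id
```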
10. The video data processing method of claim 8, wherein the tracking the target to be tracked using the obtained actual video data and the positional relationship between the at least some video acquisition devices comprises:
when it is found, according to actual video data collected by a first video acquisition device of the at least some video acquisition devices, that the target to be tracked has moved from the recording region of the first video acquisition device into the disappearance region between the first video acquisition device and a second video acquisition device of the at least some video acquisition devices, comparing the similarities between the target to be tracked and all targets, collected by the second video acquisition device within a specific time period, that move from the disappearance region into the recording region of the second video acquisition device, determining a target whose similarity exceeds a similarity threshold to be the target to be tracked, and tracking the target to be tracked using the actual video data collected by the second video acquisition device.
11. The video data processing method of claim 10, wherein the specific time period is less than or equal to the disappearance time threshold associated with the disappearance region between the first video acquisition device and the second video acquisition device.
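The time-window gating of claims 10 and 11 amounts to restricting the candidate set to targets that reappear within the disappearance time threshold after the target vanishes. A minimal, non-claimed sketch with hypothetical names:

```python
def candidates_in_window(appearances, t_lost, threshold):
    """Keep only candidates that enter the second camera's recording
    region within `threshold` seconds after the target vanished into the
    disappearance region at time `t_lost` (claims 10-11).

    `appearances` maps candidate_id -> entry time into the second
    camera's recording region.
    """
    return [
        cid for cid, t_in in appearances.items()
        if t_lost < t_in <= t_lost + threshold
    ]
```

Only the candidates surviving this filter would then be ranked by appearance similarity, as in the overlap-region case.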
12. The video data processing method of claim 8, further comprising:
transmitting tracking information related to the target to be tracked for storage, wherein the tracking information includes image positions of the target to be tracked in the actual video data collected by each of the at least some video acquisition devices and target information of the target to be tracked.
13. A video data processing apparatus, comprising:
a first acquisition module, configured to obtain sample video data respectively collected by a plurality of video acquisition devices, wherein sample video data collected by one of the plurality of video acquisition devices and sample video data collected by at least one other of the plurality of video acquisition devices record a movement trajectory of a same sample object; and
a training module, configured to utilize the obtained sample video data to perform training to determine a positional relationship between the plurality of video acquisition devices.
14. The video data processing apparatus of claim 13, wherein the training module comprises:
a time determination submodule, configured to determine, according to the obtained sample video data, access times of sample objects in the recording regions of the plurality of video acquisition devices; and
a position determination submodule, configured to determine the positional relationship between the plurality of video acquisition devices according to the access times of the sample objects in the recording regions of the plurality of video acquisition devices.
15. The video data processing apparatus of claim 14, wherein the position determination submodule comprises:
an overlapping region determination unit, configured to, if, for each of one or more sample objects, the sample object simultaneously appears, within its corresponding first period, in the recording regions of both a first video acquisition device and a second video acquisition device of the plurality of video acquisition devices, determine an overlapping region between the first video acquisition device and the second video acquisition device according to the movement trajectories of the one or more sample objects within their respective first periods.
16. The video data processing apparatus of claim 14, wherein the position determination submodule comprises:
a disappearance region determination unit, configured to, if, for each of one or more sample objects, the sample object leaves the recording region of a first video acquisition device of the plurality of video acquisition devices at the start time of its corresponding second period, enters the recording region of a second video acquisition device of the plurality of video acquisition devices at the end time of its corresponding second period, and does not appear in the recording region of any of the plurality of video acquisition devices within its corresponding second period, estimate possible movement trajectories of the one or more sample objects within their respective second periods and determine a disappearance region between the first video acquisition device and the second video acquisition device according to the possible movement trajectories of the one or more sample objects within their respective second periods.
17. The video data processing apparatus of claim 16, further comprising:
a time threshold determination module, configured to, if, for each of one or more sample objects, the sample object leaves the recording region of the first video acquisition device at the start time of its corresponding second period, enters the recording region of the second video acquisition device at the end time of its corresponding second period, and does not appear in the recording region of any of the plurality of video acquisition devices within its corresponding second period, determine, according to the respective second periods of the one or more sample objects, a disappearance time threshold associated with the disappearance region between the first video acquisition device and the second video acquisition device.
18. The video data processing apparatus of claim 13, further comprising:
a second acquisition module, configured to, for each of the plurality of video acquisition devices, obtain an image, collected by the video acquisition device, of its corresponding key points, wherein the geographical positions of the key points have been labeled; and
a calculation module, configured to, for each of the plurality of video acquisition devices, calculate a mapping relationship between the image space of the video acquisition device and geographical space based on the pixel positions of the key points in the image and the geographical positions of the key points.
19. The video data processing apparatus of claim 13, wherein the movement trajectories of all sample objects recorded in the obtained sample video data do not overlap on the time axis.
20. The video data processing apparatus of any one of claims 13 to 19, further comprising:
a third acquisition module, configured to obtain actual video data collected for a target to be tracked by at least some of the plurality of video acquisition devices; and
a tracking module, configured to track the target to be tracked using the obtained actual video data and the positional relationship between the at least some video acquisition devices.
21. The video data processing apparatus of claim 20, wherein the tracking module comprises:
a first tracking submodule, configured to, when it is found, according to actual video data collected by a first video acquisition device of the at least some video acquisition devices, that the target to be tracked has moved from the recording region of the first video acquisition device into the overlapping region between the first video acquisition device and a second video acquisition device of the at least some video acquisition devices, compare the similarities between the target to be tracked and all targets in the overlapping region collected by the second video acquisition device, determine a target whose similarity exceeds a similarity threshold to be the target to be tracked, and track the target to be tracked using the actual video data collected by the second video acquisition device.
22. The video data processing apparatus of claim 20, wherein the tracking module comprises:
a second tracking submodule, configured to, when it is found, according to actual video data collected by a first video acquisition device of the at least some video acquisition devices, that the target to be tracked has moved from the recording region of the first video acquisition device into the disappearance region between the first video acquisition device and a second video acquisition device of the at least some video acquisition devices, compare the similarities between the target to be tracked and all targets, collected by the second video acquisition device within a specific time period, that move from the disappearance region into the recording region of the second video acquisition device, determine a target whose similarity exceeds a similarity threshold to be the target to be tracked, and track the target to be tracked using the actual video data collected by the second video acquisition device.
23. The video data processing apparatus of claim 22, wherein the specific time period is less than or equal to the disappearance time threshold associated with the disappearance region between the first video acquisition device and the second video acquisition device.
24. The video data processing apparatus of claim 20, further comprising:
a delivery module, configured to transmit tracking information related to the target to be tracked for storage, wherein the tracking information includes image positions of the target to be tracked in the actual video data collected by each of the at least some video acquisition devices and target information of the target to be tracked.
CN201610079944.1A 2016-02-04 2016-02-04 Video data processing method and apparatus Active CN105744223B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610079944.1A CN105744223B (en) 2016-02-04 2016-02-04 Video data processing method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610079944.1A CN105744223B (en) 2016-02-04 2016-02-04 Video data processing method and apparatus

Publications (2)

Publication Number Publication Date
CN105744223A true CN105744223A (en) 2016-07-06
CN105744223B CN105744223B (en) 2019-01-29

Family

ID=56241890

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610079944.1A Active CN105744223B (en) 2016-02-04 2016-02-04 Video data handling procedure and device

Country Status (1)

Country Link
CN (1) CN105744223B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107862240A (en) * 2017-09-19 2018-03-30 深圳韵脉智能科技有限公司 A kind of face tracking methods of multi-cam collaboration
CN108337485A (en) * 2018-03-27 2018-07-27 中冶华天工程技术有限公司 Caller management method based on video motion track
CN108366343A (en) * 2018-03-20 2018-08-03 珠海市微半导体有限公司 The method that intelligent robot monitors pet
CN108932496A (en) * 2018-07-03 2018-12-04 北京佳格天地科技有限公司 The quantity statistics method and device of object in region
CN110232712A (en) * 2019-06-11 2019-09-13 武汉数文科技有限公司 Indoor occupant positioning and tracing method and computer equipment
CN110443134A (en) * 2019-07-03 2019-11-12 安徽四创电子股份有限公司 A kind of system and working method of the recognition of face tracking based on video flowing
CN112102372A (en) * 2020-09-16 2020-12-18 上海麦图信息科技有限公司 Cross-camera track tracking system for airport ground object

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090046152A1 (en) * 1998-11-20 2009-02-19 Aman James A Optimizations for live event, real-time, 3D object tracking
CN101616309A (en) * 2009-07-16 2009-12-30 上海交通大学 Non-overlapping visual field multiple-camera human body target tracking method
CN101627630A (en) * 2007-03-06 2010-01-13 松下电器产业株式会社 Camera coupling relation information generating device
US7777783B1 (en) * 2007-03-23 2010-08-17 Proximex Corporation Multi-video navigation
CN102436662A (en) * 2011-11-29 2012-05-02 南京信息工程大学 Human body target tracking method in nonoverlapping vision field multi-camera network
CN102523370A (en) * 2011-11-22 2012-06-27 上海交通大学 Multi-camera video abnormal behavior detection method based on network transmission algorithm
CN102821246A (en) * 2012-08-29 2012-12-12 上海天跃科技股份有限公司 Camera linkage control method and monitoring system
CN103824278A (en) * 2013-12-10 2014-05-28 清华大学 Monitoring camera calibration method and system
CN103997624A (en) * 2014-05-21 2014-08-20 江苏大学 Overlapped domain dual-camera target tracking system and method
CN104010168A (en) * 2014-06-13 2014-08-27 东南大学 A non-overlapping multi-camera surveillance network topology adaptive learning method
CN104038729A (en) * 2014-05-05 2014-09-10 重庆大学 Cascade-type multi-camera relay tracing method and system
CN104063867A (en) * 2014-06-27 2014-09-24 浙江宇视科技有限公司 Multi-camera video synchronization method and multi-camera video synchronization device
CN104618688A (en) * 2015-01-19 2015-05-13 荣科科技股份有限公司 Visual monitor protection method
Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107862240A (en) * 2017-09-19 2018-03-30 深圳韵脉智能科技有限公司 A kind of face tracking methods of multi-cam collaboration
CN107862240B (en) * 2017-09-19 2021-10-08 中科(深圳)科技服务有限公司 Multi-camera collaborative face tracking method
CN108366343A (en) * 2018-03-20 2018-08-03 珠海市微半导体有限公司 The method that intelligent robot monitors pet
US11259502B2 (en) 2018-03-20 2022-03-01 Amicro Semiconductor Co., Ltd. Intelligent pet monitoring method for robot
CN108337485A (en) * 2018-03-27 2018-07-27 中冶华天工程技术有限公司 Caller management method based on video motion track
CN108932496A (en) * 2018-07-03 2018-12-04 北京佳格天地科技有限公司 The quantity statistics method and device of object in region
CN108932496B (en) * 2018-07-03 2022-03-25 北京佳格天地科技有限公司 Method and device for counting number of target objects in area
CN110232712A (en) * 2019-06-11 2019-09-13 武汉数文科技有限公司 Indoor occupant positioning and tracing method and computer equipment
CN110443134A (en) * 2019-07-03 2019-11-12 安徽四创电子股份有限公司 A kind of system and working method of the recognition of face tracking based on video flowing
CN110443134B (en) * 2019-07-03 2022-06-07 安徽四创电子股份有限公司 Face recognition tracking system based on video stream and working method
CN112102372A (en) * 2020-09-16 2020-12-18 上海麦图信息科技有限公司 Cross-camera track tracking system for airport ground object

Also Published As

Publication number Publication date
CN105744223B (en) 2019-01-29

Similar Documents

Publication Publication Date Title
CN105744223A (en) Video data processing method and apparatus
CN110717414B (en) Target detection tracking method, device and equipment
CN107305627B (en) Vehicle video monitoring method, server and system
Grassi et al. Parkmaster: An in-vehicle, edge-based video analytics service for detecting open parking spaces in urban environments
JP6825674B2 (en) Number of people counting method and number of people counting system
CN104103030B (en) Image analysis method, camera apparatus, control apparatus and control method
CN110874583A (en) Passenger flow statistics method and device, storage medium and electronic equipment
CN109325456B (en) Target identification method, target identification device, target identification equipment and storage medium
CN107885795B (en) Data verification method, system and device for card port data
CN110785774A (en) Method and system for closed-loop perception in autonomous vehicles
CN110869559A (en) Method and system for integrated global and distributed learning in autonomous vehicles
CN109099929B (en) Intelligent vehicle positioning device and method based on scene fingerprint
CN111160243A (en) Passenger flow volume statistical method and related product
US8971573B2 (en) Video-tracking for video-based speed enforcement
CN105608417A (en) Traffic signal lamp detection method and device
CN104574954A (en) Vehicle checking method and system based on free flow system as well as control equipment
US20210216906A1 (en) Integrating simulated and real-world data to improve machine learning models
CN112541403B (en) Indoor personnel falling detection method by utilizing infrared camera
CN114842439B (en) Vehicle identification method, device, electronic device and storage medium across sensing devices
CN109684986A (en) A kind of vehicle analysis method and system based on automobile detecting following
WO2015049340A1 (en) Marker based activity transition models
CN116740753A (en) Target detection and tracking method and system based on improved YOLOv5 and deep SORT
JP7538631B2 (en) Image processing device, image processing method, and program
CN111784742B (en) Pedestrian cross-lens tracking method and device
EP3244344A1 (en) Ground object tracking system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: 100190 Beijing, Haidian District Academy of Sciences, South Road, No. 2, block A, No. 313

Applicant after: MEGVII INC.

Applicant after: Beijing maigewei Technology Co., Ltd.

Address before: 100190 Beijing, Haidian District Academy of Sciences, South Road, No. 2, block A, No. 313

Applicant before: MEGVII INC.

Applicant before: Beijing aperture Science and Technology Ltd.

GR01 Patent grant
GR01 Patent grant