Disclosure of Invention
In view of the above, an object of the present invention is to provide a method, an apparatus and an electronic device for tracking a target, so as to alleviate the technical problem in the prior art that a target in a detection area cannot be tracked imperceptibly.
In a first aspect, an embodiment of the present invention provides a method for tracking a target, including: acquiring distance information obtained when a target object is detected in a detection area by a plurality of distance sensors; acquiring the installation position of each distance sensor in the detection area; and tracking each target object by combining the distance information and the installation position to determine the moving track of each target object in the detection area.
Further, tracking each target object by combining the distance information and the installation position to determine a moving track of each target object in the detection area comprises: starting with a starting distance sensor, determining one or more continuous target distance sensors in the plurality of distance sensors, and drawing a moving track of the target object in combination with the installation positions of the target distance sensors; the starting distance sensor is the first sensor triggered in the plurality of distance sensors when the target object enters the detection area, and the target distance sensor is a distance sensor which continuously detects the same target object.
Further, the method further comprises: acquiring attribute characteristics of the target object; establishing an association relation between the attribute characteristics and the moving track according to the acquisition time of the attribute characteristics and the initial trigger time of the moving track to obtain association data; the associated data comprises attribute characteristics and a moving track of the same target object, and the starting trigger time is a trigger time corresponding to the starting point of the moving track.
Further, the obtaining of the attribute feature of the target object includes: acquiring image information which is acquired by an image acquisition device and contains the target object, wherein the image information is an image acquired when the target object enters or leaves the detection area, and the image information comprises physical information and/or clothing information of the target object; and performing attribute analysis on the image information to obtain attribute characteristics of the target object.
Further, the image information includes a plurality of target objects which appear at the same time, and the establishing of the association relationship between the attribute feature and the movement track according to the acquisition time of the attribute feature and the start trigger time of the movement track includes: acquiring position information of a plurality of target objects included in the image information; and establishing an association relation between the attribute characteristics of the target object and the moving track according to the position information of each target object, the acquisition time of the attribute characteristics of each target object and the initial trigger time of the moving track.
Further, the method further comprises: acquiring attribute characteristics of the moving track; and analyzing the associated data by combining the attribute characteristics of the movement track and/or the attribute characteristics of the target object to obtain a movement track distribution map belonging to each attribute characteristic.
Further, the method further comprises: determining label information, wherein the label information is used for distinguishing each moving track; and binding the label information and the moving track.
Further, the tag information is determined by any one of the following methods: determining the label information by using the face feature information, wherein one face feature information corresponds to one label information; determining the label information by using the generation time of the movement track; and determining the label information by using the moving track.
Further, the method further comprises: and carrying out data analysis on the moving tracks belonging to different label information to obtain a moving track distribution diagram of each label information.
Further, the plurality of distance sensors are mounted at the top end of the detection area in the form of a sensor array, wherein the number of sensor arrays is one or more.
In a second aspect, an embodiment of the present invention further provides a tracking apparatus for a target, including: a first acquisition unit configured to acquire distance information obtained when a target object is detected in a detection region by a plurality of distance sensors; a second acquisition unit configured to acquire an installation position of each distance sensor in the detection area; and the track tracking unit is used for tracking each target object by combining the distance information and the installation position so as to determine the moving track of each target object in the detection area.
In a third aspect, an embodiment of the present invention further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the method when executing the computer program.
In a fourth aspect, the present invention also provides a computer-readable medium having a non-volatile program code executable by a processor, where the program code causes the processor to execute the method described above.
In the embodiment of the invention, distance information obtained when a plurality of distance sensors detect a target object in a detection area is first acquired; the installation position of each distance sensor in the detection area is then acquired; and finally, each target object is tracked by combining the distance information and the installation positions to determine the moving track of each target object in the detection area. In the embodiment of the invention, the target object can be accurately positioned and tracked imperceptibly by acquiring distance information through the distance sensors, which alleviates the technical problem in the prior art that a target in a detection area cannot be tracked imperceptibly and achieves the technical effect of imperceptibly tracking target objects in the detection area.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Example two:
In accordance with an embodiment of the present invention, an embodiment of a method for tracking a target is provided. It is noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different from that described herein.
Fig. 2 is a flowchart of a method for tracking a target according to an embodiment of the present invention, as shown in fig. 2, the method includes the following steps:
step S202, obtaining distance information obtained when a plurality of distance sensors detect a target object in a detection area;
in the embodiment of the present invention, the detection area may be a store, a mall, or a food court. The target object is a person entering the detection area, e.g. a customer entering a shop, a customer entering a mall.
It should be noted that, in the embodiment of the present invention, the target object is not limited to a human being, and may be any object moving in the detection area, and may be specifically determined according to the actual needs of the user.
Step S204, acquiring the installation position of each distance sensor in the detection area;
In the embodiment of the present invention, the installation position is expressed as coordinate information of the distance sensor within the detection area. When a target object passes underneath a distance sensor, the distance information detected by that sensor changes. For example, when the distance information detected by a certain distance sensor changes from 2.5 meters to 1 meter, it indicates that a target object is passing under that distance sensor.
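For illustration only, a minimal sketch (in Python, not part of the original disclosure) of how a single reading might be interpreted is given below; the 2.5-meter ceiling distance and the 0.3-meter noise margin are assumptions chosen to match the example above.

    CEILING_HEIGHT_M = 2.5   # distance reported when nothing is underneath (assumed)
    NOISE_MARGIN_M = 0.3     # readings within this margin of the ceiling count as "empty" (assumed)

    def is_triggered(reading_m: float) -> bool:
        """Return True if the reading indicates an object under the sensor."""
        return reading_m < CEILING_HEIGHT_M - NOISE_MARGIN_M

    def estimated_height(reading_m: float) -> float:
        """Approximate the object's height from the ceiling-to-object distance."""
        return CEILING_HEIGHT_M - reading_m

    # A reading dropping from 2.5 m to 1.0 m suggests a roughly 1.5 m tall object.
    print(is_triggered(1.0), estimated_height(1.0))   # True 1.5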
Step S206, tracking each target object by combining the distance information and the installation position to determine the moving track of each target object in the detection area;
In the embodiment of the invention, each target object can be tracked through the distance information output by the distance sensors and the installation positions of the distance sensors, so as to determine the moving track of each target object in the detection area. According to the above description, the whole tracking process is imperceptible to the target object, and the moving track can be tracked without the target object having to operate any terminal device.
In the embodiment of the present invention, the above-described steps S202 to S206 may be performed by a processor. The processor may be a processor installed in the detection area, and may also be a cloud processor. When the processor is a processor installed in the detection area, the processor acquires distance information acquired by the distance sensor and then generates a movement trajectory of the target object based on the distance information, wherein at this time, the processor and the distance sensor may be connected by wire or wirelessly. When the processor is a cloud processor, the distance sensor transmits the distance information to the cloud processor through the local area network, so that the cloud processor generates a moving track of the target object based on the distance information.
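Purely as an illustrative sketch of the two deployment options described above (the message layout, the endpoint URL and the transport are assumptions, not part of the disclosure), a distance reading could be forwarded either to a local handler or to a cloud processor as follows.

    import json
    import urllib.request

    def forward_reading(sensor_id, reading_m, timestamp, cloud_url=None, local_handler=None):
        """Send one distance reading either to a local processor (a callable) or,
        if a cloud URL is given, to a cloud processor over the network."""
        message = {"sensor_id": sensor_id, "distance_m": reading_m, "ts": timestamp}
        if cloud_url is None:
            return local_handler(message)          # wired or wireless local processor
        request = urllib.request.Request(cloud_url,
                                         data=json.dumps(message).encode(),
                                         headers={"Content-Type": "application/json"})
        return urllib.request.urlopen(request)     # cloud processor reached over the LAN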
It should be noted that, in addition to the two types of processor described above, any other processor capable of executing steps S202 to S206 may be used in the embodiment of the present invention, and this is not specifically limited.
In the embodiment of the invention, distance information obtained when a plurality of distance sensors detect a target object in a detection area is first acquired; the installation position of each distance sensor in the detection area is then acquired; and finally, each target object is tracked by combining the distance information and the installation positions to determine the moving track of each target object in the detection area. In the embodiment of the invention, the target object can be accurately positioned and tracked imperceptibly by acquiring distance information through the distance sensors, which alleviates the technical problem in the prior art that a target in a detection area cannot be tracked imperceptibly and achieves the technical effect of imperceptibly tracking target objects in the detection area.
In an embodiment of the present invention, the plurality of distance sensors are mounted at the top end of the detection area in the form of a sensor array, wherein the number of sensor arrays is one or more.
Each of the plurality of distance sensors may be, but is not limited to, a ToF sensor, an ultrasonic sensor, an infrared switch, a microwave sensor, or the like. In addition, the plurality of distance sensors may be implemented as a lidar; a low line-count lidar, such as a 1- to 8-line lidar, may be selected to enable tracking of a person's trajectory.
It should be noted that when single-point sensors such as ToF, ultrasonic, infrared switch, or microwave sensors are used, the plurality of distance sensors are mounted at the top end of the detection area in the form of a sensor array, that is, the sensor array is ceiling-mounted in the detection area.
As shown in fig. 3, a sensor array 2 is installed between two rows of shelves 1 by means of ceiling mounting. Because each distance sensor in the sensor array has a corresponding detection range and detection precision, when the area of the detection region exceeds the detection range of a single sensor array, a plurality of sensor arrays can be installed at the top end of the detection area by ceiling mounting. A specific installation mode may be to install the plurality of sensor arrays at the top end of the detection area at equal intervals. For example, as shown in fig. 3, a sensor array is mounted at the top between any two adjacent shelves (i.e., in the aisle). In addition to equidistant mounting, the sensor arrays may also be mounted at unequal intervals.
In the embodiment of the present invention, the plurality of sensor arrays may be mounted parallel to one another or non-parallel. In the non-parallel case, the arrays may intersect perpendicularly or non-perpendicularly. The embodiment of the present invention does not specifically limit the installation manner of the plurality of sensor arrays; a user may set the number of sensor arrays and their installation manner according to the actual aisle width in the detection area and the required data accuracy.
By installing the sensor array at the top of the detection area, distance detection can be performed in real time on the space below the array. When a pedestrian (i.e., a target object) appears, it can be judged that an object with a relatively stable height is continuously changing position, and the change in the target object's position can be obtained from the changes in the readings of the different distance sensors. In this way, the system can recognize that an independent shape of a certain height (i.e., the target object) is moving under the sensor array, so that the target object can be tracked and its moving track obtained.
In the embodiment of the invention, after the plurality of distance sensors are installed, the target object in the detection area can be detected by the plurality of distance sensors, so that the distance information is obtained. Then, the respective target objects are tracked based on the installation position and the distance information of each distance sensor to determine the movement trajectories of the respective target objects in the detection area.
In an alternative embodiment, as shown in fig. 4, in step S206, the tracking each target object by combining the distance information and the installation position to determine the moving track of each target object in the detection area includes the following steps:
step S2061, starting with a starting distance sensor, determining one or more continuous target distance sensors among the plurality of distance sensors, and drawing a movement trajectory of the target object in combination with the installation positions of the target distance sensors;
the starting distance sensor is the first sensor triggered in the plurality of distance sensors when the target object enters the detection area, and the target distance sensor is a distance sensor which continuously detects the same target object.
In the embodiment of the present invention, when a target object enters the detection area, one or more distance sensors installed at the entrance of the detection area (i.e., starting distance sensors) will detect that the target object has entered, and will output the distance information obtained when the target object is detected. Next, starting from these sensors, one or more target distance sensors that continuously detect the same target object may be determined, where the target distance sensors are consecutive sensors. After the target distance sensors are determined, the moving track of the target object can be drawn in combination with their installation positions in the detection area.
For example, when a target object A enters the detection area, a distance sensor B located at the entrance of the detection area detects the target object A, where the distance sensor B is the starting distance sensor. At this time, the distance sensor B outputs distance information. If the distance between the distance sensor B and the ground is 2.5 meters and the height of the target object A is 1.8 meters, the distance information output by the distance sensor B is 0.7 meters. Starting from the distance sensor B, the distance sensors in the sensor array that continuously output distance information of 0.7 meters are determined as target distance sensors. Then, the moving track of the target object is drawn based on the installation positions of the target distance sensors in the detection area.
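As a sketch of how the track drawing described above might be implemented (the data layout, the per-tick frames and the 0.15-meter tolerance are assumptions made for illustration), the following Python fragment follows the reading produced by the starting distance sensor through consecutive sensors and collects their installation positions as the track.

    from typing import Dict, List, Tuple

    Coord = Tuple[float, float]

    def trace_track(frames: List[Dict[str, float]],   # one {sensor_id: reading_m} dict per tick
                    positions: Dict[str, Coord],       # sensor_id -> (x, y) installation position
                    start_sensor: str,
                    tolerance_m: float = 0.15) -> List[Coord]:
        """Follow the reading first produced by the starting distance sensor
        (e.g. 0.7 m for a 1.8 m person under a 2.5 m ceiling) through the
        sensors that keep reporting it, and return their positions as the track."""
        tracked_reading = frames[0][start_sensor]
        track = [positions[start_sensor]]
        for frame in frames[1:]:
            candidates = [(abs(reading - tracked_reading), sensor_id)
                          for sensor_id, reading in frame.items()
                          if abs(reading - tracked_reading) <= tolerance_m]
            if not candidates:
                break                                  # the target left the array's coverage
            _, sensor_id = min(candidates)             # the sensor closest to the tracked reading
            track.append(positions[sensor_id])
        return track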
In the embodiment of the invention, the target object can be accurately positioned and tracked imperceptibly by acquiring distance information through the distance sensors, which alleviates the technical problem in the prior art that a target in a detection area cannot be tracked imperceptibly and achieves the technical effect of imperceptibly tracking target objects in the detection area.
For retail applications, however, the most valuable aspect of trajectory tracking is knowing the trajectories of particular groups of people, i.e., how people with different attributes (gender, age, etc.) move within the store. Therefore, the association relationship between the attribute features of the target object and the movement track of the target object still needs to be established in combination with machine vision.
In the embodiment of the present invention, when tracking the movement trajectory of each target object, it is further necessary to perform data association between the attribute features of the target object and the movement trajectory of the target object in a time axis synchronization manner.
In an optional embodiment, establishing the association relationship between the attribute features of the target object and the movement track of the target object may be implemented by the following process:
firstly, acquiring the attribute characteristics of the target object;
wherein the obtaining of the attribute characteristics of the target object comprises: acquiring image information which is acquired by an image acquisition device and contains the target object, wherein the image information is an image acquired when the target object enters or leaves the detection area, and the image information comprises physical information and/or clothing information of the target object; and performing attribute analysis on the image information to obtain attribute characteristics of the target object.
Then, according to the acquisition time of the attribute characteristics and the initial trigger time of the moving track, establishing an association relationship between the attribute characteristics and the moving track to obtain association data; the associated data comprises attribute characteristics and a moving track of the same target object, and the starting trigger time is a trigger time corresponding to the starting point of the moving track.
It should be noted that, if the target object is a person, the physical appearance information may include information such as face information, hairstyle information, body-type information, posture and gait; the clothing information may include, for example, information about clothes and information about hats, such as whether a hat is worn and what type of hat is worn. The attribute features include information such as gender, age, height, race, hairstyle and clothing.
In the embodiment of the present invention, when a target object enters the detection area, or when a tracked target object leaves the detection area, image information of each target object entering or leaving the detection area may be acquired by an image acquisition device (for example, an RGB camera with a face recognition function). As can be seen from the above description, the acquired image information includes the physical appearance information and/or clothing information of the target object. Attribute analysis may then be performed on the physical appearance information and/or the clothing information to determine the attribute features of the target object, where the attribute analysis includes analysis of the face, the body, and the like, and the resulting attribute features include information such as gender, age, height, race, hairstyle and clothing.
After the attribute features are obtained through analysis, the attribute features of the person and the moving track identified by the sensor array can be associated in a time-axis-synchronized manner to obtain the associated data. For example, the attribute features obtained by analyzing the image information captured by the RGB camera at time T0 (i.e., the acquisition time) are combined with the position information output by the sensor array at time T0 (i.e., the start trigger time) to establish the association relationship between the attribute features and the movement track. After the association relationship is established, the associated data is generated.
In the embodiment of the invention, the shooting angle of the RGB camera can be adjusted appropriately and a shooting range can be set so that only faces within the shooting range are recognized, and distant faces can be filtered out by a face-size threshold. For a plurality of faces appearing side by side within the shooting range, the position information of each face can be extracted and, according to the differences in that position information, combined with the position information obtained by the sensor array at time T0 to distinguish different people. At subsequent times, the different people are tracked separately.
It should be noted that the attribute analysis of a person can be performed either when the person enters the detection area or when the person leaves it; the data obtained on leaving tend to be more accurate because only one face is then directly facing the camera. Based on the acquired RGB image of the face or body, attribute analysis of the person, including gender, age, height, hairstyle, clothing and the like, is performed, and the person's attribute features are associated with the subsequent track information.
When the attribute features of the target object are associated with the movement track of the target object, the acquisition time of the attribute features and the start trigger time of the movement track are obtained, where the acquisition time is the time at which the image acquisition device captures the target object entering the detection area. If the two times are the same, the attribute features and the movement track having the same time are associated.
For example, a certain object A enters a convenience store at 13:00. At this point, the image acquisition device captures image information containing object A at 13:00 (i.e., the acquisition time). At the same time, the distance sensor B detects object A entering the detection area at 13:00 (i.e., the start trigger time). After the distance sensor B detects that object A has entered the detection area, the moving track of object A is tracked and drawn based on the distance information detected by each distance sensor and the installation position of each distance sensor. When establishing the data association between the attribute features of the target object and its movement track, the association is established based on the time when object A entered the store (i.e., 13:00) and the start trigger time for object A (i.e., 13:00). It should be noted that the acquisition time and the start trigger time are not required to be strictly identical; a certain error may exist, and the error value may be set according to the actual situation, which is not specifically limited.
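The time-axis matching described above, with a small tolerance between the acquisition time and the start trigger time, might be sketched as follows; the two-second tolerance and the record fields are illustrative assumptions.

    from datetime import timedelta

    def associate_by_time(attribute_records, track_records, tolerance=timedelta(seconds=2)):
        """attribute_records: dicts with an 'acquired_at' datetime and the attribute features;
        track_records: dicts with a 'start_triggered_at' datetime and the movement track.
        Pairs records whose timestamps agree within the tolerance."""
        associated = []
        unmatched_tracks = list(track_records)
        for attributes in attribute_records:
            for track in unmatched_tracks:
                if abs(attributes["acquired_at"] - track["start_triggered_at"]) <= tolerance:
                    associated.append({"attributes": attributes, "track": track})
                    unmatched_tracks.remove(track)
                    break
        return associated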
When a plurality of customers enter a shop simultaneously, a plurality of target objects will appear simultaneously in the image information. In this case, establishing the association relationship between the attribute features and the movement tracks according to the acquisition time of the attribute features and the start trigger time of the movement tracks to obtain the associated data includes the following steps:
firstly, acquiring position information of a plurality of target objects included in the image information;
then, according to the position information of each target object, the acquisition time of the attribute feature of each target object and the starting trigger time of the movement track, establishing an association relationship between the attribute feature of the target object and the movement track.
If two customers enter a certain shop at the same time, the image acquisition device acquires image information containing the two customers at the same time. At this time, the image capturing device may transmit the position relationship between two customers in the image information to the gateway device of the sensor array, so that the sensor array establishes an association relationship between the attribute features of each customer and the movement trajectory according to the position information of each customer, the attribute features of each customer, and the start trigger time of the movement trajectory of each customer.
Specifically, when the image acquisition device acquires image information containing two customers, the distance sensors in the sensor array respectively detect the two customers and output corresponding distance information. At this time, the association relationship between the attribute features of the two customers and the corresponding movement trajectories may be established based on the position information of the distance sensors that detect the two customers and the position information of the two customers.
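One possible sketch of the position-based disambiguation when several customers enter at the same moment is given below; it assumes that the faces in the image and the triggered sensors at the entrance can both be ordered from left to right, which is an assumption made only for illustration.

    def associate_simultaneous(face_records, sensor_records):
        """face_records: dicts with an 'x_in_image' position and the person's attribute features;
        sensor_records: dicts with an 'x_in_area' position and the corresponding movement track.
        Pairs attributes with tracks by matching their left-to-right order at time T0."""
        faces = sorted(face_records, key=lambda record: record["x_in_image"])
        tracks = sorted(sensor_records, key=lambda record: record["x_in_area"])
        return [{"attributes": face["attributes"], "track": sensor["track"]}
                for face, sensor in zip(faces, tracks)]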
In the embodiment of the present invention, after the attribute features of the target object are associated with the movement trajectory, the associated data may be classified, and the classification may be specifically implemented through the following processes:
firstly, acquiring the attribute characteristics of the moving track;
and then, analyzing the associated data by combining the attribute characteristics of the movement track and/or the attribute characteristics of the target object to obtain a movement track distribution diagram belonging to each attribute characteristic.
In the embodiment of the present invention, after the movement track is generated, the attribute features of the movement track may be further generated based on the generation time of the movement track and the movement track itself. The associated data may then be classified according to the attribute features of the target object and/or the attribute features of the movement track. For example, the associated data may be classified according to the generation time of the movement tracks to determine the movement tracks of the target objects and the number of target objects in each time period. For another example, according to the attribute features of the target objects, the associated data for ages 20 to 35 may be placed in one group and the associated data for ages over 55 in another group, so as to determine the purchasing behavior of customers of different ages. For another example, the movement tracks may be grouped by combining the generation time of the movement tracks and the age of the target objects. The specific grouping manner is not particularly limited in the embodiment of the present invention.
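A minimal sketch of the age-based grouping mentioned above could look as follows; the age bands come from the example, but the record layout and field names are assumptions.

    from collections import defaultdict

    def group_by_age_band(associated_data):
        """Group associated records into the example age bands (20-35, over 55, other)."""
        groups = defaultdict(list)
        for record in associated_data:
            age = record["attributes"]["age"]
            if 20 <= age <= 35:
                groups["20-35"].append(record["track"])
            elif age > 55:
                groups["over-55"].append(record["track"])
            else:
                groups["other"].append(record["track"])
        return groups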
In the embodiment of the invention, a movement track distribution diagram for each piece of label information can be obtained by analyzing the associated data. The data analysis includes heat-map analysis of the trajectories, analysis of the trajectories of male or female customers, analysis of the trajectories of a particular age group, and the like. If the detection area is a shop or the like, the results of such big-data analysis can be used for shopping guidance, planning of commodity placement positions, and the like.
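For the heat-map analysis, one simple illustrative approach (grid size assumed) is to accumulate visit counts on a coarse grid over the detection area, as sketched below.

    from collections import Counter

    def track_heatmap(tracks, cell_m=0.5):
        """tracks: an iterable of movement tracks, each a list of (x, y) positions in meters.
        Returns a Counter mapping grid cells to the number of track points falling inside them."""
        heat = Counter()
        for track in tracks:
            for x, y in track:
                heat[(int(x // cell_m), int(y // cell_m))] += 1
        return heat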
In the embodiment of the present invention, after the movement tracks are generated, the movement tracks also need to be distinguished, and specifically, the movement tracks can be distinguished in the following manner:
firstly, determining label information, wherein the label information is used for distinguishing each moving track; wherein the tag information is determined by any one of the following methods: determining the label information by using the face feature information, wherein one face feature information corresponds to one label information; determining the label information by using the generation time of the movement track; determining the label information by using the moving track;
and then, binding the label information and the movement track.
In the embodiment of the present invention, every recorded track carries its own tag information (i.e., an ID) for distinguishing it from other tracks. Face feature information may be used as the tag for distinguishing different people and their trajectories; in this case a unique piece of tag information (i.e., an ID) is generated from the face features detected by the image acquisition device, without registering a face base library. Alternatively, tag information (i.e., an ID) may be generated from the time at which the movement track is generated together with the movement track itself. The purpose of the tag information is to distinguish different track data.
In the embodiment of the present invention, the face information of the target object may be analyzed from the image information captured by the RGB camera to obtain the face feature information of the target object, where the face feature information includes feature information of the eyes, mouth, nose and the like, for example, the feature points of the eyes, mouth and nose and their positions. As can be seen from the above description, by determining tag information for each movement track, a large number of movement tracks can be distinguished.
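As an illustration of the two ways of determining tag information described above, the following sketch derives an ID either from face feature information or from the movement track and its generation time; the hashing scheme is an assumption made for the example and is not mandated by the disclosure.

    import hashlib
    import json
    from datetime import datetime

    def tag_from_face(face_features: dict) -> str:
        """One set of face feature information yields one stable tag, without registering a base library."""
        payload = json.dumps(face_features, sort_keys=True).encode()
        return hashlib.sha1(payload).hexdigest()[:16]

    def tag_from_track(track: list, generated_at: datetime) -> str:
        """Derive a tag from the generation time of the movement track and the track itself."""
        payload = json.dumps({"time": generated_at.isoformat(), "track": track}).encode()
        return hashlib.sha1(payload).hexdigest()[:16]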
It should be noted that after determining the tag information for each movement track, an association relationship between the movement track and the attribute features of the target object may also be established, so as to implement big data analysis and processing on the movement track.
In the embodiment of the invention, the data output by the distance sensors is not affected by signal or communication quality at all, nor is it affected by background color or light intensity as image information is. The target object can be located and tracked simply by installing a plurality of distance sensors on the ceiling of the detection area. Compared with traditional tracking methods, the method provided by the embodiment of the invention greatly reduces the requirements on algorithms and computing power, improves data precision, and makes low-cost trajectory tracking possible.
Example three:
the embodiment of the present invention further provides a target tracking device, which is mainly used for executing the target tracking method provided by the above-mentioned embodiments of the present invention, and the following describes the target tracking device provided by the embodiments of the present invention in detail.
Fig. 5 is a schematic diagram of a target tracking apparatus according to an embodiment of the present invention, as shown in fig. 5, the target tracking apparatus mainly includes a first obtaining unit 10, a second obtaining unit 20 and a trajectory tracking unit 30, wherein:
a first acquisition unit 10 configured to acquire distance information obtained when a target object is detected in a detection region by a plurality of distance sensors;
a second acquisition unit 20 for acquiring the mounting position of each distance sensor in the detection area;
a trajectory tracking unit 30, configured to track each target object by combining the distance information and the installation position to determine a moving trajectory of each target object in the detection area.
In the embodiment of the invention, distance information obtained when a plurality of distance sensors detect a target object in a detection area is first acquired; the installation position of each distance sensor in the detection area is then acquired; and finally, each target object is tracked by combining the distance information and the installation positions to determine the moving track of each target object in the detection area. In the embodiment of the invention, the target object can be accurately positioned and tracked imperceptibly by acquiring distance information through the distance sensors, which alleviates the technical problem in the prior art that a target in a detection area cannot be tracked imperceptibly and achieves the technical effect of imperceptibly tracking target objects in the detection area.
Optionally, the plurality of distance sensors are mounted at the top end of the detection area in the form of a sensor array, wherein the number of sensor arrays is one or more.
Optionally, the trajectory tracking unit 30 is configured to: starting with a starting distance sensor, determining one or more continuous target distance sensors in the plurality of distance sensors, and drawing a moving track of the target object in combination with the installation positions of the target distance sensors; the starting distance sensor is the first sensor triggered in the plurality of distance sensors when the target object enters the detection area, and the target distance sensor is a distance sensor which continuously detects the same target object.
Optionally, the apparatus further comprises: a third obtaining unit, configured to obtain an attribute feature of the target object; the establishing unit is used for establishing an association relation between the attribute characteristics and the moving track according to the acquisition time of the attribute characteristics and the initial trigger time of the moving track to obtain association data; the associated data comprises attribute characteristics and a moving track of the same target object, and the starting trigger time is a trigger time corresponding to the starting point of the moving track.
Optionally, the third obtaining unit is configured to: acquiring image information which is acquired by an image acquisition device and contains the target object, wherein the image information is an image acquired when the target object enters or leaves the detection area, and the image information comprises physical information and/or clothing information of the target object; and performing attribute analysis on the image information to obtain attribute characteristics of the target object.
Optionally, the establishing unit is further configured to: acquiring position information of a plurality of target objects included in the image information under the condition that the image information includes the plurality of target objects which appear at the same time; and establishing an association relation between the attribute characteristics of the target object and the moving track according to the position information of each target object, the acquisition time of the attribute characteristics of each target object and the initial trigger time of the moving track.
Optionally, the apparatus is further configured to: acquiring attribute characteristics of the moving track; and analyzing the associated data by combining the attribute characteristics of the movement track and/or the attribute characteristics of the target object to obtain a movement track distribution map belonging to each attribute characteristic.
Optionally, the apparatus is further configured to: determining label information, wherein the label information is used for distinguishing each moving track; and binding the label information and the moving track.
Optionally, the tag information is determined by any one of the following methods: determining the label information by using the face feature information, wherein one face feature information corresponds to one label information; determining the label information by using the generation time of the movement track; and determining the label information by using the moving track.
The device provided by the embodiment of the present invention has the same implementation principle and technical effects as the foregoing method embodiments; for the sake of brevity, for any part of the device embodiment not mentioned here, reference may be made to the corresponding contents in the foregoing method embodiments.
In addition, in the description of the embodiments of the present invention, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly, e.g., as a fixed connection, a removable connection, or an integral connection; as a mechanical connection or an electrical connection; or as a direct connection, an indirect connection through an intermediate medium, or internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art on a case-by-case basis.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
The computer program product of the method and apparatus for tracking a target provided in the embodiments of the present invention includes a computer-readable storage medium storing non-volatile program code executable by a processor, and the instructions included in the program code may be used to execute the methods described in the foregoing method embodiments; for specific implementation, reference may be made to the method embodiments, which are not repeated here.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present invention, which are used for illustrating the technical solutions of the present invention and not for limiting the same, and the protection scope of the present invention is not limited thereto, although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention, and they should be construed as being included therein. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.