WO2018137456A1 - Visual tracking method and device - Google Patents
Visual tracking method and device
- Publication number
- WO2018137456A1 (PCT/CN2017/118809)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- eye
- visual tracking
- data
- visual
- module
- Prior art date
- 2017-01-25
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/197—Matching; Classification
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Ophthalmology & Optometry (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
- Collating Specific Patterns (AREA)
Abstract
Description
The present application claims priority to Chinese patent application No. CN201710060901.3, entitled "Visual Tracking Method and Tracking Device", filed by the applicant on January 25, 2017. The entire contents of that application are incorporated herein by reference.

Embodiments of the present invention relate to the field of biofeedback signal data processing, and in particular to a visual-signal tracking method and tracking device.

Background of the Invention

Visual recognition and tracking chiefly determine the direction and trajectory of attention from the pupil of the eye. Because the pupil is bounded by physiological organs and tissues such as the sclera and the iris, its appearance varies considerably between individuals, and the iris varies the most. Current techniques mainly apply image recognition to the eye, using binary features, gradient histograms, and the like, combined with filtering operations such as dilation and erosion, to extract the iris position. These methods, however, rest on prior knowledge: to cope with complex individual biological differences they require many assumed parameter sets and threshold ranges, work only in limited scenes, have low accuracy, and cannot process the iris of a moving subject in real time.

Summary of the Invention

In view of this, embodiments of the present invention provide a visual tracking method and tracking device to solve the technical problem that eye objects cannot be located accurately in real time.
The visual tracking method of an embodiment of the invention includes:

acquiring an eye pattern;

establishing eye-object key points;

processing the eye pattern using the eye-object key points as test data for an object processing method, determining the eye-object positions, and forming visual focus data.

The method may further include:

forming visual tracking data from the continuous changes of the eye objects;

using the visual tracking data as a control signal for the motion changes of the virtual vision.
Acquiring the eye pattern includes:

acquiring the contours of the facial features;

cropping symmetrical eye images according to the eye feature points.

Establishing the eye-object key points includes:

establishing the eye objects in a semi-manual or automatic manner;

forming the key points of the eye objects in a semi-manual or automatic manner.

Processing the eye pattern using the eye-object key points as test data for the object processing method, determining the eye-object positions, and forming the visual focus data includes:

importing the pixel data of the eye pattern into an ERT algorithm as training data for processing;

using the determined eye objects and eye-object key points as test data to correct the output of the ERT algorithm;

forming accurate contours of the eye objects and accurate relative positional relationships from the corrected output.

Forming visual tracking data from the continuous changes of the eye objects includes:

forming visual tracking data from the relative position changes of the eye objects in eye patterns acquired in real time;

forming visual tracking data from the relative position changes of the corresponding key points of the eye objects in eye patterns acquired in real time.

Using the visual tracking data as a control signal for the motion changes of the virtual vision includes:

mapping the eye objects and/or their key points to the objects and object key points in a three-dimensional or two-dimensional eye model, to form a virtual visual focus;

using the visual tracking data to control the movement of the objects and/or object key points in the three-dimensional or two-dimensional eye model, to form the changes of the virtual vision.
The visual tracking device of an embodiment of the invention includes:

an image acquisition module configured to acquire an eye pattern;

a key-data establishing module configured to establish the eye-object key points;

an object recognition module configured to process the eye pattern using the eye-object key points as test data for an object processing method, determine the iris position, and form visual focus data.

The device may further include:

a visual-tracking-data generating module configured to form visual tracking data from the continuous changes of the eye objects;

a virtual-vision control module configured to use the visual tracking data as a control signal for the motion changes of the virtual vision.

The image acquisition module includes:

a contour acquisition sub-module configured to acquire the contours of the facial features;

an image cropping sub-module configured to crop symmetrical eye images according to the eye feature points.

The key-data establishing module includes:

an eye-object establishing sub-module configured to establish the eye objects in a semi-manual or automatic manner;

an object-key-point establishing sub-module configured to form the key points of the eye objects in a semi-manual or automatic manner.

The object recognition module includes:

an image importing sub-module configured to import the pixel data of the eye pattern into the ERT algorithm as training data for processing;

an image processing sub-module configured to use the determined eye objects and eye-object key points as test data to correct the output of the ERT algorithm;

an eye-object position generating sub-module configured to form accurate contours of the eye objects and accurate relative positional relationships from the corrected output.

The visual-tracking-data generating module includes:

an eye-object trajectory generating sub-module configured to form visual tracking data from the relative position changes of the eye objects in eye patterns acquired in real time;

an object-key-point trajectory generating sub-module configured to form visual tracking data from the relative position changes of the corresponding key points of the eye objects in eye patterns acquired in real time.

The virtual-vision control module includes:

a virtual-focus generating sub-module configured to map the eye objects and/or their key points to the objects and object key points in the three-dimensional or two-dimensional eye model, to form a virtual visual focus;

a virtual-vision generating sub-module configured to use the visual tracking data to control the movement of the objects and/or object key points in the three-dimensional or two-dimensional eye model, to form the changes of the virtual vision.

A visual tracking device of an embodiment of the invention includes a processor and a memory, wherein the memory stores the program code of the visual tracking method described above, and the processor is configured to execute that program code.

The visual tracking method and device of embodiments of the present invention determine the eye pattern on the basis of mature face detection technology, which avoids processing a large amount of redundant image data and reduces the computational load of image processing. The eye key points are established through supervised or semi-supervised learning, and quantification tools are used to produce calibration data of high quality. In the image processing method, the key-point calibration data has a targeted pruning effect on the classification of eye objects such as iris data, enabling accurate positioning of the eye objects. It also helps to further determine other eye objects, such as the pupil boundary, and in turn to form an accurate visual focus and visual motion trajectory.
Brief Description of the Drawings

FIG. 1 is a flowchart of a visual tracking method according to an embodiment of the present invention.

FIG. 2 is a flowchart of a visual tracking method according to an embodiment of the present invention.

FIG. 3 is a flowchart of a visual tracking method according to an embodiment of the present invention.

FIG. 4 is a schematic diagram of the 68 feature points of the facial contours determined in the prior art.

FIG. 5 is a schematic diagram of the eye-object key points in the left-eye pattern in a visual tracking method according to an embodiment of the present invention.

FIG. 6 is a schematic diagram of the architecture of a visual tracking device according to an embodiment of the present invention.

FIG. 7 is a schematic diagram of the architecture of a visual tracking device according to an embodiment of the present invention.

FIG. 8 is a schematic diagram of the architecture of a visual tracking device according to an embodiment of the present invention.
Mode for Carrying Out the Invention

The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

The step numbers in the drawings serve only as reference labels for the steps and do not indicate an order of execution.
FIG. 1 is a flowchart of a visual tracking method according to an embodiment of the present invention. As shown in FIG. 1, the visual tracking method of the embodiment includes:

Step 10: acquire an eye pattern;

Step 20: establish eye-object key points;

Step 30: process the eye pattern using the eye-object key points as test data for an object processing method, determine the eye-object positions, and form visual focus data.

The visual tracking method of the embodiment determines the eye pattern on the basis of mature face detection technology, which avoids processing a large amount of redundant image data and reduces the computational load of image processing. The eye key points are established through supervised or semi-supervised learning, and quantification tools are used to produce calibration data of high quality. In a processing method such as ERT (Ensemble of Regression Trees), the key-point calibration data has a targeted pruning effect on the classification of eye objects such as iris data, enabling accurate positioning of the iris boundary. It also helps to further determine other eye objects, such as the pupil boundary.
FIG. 2 is a flowchart of a visual tracking method according to an embodiment of the present invention. As shown in FIG. 2, on the basis of the foregoing embodiment, the visual tracking method further includes:

Step 40: form visual tracking data from the continuous changes of the eye objects;

Step 50: use the visual tracking data as a control signal for the motion changes of the virtual vision.

The visual tracking method of the embodiment turns the continuous visual focus data of a human eye into visual tracking data. On the basis of a mature coordinate transformation process, the corresponding motions of the iris and pupil objects of an anthropomorphic character's eyes can then be generated, giving the character synchronized positive feedback to the real person's gaze and enriching the clarity of the character's emotional expression.
FIG. 3 is a flowchart of a visual tracking method according to an embodiment of the present invention. As shown in FIG. 3, in the visual tracking method of an embodiment, step 10 further includes:

Step 11: acquire the contours of the facial features.

The facial contours are acquired, for example, with the dlib face detection model, which yields 68 feature points (as shown in FIG. 4). Note that these facial feature points alone cannot describe the positions and characteristics of the facial features precisely.

Step 12: crop symmetrical eye images according to the eye feature points.

As shown in FIG. 4 and FIG. 5, taking the 68 feature points as an example, a minimum-bounding-rectangle algorithm crops out the left-eye pattern enclosed by feature points 37-42 and the right-eye pattern enclosed by feature points 43-48.
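As an illustration of steps 11 and 12, the Python sketch below uses dlib's publicly available 68-point landmark model to crop the two eye patterns. It is a minimal example under stated assumptions (the model file name and the use of OpenCV for the crop are ours), not the patented implementation itself.

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# The standard 68-point model file; its name/path here is an assumption.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def crop_eye_patterns(frame):
    """Return (left_eye, right_eye) crops using the minimum bounding
    rectangle of landmark points 37-42 and 43-48 (1-based, as in FIG. 4)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if not faces:
        return None, None
    shape = predictor(gray, faces[0])
    pts = np.array([(p.x, p.y) for p in shape.parts()], dtype=np.int32)
    def crop(indices):
        # Axis-aligned minimum bounding rectangle of the selected landmarks.
        x, y, w, h = cv2.boundingRect(pts[indices])
        return frame[y:y + h, x:x + w]
    return crop(np.arange(36, 42)), crop(np.arange(42, 48))  # 0-based indices
```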
In the visual tracking method of an embodiment, step 20 further includes:

Step 21: establish the eye objects in a semi-manual or automatic manner.

In the semi-manual mode, the approximate extent of an eye object is first marked manually; an image recognition algorithm then refines that extent.

In the automatic mode, an image recognition algorithm determines the approximate extent of the eye object from the pattern that an established three-dimensional model of the eye object maps onto a two-dimensional plane as the model moves.

Step 22: form the key points of the eye objects in a semi-manual or automatic manner.

In the semi-manual mode, key points are marked manually within the determined extent of the eye object; an image recognition algorithm then marks the occluded key points of the eye object.

In the automatic mode, an image recognition algorithm determines the key points from the points that the established three-dimensional model of the eye object maps onto a two-dimensional plane as the model moves; a sketch of this projection idea follows below.

Combining manual and automatic processing for a small number of eye images effectively improves processing speed while preserving accuracy, which safeguards the accuracy of the subsequent algorithms that use these images as training data. Processing the bulk of the eye images automatically keeps dynamic vision processing fast.
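The patent does not spell out the projection, so the sketch below is one hedged reading of the automatic mode: it assumes a spherical eyeball model with an iris disc and a simple pinhole camera, so that moving the model generates 2D key-point labels. All parameter names and values are illustrative assumptions.

```python
import numpy as np

def project_iris_keypoints(eye_center, eye_radius, gaze_yaw, gaze_pitch,
                           iris_radius, focal=600.0):
    """Project the four extreme iris points of a spherical eyeball model
    onto the image plane (pinhole camera at the origin, z forward)."""
    # Gaze direction from yaw/pitch angles in radians.
    g = np.array([np.sin(gaze_yaw) * np.cos(gaze_pitch),
                  np.sin(gaze_pitch),
                  np.cos(gaze_yaw) * np.cos(gaze_pitch)])
    iris_center = np.asarray(eye_center, dtype=float) + eye_radius * g
    # Two unit vectors spanning the iris plane (perpendicular to the gaze);
    # assumes the gaze is not exactly vertical.
    u = np.cross(g, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(g, u)
    extremes = [iris_center + iris_radius * d for d in (u, -u, v, -v)]
    # Pinhole projection: (x, y, z) -> (f*x/z, f*y/z).
    return [(focal * p[0] / p[2], focal * p[1] / p[2]) for p in extremes]

# Example: an eyeball 500 units in front of the camera, looking 10 deg right.
labels = project_iris_keypoints((0, 0, 500), 12.0, np.radians(10), 0.0, 6.0)
```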
In the visual tracking method of an embodiment, the eye objects determined in step 20 include:

the eyelids, with 12 key points, including the key points at the two corners of the eyelids and the key points at the maximum distance between the upper and lower eyelids;

the iris, with 8 key points, including the key points at the maximum horizontal distance between the left and right edges of the iris and the key points at the maximum vertical distance between its upper and lower edges;

the pupil, with 8 key points, including the key points at the maximum horizontal distance between the left and right edges of the pupil and the key points at the maximum vertical distance between its upper and lower edges.

Each key point carries its coordinate position and pattern attributes.
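A possible in-memory representation of these 28 key points per eye is sketched below; it assumes that "pattern attributes" means a small bag of local descriptors, and the field and key-point names are illustrative, not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class KeyPoint:
    name: str           # e.g. "iris_left_edge" (naming is ours)
    x: float            # coordinate position in the eye pattern
    y: float
    attributes: dict = field(default_factory=dict)  # pattern attributes, e.g. local gradient stats

# 12 eyelid + 8 iris + 8 pupil key points, as enumerated above.
EYE_KEYPOINT_NAMES = (
    [f"eyelid_{i}" for i in range(12)]
    + [f"iris_{i}" for i in range(8)]
    + [f"pupil_{i}" for i in range(8)]
)
```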
In the visual tracking method of an embodiment, step 30 includes:

Step 31: import the pixel data of the eye pattern into the ERT algorithm as training data for processing;

Step 32: use the determined eye objects and eye-object key points as test data to correct the output of the ERT algorithm;

Step 33: form accurate contours of the eye objects and accurate relative positional relationships from the corrected output.

Using the key-point data obtained by manual or semi-manual processing as test data safeguards the prediction accuracy of the ERT algorithm.
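For concreteness, dlib ships a public implementation of the ERT cascade (Kazemi and Sullivan, 2014). The sketch below trains it on annotated eye patterns and evaluates it on the manually labelled key points, which here play the role of the correcting test data; the XML file names and option values are placeholders, not taken from the patent.

```python
import dlib

# Training options for the ERT cascade; the values are illustrative defaults.
options = dlib.shape_predictor_training_options()
options.tree_depth = 4
options.cascade_depth = 10
options.oversampling_amount = 20

# Each XML file lists eye images plus their 28 annotated key points.
dlib.train_shape_predictor("eye_keypoints_train.xml", "eye_ert.dat", options)

# If the mean error on the labelled test data is too high, the labels are
# corrected or extended and the predictor is re-trained.
train_err = dlib.test_shape_predictor("eye_keypoints_train.xml", "eye_ert.dat")
test_err = dlib.test_shape_predictor("eye_keypoints_test.xml", "eye_ert.dat")
print(f"mean landmark error: train={train_err:.3f}, test={test_err:.3f}")
```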
In the visual tracking method of an embodiment, step 40 further includes:

Step 41: in eye patterns acquired in real time, form visual tracking data from the relative position changes of the eye objects;

Step 42: in eye patterns acquired in real time, form visual tracking data from the relative position changes of the corresponding key points of the eye objects.
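One hedged reading of steps 41 and 42: express the iris-center key point relative to the eyelid-corner key points in each frame, so that the sequence of per-frame changes forms the visual tracking data. The normalization choice below is an assumption.

```python
import numpy as np

def relative_iris_position(iris_pts, corner_left, corner_right):
    """Express the iris center relative to the eyelid corners, making the
    value invariant to head translation and to the size of the eye crop."""
    iris_center = np.mean(np.asarray(iris_pts, dtype=float), axis=0)
    origin = np.asarray(corner_left, dtype=float)
    width = np.linalg.norm(np.asarray(corner_right, dtype=float) - origin)
    return (iris_center - origin) / width

def visual_tracking_data(frames):
    """frames: per-frame dicts of key points acquired in real time.
    Returns the per-frame relative position changes (the tracking signal)."""
    positions = [relative_iris_position(f["iris"], f["corner_l"], f["corner_r"])
                 for f in frames]
    return np.diff(np.asarray(positions), axis=0)
```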
In practical application and comparative measurement, the visual tracking method of the embodiment shows two significant advantages:

1. High accuracy: the iris position error does not exceed 3% (the distance between the actual and the predicted iris position, divided by the maximum distance between the upper and lower eyelids; a direct transcription of this metric follows this list);

2. Good robustness and real-time performance: determining the eye objects takes no more than 3 ms per frame on ordinary computers and mobile devices, and at 30 frames per second the recognition failure rate is below 0.5%.
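The 3% figure uses the normalization stated in the parenthesis above; the helper below transcribes that metric directly (the variable names are ours).

```python
import numpy as np

def iris_position_error(actual, predicted, eyelid_top, eyelid_bottom):
    """Distance between actual and predicted iris positions, divided by the
    maximum upper-to-lower eyelid distance; below 0.03 means under 3%."""
    eyelid_opening = np.linalg.norm(np.asarray(eyelid_top, dtype=float)
                                    - np.asarray(eyelid_bottom, dtype=float))
    offset = np.linalg.norm(np.asarray(actual, dtype=float)
                            - np.asarray(predicted, dtype=float))
    return offset / eyelid_opening
```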
In the visual tracking method of an embodiment, step 50 further includes:

Step 51: map the eye objects and/or their key points to the objects and object key points in the three-dimensional or two-dimensional eye model, to form a virtual visual focus;

Step 52: use the visual tracking data to control the movement of the objects and/or object key points in the three-dimensional or two-dimensional eye model, to form the changes of the virtual vision.

The visual tracking method of the embodiment can apply the obtained visual tracking data to the gaze expression of a virtual character, further improving the character's anthropomorphic qualities.
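A minimal sketch of steps 51 and 52 for a two-dimensional model: the tracked relative iris position drives a pupil key point constrained inside an avatar's eye outline. The avatar geometry and value ranges are assumptions for illustration.

```python
import numpy as np

class AvatarEye:
    """Toy 2D eye model: a pupil key point inside an elliptical eye outline."""
    def __init__(self, center, half_width, half_height):
        self.center = np.asarray(center, dtype=float)
        self.half = np.array([half_width, half_height], dtype=float)

    def apply_tracking(self, rel_pos):
        """Map a tracked relative iris position (roughly [0, 1]^2, with 0.5
        meaning centered) to the avatar pupil position, i.e. the virtual
        visual focus of step 51."""
        offset = (np.asarray(rel_pos, dtype=float) - 0.5) * 2.0  # -> [-1, 1]
        offset = np.clip(offset, -1.0, 1.0)
        return self.center + offset * self.half

# Step 52: the tracking data moves the avatar's gaze frame by frame.
eye = AvatarEye(center=(320, 240), half_width=18, half_height=10)
for rel in [(0.5, 0.5), (0.62, 0.48), (0.70, 0.45)]:
    pupil_xy = eye.apply_tracking(rel)
```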
FIG. 6 is a schematic diagram of the architecture of a visual tracking device according to an embodiment of the present invention. As shown in FIG. 6, corresponding to the visual tracking method of the embodiments, a visual tracking device is also provided, including:

an image acquisition module 100, configured to acquire an eye pattern;

a key-data establishing module 200, configured to establish the eye-object key points;

an object recognition module 300, configured to process the eye pattern using the eye-object key points as test data for an object processing method, determine the iris position, and form visual focus data.

FIG. 7 is a schematic diagram of the architecture of a visual tracking device according to an embodiment of the present invention. As shown in FIG. 7, the visual tracking device of an embodiment further includes:

a visual-tracking-data generating module 400, configured to form visual tracking data from the continuous changes of the eye objects;

a virtual-vision control module 500, configured to use the visual tracking data as a control signal for the motion changes of the virtual vision.
FIG. 8 is a schematic diagram of the architecture of a visual tracking device according to an embodiment of the present invention. As shown in FIG. 8, in the visual tracking device of an embodiment, the image acquisition module 100 includes:

a contour acquisition sub-module 110, configured to acquire the contours of the facial features;

an image cropping sub-module 120, configured to crop symmetrical eye images according to the eye feature points.

In the visual tracking device of an embodiment, the key-data establishing module 200 includes:

an eye-object establishing sub-module 210, configured to establish the eye objects in a semi-manual or automatic manner;

an object-key-point establishing sub-module 220, configured to form the key points of the eye objects in a semi-manual or automatic manner.

In the visual tracking device of an embodiment, the object recognition module 300 includes:

an image importing sub-module 310, configured to import the pixel data of the eye pattern into the ERT algorithm as training data for processing;

an image processing sub-module 320, configured to use the determined eye objects and eye-object key points as test data to correct the output of the ERT algorithm;

an eye-object position generating sub-module 330, configured to form accurate contours of the eye objects and accurate relative positional relationships from the corrected output.

In the visual tracking device of an embodiment, the visual-tracking-data generating module 400 includes:

an eye-object trajectory generating sub-module 410, configured to form visual tracking data from the relative position changes of the eye objects in eye patterns acquired in real time;

an object-key-point trajectory generating sub-module 420, configured to form visual tracking data from the relative position changes of the corresponding key points of the eye objects in eye patterns acquired in real time.

In the visual tracking device of an embodiment, the virtual-vision control module 500 includes:

a virtual-focus generating sub-module 510, configured to map the eye objects and/or their key points to the objects and object key points in the three-dimensional or two-dimensional eye model, to form a virtual visual focus;

a virtual-vision generating sub-module 520, configured to use the visual tracking data to control the movement of the objects and/or object key points in the three-dimensional or two-dimensional eye model, to form the changes of the virtual vision.
The above are merely preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent substitution, or the like made within the spirit and principles of the present invention shall fall within its protection scope.

A visual tracking device of an embodiment of the present invention includes a memory and a processor, wherein:

the memory stores program code implementing the processing steps of the visual tracking method of the above embodiments;

the processor executes the program code implementing the processing steps of the visual tracking method of the above embodiments.
A person of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled artisans may implement the described functions differently for each particular application, but such implementations should not be considered beyond the scope of the present invention.

Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the systems, devices, and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.

In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative: the division into units is only a division by logical function, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.

Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.

In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist physically on its own, or two or more units may be integrated into one unit.

If the functions are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed by the present invention, and these shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

The visual tracking method and tracking device of embodiments of the present invention determine the eye pattern, which avoids processing a large amount of redundant image data and reduces the computational load of image processing. Quantification tools are used to produce calibration data of high quality, and the key-point calibration data has a targeted pruning effect on the classification of eye objects such as iris data, enabling accurate positioning of the iris boundary of the eye objects. It also helps to further determine other eye objects, such as the pupil boundary. The method and device can be applied widely on smart mobile terminal devices, improving the efficiency of human-computer interaction.
Claims (15)
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201710060901.3A (published as CN106845425A) | 2017-01-25 | 2017-01-25 | Visual tracking method and tracking device |
| CN201710060901.3 | 2017-01-25 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2018137456A1 true WO2018137456A1 (en) | 2018-08-02 |
Family
ID=59121246
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2017/118809 (WO2018137456A1, ceased) | 2017-01-25 | 2017-12-27 | Visual tracking method and device |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN106845425A (en) |
| WO (1) | WO2018137456A1 (en) |
Families Citing this family (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106845425A (en) | 2017-01-25 | 2017-06-13 | Visual tracking method and tracking device |
| CN107679448B (en) * | 2017-08-17 | 2018-09-25 | 平安科技(深圳)有限公司 | Eyeball action-analysing method, device and storage medium |
| CN108197594B (en) | 2018-01-23 | 2020-12-11 | 北京七鑫易维信息技术有限公司 | Method and apparatus for determining pupil position |
| CN110293554A (en) * | 2018-03-21 | 2019-10-01 | 北京猎户星空科技有限公司 | Robot control method, device, and system |
| CN108555485A (en) * | 2018-04-24 | 2018-09-21 | 无锡奇能焊接系统有限公司 | Visual tracking method for liquefied-gas cylinder welding |
- 2017
  - 2017-01-25: CN application CN201710060901.3A filed; published as CN106845425A (status: pending)
  - 2017-12-27: PCT application PCT/CN2017/118809 filed; published as WO2018137456A1 (status: ceased)
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN1570949A (en) * | 2003-07-18 | 2005-01-26 | 万众一 | Intelligent control method for visual tracking |
| CN103034330A (en) * | 2012-12-06 | 2013-04-10 | 中国科学院计算技术研究所 | Eye interaction method and system for video conference |
| WO2016034021A1 (en) * | 2014-09-02 | 2016-03-10 | Hong Kong Baptist University | Method and apparatus for eye gaze tracking |
| CN106296784A (en) * | 2016-08-05 | 2017-01-04 | 深圳羚羊极速科技有限公司 | A kind of by face 3D data, carry out the algorithm that face 3D ornament renders |
| CN106845425A (en) * | 2017-01-25 | 2017-06-13 | Visual tracking method and tracking device |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110009714A (en) * | 2019-03-05 | 2019-07-12 | Method and device for adjusting the gaze of a virtual character on a smart device |
| CN115100380A (en) * | 2022-06-17 | 2022-09-23 | 上海新眼光医疗器械股份有限公司 | Medical image automatic identification method based on eye body surface feature points |
| CN115100380B (en) * | 2022-06-17 | 2024-03-26 | 上海新眼光医疗器械股份有限公司 | Automatic medical image identification method based on eye body surface feature points |
Also Published As
| Publication number | Publication date |
|---|---|
| CN106845425A (en) | 2017-06-13 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2018137456A1 (en) | Visual tracking method and device | |
| CN110675487B (en) | Three-dimensional face modeling and recognition method and device based on multi-angle two-dimensional face | |
| CN111480164B (en) | Head pose and distraction estimation | |
| CN106529409B (en) | A Method for Measuring Eye Gaze Angle Based on Head Posture | |
| US9939893B2 (en) | Eye gaze tracking | |
| WO2020228389A1 (en) | Method and apparatus for creating facial model, electronic device, and computer-readable storage medium | |
| JP7640059B2 (en) | Method for 3D face reconstruction, apparatus, device and storage medium for 3D face reconstruction | |
| JP4951498B2 (en) | Face image recognition device, face image recognition method, face image recognition program, and recording medium recording the program | |
| CN113449570A (en) | Image processing method and device | |
| CN111008935B (en) | Face image enhancement method, device, system and storage medium | |
| CN105224285A (en) | Eyes open and-shut mode pick-up unit and method | |
| CN103971131A (en) | Preset facial expression recognition method and device | |
| JP2022141940A (en) | Face biometric detection method, device, electronic device and storage medium | |
| CN110188630A (en) | A face recognition method and camera | |
| CN114092985A (en) | A terminal control method, device, terminal and storage medium | |
| US20250029425A1 (en) | Live human face detection method and apparatus, computer device, and storage medium | |
| CN119625183A (en) | A three-dimensional head model reconstruction method and device, and electronic equipment | |
| CN112800966B (en) | Sight tracking method and electronic equipment | |
| CN118349116A (en) | Desktop eye tracking method, device and equipment | |
| CN116524572B (en) | Accurate real-time face positioning method based on adaptive Hope-Net | |
| CN114463817B (en) | Lightweight 2D video-based facial expression driving method and system | |
| CN112528714A (en) | Single light source-based gaze point estimation method, system, processor and equipment | |
| Kim et al. | Gaze tracking based on pupil estimation using multilayer perception | |
| CN120163931B (en) | A three-dimensional iris reconstruction and unfolding method | |
| CN116755562B (en) | Obstacle avoidance method, device, medium and AR/VR equipment |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 17894469; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 32PN | EP: public notification in the EP bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 22.11.2019) |
| | 122 | EP: PCT application non-entry in European phase | Ref document number: 17894469; Country of ref document: EP; Kind code of ref document: A1 |