
WO2018137456A1 - Visual tracking method and device - Google Patents

Visual tracking method and device

Info

Publication number
WO2018137456A1
Authority
WO
WIPO (PCT)
Prior art keywords
eye
visual tracking
data
visual
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2017/118809
Other languages
English (en)
Chinese (zh)
Inventor
周鸣
金宇林
伏英娜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Appmagics Tech (beijing) Ltd
Original Assignee
Appmagics Tech (beijing) Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Appmagics Tech (beijing) Ltd filed Critical Appmagics Tech (beijing) Ltd
Publication of WO2018137456A1
Anticipated expiration
Current legal status: Ceased

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/197Matching; Classification

Definitions

  • Embodiments of the present invention relate to the field of biofeedback signal data processing technologies, and in particular to a visual tracking method and a tracking device.
  • Visual recognition and tracking mainly involve determining the direction and trajectory of the eye's attention from the pupil. Since the eye is composed of physiological organs and tissues such as the sclera and the iris, eyes differ greatly between individuals, and the iris differs the most.
  • Current technical means mainly apply image recognition to the eye, using binary features, gradient histograms, and the like, combined with morphological filtering operations such as dilation and erosion, to extract the iris position.
  • The above methods are essentially based on prior knowledge. Complex individual biological differences require many assumed parameter sets and threshold ranges, so these methods are effective only in limited scenes, their accuracy is low, and they cannot process the iris of dynamic images in real-time scenes.
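  • For context, the prior-art pipeline criticized above can be sketched with standard OpenCV calls. This is a hedged illustration, not the method of the present invention; the threshold value and kernel size are assumed, scene-specific parameters, which is precisely the fragility noted above.

```python
# Sketch of the classical prior-knowledge pipeline: binarization plus
# morphological filtering (dilation/erosion) to isolate the dark iris blob.
import cv2
import numpy as np

def classical_iris_estimate(eye_gray: np.ndarray):
    """Estimate the iris center in a grayscale eye crop, or None on failure."""
    # Binarize: the iris and pupil are darker than the sclera and skin.
    _, mask = cv2.threshold(eye_gray, 60, 255, cv2.THRESH_BINARY_INV)
    # Opening (erosion then dilation) removes small noise blobs.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    # The centroid of the largest remaining blob is taken as the iris center.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```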
  • the embodiments of the present invention provide a visual tracking method and a tracking device, which are used to solve the technical problem that an eye object cannot be accurately located in real time.
  • the eye pattern is processed using the eye object key point as the test data of the object processing method, and the eye object position is determined to form visual focus data.
  • the visual tracking data is used as a control signal for the action change of the virtual vision.
  • the acquiring an eye pattern includes:
  • a symmetrical eye image is cropped according to the eye feature points.
  • the key points for establishing an eye object include:
  • the key points of the eye object are formed in a semi-manual or automated manner.
  • the processing the eye pattern by using the eye object key point as the test data of the object processing method, determining the position of the eye object, and forming the visual focus data includes:
  • the continuous change of the eye object to form visual tracking data includes:
  • visual tracking data is formed according to the relative position change of the eye object
  • visual tracking data is formed according to the relative positional change of the corresponding key point of the eye object.
  • the action changes for using the visual tracking data as a control signal for virtual vision include:
  • the visual tracking data is used to control the movement of key points in the three-dimensional model or the two-dimensional model of the eye and/or the object to form a change in the virtual vision.
  • An image acquisition module configured to acquire an eye pattern
  • Key data creation module for establishing key points of eye objects
  • the object recognition module is configured to process the eye pattern by using the eye object key point as the test data of the object processing method, determine the iris position, and form the visual focus data.
  • a visual tracking data generating module configured to continuously change the eye object to form visual tracking data
  • a virtual vision control module for using visual tracking data as a control signal for motion changes of virtual vision.
  • the image acquisition module includes:
  • a contour acquisition sub-module for acquiring the facial feature contour of the face
  • An image cropping sub-module for cropping a symmetrical eye image according to an eye feature point is provided.
  • the key data establishing module includes:
  • An eye object creation sub-module for establishing an eye object in a semi-manual or automatic manner
  • an object key point establishing sub-module for forming the key points of the eye object in a semi-manual or automatic manner.
  • the object recognition module includes:
  • An image importing sub-module configured to import the pixel data of an eye pattern into the ERT algorithm as training data for processing
  • An image processing sub-module configured to use the determined eye object and the eye object key point as test data to correct the processing result of the ERT algorithm
  • the eye object position generation sub-module is configured to form an accurate contour of the eye object and an accurate relative positional relationship according to the corrected processing result.
  • the visual tracking data generating module includes:
  • An eye object trajectory generating sub-module configured to form visual tracking data according to a relative position change of the eye object in the eye pattern acquired in real time
  • the object key point trajectory generating sub-module is configured to form visual tracking data according to a relative position change of a corresponding key point of the eye object in the eye pattern acquired in real time.
  • the virtual vision control module includes:
  • a virtual focus generation sub-module for mapping the established eye object and/or eye object key points to the eye object and its key points in a three-dimensional or two-dimensional model, to form a virtual visual focus;
  • the virtual vision generation sub-module is configured to use the visual tracking data to control the movement of the eye object and/or its key points in the three-dimensional or two-dimensional model, forming the change of the virtual vision.
  • a visual tracking device includes a processor and a memory
  • the memory is configured to store the program code of the visual tracking method described above;
  • the processor is configured to execute the program code.
  • The visual tracking method and visual tracking device of the embodiments of the present invention determine the eye pattern based on mature face detection technology, avoiding the processing of a large number of redundant image signals and reducing the computational load of image processing.
  • The eye key points are established through supervised or semi-supervised learning, with quantitative tools producing high-quality calibration data. In the image processing method, this key point calibration data has a directional cropping effect on the classification of eye objects such as iris data, enabling accurate positioning of the eye object. It is also useful for further determining other eye objects such as pupil boundaries, and for further forming an accurate visual focus and visual movement trajectory.
  • FIG. 1 is a flow chart of a visual tracking method according to an embodiment of the present invention.
  • FIG. 2 is a flow chart of a visual tracking method according to an embodiment of the present invention.
  • FIG. 3 is a flowchart of a visual tracking method according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of the 68 feature points of the facial features as determined in the prior art.
  • FIG. 5 is a schematic structural diagram of key points of an eye object in a left eye pattern in a visual tracking method according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of a visual tracking device according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of a visual tracking apparatus according to an embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of a visual tracking apparatus according to an embodiment of the present invention.
  • FIG. 1 is a flow chart of a visual tracking method according to an embodiment of the present invention. As shown in FIG. 1, the visual tracking method of the embodiment of the present invention includes:
  • Step 10 Obtain an eye pattern
  • Step 20 Establish a key point of the eye object
  • Step 30 The eye pattern is processed by using the eye object key point as test data of the object processing method, and the eye object position is determined to form visual focus data.
  • The visual tracking method of the embodiment of the present invention determines the eye pattern based on mature face detection technology, avoiding the processing of a large number of redundant image signals and reducing the computational load of image processing.
  • Eye key points are established using supervised or semi-supervised learning, with quantitative tools producing high-quality calibration data. In processing methods such as ERT (Ensemble of Regression Trees), the key point calibration data has a directional cropping effect on the classification of eye objects such as iris data, enabling accurate positioning of the iris boundary of the eye object. It is also useful for further determining other eye objects such as pupil boundaries.
  • FIG. 2 is a flow chart of a visual tracking method according to an embodiment of the present invention. As shown in FIG. 2, based on the foregoing embodiment, the visual tracking method of the embodiment of the present invention further includes:
  • Step 40 Continuously change the eye object to form visual tracking data
  • Step 50 Use the visual tracking data as a control signal for the action change of the virtual vision.
  • The visual tracking method of the embodiment of the invention forms visual tracking data from the continuous visual focus data of the human eye. Based on mature coordinate transformation processing, corresponding movements of the iris and pupil objects of an anthropomorphic object's eyes can be generated, realizing synchronous positive feedback of the anthropomorphic object to the human eye's gaze and enriching the anthropomorphic object's emotional expression.
  • FIG. 3 is a flowchart of a visual tracking method according to an embodiment of the present invention. As shown in FIG. 3, in the visual tracking method of an embodiment of the present invention, step 10 further includes:
  • Step 11 Obtain the facial features of the face.
  • Acquiring the facial features yields 68 feature points (as shown in FIG. 4). It is important to note that these facial feature points alone cannot accurately describe the position and characteristics of the eye objects such as the iris and pupil.
  • Step 12 Crop a symmetrical eye image according to the eye feature points.
  • Taking the 68 feature points as an example, the left eye pattern enclosed by eye feature points 37-42 and the right eye pattern enclosed by feature points 43-48 are each cropped using a minimum bounding rectangle algorithm.
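  • As a concrete sketch of steps 11 and 12 (the invention itself is detector-agnostic, so this is an assumption), the snippet below uses dlib's publicly available 68-point landmark model; dlib numbers the same landmarks 0-67, so the feature points 37-42 and 43-48 above correspond to indices 36-41 and 42-47.

```python
# Sketch of steps 11-12: detect the 68 facial feature points, then crop each
# eye with a minimum bounding rectangle plus a small margin. The model file
# path is an assumption.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

# dlib is 0-indexed: points 37-42 -> 36-41 (left eye), 43-48 -> 42-47 (right).
LEFT_EYE, RIGHT_EYE = range(36, 42), range(42, 48)

def crop_eyes(frame_bgr: np.ndarray, margin: int = 5) -> list:
    """Return the eye image crops found in one video frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    crops = []
    for face in detector(gray):
        shape = predictor(gray, face)
        for eye in (LEFT_EYE, RIGHT_EYE):
            pts = np.array([(shape.part(i).x, shape.part(i).y) for i in eye])
            x, y, w, h = cv2.boundingRect(pts)  # minimum upright bounding rect
            crops.append(frame_bgr[max(y - margin, 0):y + h + margin,
                                   max(x - margin, 0):x + w + margin])
    return crops
```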
  • step 20 further includes:
  • Step 21 Establish an eye object in a semi-manual or automatic manner.
  • the image recognition algorithm is used to further determine the approximate range of the eye object.
  • The image recognition algorithm establishes the approximate range of the eye objects from the patterns that the established three-dimensional eye object model maps onto the two-dimensional plane as it moves.
  • Step 22 Form a key point of the eye object in a semi-manual or automatic manner.
  • The key points are first marked manually; building on the manual marks, an image recognition algorithm is used to additionally mark the occluded key points of the eye object.
  • The image recognition algorithm marks these eye object key points through the points that the established three-dimensional eye object model maps onto the two-dimensional plane as it moves.
  • Combining manual and automatic processing for a small number of eye images effectively improves processing speed while ensuring accuracy, which in turn guarantees the accuracy of the subsequent algorithms that use these images as training data.
  • A large number of eye images are then processed automatically to ensure the speed of dynamic vision processing. One way to realize the 3D-to-2D mapping described above is shown in the sketch that follows.
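  • The 3D-to-2D mapping can be sketched as a pinhole projection of the eye model's key points. The camera intrinsics and the model geometry below are illustrative assumptions, not the invention's specific procedure.

```python
# Sketch: auto-label eye object key points by projecting a 3D eye model onto
# the 2D image plane as the model moves. K and the model points are assumed.
import numpy as np

K = np.array([[800.0,   0.0, 320.0],   # assumed focal length / principal point
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project_keypoints(points_3d, rotation, translation):
    """Project Nx3 model key points to Nx2 pixel coordinates."""
    cam = points_3d @ rotation.T + translation   # model frame -> camera frame
    uvw = cam @ K.T                              # camera -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]              # perspective divide

# Example: 8 key points on a 6 mm iris ring, eyeball 500 mm from the camera.
angles = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
iris_ring = np.stack([6.0 * np.cos(angles),
                      6.0 * np.sin(angles),
                      np.full(8, 500.0)], axis=1)
labels_2d = project_keypoints(iris_ring, np.eye(3), np.zeros(3))
```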
  • the eye object determined in step 20 includes:
  • The 12 key points of the upper and lower eyelids include the key points at the two ends (corners) of the eyelids and the key points at the maximum distance between the upper eyelid and the lower eyelid.
  • The 8 key points of the iris include the key points at the maximum horizontal distance between the left and right edges of the iris and the key points at the maximum vertical distance between the upper and lower edges of the iris.
  • The 8 key points of the pupil include the key points at the maximum horizontal distance between the left and right edges of the pupil and the key points at the maximum vertical distance between the upper and lower edges of the pupil.
  • Key points include the corresponding coordinate position and pattern properties.
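  • In code, each of these 28 key points might be carried as a small record like the following sketch; the field names are illustrative assumptions, not terminology from the invention.

```python
# Hypothetical key point record: coordinate position plus pattern attributes
# for the 12 eyelid, 8 iris, and 8 pupil key points.
from dataclasses import dataclass

@dataclass
class EyeKeypoint:
    x: float              # pixel coordinate within the eye pattern
    y: float
    eye_object: str       # "eyelid" | "iris" | "pupil"
    visible: bool = True  # False when the point is occluded by the eyelid
```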
  • step 30 includes:
  • Step 31 Import pixel data of the eye pattern into the ERT algorithm as training data for processing
  • Step 32 The determined eye object and the eye object key point are used as test data to correct the processing result of the ERT algorithm;
  • Step 33 Form an accurate contour of the eye object and an accurate relative positional relationship according to the corrected processing result.
  • The key point data obtained by manual or semi-manual processing is used as test data to ensure the prediction accuracy of the ERT algorithm.
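  • Because the ERT (Ensemble of Regression Trees) method has a public implementation in dlib, steps 31 to 33 can be sketched as below. The dataset file names (dlib's imglab XML format) and the hyperparameter values are assumptions; the invention's actual training setup may differ.

```python
# Sketch of steps 31-33 with dlib's ERT implementation: train on eye-pattern
# pixel data, then use the manually calibrated key points as test data to
# check and correct the result.
import dlib

options = dlib.shape_predictor_training_options()
options.cascade_depth = 10         # illustrative hyperparameters
options.tree_depth = 4
options.nu = 0.1
options.oversampling_amount = 20

# Step 31: import eye-pattern pixel data (with annotations) as training data.
dlib.train_shape_predictor("eye_train.xml", "eye_ert.dat", options)

# Step 32: calibrated key points serve as test data; the mean error indicates
# whether the training result needs correction.
mean_err = dlib.test_shape_predictor("eye_test.xml", "eye_ert.dat")
print(f"mean key point error on calibration data: {mean_err:.3f}")

# Step 33: the trained predictor yields eye object contours and their
# relative positions on new eye patterns.
predictor = dlib.shape_predictor("eye_ert.dat")
```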
  • the step 40 further includes:
  • Step 41 forming visual tracking data according to a relative position change of the eye object in the eye pattern acquired in real time;
  • Step 42 In the eye pattern acquired in real time, visual tracking data is formed according to the relative position change of the corresponding key point of the eye object.
  • The accuracy is high: the iris position error does not exceed 3% (the distance between the actual iris position and the predicted iris position, divided by the maximum distance between the upper and lower eyelids);
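  • The error metric just defined and the relative-motion tracking data of steps 41 and 42 can be written out as in the following sketch (the coordinate conventions are assumptions).

```python
# Sketch of the normalized iris error and of visual tracking data formed from
# frame-to-frame relative motion of the eye object key points.
import numpy as np

def iris_position_error(actual, predicted, eyelid_max_dist):
    """Distance between the actual and predicted iris positions, divided by
    the maximum upper-to-lower eyelid distance (the 3% metric above)."""
    return float(np.linalg.norm(np.asarray(actual) - np.asarray(predicted))
                 / eyelid_max_dist)

def tracking_delta(prev_keypoints, cur_keypoints):
    """Visual tracking data: relative position change of corresponding eye
    object key points between consecutive eye patterns (Nx2 arrays)."""
    return np.asarray(cur_keypoints) - np.asarray(prev_keypoints)
```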
  • step 50 further includes:
  • Step 51 Map the established eye object and/or eye object key points to the eye object and its key points in a three-dimensional or two-dimensional model, forming a virtual visual focus;
  • Step 52 Use the visual tracking data to control the movement of the eye object and/or its key points in the three-dimensional or two-dimensional model, forming the change of the virtual vision.
  • the visual tracking method of the embodiment of the invention can apply the obtained visual tracking data to the eye expression of the virtual object, thereby further improving the anthropomorphic feature of the virtual object.
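  • As a hedged sketch of steps 51 and 52, the tracked iris offset, normalized to the eye pattern, can be mapped onto rotation angles of a virtual eyeball. The avatar.set_eye_rotation call and the rotation limits are hypothetical stand-ins for whatever 2D or 3D model API is in use.

```python
# Sketch of steps 51-52: visual tracking data as the control signal for the
# virtual eye's motion. The avatar API and rotation limits are assumptions.
import numpy as np

MAX_YAW_DEG, MAX_PITCH_DEG = 35.0, 25.0   # assumed eyeball rotation limits

def drive_virtual_eye(iris_center, eye_center, eye_width, eye_height, avatar):
    # Step 51: map the real eye's focus into the virtual model's space as a
    # normalized offset in [-1, 1] on each axis.
    dx = (iris_center[0] - eye_center[0]) / (eye_width / 2.0)
    dy = (iris_center[1] - eye_center[1]) / (eye_height / 2.0)
    # Step 52: the tracking data drives the model's key point movement.
    yaw = float(np.clip(dx, -1.0, 1.0)) * MAX_YAW_DEG
    pitch = float(np.clip(dy, -1.0, 1.0)) * MAX_PITCH_DEG
    avatar.set_eye_rotation(yaw_deg=yaw, pitch_deg=pitch)  # hypothetical API
```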
  • FIG. 6 is a schematic structural diagram of a visual tracking device according to an embodiment of the present invention. As shown in FIG. 6, corresponding to the visual tracking method above, an embodiment of the present invention further provides a visual tracking device, including:
  • the image acquisition module 100 is configured to acquire an eye pattern.
  • the key data creation module 200 is configured to establish an eye object key point.
  • the object recognition module 300 is configured to process the eye pattern by using the eye object key point as the test data of the object processing method, determine the iris position, and form the visual focus data.
  • FIG. 7 is a schematic structural diagram of a visual tracking apparatus according to an embodiment of the present invention. As shown in FIG. 7, the visual tracking device according to an embodiment of the present invention further includes:
  • the visual tracking data generating module 400 is configured to form visual tracking data from the continuous changes of the eye object.
  • the virtual vision control module 500 is configured to use the visual tracking data as a control signal for the motion change of the virtual vision.
  • FIG. 8 is a schematic structural diagram of a visual tracking apparatus according to an embodiment of the present invention. As shown in FIG. 8, in the visual tracking device of an embodiment of the present invention, the image acquisition module 100 includes:
  • the contour acquisition sub-module 110 is configured to acquire a facial features of the face.
  • the image cropping sub-module 120 is configured to crop a symmetrical eye image according to the eye feature points.
  • the key data establishing module 200 includes:
  • the eye object creation sub-module 210 is configured to establish an eye object in a semi-manual or automatic manner.
  • the object key point sub-module 220 is configured to form a key point of the eye object in a semi-manual or automatic manner.
  • the object recognition module 300 includes:
  • the image importing sub-module 310 is configured to import the pixel data of the eye pattern into the ERT algorithm for processing as the training data;
  • the image processing sub-module 320 is configured to use the determined eye object and eye object key points as test data to correct the processing result of the ERT algorithm;
  • the eye object position generation sub-module 330 is configured to form an accurate contour of the eye object and an accurate relative positional relationship according to the corrected processing result.
  • the visual tracking data generating module 400 includes:
  • the eye object trajectory generating sub-module 410 is configured to form visual tracking data according to a relative position change of the eye object in the eye pattern acquired in real time;
  • the object key point trajectory generation sub-module 420 is configured to form visual tracking data according to a relative position change of a corresponding key point of the eye object in the eye pattern acquired in real time.
  • the virtual visual control module 500 includes:
  • the virtual focus generation sub-module 510 is configured to establish a key point of the eye object and/or the eye object, and map the key points of the object and the object in the three-dimensional model or the two-dimensional model of the eye to form a virtual visual focus;
  • the virtual vision generation sub-module 520 is configured to control the movement of the key points in the three-dimensional model or the two-dimensional model of the eye and/or the key points of the object by using the visual tracking data to form a change of the virtual vision.
  • a visual tracking device in accordance with an embodiment of the present invention includes a memory and a processor, wherein:
  • a memory for storing program code for implementing the processing steps of the visual tracking method of the above embodiment
  • the processor is for executing program code that implements the processing steps of the visual tracking method of the above-described embodiments.
  • the disclosed systems, devices, and methods may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of the unit is only a logical function division.
  • in practice there may be other division manners; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the visual tracking method and the tracking device of the embodiment of the present invention determine the eye pattern, avoid processing of a large number of redundant image signals, and simplify the calculation amount of image processing.
  • the key point calibration data has the effect of directional clipping on the classification of eye objects such as iris data, and satisfies the accurate positioning of the iris boundary of the eye object. It is also useful to further determine other eye objects such as pupil boundaries.
  • the visual tracking method and tracking device of the embodiments of the present invention can be widely applied to smart mobile terminal devices, improving the efficiency of human-computer interaction.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Collating Specific Patterns (AREA)

Abstract

Disclosed are a visual tracking method and a visual tracking device, for use in solving the technical problem of being unable to accurately position an eye object in real time. The method comprises the steps of: acquiring an eye pattern (10); establishing an eye object key point (20); and processing the eye pattern using the eye object key point as test data for an object processing method, determining an eye object position to form visual focus data (30). Processing of a large number of redundant image signals is avoided, and the computational load of image processing is reduced. To establish the eye object key points, supervised or semi-supervised learning is used, and a quantification tool yields high-quality calibration data; during image processing, the key point calibration data has a directional cropping effect on the classification of eye objects such as iris data, achieving accurate positioning of the eye objects. The invention also facilitates the further determination of other eye objects, such as the pupil boundary, and the further formation of an accurate visual focus and visual movement trajectory.
PCT/CN2017/118809 2017-01-25 2017-12-27 Visual tracking method and device Ceased WO2018137456A1

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710060901.3A CN106845425A (zh) 2017-01-25 2017-01-25 Visual tracking method and tracking device
CN201710060901.3 2017-01-25

Publications (1)

Publication Number Publication Date
WO2018137456A1 2018-08-02

Family

ID=59121246

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/118809 Ceased WO2018137456A1 Visual tracking method and device

Country Status (2)

Country Link
CN (1) CN106845425A
WO (1) WO2018137456A1

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106845425A (zh) * 2017-01-25 2017-06-13 迈吉客科技(北京)有限公司 Visual tracking method and tracking device
CN107679448B (zh) * 2017-08-17 2018-09-25 平安科技(深圳)有限公司 Eyeball movement analysis method, device, and storage medium
CN108197594B (zh) 2018-01-23 2020-12-11 北京七鑫易维信息技术有限公司 Method and device for determining pupil position
CN110293554A (zh) * 2018-03-21 2019-10-01 北京猎户星空科技有限公司 Robot control method, device, and system
CN108555485A (zh) * 2018-04-24 2018-09-21 无锡奇能焊接系统有限公司 Visual tracking method for welding liquefied petroleum gas cylinders

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1570949A (zh) * 2003-07-18 2005-01-26 万众一 Intelligent visual tracking control method
CN103034330A (zh) * 2012-12-06 2013-04-10 中国科学院计算技术研究所 Eye-contact interaction method and system for video conferencing
WO2016034021A1 (fr) * 2014-09-02 2016-03-10 Hong Kong Baptist University Method and apparatus for gaze tracking
CN106296784A (zh) * 2016-08-05 2017-01-04 深圳羚羊极速科技有限公司 Algorithm for rendering 3D facial decorations based on 3D face data
CN106845425A (zh) * 2017-01-25 2017-06-13 迈吉客科技(北京)有限公司 Visual tracking method and tracking device

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110009714A (zh) * 2019-03-05 2019-07-12 重庆爱奇艺智能科技有限公司 Method and device for adjusting a virtual character's gaze in a smart device
CN115100380A (zh) * 2022-06-17 2022-09-23 上海新眼光医疗器械股份有限公司 Automatic medical image recognition method based on eye surface feature points
CN115100380B (zh) * 2022-06-17 2024-03-26 上海新眼光医疗器械股份有限公司 Automatic medical image recognition method based on eye surface feature points

Also Published As

Publication number Publication date
CN106845425A (zh) 2017-06-13

Similar Documents

Publication Publication Date Title
WO2018137456A1 Visual tracking method and device
CN110675487B Three-dimensional face modeling and recognition method and device based on multi-angle two-dimensional faces
CN111480164B Head pose and distraction estimation
CN106529409B Eye gaze direction measurement method based on head pose
US9939893B2 (en) Eye gaze tracking
WO2020228389A1 Method and apparatus for creating a facial model, electronic device, and computer-readable storage medium
JP7640059B2 Three-dimensional face reconstruction method, apparatus, device, and storage medium
JP4951498B2 Face image recognition device, face image recognition method, face image recognition program, and recording medium storing the program
CN113449570A Image processing method and device
CN111008935B Face image enhancement method, device, system, and storage medium
CN105224285A Eye open/closed state detection device and method
CN103971131A Preset facial expression recognition method and device
JP2022141940A Face liveness detection method and device, electronic apparatus, and storage medium
CN110188630A Face recognition method and camera
CN114092985A Terminal control method and device, terminal, and storage medium
US20250029425A1 (en) Live human face detection method and apparatus, computer device, and storage medium
CN119625183A Three-dimensional head model reconstruction method and device, and electronic device
CN112800966B Gaze tracking method and electronic device
CN118349116A Desktop eye tracking method, device, and equipment
CN116524572B Accurate real-time face positioning method based on adaptive Hope-Net
CN114463817B Lightweight 2D-video-based facial expression driving method and system
CN112528714A Single-light-source-based gaze point estimation method, system, processor, and device
Kim et al. Gaze tracking based on pupil estimation using multilayer perception
CN120163931B Three-dimensional iris reconstruction and unfolding method
CN116755562B Obstacle avoidance method, device, medium, and AR/VR device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17894469

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 22.11.2019)

122 Ep: pct application non-entry in european phase

Ref document number: 17894469

Country of ref document: EP

Kind code of ref document: A1

点击 这是indexloc提供的php浏览器服务,不要输入任何密码和下载