
CN114125273A - Face focusing method and device and electronic equipment - Google Patents

Face focusing method and device and electronic equipment

Info

Publication number
CN114125273A
Authority
CN
China
Prior art keywords
face
target
image
parameters
focusing
Prior art date
Legal status
Granted
Application number
CN202111306696.7A
Other languages
Chinese (zh)
Other versions
CN114125273B (en)
Inventor
陈典浩
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202111306696.7A priority Critical patent/CN114125273B/en
Publication of CN114125273A publication Critical patent/CN114125273A/en
Application granted granted Critical
Publication of CN114125273B publication Critical patent/CN114125273B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • H04N23/671Focus control based on electronic image sensor signals in combination with active ranging signals, e.g. using light or sound signals emitted toward objects
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Studio Devices (AREA)

Abstract



The present application discloses a face focusing method and apparatus and an electronic device, belonging to the technical field of photography. The face focusing method provided by the present application includes: inputting a target image into a target parameter detection model to obtain a target focal length output by the target parameter detection model; and focusing on a face based on the target focal length. The target parameter detection model is trained on two-dimensional face image samples jointly with a three-dimensional face morphable model and a camera model. The three-dimensional face morphable model is used to obtain a three-dimensional face from input facial feature parameters, and the camera model is used to obtain a restored two-dimensional face image from input pose parameters and the three-dimensional face. The facial feature parameters and the pose parameters are output by an initial parameter detection model from the two-dimensional face image samples; the pose parameters characterize the shooting pose of the camera, and the facial feature parameters characterize the contour of the face.


Description

Face focusing method, device and electronic device

Technical field

The present application belongs to the technical field of photography, and in particular relates to a face focusing method and apparatus and an electronic device.

Background

In real life, when a user takes a picture with an electronic device and needs to shoot a portrait, the camera usually needs to focus on the face.

In the related art, face focusing proceeds as follows: first, face detection is performed and a region of interest is obtained from the face detection box; the region of interest is then adjusted according to the pose of the face and the position of the face in the image; phase-image statistics are then collected within the adjusted region of interest, and the in-focus position is calculated from the collected phase images using Phase Detection Auto Focus (PDAF) technology; finally, the motor is moved according to the in-focus position, completing autofocus on the face.

However, the adjusted region of interest contains not only face information but also background information, and the background and the face do not lie in the same plane. If the in-focus position of the face is calculated from both background information and face information, face focusing accuracy suffers.

Summary of the invention

The purpose of the embodiments of the present application is to provide a face focusing method and apparatus and an electronic device that can solve the problem of poor face focusing accuracy.

In a first aspect, an embodiment of the present application provides a face focusing method, the method comprising:

inputting a target image into a target parameter detection model to obtain a target focal length output by the target parameter detection model; and

focusing on a human face based on the target focal length;

wherein the target parameter detection model is trained on two-dimensional face image samples jointly with a three-dimensional face morphable model and a camera model; the three-dimensional face morphable model is used to obtain a three-dimensional face from input facial feature parameters; and the camera model is used to obtain a restored two-dimensional face image from input pose parameters and the three-dimensional face;

wherein the facial feature parameters and the pose parameters are parameters output by an initial parameter detection model from the two-dimensional face image samples; the pose parameters characterize the shooting pose of the camera; and the facial feature parameters characterize the contour of the face.

In a second aspect, an embodiment of the present application provides a face focusing apparatus, the apparatus comprising:

a first detection module, configured to input a target image into a target parameter detection model to obtain a target focal length output by the target parameter detection model; and

a focusing module, configured to focus on a human face based on the target focal length;

wherein the target parameter detection model is trained on two-dimensional face image samples jointly with a three-dimensional face morphable model and a camera model; the three-dimensional face morphable model is used to obtain a three-dimensional face from input facial feature parameters; and the camera model is used to obtain a restored two-dimensional face image from input pose parameters and the three-dimensional face;

wherein the facial feature parameters and the pose parameters are parameters output by an initial parameter detection model from the two-dimensional face image samples; the pose parameters characterize the shooting pose of the camera; and the facial feature parameters characterize the contour of the face.

In a third aspect, an embodiment of the present application provides an electronic device comprising a processor, a memory, and a program or instruction stored in the memory and executable on the processor, wherein the program or instruction, when executed by the processor, implements the steps of the method according to the first aspect.

In a fourth aspect, an embodiment of the present application provides a readable storage medium storing a program or instruction which, when executed by a processor, implements the steps of the method according to the first aspect.

In a fifth aspect, an embodiment of the present application provides a chip comprising a processor and a communication interface coupled to the processor, wherein the processor is configured to run a program or instruction to implement the method according to the first aspect.

In the embodiments of the present application, the face is focused using a target focal length obtained by feature extraction on a target image with a target parameter detection model. Because the model is trained on facial feature parameters and pose parameters and does not involve background information, the target focal length is computed from face information alone, which improves face focusing accuracy.

Brief description of the drawings

FIG. 1 is the first schematic flowchart of the face focusing method provided by an embodiment of the present application;

FIG. 2 is a schematic diagram of a monocular three-dimensional face reconstruction system provided by an embodiment of the present application;

FIG. 3 is a schematic structural diagram of a deep convolutional neural network provided by an embodiment of the present application;

FIG. 4 is the second schematic flowchart of the face focusing method provided by an embodiment of the present application;

FIG. 5 is the third schematic flowchart of the face focusing method provided by an embodiment of the present application;

FIG. 6a is the first schematic diagram of the focus-seeking principle of the PDAF technology provided by an embodiment of the present application;

FIG. 6b is the second schematic diagram of the focus-seeking principle of the PDAF technology provided by an embodiment of the present application;

FIG. 7a is the third schematic diagram of the focus-seeking principle of the PDAF technology provided by an embodiment of the present application;

FIG. 7b is the fourth schematic diagram of the focus-seeking principle of the PDAF technology provided by an embodiment of the present application;

FIG. 8a is the fifth schematic diagram of the focus-seeking principle of the PDAF technology provided by an embodiment of the present application;

FIG. 8b is the sixth schematic diagram of the focus-seeking principle of the PDAF technology provided by an embodiment of the present application;

FIG. 9 is the fourth schematic flowchart of the face focusing method provided by an embodiment of the present application;

FIG. 10 is a schematic flowchart of a photographing method provided by an embodiment of the present application;

FIG. 11 is a schematic structural diagram of a face focusing apparatus provided by an embodiment of the present application;

FIG. 12 is a schematic structural diagram of an electronic device provided by an embodiment of the present application;

FIG. 13 is a schematic hardware diagram of an electronic device provided by an embodiment of the present application.

Detailed description

The technical solutions in the embodiments of the present application will be clearly described below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application fall within the protection scope of the present application.

The terms "first", "second", and the like in the specification and claims of the present application are used to distinguish similar objects rather than to describe a specific order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present application can be practiced in orders other than those illustrated or described herein. Objects distinguished by "first", "second", and the like are usually of one type, and their number is not limited; for example, there may be one first object or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.

The face focusing method and apparatus and the electronic device provided by the embodiments of the present application are described in detail below through specific embodiments and application scenarios with reference to the accompanying drawings.

The face focusing method provided by the embodiments of the present application may be executed by an electronic device, or by a functional module or functional entity in an electronic device capable of implementing the method. Electronic devices mentioned in the embodiments of the present application include, but are not limited to, mobile phones, tablet computers, computers, cameras, and wearable devices. The method is described below taking an electronic device as the execution subject.

FIG. 1 is the first schematic flowchart of the face focusing method provided by an embodiment of the present application. As shown in FIG. 1, the method includes step 101 and step 102:

Step 101: input a target image into a target parameter detection model to obtain a target focal length output by the target parameter detection model.

The target parameter detection model is trained on two-dimensional face image samples jointly with a three-dimensional face morphable model (3D Morphable Model, 3DMM) and a camera model.

The three-dimensional face morphable model is used to obtain a three-dimensional face from input facial feature parameters; the camera model is used to obtain a restored two-dimensional face image from input pose parameters and the three-dimensional face. The facial feature parameters and the pose parameters are output by an initial parameter detection model from the two-dimensional face image samples; the pose parameters characterize the shooting pose of the camera; the facial feature parameters characterize the contour of the face.

Optionally, the three-dimensional face morphable model and the camera model are components of monocular three-dimensional face reconstruction, which reconstructs a three-dimensional face from a single two-dimensional face image. In the real world, face information is three-dimensional, while a two-dimensional face image is the projection of a three-dimensional face onto the camera plane; depth information is lost during this projection. Monocular three-dimensional face reconstruction recovers the depth information of the face from a single two-dimensional face image and reconstructs the corresponding three-dimensional face.

The three-dimensional face morphable model is a standard three-dimensional face model composed of vertices and triangular patches: the positions of all vertices determine the shape of the face, the colors of all vertices determine the texture of the face, and the triangular patches describe the topological relationships between vertices. The camera model projects the three-dimensional face into a two-dimensional face image.

FIG. 2 is a schematic diagram of a monocular three-dimensional face reconstruction system provided by an embodiment of the present application. As shown in FIG. 2, monocular three-dimensional face reconstruction mainly comprises the three-dimensional face morphable model, the camera model, and parameter optimization. The three-dimensional face reconstructed from a two-dimensional face image sample by the morphable model is input to the camera model, which projects it into a restored two-dimensional face image; parameters are then optimized based on the restored two-dimensional face image and the input two-dimensional face image sample.

Optionally, after the target parameter detection model is trained, the acquired target image is input into the pre-trained target parameter detection model; the convolutional layers of the model extract two-dimensional image features from the target image, and the fully connected layer of the model finally outputs the target focal length.

It should be noted that, in addition to the target focal length, the target parameter detection model may also output a target rotation parameter, a target translation parameter, a target shape parameter, and a target texture parameter. These outputs can be applied to scenarios such as face beautification and intelligent face shaping, which are not limited in the present application.

Step 102: focus on the face based on the target focal length.

Optionally, after the target focal length is obtained, it may be converted into a corresponding target focus image distance, and the face is focused based on that image distance.
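
The patent does not specify how the focal length is converted into a focus image distance; a minimal sketch, assuming the Gaussian thin-lens equation (1/f = 1/u + 1/v) and a known object distance, might look like this (the function and parameter names are illustrative):

```python
def focal_to_image_distance(f_mm: float, object_distance_mm: float) -> float:
    """Convert a focal length to the image distance that brings an object
    at object_distance_mm into focus, via the thin-lens equation
    1/f = 1/u + 1/v  =>  v = f*u / (u - f)."""
    u = object_distance_mm
    if u <= f_mm:
        raise ValueError("object distance must exceed the focal length")
    return f_mm * u / (u - f_mm)
```

The focus motor would then be driven to the position corresponding to this image distance.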

In the face focusing method provided by the embodiments of the present application, the face is focused using the target focal length obtained by feature extraction on the target image with the target parameter detection model. Because the model is trained on facial feature parameters and pose parameters and does not involve background information, the target focal length is computed from face information alone, which improves face focusing accuracy.

Optionally, the pose parameters include focal length samples, rotation parameter samples, and translation parameter samples; the facial feature parameters include shape parameter samples and texture parameter samples.

In one embodiment, the rotation parameter samples include a three-dimensional rotation parameter, the translation parameter samples include a three-dimensional translation parameter, the shape parameter samples include a 20-dimensional shape parameter, and the texture parameter samples include a 20-dimensional texture parameter. Once the three-dimensional rotation parameter, the three-dimensional translation parameter, the 20-dimensional shape parameter, the 20-dimensional texture parameter, and the aforementioned target focal length are estimated, the three-dimensional face can be reconstructed and projected into a restored two-dimensional face image, enabling optimization of the model parameters of the initial parameter detection model.
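
As an illustration, the 47 estimated values listed above (3 + 3 + 1 + 20 + 20) could be split out of a single network output vector as follows; the ordering is an assumption, since the patent does not fix a layout:

```python
def split_params(vec):
    """Split a 47-dimensional output vector into the five parameter groups.
    Layout (illustrative): rotation | translation | focal length | shape | texture."""
    assert len(vec) == 3 + 3 + 1 + 20 + 20
    rotation    = vec[0:3]     # 3-D rotation parameter
    translation = vec[3:6]     # 3-D translation parameter
    focal       = vec[6]       # focal length f
    shape       = vec[7:27]    # 20-D shape parameter (alpha)
    texture     = vec[27:47]   # 20-D texture parameter (beta)
    return rotation, translation, focal, shape, texture
```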

In the face focusing method provided by the embodiments of the present application, the pose parameters contain rotation parameter samples, translation parameter samples, and focal length samples, and the facial feature parameters contain shape parameter samples and texture parameter samples. These parameters correctly capture the main features of the image, improving the accuracy of the finally trained target parameter detection model.

The training and optimization process of the target parameter detection model is described below:

In one embodiment, a data set of a target number (for example, 200) of three-dimensional faces is collected, and face shape basis vectors S_i and face texture basis vectors T_i are extracted from the data set by principal component analysis. The shape and texture of each three-dimensional face can then be expressed linearly by the face shape S in formula (1) and the face texture T in formula (2):

    S = S̄ + Σ_i α_i · S_i    (1)

    T = T̄ + Σ_i β_i · T_i    (2)

where S̄ denotes the average three-dimensional face shape, T̄ denotes the average three-dimensional face texture, α_i denotes the coefficient of the i-th face shape basis vector S_i, β_i denotes the coefficient of the i-th face texture basis vector T_i, and i is a positive integer.

From formulas (1) and (2), it follows that once all α_i and β_i are estimated, the three-dimensional face corresponding to a two-dimensional face image can be reconstructed. For example, a 20-dimensional α parameter and a 20-dimensional β parameter are estimated to reconstruct the three-dimensional face corresponding to the two-dimensional face image.
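
The linear combination of formulas (1) and (2) can be sketched directly; the same routine serves both shape and texture (a plain-Python sketch, with illustrative names):

```python
def reconstruct(mean, basis, coeffs):
    """Formula (1)/(2): result = mean + sum_i coeff_i * basis_i.
    mean is a flat vertex vector; basis is a list of vectors of the same length."""
    out = list(mean)
    for c, vec in zip(coeffs, basis):
        for j, component in enumerate(vec):
            out[j] += c * component
    return out
```

With the mean shape and 20 shape basis vectors, `reconstruct(mean_shape, shape_basis, alpha)` yields the vertex positions of the reconstructed face, and the analogous call with the texture basis and β yields its vertex colors.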

Further, to evaluate the quality of the reconstructed three-dimensional face, it must be projected through the camera model to obtain a restored two-dimensional face image.

Assume that in three-dimensional space the coordinates of the j-th vertex of the three-dimensional face are (x_j, y_j, z_j); after projection through the camera model, its coordinates in two-dimensional space are (u_j, v_j). The projection is performed by the following formula (3).

    Z · [u_j, v_j, 1]^T = [ f/dx  0  u_0 ;  0  f/dy  v_0 ;  0  0  1 ] · ( R · [x_j, y_j, z_j]^T + t )    (3)

where dx denotes the length occupied by one pixel along the x-axis, dy denotes the length occupied by one pixel along the y-axis, u_0 denotes the horizontal pixel offset between the center pixel of the two-dimensional face image and its origin pixel, v_0 denotes the vertical pixel offset between the center pixel and the origin pixel, Z denotes the manually set imaging plane position, R denotes the three-dimensional rotation parameter, t denotes the three-dimensional translation parameter, and f denotes the focal length.

From formula (3), it follows that once the three-dimensional rotation parameter R, the three-dimensional translation parameter t, and the focal length f are estimated, the reconstructed three-dimensional face can be reprojected into a two-dimensional face image.
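
For a single vertex, formula (3) can be sketched as follows; the intrinsic layout follows the parameter definitions above, and the function name is illustrative:

```python
def project_vertex(p, R, t, f, dx, dy, u0, v0, Z):
    """Project a 3-D vertex p = (x, y, z) to image coordinates (u, v) per
    formula (3): camera-space point pc = R*p + t, then apply the intrinsics
    f/dx, f/dy, u0, v0 and divide by the imaging-plane position Z."""
    pc = [sum(R[r][c] * p[c] for c in range(3)) + t[r] for r in range(3)]
    u = (f / dx * pc[0] + u0 * pc[2]) / Z
    v = (f / dy * pc[1] + v0 * pc[2]) / Z
    return u, v
```

Applying this to every vertex of the reconstructed mesh produces the restored two-dimensional face image used for parameter optimization.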

It can be understood that the parameters to be estimated in the three-dimensional face reconstruction process are: an α parameter of a first number of dimensions, a β parameter of a second number of dimensions, the three-dimensional rotation parameter R, the three-dimensional translation parameter t, and the focal length f. A 20-dimensional α parameter and a 20-dimensional β parameter are taken as an example below.

In one embodiment, the specific training steps of the target parameter detection model include:

inputting the two-dimensional face image samples into the initial parameter detection model to obtain the facial feature parameters and pose parameters output by the initial parameter detection model; inputting the facial feature parameters into the three-dimensional face morphable model to obtain the three-dimensional face output by the morphable model; inputting the pose parameters and the three-dimensional face into the camera model to obtain the restored two-dimensional face image output by the camera model; and determining the target parameter detection model based on the restored two-dimensional face image and the two-dimensional face image samples.
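
These four training steps can be sketched as one iteration; all interfaces here (detector, morphable_model, camera_model, update) are hypothetical stand-ins for the models described in the text, and the pixel-wise squared loss is an assumption:

```python
def train_step(sample_img, detector, morphable_model, camera_model, update):
    """One iteration: detect parameters, rebuild the 3-D face, reproject it,
    and compare the restored image against the input sample."""
    face_params, pose_params = detector(sample_img)      # initial parameter detection
    face3d = morphable_model(face_params)                # 3DMM reconstruction
    restored = camera_model(pose_params, face3d)         # camera-model reprojection
    loss = sum((a - b) ** 2 for a, b in zip(restored, sample_img))
    update(loss)                                         # adjust detector weights
    return loss
```

Iterating this step over the sample set until the loss converges yields the target parameter detection model.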

The initial parameter detection model may be a deep convolutional neural network, which is used to train the model and estimate the above parameters. FIG. 3 is a schematic structural diagram of the deep convolutional neural network provided by an embodiment of the present application. As shown in FIG. 3, the network comprises, in order, a convolutional layer, a normalization layer, an activation unit layer, an average pooling layer, and a fully connected layer.

The two-dimensional face image samples are input into the deep convolutional neural network. The convolutional layer extracts two-dimensional image features from the samples; the extracted features are input to the normalization layer for normalization, which speeds up the convergence of the network; the normalized features are input to the activation unit layer for a nonlinear transformation; the transformed features are input to the average pooling layer, which shrinks the size of the input two-dimensional feature map; finally, the shrunken feature map is input to the fully connected layer, which reassembles all preceding local features through a weight matrix and outputs the three-dimensional rotation parameter, the three-dimensional translation parameter, the focal length, the 20-dimensional shape parameter, and the 20-dimensional texture parameter.
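
The post-convolution part of the pipeline in FIG. 3 can be sketched as a shape-level forward pass (convolution omitted for brevity, ReLU assumed as the activation; all names are illustrative):

```python
import numpy as np

def head_forward(feature_map, W_fc, b_fc):
    """Post-convolution pipeline of FIG. 3: normalize -> activation ->
    global average pool over H and W -> fully connected output.
    feature_map has shape (channels, H, W); W_fc has shape (47, channels)."""
    x = (feature_map - feature_map.mean()) / (feature_map.std() + 1e-5)  # normalization layer
    x = np.maximum(x, 0.0)                                               # activation unit layer
    pooled = x.mean(axis=(1, 2))                                         # average pooling layer
    return W_fc @ pooled + b_fc                                          # fully connected layer

# The 47 outputs would correspond to rotation (3) + translation (3)
# + focal length (1) + shape (20) + texture (20).
```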

After the three-dimensional rotation parameters, three-dimensional translation parameters, focal length, 20-dimensional shape parameters and 20-dimensional texture parameters are obtained, the 20-dimensional shape parameters and 20-dimensional texture parameters can be input into the three-dimensional face deformation model to reconstruct the three-dimensional face, and the reconstructed three-dimensional face can be reprojected into the restored two-dimensional face image based on the three-dimensional rotation parameters, the three-dimensional translation parameters and the focal length.

It should be noted that when the target image is an image collected directly in the current scene, the two-dimensional face image sample is also a directly collected image; when the target image is an image collected at the initial focus position determined based on the PDAF technology, the two-dimensional face image sample is likewise an image collected at the initial focus position determined based on the PDAF technology.

It should be noted that the reconstructed three-dimensional face can also be used for portrait augmented reality (Augmented Reality, AR), portrait virtual reality (Virtual Reality, VR) and the like, which is not limited in this application.

In the face focusing method provided by the embodiments of the present application, the target parameter detection model is determined based on the face feature parameters and pose parameters output by the initial parameter detection model. The face feature parameters and pose parameters correctly reflect the main features of the image, which improves the accuracy of the target parameter detection model.

Further, determining the target parameter detection model based on the restored two-dimensional face image and the two-dimensional face image sample may specifically be implemented as follows:

A loss function is determined based on the similarity between the restored two-dimensional face image and the two-dimensional face image sample; the model parameters of the initial parameter detection model are optimized based on the loss function until a convergence condition is met, yielding the target parameter detection model.

In one embodiment, in order to optimize the model parameters, a loss function based on the restored two-dimensional face image and the two-dimensional face image sample is established. This loss function reflects the similarity between the restored two-dimensional face image and the two-dimensional face image sample, and the model parameters are optimized accordingly until a convergence condition is met, yielding the final optimized target parameter detection model.

In one embodiment, the loss function Loss_rec constructed by the following formula (4) is used to optimize the model parameters of the deep convolutional neural network.

Loss_rec = Σ|I_input − I_rec|  (4)

Here, I_input denotes the input two-dimensional face image sample, and I_rec denotes the restored two-dimensional face image.
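Formula (4) is a pixel-wise L1 reconstruction loss. A minimal numeric instance, treating the images as flat lists of pixel values:

```python
# L1 reconstruction loss of formula (4): Loss_rec = sum(|I_input - I_rec|),
# summed over all pixels.
def loss_rec(i_input, i_rec):
    return sum(abs(a - b) for a, b in zip(i_input, i_rec))

i_input = [10, 20, 30, 40]        # toy input face image sample
i_rec   = [12, 18, 30, 35]        # toy restored face image
loss = loss_rec(i_input, i_rec)   # |-2| + |2| + |0| + |5| = 9
```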

In the face focusing method provided by the embodiments of the present application, the model parameters are optimized through the loss function to obtain the final optimized target parameter detection model, which improves the accuracy of the target parameter detection model.

Optionally, FIG. 4 is a second schematic flowchart of the face focusing method provided by an embodiment of the present application. As shown in FIG. 4, before step 101 in FIG. 1 is executed, the method further includes the following step:

Step 103: Acquire the target image when a human face is detected and automatic focusing is in use.

Optionally, when focusing starts, whether a human face exists in the current scene is determined according to the result returned by the face detection algorithm. If the face detection algorithm returns a result, it is determined that a human face exists in the current scene, i.e., the scene is a portrait scene; if the face detection algorithm returns no result, it is determined that no human face exists in the current scene, the scene is not a portrait scene, and the focusing strategy corresponding to the other scene type is invoked for focusing.

Further, when the scene is determined to be a portrait scene, it is detected whether a user operation is received. The user operation may be a focus-related operation performed by the user on the screen. When a user operation is received, it indicates that the user wishes to focus on the imaged object at the position of the operation, and the touch focusing strategy is invoked for focusing.

In a portrait scene, when no user operation is received, it can be determined that the user expects the electronic device to focus automatically. Since a portrait is present in the current scene, the user usually wishes the focus to be placed on the face so that the face region is imaged clearly. The face auto-focusing strategy is therefore triggered, that is, the electronic device starts to perform the step of acquiring the target image.
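The strategy dispatch of the preceding paragraphs can be sketched as a small decision function (the strategy names are illustrative, not from the patent): portrait scene only if the face detector returns a result, a touch operation takes priority, and otherwise the face auto-focus path is taken.

```python
# Sketch of the focusing-strategy dispatch described above.
def select_focus_strategy(face_detected, user_touched):
    if not face_detected:
        return "other_scene_focus"   # not a portrait scene
    if user_touched:
        return "touch_focus"         # focus where the user tapped
    return "face_auto_focus"         # triggers target-image acquisition

choice = select_focus_strategy(face_detected=True, user_touched=False)
```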

It should be noted that the target image acquired by the electronic device may be an image collected directly in the current scene, or an image collected at the initial focus position determined based on the PDAF technology. PDAF is a focusing technique that computes the in-focus position using the phase detection (PD) pixels on the image sensor, which is not limited in this application.

In the face focusing method provided by the embodiments of the present application, the target image is acquired only when a human face is detected and automatic focusing is in use, which avoids false triggering of the face focusing method.

Optionally, FIG. 5 is a third schematic flowchart of the face focusing method provided by an embodiment of the present application. As shown in FIG. 5, the implementation of step 103 in FIG. 4 may include the following steps:

Step 1031: Determine the initial in-focus position based on the target face detection area, the first phase map and the second phase map.

The target face detection area is the auto-focus area, and the first phase map and the second phase map are obtained from the PD pixels on the image sensor.

In one embodiment, after the face auto-focusing strategy is triggered, the inputs required by the strategy are obtained. These mainly include the target face detection area output by the face detection algorithm, which may be the area within the target face detection frame, i.e., the auto-focus area, as well as the first phase map and the second phase map obtained from the PD pixels on the image sensor, where the first phase map may be the left phase map and the second phase map may be the right phase map.

When the target face detection area, the first phase map and the second phase map are obtained, the initial in-focus position is determined based on the PDAF technology, the target face detection area, the first phase map and the second phase map.

The focusing principle of the PDAF technology is as follows: the first phase map and the second phase map are obtained separately through the PD pixels on the image sensor. When the phase difference between the two phase maps is 0, the current position is exactly the in-focus position; when a phase difference exists between them, the current position is out of focus. Therefore, the distance from the current position to the in-focus position can be computed directly from the phase difference between the first phase map and the second phase map.

FIG. 6a and FIG. 6b are a first and a second schematic diagram of the focus-search principle of the PDAF technology provided by an embodiment of the present application. As shown in FIG. 6a and FIG. 6b, a phase difference exists between the first phase map and the second phase map, and the position is out of focus.

FIG. 7a and FIG. 7b are a third and a fourth schematic diagram of the focus-search principle of the PDAF technology provided by an embodiment of the present application. As shown in FIG. 7a and FIG. 7b, the first phase map and the second phase map coincide, and the position is in focus.

FIG. 8a and FIG. 8b are a fifth and a sixth schematic diagram of the focus-search principle of the PDAF technology provided by an embodiment of the present application. As shown in FIG. 8a and FIG. 8b, a phase difference also exists between the first phase map and the second phase map, and the position is out of focus.

The specific calculation process for determining the initial in-focus position based on the first phase map and the second phase map is as follows:

The initial in-focus position is determined by traversal. Let the first phase map be PD_left and the second phase map be PD_right, and let the Shift parameter denote the number of pixels by which PD_left is moved; for example, Shift = 1 means that PD_left as a whole is moved one pixel toward PD_right, and Shift = -1 means that PD_left as a whole is moved one pixel away from PD_right. In practical applications a specific range is set for the Shift parameter; assuming the Shift parameter is an integer from -16 to 16, the Shift parameter is traversed from -16 to 16 and PD_left is moved by shift_k pixels to obtain PD_left^(shift_k). For each shift_k, the similarity SAD_k between the moved PD_left^(shift_k) and PD_right within the target face detection area is computed; the similarity SAD_k can be expressed by the following formula (5):

SAD_k = Σ_(l∈ROI) |PD_left^(shift_k)(l) − PD_right(l)|  (5)

Here, ROI denotes the target face detection area, i.e., the region of interest (Region of Interest, ROI); SAD_k denotes the similarity corresponding to the k-th Shift; shift_k denotes the k-th Shift; and l ∈ ROI denotes the l-th pixel within the ROI.

It can be understood that the smaller the SAD value, the more similar the two phase maps are within the target face detection area, and the closer the current position is to the in-focus position.
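The search described above can be illustrated with a one-dimensional toy version of formula (5): shift the left phase signal by each candidate amount, compute the sum of absolute differences against the right phase signal over the ROI, and keep the shift with the smallest SAD. The signals and ROI below are synthetic.

```python
# 1-D toy SAD search: left phase signal vs. right phase signal.
def sad(left, right, shift, roi):
    # Sum of absolute differences over the ROI for a given shift,
    # skipping positions that fall outside the shifted signal.
    return sum(abs(left[l - shift] - right[l]) for l in roi
               if 0 <= l - shift < len(left))

pd_right = [0, 0, 1, 5, 9, 5, 1, 0, 0]
pd_left  = [0, 1, 5, 9, 5, 1, 0, 0, 0]   # same pattern, one pixel early
roi = range(2, 7)                         # toy face detection area
sads = {k: sad(pd_left, pd_right, k, roi) for k in range(-3, 4)}
best_shift = min(sads, key=sads.get)      # shift with the minimal SAD
```

Here the left signal matches the right one exactly when moved by one pixel, so the minimal SAD is found at shift = 1, mirroring how the phase difference is read off the SAD curve.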

Optionally, the three smallest values SAD_k1, SAD_k2 and SAD_k3 are selected from all the computed SAD values, together with the corresponding shift_k1, shift_k2 and shift_k3. Based on (shift_k1, SAD_k1), (shift_k2, SAD_k2) and (shift_k3, SAD_k3), a quadratic curve as shown in the following formula (6) is fitted:

SAD = a*shift^2 + b*shift + c  (6)

where a, b and c are the parameters of the quadratic curve.

Substituting (shift_k1, SAD_k1) into formula (6) gives the following formula (7); substituting (shift_k2, SAD_k2) into formula (6) gives the following formula (8); and substituting (shift_k3, SAD_k3) into formula (6) gives the following formula (9):

SAD_k1 = a*shift_k1^2 + b*shift_k1 + c  (7)

SAD_k2 = a*shift_k2^2 + b*shift_k2 + c  (8)

SAD_k3 = a*shift_k3^2 + b*shift_k3 + c  (9)

The quadratic curve parameters a, b and c can be computed from formulas (7), (8) and (9). Since the SAD value of the quadratic curve of formula (6) is smallest at its axis of symmetry, which corresponds to the initial in-focus position, the shift value at the axis of symmetry of the quadratic curve is taken as the phase difference between the first phase map and the second phase map. The specific phase difference pd is expressed by the following formula (10):

pd = -b / (2a)  (10)
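The sub-pixel step of formulas (6)–(10) amounts to fitting a parabola through three (shift, SAD) samples and taking its axis of symmetry as the phase difference; the subsequent scaling by the module-calibrated DCC (formula (11)) is a single multiplication. The three sample points and the DCC value below are synthetic.

```python
# Fit SAD = a*shift^2 + b*shift + c through three (shift, SAD) samples
# (the system of formulas (7)-(9)) and return the axis of symmetry
# -b/(2a) of formula (10) as the phase difference pd.
def fit_parabola_vertex(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    denom = (x1 - x2) * (x1 - x3) * (x2 - x3)
    a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom
    b = (x3**2 * (y1 - y2) + x2**2 * (y3 - y1) + x1**2 * (y2 - y3)) / denom
    return -b / (2 * a)

# Three SAD samples generated from SAD = 2*(shift - 0.6)^2 + 1,
# whose true minimum lies at shift = 0.6.
pts = [(0, 1.72), (1, 1.32), (2, 4.92)]
pd = fit_parabola_vertex(*pts)   # recovers the sub-pixel minimum, 0.6
code = pd * 50.0                 # formula (11) with a hypothetical DCC of 50
```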

Step 1032: Perform image collection on the target object based on the initial in-focus position to obtain the target image.

In one embodiment, when the initial in-focus position is obtained, the phase difference pd is converted into a motor Code value based on the following formula (11), and the motor is moved based on the motor Code value, thereby completing the initial focusing. The focus position is now near the portrait, so the imaging plane can image the human face relatively clearly and the target image is obtained.

Code = pd * DCC  (11)

Here, Code is the motor travel, and the defocus conversion coefficient (Defocus Conversion Coefficient, DCC) is the constant that converts the phase difference into the motor Code, calibrated by the module factory.

In the face focusing method provided by the embodiments of the present application, the initial in-focus position is determined based on the PDAF technology, and the focus is searched again on the basis of the initial in-focus position. This two-stage focus search makes the final target focal length more accurate and improves the precision of face focusing.

FIG. 9 is a fourth schematic flowchart of the face focusing method provided by an embodiment of the present application. As shown in FIG. 9, the implementation of step 102 in FIG. 1 may include the following steps:

Step 1021: Determine the target focusing image distance based on the target focal length, the lens focal length and the target image distance.

The target image distance is the distance from the imaging plane to the lens.

In one embodiment, the Gaussian imaging formula shown in the following formula (12) is used to calculate the target focusing image distance:

1/f = 1/u + 1/Z,  1/f_camera = 1/u + 1/v  (12)

Here, f denotes the target focal length; Z denotes the target image distance, i.e., the manually set position of the imaging plane; u denotes the object distance; f_camera denotes the lens focal length; and v denotes the target focusing image distance.

In formula (12), the object distance u of the face is unchanged during imaging, the lens focal length f_camera of the camera is a known parameter, Z is also a known parameter, and the target focal length f is computed in step 102 above, so the target focusing image distance v can be solved from formula (12); that is, the target focusing image distance v can be obtained by the following formula (13):

v = 1 / (1/f_camera − 1/f + 1/Z)  (13)
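A worked numeric instance of this solve, under the assumption that the Gaussian imaging relations are the thin-lens pair 1/f = 1/u + 1/Z and 1/f_camera = 1/u + 1/v (the original formulas are rendered as image placeholders in the source, so this form is a reconstruction); the focal lengths and image distance below are illustrative:

```python
# Solve for the target focusing image distance v, assuming the thin-lens
# relations 1/f = 1/u + 1/Z and 1/f_camera = 1/u + 1/v. Numbers are
# illustrative, not from the patent.
def target_focus_image_distance(f, z, f_camera):
    inv_u = 1.0 / f - 1.0 / z            # object distance from 1/f = 1/u + 1/Z
    return 1.0 / (1.0 / f_camera - inv_u)

f, z, f_camera = 50.0, 60.0, 48.0        # millimetres, illustrative
v = target_focus_image_distance(f, z, f_camera)
```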

Step 1022: Focus on the human face based on the target focusing image distance.

Optionally, after the target focusing image distance is determined, it is taken as the final in-focus position, and the human face is focused based on the target focusing image distance.

Further, the target focusing image distance can be converted into a new motor Code based on the following formula (14). Specifically, using the far-focus image distance v_inf and its corresponding motor code_inf, and the near-focus image distance v_micr and its corresponding motor code_micr, which are burned into the electronic device by the module factory, the target focusing image distance v is converted through linear interpolation into the required new motor code_v; finally, the motor is moved to the target focusing image distance v based on code_v, completing the final focusing of the face image.

code_v = code_micr + (v − v_micr) / (v_inf − v_micr) * (code_inf − code_micr)  (14)
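The interpolation of formula (14) maps an image distance onto the motor code scale between the two factory-calibrated endpoints (the original formula is an image placeholder in the source, so the exact form is a reconstruction of the described linear interpolation). A sketch with illustrative calibration values:

```python
# Linear interpolation of formula (14): map the target focusing image
# distance v onto a motor code using the module-calibrated endpoints.
def image_distance_to_code(v, v_inf, code_inf, v_micr, code_micr):
    t = (v - v_micr) / (v_inf - v_micr)
    return code_micr + t * (code_inf - code_micr)

v_inf, code_inf = 5.00, 100      # far-focus image distance and its code
v_micr, code_micr = 5.50, 600    # near-focus image distance and its code
code_v = image_distance_to_code(5.25, v_inf, code_inf, v_micr, code_micr)
```

By construction the two calibrated endpoints map exactly to their own codes, and any image distance between them maps proportionally.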

In the face focusing method provided by the embodiments of the present application, the final target focusing image distance is computed based on the target focal length, the lens focal length and the target image distance, realizing a fast conversion from the target focal length to the target focusing image distance.

On the basis of the PDAF technology, the face focusing method provided by the embodiments of the present application optimizes the initial in-focus position based on the target parameter detection model. This eliminates the interference of background information in the region of interest on the focus position calculation and also overcomes the difficulty PDAF has with the low-detail texture of human faces, improving the accuracy of face focusing and bringing users a faster and more precise focusing experience in portrait shooting scenarios.

FIG. 10 is a schematic flowchart of the photographing method provided by an embodiment of the present application. As shown in FIG. 10, the photographing method includes step 1001, step 1002, step 1003, step 1004 and step 1005:

Step 1001: Acquire the target image when a human face is detected and automatic focusing is in use.

Step 1002: Input the target image into the target parameter detection model to obtain the target focal length output by the target parameter detection model.

The target parameter detection model is trained based on two-dimensional face image samples jointly with the three-dimensional face deformation model and the camera model. The three-dimensional face deformation model is used to obtain a three-dimensional face based on the input face feature parameters; the camera model is used to obtain a restored two-dimensional face image based on the input pose parameters and the three-dimensional face. The face feature parameters and pose parameters are parameters output by the initial parameter detection model based on the two-dimensional face image samples; the pose parameters characterize the shooting posture of the camera; and the face feature parameters characterize the contour of the face.

Step 1003: Focus on the human face based on the target focal length.

Step 1004: Receive a first input from the user.

The first input is an operation by which the user performs photographing, and the first input may take at least one of the following forms:

First, the first input may be a touch operation, including but not limited to a click operation or a press operation, i.e., tapping the screen to trigger photographing of the target object.

In this implementation, receiving the user's first input may be receiving a click operation by the user on the photographing interface of the electronic device, or the like.

Second, the first input may be a physical key input.

In this implementation, corresponding physical keys are provided on the body of the electronic device, and receiving the user's first input may be receiving the user's press of the corresponding physical key; the first input may also be a combined operation of pressing multiple physical keys simultaneously.

Third, the first input may be a voice input.

In this implementation, the electronic device may receive a user voice command such as "Xiao V, Xiao V, start shooting", where "Xiao V" is the wake-up word of the electronic device.

Step 1005: In response to the first input, photograph the target object based on the target focal length to obtain the target captured image.

In one embodiment, when the user's first input is detected and it is determined from the first input that the user wants to photograph, a photographing instruction is sent to the image sensor. On receiving the photographing instruction, the image sensor starts imaging and the target captured image is obtained.

It can be understood that when the user's first input is not received, the user is not satisfied with the current focusing effect. In this case it is necessary to continuously determine whether the user issues a new focusing instruction: if the user taps the screen, it indicates that the user wishes to focus on the imaged object at the position of the operation; alternatively, the electronic device re-runs the face detection algorithm to determine whether the user has changed the shooting scene, and if so, switches to the focusing strategy corresponding to the new scene for focusing.

In the photographing method provided by the embodiments of the present application, the face is focused using the target focal length obtained by feature extraction of the target image based on the target parameter detection model. Since the target parameter detection model is trained on face feature parameters and pose parameters and involves no background information, the target focal length is computed based on face information, which improves the accuracy of face focusing. When the target object is photographed based on the target focal length, the clarity, and thus the quality, of the resulting captured image is improved.

本申请实施例还提供一种人脸对焦装置。图11是本申请实施例提供的人脸对焦装置的结构示意图,如图11所示,该装置包括第一检测模块1101和对焦模块1102;其中,The embodiment of the present application also provides a face focusing device. FIG. 11 is a schematic structural diagram of a face focusing device provided by an embodiment of the present application. As shown in FIG. 11 , the device includes a first detection module 1101 and a focusing module 1102; wherein,

第一检测模块1101,用于将目标图像输入至目标参数检测模型中,得到目标参数检测模型输出的目标焦距;The first detection module 1101 is used to input the target image into the target parameter detection model to obtain the target focal length output by the target parameter detection model;

对焦模块1102,用于基于目标焦距对人脸进行对焦;A focusing module 1102, configured to focus on the human face based on the target focal length;

其中,目标参数检测模型是基于二维人脸图像样本、联合三维人脸形变模型和相机模型训练得到的;三维人脸形变模型用于基于输入的人脸特征参数得到三维人脸;相机模型用于基于输入的位姿参数和三维人脸得到复原后的二维人脸图像;Among them, the target parameter detection model is trained based on the 2D face image samples, the joint 3D face deformation model and the camera model; the 3D face deformation model is used to obtain the 3D face based on the input facial feature parameters; the camera model uses Obtain the restored 2D face image based on the input pose parameters and 3D face;

其中,人脸特征参数和位姿参数是初始参数检测模型基于二维人脸图像样本输出的参数;位姿参数为表征摄像头拍摄姿态的参数;人脸特征参数为表征人脸轮廓的参数。Among them, the face feature parameters and pose parameters are the parameters output by the initial parameter detection model based on the two-dimensional face image sample; the pose parameters are the parameters that characterize the camera's shooting posture; the face feature parameters are the parameters that characterize the face contour.

本申请实施例提供的人脸对焦装置,通过基于目标参数检测模型对目标图像进行特征提取得到的目标焦距对人脸进行对焦,该目标参数检测模型是基于人脸特征参数和位姿参数训练得到的,并不涉及背景信息,实现了基于人脸信息对目标焦距的计算,从而提高了人脸对焦的精度。The face focusing device provided by the embodiment of the present application focuses on the face through the target focal length obtained by extracting the target image based on the target parameter detection model. The target parameter detection model is obtained by training based on the face feature parameters and the pose parameters. It does not involve background information, and realizes the calculation of target focal length based on face information, thereby improving the accuracy of face focusing.

可选地,所述装置还包括:Optionally, the device further includes:

第二检测模块,用于将二维人脸图像样本输入至初始参数检测模型中,得到初始参数检测模型输出的人脸特征参数和位姿参数;The second detection module is used to input the two-dimensional face image sample into the initial parameter detection model, and obtain the facial feature parameters and pose parameters output by the initial parameter detection model;

形变模块,用于将人脸特征参数输入至三维人脸形变模型中,得到三维人脸形变模型输出的三维人脸;The deformation module is used to input the facial feature parameters into the 3D face deformation model to obtain the 3D face output by the 3D face deformation model;

复原模块,用于将位姿参数和三维人脸输入至相机模型中,得到相机模型输出的复原后的二维人脸图像;The restoration module is used to input the pose parameters and the three-dimensional face into the camera model, and obtain the restored two-dimensional face image output by the camera model;

确定模块,用于基于复原后的二维人脸图像和二维人脸图像样本确定目标参数检测模型。The determining module is used for determining the target parameter detection model based on the restored two-dimensional face image and the two-dimensional face image samples.

可选地,确定模块还用于:Optionally, the determining module is also used to:

基于复原后的二维人脸图像和二维人脸图像样本的相似度确定损失函数;Determine the loss function based on the similarity between the restored two-dimensional face image and the two-dimensional face image samples;

基于损失函数对初始参数检测模型的模型参数进行优化,直至满足收敛条件,得到目标参数检测模型。Based on the loss function, the model parameters of the initial parameter detection model are optimized until the convergence conditions are met, and the target parameter detection model is obtained.

可选地,位姿参数包括焦距样本、旋转参数样本和平移参数样本;人脸特征参数包括形状参数样本和纹理参数样本。Optionally, the pose parameters include focal length samples, rotation parameter samples and translation parameter samples; the face feature parameters include shape parameter samples and texture parameter samples.

可选地,所述装置还包括:Optionally, the device further includes:

获取模块,用于在检测到人脸且自动对焦的情况下,获取目标图像。The acquisition module is used to acquire the target image when the face is detected and auto-focusing.

Optionally, the acquisition module is further configured to:

determine an initial in-focus position based on the target face detection area, a first phase map, and a second phase map, where the target face detection area is the auto-focus area and the first and second phase maps are obtained from the phase-detection (PD) pixels on the image sensor; and

capture an image of the target object at the initial in-focus position to obtain the target image.
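
As an illustration of how an initial in-focus position can be derived from the two phase maps, the sketch below estimates the horizontal disparity between the left and right PD images inside a face window by scanning candidate shifts. The ROI coordinates, shift range, and sum-of-absolute-differences cost are illustrative assumptions, not the patent's actual algorithm; in a real PDAF pipeline the resulting phase difference is then mapped to a lens position.

```python
import numpy as np

def pd_disparity(left, right, max_shift=8):
    """Estimate the horizontal shift between two phase maps by
    minimising the sum of absolute differences over candidate shifts.
    The sign indicates the defocus direction; the magnitude can then
    be converted to a lens move."""
    width = left.shape[1]
    a = left[:, max_shift:width - max_shift]      # fixed reference window
    best_shift, best_cost = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        b = right[:, max_shift + s:width - max_shift + s]
        cost = float(np.abs(a - b).mean())
        if cost < best_cost:
            best_cost, best_shift = cost, s
    return best_shift

# Synthetic phase maps: the "right" map is the "left" map shifted by
# 3 pixels, as happens for a defocused region seen through the two
# halves of the aperture.
rng = np.random.default_rng(1)
scene = rng.normal(size=(16, 64))
left = scene[:, 3:-3]
right = scene[:, 6:]

roi = (slice(4, 12), slice(10, 50))   # hypothetical face AF window
shift = pd_disparity(left[roi], right[roi])
```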

Optionally, the focusing module 1102 is further configured to:

determine a target focusing image distance based on the target focal length, the lens focal length, and the target image distance, where the target image distance is the distance from the imaging plane to the lens; and

focus on the human face based on the target focusing image distance.
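
The text does not spell out the formula behind this conversion. One plausible reading (an assumption on my part) is the classic thin-lens relation 1/f = 1/u + 1/v: take the predicted target focal length as the object distance u, solve for the image distance v at which the face is sharp, and move the focusing group by the difference from the current target image distance.

```python
def target_focus_image_distance(u_mm, f_mm):
    """Image distance v at which an object at distance u is sharp,
    from the thin-lens relation 1/f = 1/u + 1/v."""
    assert u_mm > f_mm, "object must lie beyond the focal length"
    return 1.0 / (1.0 / f_mm - 1.0 / u_mm)

def lens_shift(u_mm, f_mm, current_v_mm):
    """How far the image plane must move from the current target
    image distance to bring the face into focus."""
    return target_focus_image_distance(u_mm, f_mm) - current_v_mm

# Face at 0.5 m in front of a 4 mm lens currently focused at infinity
# (image distance equal to the focal length).
v = target_focus_image_distance(500.0, 4.0)
shift_mm = lens_shift(500.0, 4.0, 4.0)
```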

The face focusing apparatus in the embodiments of the present application may be a device, or may be a component, integrated circuit, or chip in a terminal. The apparatus may be a mobile or a non-mobile electronic device. For example, the mobile electronic device may be a mobile phone, tablet computer, notebook computer, palmtop computer, in-vehicle electronic device, wearable device, ultra-mobile personal computer (UMPC), netbook, or personal digital assistant (PDA); the non-mobile electronic device may be a server, network attached storage (NAS), personal computer (PC), television (TV), teller machine, or self-service machine. The embodiments of the present application impose no specific limitation.

The face focusing apparatus in the embodiments of the present application may be a device with an operating system. The operating system may be Android, iOS, or another possible operating system; the embodiments of the present application impose no specific limitation.

The face focusing apparatus provided in the embodiments of the present application can implement each process of the method embodiments of FIG. 1 to FIG. 10 and achieve the same technical effect; to avoid repetition, details are not repeated here.

Optionally, as shown in FIG. 12, an embodiment of the present application further provides an electronic device 1200, including a processor 1201, a memory 1202, and a program or instruction stored in the memory 1202 and executable on the processor 1201. When executed by the processor 1201, the program or instruction implements each process of the above face focusing method embodiments and achieves the same technical effect; to avoid repetition, details are not repeated here.

It should be noted that the electronic devices in the embodiments of the present application include both the mobile and the non-mobile electronic devices described above.

FIG. 13 is a schematic diagram of the hardware structure of an electronic device implementing an embodiment of the present application.

The electronic device 1300 includes, but is not limited to, a radio frequency unit 1301, a network module 1302, an audio output unit 1303, an input unit 1304, a sensor 1305, a display unit 1306, a user input unit 1307, an interface unit 1308, a memory 1309, and a processor 1310.

Those skilled in the art will understand that the electronic device 1300 may also include a power source (such as a battery) supplying power to the components; the power source may be logically connected to the processor 1310 through a power management system that handles charging, discharging, and power consumption management. The structure shown in FIG. 13 does not limit the electronic device, which may include more or fewer components than shown, combine certain components, or arrange components differently; details are not repeated here.

The processor 1310 is configured to input the target image into the target parameter detection model to obtain the target focal length output by the target parameter detection model;

The processor 1310 is further configured to focus on the human face based on the target focal length;

The target parameter detection model is trained on two-dimensional face image samples jointly with the three-dimensional face deformation model and the camera model; the three-dimensional face deformation model obtains a three-dimensional face from the input facial feature parameters, and the camera model obtains the restored two-dimensional face image from the input pose parameters and the three-dimensional face;

The facial feature parameters and the pose parameters are parameters output by the initial parameter detection model from the two-dimensional face image samples; the pose parameters characterize the shooting attitude of the camera, and the facial feature parameters characterize the face contour.

The electronic device provided in the embodiments of the present application focuses on the human face using a target focal length obtained by feature extraction from the target image with the target parameter detection model. Because the model is trained on facial feature parameters and pose parameters and does not involve background information, the target focal length is computed from face information alone, which improves the accuracy of face focusing.

Optionally, the processor 1310 is further configured to input the two-dimensional face image sample into the initial parameter detection model to obtain the facial feature parameters and pose parameters output by the initial parameter detection model;

input the facial feature parameters into the three-dimensional face deformation model to obtain the three-dimensional face output by the three-dimensional face deformation model;

input the pose parameters and the three-dimensional face into the camera model to obtain the restored two-dimensional face image output by the camera model; and

determine the target parameter detection model based on the restored two-dimensional face image and the two-dimensional face image sample.
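
The forward pass just described (parameters in, three-dimensional face out, then a reprojected two-dimensional image) can be sketched with toy stand-ins. The linear shape model plus pinhole projection below is the standard 3DMM formulation; the basis, dimensions, and pose values are invented for illustration, and texture and rendering are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 50                                   # toy vertex count

# Toy 3DMM: vertices = mean_shape + shape_basis @ shape_params.
mean_shape = rng.normal(size=(N, 3))
shape_basis = rng.normal(size=(N * 3, 5)) * 0.1

def morphable_face(shape_params):
    offsets = (shape_basis @ shape_params).reshape(N, 3)
    return mean_shape + offsets

def camera_project(vertices, f, R, t):
    """Pinhole camera: rotate, translate, then perspective-divide with
    focal length f — the pose parameters (f, R, t) of the description."""
    cam = vertices @ R.T + t
    return f * cam[:, :2] / cam[:, 2:3]

shape_params = rng.normal(size=5)
face3d = morphable_face(shape_params)

R = np.eye(3)                            # identity rotation for the sketch
t = np.array([0.0, 0.0, 10.0])           # push the face in front of the camera
uv = camera_project(face3d, f=500.0, R=R, t=t)
```

During training, `uv` (after rasterization with the texture model) would be compared to the sample image to drive the loss on the parameter detector.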

In the electronic device provided in the embodiments of the present application, the target parameter detection model is determined from the facial feature parameters and pose parameters output by the initial parameter detection model; because these parameters correctly reflect the main features of the image, the accuracy of the target parameter detection model is improved.

Optionally, the processor 1310 is further configured to determine a loss function based on the similarity between the restored two-dimensional face image and the two-dimensional face image sample; and

optimize the model parameters of the initial parameter detection model based on the loss function until a convergence condition is met, to obtain the target parameter detection model.

In the electronic device provided in the embodiments of the present application, the model parameters are optimized through the loss function to obtain the final target parameter detection model, improving its accuracy.

Optionally, the pose parameters include a focal length sample, a rotation parameter sample, and a translation parameter sample; the face feature parameters include a shape parameter sample and a texture parameter sample.

In the electronic device provided in the embodiments of the present application, the pose parameters include rotation, translation, and focal length samples, and the facial feature parameters include shape and texture samples; these parameters correctly reflect the main features of the image, improving the accuracy of the finally trained target parameter detection model.
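
As a reader's aid (not code from the patent), the parameter grouping described here can be written down as two small containers; the field names and array shapes are illustrative assumptions.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class PoseParams:
    """Parameters characterising the camera's shooting attitude."""
    focal_length: float        # f — the quantity ultimately used for focusing
    rotation: np.ndarray       # e.g. a 3x3 rotation matrix
    translation: np.ndarray    # 3-vector camera translation

@dataclass
class FaceParams:
    """Parameters characterising the face itself."""
    shape: np.ndarray          # coefficients of the 3DMM shape basis
    texture: np.ndarray        # coefficients of the 3DMM texture basis

pose = PoseParams(focal_length=500.0, rotation=np.eye(3), translation=np.zeros(3))
face = FaceParams(shape=np.zeros(80), texture=np.zeros(80))
```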

Optionally, the processor 1310 is further configured to acquire the target image when a human face is detected and auto-focus is in progress.

The electronic device provided in the embodiments of the present application acquires the target image only when a human face is detected and auto-focus is in progress, avoiding false triggering of the face focusing method.

Optionally, the processor 1310 is further configured to determine an initial in-focus position based on the target face detection area, a first phase map, and a second phase map, where the target face detection area is the auto-focus area and the first and second phase maps are obtained from the phase-detection (PD) pixels on the image sensor; and

capture an image of the target object at the initial in-focus position to obtain the target image.

The electronic device provided in the embodiments of the present application determines the initial in-focus position using PDAF technology and then searches for focus again from that position; this two-stage focus search makes the final target focal length more precise, improving the accuracy of face focusing.

Optionally, the processor 1310 is further configured to determine a target focusing image distance based on the target focal length, the lens focal length, and the target image distance, where the target image distance is the distance from the imaging plane to the lens; and

focus on the human face based on the target focusing image distance.

The electronic device provided in the embodiments of the present application computes the final target focusing image distance from the target focal length, the lens focal length, and the target image distance, achieving a fast conversion from the target focal length to the target focusing image distance.

It should be understood that, in the embodiments of the present application, the input unit 1304 may include a graphics processing unit (GPU) 13041 and a microphone 13042; the GPU 13041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode. The display unit 1306 may include a display panel 13061, which may be configured as a liquid crystal display, an organic light-emitting diode display, or the like. The user input unit 1307 includes a touch panel 13071, also called a touch screen, and other input devices 13072. The touch panel 13071 may include a touch detection device and a touch controller. The other input devices 13072 may include, but are not limited to, a physical keyboard, function keys (such as volume control and switch keys), a trackball, a mouse, and a joystick, which are not described further here. The memory 1309 may store software programs and various data, including but not limited to application programs and an operating system. The processor 1310 may integrate an application processor, which mainly handles the operating system, user interface, and application programs, and a modem processor, which mainly handles wireless communication; the modem processor may also not be integrated into the processor 1310.

Embodiments of the present application further provide a readable storage medium storing a program or instruction; when executed by a processor, the program or instruction implements each process of the above face focusing method embodiments and achieves the same technical effect; to avoid repetition, details are not repeated here.

The processor is the processor in the electronic device described in the above embodiments. The readable storage medium includes computer-readable storage media such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

An embodiment of the present application further provides a chip including a processor and a communication interface coupled to the processor; the processor runs a program or instruction to implement each process of the above face focusing method embodiments and achieve the same technical effect; to avoid repetition, details are not repeated here.

It should be understood that the chip mentioned in the embodiments of the present application may also be called a system-on-chip, a system chip, a chip system, or a system-on-a-chip.

It should be noted that, herein, the terms "comprise", "include", or any other variant are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus comprising a series of elements includes not only those elements but also other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element qualified by the phrase "comprising a..." does not preclude additional identical elements in the process, method, article, or apparatus that includes it. Furthermore, the methods and apparatus in the embodiments of the present application are not limited to performing functions in the order shown or discussed; functions may also be performed substantially simultaneously or in reverse order depending on the functions involved. For example, the described methods may be performed in an order different from that described, and steps may be added, omitted, or combined. In addition, features described with reference to some examples may be combined in other examples.

From the description of the above embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented with software plus a necessary general-purpose hardware platform, or with hardware alone, though in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied as a computer software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and including several instructions that cause a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to execute the methods described in the embodiments of the present application.

The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the specific embodiments described, which are merely illustrative rather than restrictive. Inspired by the present application, those of ordinary skill in the art may devise many other forms without departing from the purpose of the present application and the scope protected by the claims, all of which fall within the protection of the present application.

Claims (15)

1. A face focusing method is characterized by comprising the following steps:
inputting a target image into a target parameter detection model to obtain a target focal length output by the target parameter detection model;
focusing the human face based on the target focal length;
the target parameter detection model is obtained based on two-dimensional face image samples, a combined three-dimensional face deformation model and a camera model; the three-dimensional face deformation model is used for obtaining a three-dimensional face based on the input face characteristic parameters; the camera model is used for obtaining a restored two-dimensional face image based on the input pose parameters and the three-dimensional face;
the face feature parameters and the pose parameters are parameters output by an initial parameter detection model based on the two-dimensional face image samples; the pose parameters are parameters representing shooting postures of the camera; the face characteristic parameters are parameters for representing the face contour.
2. The method of claim 1, wherein before the target image is input into the target parameter detection model and the target focal length output by the target parameter detection model is obtained, the method further comprises:
inputting the two-dimensional face image sample into the initial parameter detection model to obtain the face characteristic parameters and the pose parameters output by the initial parameter detection model;
inputting the face characteristic parameters into the three-dimensional face deformation model to obtain the three-dimensional face output by the three-dimensional face deformation model;
inputting the pose parameters and the three-dimensional face into the camera model to obtain the restored two-dimensional face image output by the camera model;
and determining the target parameter detection model based on the restored two-dimensional face image and the two-dimensional face image sample.
3. The method of claim 2, wherein the determining the target parameter detection model based on the restored two-dimensional face image and the two-dimensional face image sample comprises:
determining a loss function based on the similarity of the restored two-dimensional face image and the two-dimensional face image sample;
and optimizing the model parameters of the initial parameter detection model based on the loss function until a convergence condition is met to obtain the target parameter detection model.
4. The face focusing method according to claim 1, wherein the pose parameters comprise a focal length sample, a rotation parameter sample and a translation parameter sample; the face feature parameters comprise shape parameter samples and texture parameter samples.
5. The method of claim 1, wherein before the inputting the target image into the target parameter detection model and obtaining the target focal length output by the target parameter detection model, the method further comprises:
and under the condition that the human face is detected and the automatic focusing is carried out, acquiring the target image.
6. The face focusing method of claim 5, wherein the acquiring the target image comprises:
determining an initial focusing position based on a target face detection area, a first phase diagram and a second phase diagram, wherein the target face detection area is an automatic focusing area, and the first phase diagram and the second phase diagram are obtained according to PD pixels on an image sensor;
and acquiring an image of the target object based on the initial focusing position to obtain the target image.
7. The method according to any one of claims 1 to 6, wherein the focusing the human face based on the target focal length comprises:
determining a target focusing image distance based on the target focal length, the lens focal length and the target image distance; the target image distance is the distance from an imaging plane to a lens;
and focusing the human face based on the target focusing image distance.
8. A face focusing device, comprising:
the first detection module is used for inputting a target image into a target parameter detection model to obtain a target focal length output by the target parameter detection model;
the focusing module is used for focusing the human face based on the target focal length;
the target parameter detection model is obtained based on two-dimensional face image samples, a combined three-dimensional face deformation model and a camera model; the three-dimensional face deformation model is used for obtaining a three-dimensional face based on the input face characteristic parameters; the camera model is used for obtaining a restored two-dimensional face image based on the input pose parameters and the three-dimensional face;
the face feature parameters and the pose parameters are parameters output by an initial parameter detection model based on the two-dimensional face image samples; the pose parameters are parameters representing shooting postures of the camera; the face characteristic parameters are parameters for representing the face contour.
9. The device of claim 8, further comprising:
the second detection module is used for inputting the two-dimensional face image sample into the initial parameter detection model to obtain the face characteristic parameters and the pose parameters output by the initial parameter detection model;
the deformation module is used for inputting the face characteristic parameters into the three-dimensional face deformation model to obtain the three-dimensional face output by the three-dimensional face deformation model;
the restoration module is used for inputting the pose parameters and the three-dimensional face into the camera model to obtain a restored two-dimensional face image output by the camera model;
and the determining module is used for determining the target parameter detection model based on the restored two-dimensional face image and the two-dimensional face image sample.
10. The face focusing device of claim 9, wherein the determining module is further configured to:
determining a loss function based on the similarity of the restored two-dimensional face image and the two-dimensional face image sample;
and optimizing the model parameters of the initial parameter detection model based on the loss function until a convergence condition is met to obtain the target parameter detection model.
11. The face focusing device of claim 8, wherein the pose parameters comprise a focal length sample, a rotation parameter sample and a translation parameter sample; the face feature parameters comprise shape parameter samples and texture parameter samples.
12. The device of claim 8, further comprising:
and the acquisition module is used for acquiring the target image under the conditions of face detection and automatic focusing.
13. The device of claim 12, wherein the obtaining module is further configured to:
determining an initial focusing position based on a target face detection area, a first phase diagram and a second phase diagram, wherein the target face detection area is an automatic focusing area, and the first phase diagram and the second phase diagram are obtained according to PD pixels on an image sensor;
and acquiring an image of the target object based on the initial focusing position to obtain the target image.
14. The device as claimed in any one of claims 8-13, wherein the focusing module is further configured to:
determining a target focusing image distance based on the target focal length, the lens focal length and the target image distance; the target image distance is the distance from an imaging plane to a lens;
and focusing the human face based on the target focusing image distance.
15. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions when executed by the processor implementing the steps of the face focusing method of any one of claims 1-7.
CN202111306696.7A 2021-11-05 2021-11-05 Face focusing method and device and electronic equipment Active CN114125273B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111306696.7A CN114125273B (en) 2021-11-05 2021-11-05 Face focusing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN114125273A true CN114125273A (en) 2022-03-01
CN114125273B CN114125273B (en) 2023-04-07

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010142897A2 (en) * 2009-06-08 2010-12-16 Total Immersion Method and device for calibrating an image sensor using a real-time system for following up objects in an image sequence
WO2015120910A1 (en) * 2014-02-17 2015-08-20 Longsand Limited Determining pose and focal length
CN107122705A (en) * 2017-03-17 2017-09-01 中国科学院自动化研究所 Face critical point detection method based on three-dimensional face model
CN110443885A (en) * 2019-07-18 2019-11-12 西北工业大学 Three-dimensional number of people face model reconstruction method based on random facial image
CN110555815A (en) * 2019-08-30 2019-12-10 维沃移动通信有限公司 Image processing method and electronic equipment
CN111898406A (en) * 2020-06-05 2020-11-06 东南大学 Face detection method based on focal loss and multi-task cascade
WO2020254448A1 (en) * 2019-06-17 2020-12-24 Ariel Ai Inc. Scene reconstruction in three-dimensions from two-dimensional images
CN112597847A (en) * 2020-12-15 2021-04-02 深圳云天励飞技术股份有限公司 Face pose estimation method and device, electronic equipment and storage medium
CN112819947A (en) * 2021-02-03 2021-05-18 Oppo广东移动通信有限公司 Three-dimensional face reconstruction method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
点击 这是indexloc提供的php浏览器服务,不要输入任何密码和下载