
CN117611460A - Face image fusion method, device, equipment and storage medium - Google Patents

Face image fusion method, device, equipment and storage medium

Info

Publication number: CN117611460A
Application number: CN202311506854.2A
Authority: CN (China)
Prior art keywords: face image, skin color, face, image, human
Legal status: Granted (the listed status is an assumption, not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Other versions: CN117611460B (English)
Inventor: 郭渝茜
Current assignee: Shenzhen Pengzhong Technology Co., Ltd.
Original assignee: Shenzhen Pengzhong Technology Co., Ltd.
Legal events: application filed by Shenzhen Pengzhong Technology Co., Ltd.; priority to CN202311506854.2A; publication of CN117611460A; application granted; publication of CN117611460B
Current legal status: Active

Classifications

    • G06T 5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06N 3/0475 — Generative networks
    • G06N 3/094 — Adversarial learning
    • G06V 40/168 — Human faces: feature extraction; face representation
    • G06V 40/172 — Human faces: classification, e.g. identification
    • G06T 2207/20081 — Training; learning
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06T 2207/20221 — Image fusion; image merging
    • G06T 2207/30201 — Subject of image: face


Abstract

The invention relates to the field of images and discloses a face image fusion method, device, equipment and storage medium. The method includes: acquiring a first face image and a second face image; determining the face skin color region of the first face image through a skin color detection algorithm and converting the color space of that region; separating the face skin color region from the first face image; extracting the face skin color value of the second face image, adjusting the skin color of the separated region according to that value, and synthesizing the adjusted region with the other regions of the first face image to obtain a third face image; and obtaining a target face image from the third face image and the second face image through a trained generative adversarial network. In embodiments of the invention, when faces with different skin colors are fused, the skin colors blend more naturally, improving the degree of skin color fusion.

Description

Face image fusion method, device, equipment and storage medium

Technical Field

The present invention relates to the field of images, and in particular to a face image fusion method, device, equipment and storage medium.

Background

With the development of various technologies, face processing techniques for images have become increasingly widespread. Major camera applications offer functions such as beautification, stickers, hairstyle changing, and fusion. The face image fusion technology used by the fusion function merges two different face images into one; the resulting fused image retains the facial features of the original face images.

Current face fusion technology is based on Generative Adversarial Networks (GANs). However, when GANs fuse faces with different skin colors, the skin color fusion looks unnatural. Existing face image fusion methods therefore achieve a low degree of skin color fusion.

Summary of the Invention

The main purpose of the present invention is to solve the technical problem of poor skin color blending when fusing face images.

A first aspect of the present invention provides a face image fusion method, which includes:

acquiring a first face image and a second face image, where the first face image is an unprocessed original face image and the second face image is the reference face image for changing the first face image;

determining the face skin color region of the first face image through a skin color detection algorithm, and converting the color space of the face skin color region;

separating the face skin color region from the first face image according to the color-space-converted face skin color region and a preset color attribute range;

extracting the face skin color value of the second face image, adjusting the skin color of the separated face skin color region according to that value, and synthesizing the adjusted region with the other regions of the first face image to obtain a third face image;

obtaining a target face image from the third face image and the second face image through a trained generative adversarial network.

Optionally, in a first implementation of the first aspect of the present invention, the preset color attribute range includes a preset hue range and a preset saturation range;

and separating the face skin color region from the first face image according to the color-space-converted face skin color region and the preset color attribute range includes:

selecting target pixels in the color-space-converted face skin color region according to the preset hue range and the preset saturation range;

obtaining the target face skin color region from the target pixels;

separating the target face skin color region from the first face image.

Optionally, in a second implementation of the first aspect of the present invention, selecting the target pixels in the color-space-converted face skin color region according to the preset hue range and the preset saturation range includes:

generating a mask according to the preset hue range and the preset saturation range;

performing a bitwise operation between the mask and the first face image to obtain the target pixels in the color-space-converted face skin color region.

Optionally, in a third implementation of the first aspect of the present invention, extracting the face skin color value of the second face image, adjusting the skin color of the separated region according to that value, and synthesizing the adjusted region with the other regions of the first face image to obtain the third face image includes:

extracting the face skin color value of the second face image with a color analysis tool;

adjusting the color components of the separated face skin color region according to the face skin color value, so as to perform the skin color adjustment;

synthesizing, through an image reconstruction method, the skin-color-adjusted region with the other regions of the first face image to obtain the third face image.

Optionally, in a fourth implementation of the first aspect of the present invention, determining the face skin color region of the first face image through the skin color detection algorithm and converting its color space includes:

separating the foreground person from the background in the first face image through an image segmentation algorithm;

determining the face skin color region of the foreground person through the skin color detection algorithm;

converting the color space of the face skin color region.

Optionally, in a fifth implementation of the first aspect of the present invention, obtaining the target face image from the third face image and the second face image through the trained generative adversarial network includes:

training an initial generative adversarial network so that the trained network can learn facial features;

extracting the facial features of the third face image and the facial features of the second face image;

inputting the facial features of the third face image and of the second face image into the generator of the generative adversarial network, which fuses the two sets of features to obtain the target face image.

Optionally, in a sixth implementation of the first aspect of the present invention, determining the face skin color region of the first face image through the skin color detection algorithm and converting its color space includes:

converting the first face image into a grayscale image;

performing noise reduction and normalization on the grayscale image to obtain a preprocessed image;

determining the face skin color region of the preprocessed image through the skin color detection algorithm, and converting the color space of the face skin color region.

A second aspect of the present invention provides a face image fusion device, including a memory storing instructions and at least one processor interconnected with the memory by a line; the at least one processor calls the instructions in the memory so that the face image fusion device executes the face image fusion method described above.

A third aspect of the present invention provides a computer-readable storage medium storing instructions which, when run on a computer, cause the computer to execute the face image fusion method described above.

In embodiments of the present invention, a first face image and a second face image are acquired; the face skin color region of the first face image is determined through a skin color detection algorithm and its color space is converted; the region is separated from the first face image according to the converted region and a preset color attribute range; the face skin color value of the second face image is extracted, the separated region is skin-color adjusted accordingly, and the adjusted region is synthesized with the other regions of the first face image to obtain a third face image; the target face image is then obtained from the third and second face images through a trained generative adversarial network. Because the original face image is skin-color adjusted before fusion, the skin colors of faces with different tones blend more naturally, improving the degree of skin color fusion.

Brief Description of the Drawings

Figure 1 is a schematic diagram of an embodiment of the face image fusion method in an embodiment of the present invention;

Figure 2 is a schematic diagram of an embodiment of the face image fusion apparatus in an embodiment of the present invention;

Figure 3 is a schematic diagram of an embodiment of the face image fusion device in an embodiment of the present invention.

Detailed Description

Embodiments of the present invention provide a face image fusion method, device, equipment and storage medium.

The disclosed embodiments are described in more detail below with reference to the accompanying drawings. Although certain embodiments of the disclosure are shown in the drawings, it should be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth here; these embodiments are provided for a more thorough and complete understanding of the disclosure. The drawings and embodiments are for illustration only and do not limit the scope of protection of the disclosure.

In the description of the embodiments, the term "including" and similar expressions are open-ended, i.e., "including but not limited to". The term "based on" means "based at least in part on". "One embodiment" or "the embodiment" means "at least one embodiment". The terms "first", "second", etc. may refer to different or identical objects. Other explicit and implicit definitions may follow.

For ease of understanding, the specific process of an embodiment of the present invention is described below. Referring to Figure 1, an embodiment of the face image fusion method includes:

S100: Obtain the first face image and the second face image.

In this embodiment, the first face image is an unprocessed original face image, and the second face image is the reference image for changing it. During fusion, the face skin color of the first face image is first adjusted to that of the second face image, and then the faces of the two images are swapped.

S200: Determine the face skin color region of the first face image through the skin color detection algorithm, and convert the color space of that region.

In this embodiment, a skin color detection algorithm, typically based on a color space model such as RGB, HSV or YCbCr, determines the face skin color region of the first face image. Color space conversion means expressing the color data of one color space as the corresponding data of another, i.e., representing the same color with data from a different space. Optionally, the RGB color space of the face skin color region is converted to HSV; HSV describes the perceptual attributes of color better, so identifying colors in HSV is more reliable than in RGB.
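
As a sketch of this conversion step, a single RGB pixel can be mapped to HSV with Python's standard `colorsys` module (the sample pixel value is illustrative, not taken from the patent):

```python
import colorsys

def rgb_pixel_to_hsv(r, g, b):
    """Convert one 8-bit RGB pixel to (hue in degrees, saturation %, value %)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, s * 100.0, v * 100.0

# Pure red sits at hue 0 on the color wheel.
hue, sat, val = rgb_pixel_to_hsv(255, 0, 0)

# An illustrative light skin tone lands at a low (warm) hue.
skin_hue, skin_sat, _ = rgb_pixel_to_hsv(224, 172, 138)
```

This per-pixel mapping is what a vectorized conversion (e.g. OpenCV's `cv2.cvtColor`) applies to every pixel of the skin region at once.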

S300: Separate the face skin color region from the first face image according to the color-space-converted region and the preset color attribute range.

In this embodiment, the face skin color region is separated from the first face image by setting an appropriate color attribute range. Specifically, a preset HSV color attribute range is set; if a pixel's color attribute values fall within that range, the pixel is marked, and the marked pixels together form the region that is separated from the first face image. Alternatively, a model can be trained on a large amount of labeled image data, using machine learning algorithms (such as support vector machines or neural networks) to classify skin and non-skin regions.
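
The pixel-marking rule described above can be sketched as follows; the numeric hue/saturation bounds are illustrative placeholders, not values fixed by the patent:

```python
def is_skin_pixel(h_deg, s_pct, hue_range=(0.0, 50.0), sat_range=(10.0, 70.0)):
    """Mark a pixel as skin when both its hue and saturation fall in the preset ranges."""
    return hue_range[0] <= h_deg <= hue_range[1] and sat_range[0] <= s_pct <= sat_range[1]

# A tiny "image" of (hue, saturation) pairs; mark the pixels in range.
hsv_pixels = [(25.0, 40.0), (210.0, 80.0), (10.0, 30.0)]
marked = [is_skin_pixel(h, s) for h, s in hsv_pixels]
```

The marked pixels correspond to the skin region; everything unmarked stays with the rest of the image.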

S400: Extract the face skin color value of the second face image, adjust the skin color of the separated region according to that value, and synthesize the adjusted region with the other regions of the first face image to obtain the third face image.

In this embodiment, the face skin color value of the second face image is extracted and used as the reference for adjusting the color values of the separated skin region, for example by changing brightness, saturation or hue; color mapping, histogram equalization or adjustment curves can implement this. After the adjustment, the region is synthesized or reconstructed with the remaining regions of the first face image to obtain the third face image, preserving overall consistency and naturalness. The third face image is thus the new image formed by merging the adjusted skin region with the regions of the first face image that remained after separation.
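
The final composition step of S400, putting the adjusted skin pixels back over the untouched regions, can be sketched with a binary mask (a toy 2×2 example, not the patent's actual implementation):

```python
def compose(adjusted, original, mask):
    """Take the adjusted pixel where the mask marks skin (1), else keep the original pixel."""
    return [
        [adjusted[y][x] if mask[y][x] else original[y][x] for x in range(len(mask[0]))]
        for y in range(len(mask))
    ]

original = [[(10, 10, 10), (20, 20, 20)],
            [(30, 30, 30), (40, 40, 40)]]
adjusted = [[(200, 150, 120), (201, 151, 121)],
            [(202, 152, 122), (203, 153, 123)]]
mask = [[1, 0],
        [0, 1]]
third = compose(adjusted, original, mask)
```

The same per-pixel selection, applied to the full-size skin mask from S300, yields the third face image.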

S500: Obtain the target face image from the third face image and the second face image through the trained generative adversarial network.

In this embodiment, the third and second face images are input into the trained generative adversarial network to obtain the target face image. Generative adversarial networks (GANs) consist of a generator and a discriminator: the generator tries to produce realistic synthetic images while the discriminator tries to distinguish real images from synthetic ones. Through continued adversarial training the two compete, gradually improving the generator's ability to produce realistic images. The target face image is the new image obtained after the face of the first face image has been replaced.

In an optional implementation of the first aspect of the present invention, the preset color attribute range includes a preset hue range and a preset saturation range, and separating the face skin color region from the first face image according to the converted region and the preset color attribute range includes:

selecting target pixels in the color-space-converted face skin color region according to the preset hue range and the preset saturation range; obtaining the target face skin color region from the target pixels; and separating the target face skin color region from the first face image.

In the HSV model, H (hue), S (saturation) and V (value, i.e., brightness) describe a color, and skin colors usually fall within particular hue and saturation ranges. Hue is measured as a position on the standard 0–360° color wheel and is commonly identified by a color name such as red, green or orange. Saturation expresses the purity of a color, ranging from 0 to 100%: white, black and the grays have zero saturation, while at maximum saturation each hue is at its purest. In the present invention, the preset hue and saturation ranges are set first; then, among the pixels of the converted face skin color region, the target pixels satisfying both ranges are found and extracted, separating the face skin color region.
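
The hue and saturation semantics described above can be checked numerically with `colorsys` (the values follow the standard color wheel; nothing here is patent-specific):

```python
import colorsys

def hue_degrees(r, g, b):
    """Hue of an RGB color (components in 0..1), as degrees on the 0-360 color wheel."""
    return colorsys.rgb_to_hsv(r, g, b)[0] * 360.0

def saturation_pct(r, g, b):
    """Saturation of an RGB color as a 0-100% purity value."""
    return colorsys.rgb_to_hsv(r, g, b)[1] * 100.0

red_hue = hue_degrees(1.0, 0.0, 0.0)      # 0 degrees
green_hue = hue_degrees(0.0, 1.0, 0.0)    # 120 degrees
gray_sat = saturation_pct(0.5, 0.5, 0.5)  # grays have zero saturation
```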

In an optional implementation of the first aspect of the present invention, selecting the target pixels in the color-space-converted face skin color region according to the preset hue range and the preset saturation range includes:

generating a mask according to the preset hue range and the preset saturation range, and performing a bitwise operation between the mask and the first face image to obtain the target pixels in the converted face skin color region.

In this embodiment, OpenCV can be used to set the usable hue and saturation ranges in HSV space; the inRange function then generates a mask, which is a binary image, and the bitwise_and() function performs a bitwise operation between this mask and the original (first) face image to extract the target pixels of the converted face skin color region.
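
The mechanics of `cv2.inRange` and `cv2.bitwise_and` can be mirrored in plain Python to make the masking step concrete (this is a sketch of the semantics, not OpenCV itself, and the bounds are illustrative):

```python
def in_range(hsv_img, lower, upper):
    """Binary mask, cv2.inRange-style: 255 where every channel of a pixel
    lies within [lower, upper], 0 otherwise."""
    return [
        [255 if all(lower[c] <= px[c] <= upper[c] for c in range(3)) else 0 for px in row]
        for row in hsv_img
    ]

def apply_mask(img, mask):
    """cv2.bitwise_and with a mask: keep pixels where the mask is set, zero elsewhere."""
    return [
        [px if m else (0, 0, 0) for px, m in zip(row, mask_row)]
        for row, mask_row in zip(img, mask)
    ]

# 1x2 HSV "image": one skin-like pixel, one blue pixel (H, S, V as OpenCV-style 0-255 values).
hsv = [[(15, 120, 200), (110, 200, 180)]]
mask = in_range(hsv, lower=(0, 40, 60), upper=(35, 180, 255))
skin_only = apply_mask(hsv, mask)
```

With real OpenCV the equivalent calls are `mask = cv2.inRange(hsv, lower, upper)` followed by `cv2.bitwise_and(img, img, mask=mask)`.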

In an optional implementation of the first aspect of the present invention, extracting the face skin color value of the second face image, adjusting the skin color of the separated region according to that value, and synthesizing the adjusted region with the other regions of the first face image to obtain the third face image includes:

extracting the face skin color value of the second face image with a color analysis tool; adjusting the color components of the separated face skin color region according to that value, so as to perform the skin color adjustment; and synthesizing, through an image reconstruction method, the adjusted region with the other regions of the first face image to obtain the third face image.

In this embodiment, an image with the desired skin color, i.e., the second face image, is selected, and a color analysis tool such as a color picker obtains its face skin color value. The color values of the separated skin region are then adjusted to the desired value using the color adjustment tools of image processing software (levels, curves, color balance, etc.), adjusting each color component (e.g., R, G, B) step by step against the reference value until the preset effect is reached.
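
One simple way to realize the per-channel adjustment described here is to shift each RGB channel of the separated skin pixels so that its mean matches the reference skin mean (an illustrative technique; the patent does not fix a specific adjustment formula):

```python
def match_channel_means(src_pixels, ref_mean):
    """Shift each RGB channel of the source skin pixels so that its mean
    equals the reference skin color mean, clamping to the 0-255 range."""
    n = len(src_pixels)
    src_mean = [sum(p[c] for p in src_pixels) / n for c in range(3)]
    delta = [ref_mean[c] - src_mean[c] for c in range(3)]

    def clamp(v):
        return max(0, min(255, round(v)))

    return [tuple(clamp(p[c] + delta[c]) for c in range(3)) for p in src_pixels]

src = [(100, 80, 60), (120, 100, 80)]  # separated skin pixels (illustrative values)
ref = (150, 110, 90)                   # reference skin mean picked from the second image
adjusted = match_channel_means(src, ref)
```

After the shift, the adjusted pixels average exactly to the reference skin color while keeping their relative variation.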

In an optional implementation of the first aspect of the present invention, determining the face skin color region of the first face image through the skin color detection algorithm and converting its color space includes:

separating the foreground person from the background in the first face image through an image segmentation algorithm; determining the face skin color region of the foreground person through the skin color detection algorithm; and converting the color space of the face skin color region.

In this embodiment, an image segmentation algorithm separates the foreground person from the background in the first face image; commonly used segmentation methods are threshold-based, edge-detection-based, and region-growing. Within the extracted foreground region, the skin color detection algorithm determines the face skin color region, whose color space is then converted.
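
The threshold-based variant of the segmentation step can be sketched in a few lines (a toy grayscale example; a production system would use a more robust method such as GrabCut or a learned segmenter):

```python
def threshold_segment(gray, thresh):
    """Threshold-based segmentation: 1 marks foreground pixels brighter than
    the threshold, 0 marks background."""
    return [[1 if v > thresh else 0 for v in row] for row in gray]

gray = [[30, 200],
        [180, 25]]
fg_mask = threshold_segment(gray, thresh=100)
```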

In an optional implementation of the first aspect of the present invention, obtaining the target face image through a trained generative adversarial network based on the third face image and the second face image includes:

training an initial generative adversarial network so that the trained network can learn facial features; extracting the facial features of the third face image and of the second face image; and inputting the facial features of the third face image and of the second face image into the generator of the generative adversarial network, which fuses the two sets of facial features to obtain the target face image.

In this embodiment, a GAN model first needs to be trained for face image fusion. Specifically, the parameters of the generator and discriminator networks are initialized; n samples are drawn from the training set, and the generator produces n samples from a defined noise distribution; the generator is then fixed and the discriminator is trained to distinguish real samples from generated ones as well as possible. After multiple update iterations, in the ideal case the discriminator can no longer tell whether an image comes from the real training set or from the generator, its output probability converges to 0.5, and training is complete. The trained GAN model has learned facial features; by inputting the facial features of the third face image and of the second face image into the generator, the generator attempts to synthesize the two sets of features into a new face image, which carries the identity characteristics of one person while reflecting the appearance characteristics of the other.

In an optional implementation of the first aspect of the present invention, determining the face skin color region of the first face image through a skin color detection algorithm and converting the color space of the face skin color region includes:

converting the first face image into a grayscale image; performing noise reduction and normalization on the grayscale image to obtain a preprocessed image; and determining the face skin color region of the preprocessed image through a skin color detection algorithm, then converting the color space of the face skin color region.

In this embodiment, the first face image is preprocessed before face image fusion is performed. The first face image is first grayscaled to obtain a grayscale image: a color image usually consists of three components, R, G, and B, which together display the various colors (red, green, blue, and so on), and grayscaling is the process of making the R, G, and B components of the color image equal. The grayscale image then undergoes noise reduction and normalization to remove irrelevant information from the original image and recover the useful, true information, thereby improving the reliability of image fusion.
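A minimal version of this preprocessing chain might look as follows (the box-blur denoiser and min-max normalization are illustrative stand-ins for whatever filters an implementation actually uses):

```python
import numpy as np

def preprocess(rgb):
    """Grayscale, denoise (3x3 box blur), and normalize to [0, 1].

    Grayscaling averages R, G, B, matching the description of making the
    three components equal; weighted luminance (0.299R + 0.587G + 0.114B)
    is another common choice.
    """
    gray = rgb.astype(np.float64).mean(axis=2)
    # 3x3 box blur as a simple stand-in for a real denoising filter
    padded = np.pad(gray, 1, mode="edge")
    h, w = gray.shape
    blurred = sum(padded[i:i + h, j:j + w]
                  for i in range(3) for j in range(3)) / 9.0
    lo, hi = blurred.min(), blurred.max()
    return (blurred - lo) / (hi - lo) if hi > lo else np.zeros_like(blurred)

img = np.zeros((5, 5, 3), dtype=np.uint8)
img[2, 2] = (255, 255, 255)        # one bright pixel of "noise"
out = preprocess(img)
print(out.min(), out.max())        # values normalized into [0, 1]
```

The normalized image then feeds the skin color detection step.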

Referring to Figure 2, a second aspect of the present invention provides a face image fusion apparatus, which includes:

the image acquisition module 10, configured to acquire a first face image and a second face image, where the first face image is an unprocessed original face image and the second face image is a face image used as a reference for changing the first face image;

the color space conversion module 20, configured to determine the face skin color region of the first face image through a skin color detection algorithm and convert the color space of the face skin color region;

the image separation module 30, configured to separate the face skin color region from the first face image according to the color-space-converted face skin color region and a preset color attribute range;

the image synthesis module 40, configured to extract the face skin color value of the second face image, perform skin color adjustment on the separated face skin color region according to that value, and synthesize the skin-color-adjusted region with the other regions of the separated first face image to obtain a third face image;

the target image acquisition module 50, configured to obtain the target face image through a trained generative adversarial network based on the third face image and the second face image.

In an optional implementation of the second aspect of the present invention, the image separation module 30 is further configured to select target pixels in the color-space-converted face skin color region according to a preset hue range and a preset saturation range; obtain the target face skin color region from the target pixels; and separate the target face skin color region from the first face image.

In an optional implementation of the second aspect of the present invention, the image separation module 30 is further configured to generate a mask according to the preset hue range and the preset saturation range, and perform a bitwise operation between the mask and the first face image to obtain the target pixels in the color-space-converted face skin color region.
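The mask generation and bitwise selection performed by this module can be sketched as follows (the hue and saturation ranges are hypothetical placeholders, not the preset ranges of this disclosure):

```python
import numpy as np

def skin_mask(hsv, hue_range=(0, 25), sat_range=(40, 200)):
    """Build a binary mask from preset hue and saturation ranges.

    hsv: HxWx3 array with H, S, V channels (OpenCV-style 0-179 hue scale).
    The ranges are illustrative placeholders only.
    """
    h, s = hsv[..., 0], hsv[..., 1]
    in_hue = (h >= hue_range[0]) & (h <= hue_range[1])
    in_sat = (s >= sat_range[0]) & (s <= sat_range[1])
    return (in_hue & in_sat).astype(np.uint8) * 255

hsv = np.zeros((2, 2, 3), dtype=np.uint8)
hsv[0, 0] = (10, 100, 150)   # inside both ranges -> skin pixel
hsv[1, 1] = (90, 100, 150)   # hue outside the range -> not skin
mask = skin_mask(hsv)

image = np.full((2, 2, 3), 200, dtype=np.uint8)
# Bitwise AND keeps image pixels only where the mask is set
selected = image & mask[..., None]
print(mask.tolist())
```

Pixels surviving the bitwise AND form the target face skin color region to be separated.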

In an optional implementation of the second aspect of the present invention, the image synthesis module 40 is further configured to extract the face skin color value of the second face image through a color analysis tool; adjust the color components of the separated face skin color region according to that value to perform skin color adjustment; and synthesize the skin-color-adjusted face skin color region with the other regions of the separated first face image through an image reconstruction method to obtain the third face image.

In an optional implementation of the second aspect of the present invention, the color space conversion module 20 is further configured to separate the foreground person from the background in the first face image through an image segmentation algorithm; determine the face skin color region of the foreground person through a skin color detection algorithm; and convert the color space of the face skin color region.

In an optional implementation of the second aspect of the present invention, the target image acquisition module 50 is further configured to train an initial generative adversarial network so that the trained network can learn facial features; extract the facial features of the third face image and of the second face image; and input the facial features of the third face image and of the second face image into the generator of the generative adversarial network, which fuses the two sets of facial features to obtain the target face image.

In an optional implementation of the second aspect of the present invention, the color space conversion module 20 is further configured to convert the first face image into a grayscale image; perform noise reduction and normalization on the grayscale image to obtain a preprocessed image; and determine the face skin color region of the preprocessed image through a skin color detection algorithm, then convert the color space of the face skin color region.

Figure 3 is a schematic structural diagram of a face image fusion device provided by an embodiment of the present invention. The face image fusion device 500 may vary considerably in configuration or performance, and may include one or more processors (central processing units, CPU) 510 (for example, one or more processors), a memory 520, and one or more storage media 530 (for example, one or more mass storage devices) storing application programs 533 or data 532. The memory 520 and the storage medium 530 may be transient or persistent storage. The program stored in the storage medium 530 may include one or more modules (not shown in the figure), and each module may include a series of instruction operations on the face image fusion device 500. Furthermore, the processor 510 may be configured to communicate with the storage medium 530 and execute the series of instruction operations in the storage medium 530 on the face image fusion device 500.

The face image fusion device 500 may also include one or more power supplies 540, one or more wired or wireless network interfaces 550, one or more input/output interfaces 560, and/or one or more operating systems 531, such as Windows Server, Mac OS X, Unix, Linux, and FreeBSD. Those skilled in the art will understand that the device structure shown in Figure 3 does not limit the face image fusion device, which may include more or fewer components than shown, combine certain components, or use a different arrangement of components.

The present invention also provides a computer-readable storage medium, which may be non-volatile or volatile. Instructions are stored in the computer-readable storage medium, and when the instructions are run on a computer, they cause the computer to execute the steps of the face image fusion method.

In the context of the present disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium, and may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing. More specific examples of machine-readable storage media include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.

Furthermore, although the operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the discussion above, these should not be construed as limiting the scope of the present disclosure. Certain features described in the context of separate embodiments can also be implemented in combination in a single implementation; conversely, various features described in the context of a single implementation can also be implemented in multiple implementations, separately or in any suitable subcombination.

Although the subject matter has been described in language specific to structural features and/or methodological logical acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are merely example forms of implementing the claims.

Claims (10)

1. A face image fusion method, characterized by comprising the following steps:
acquiring a first face image and a second face image, wherein the first face image is an unprocessed original face image, and the second face image is a face image referenced by the change of the first face image;
determining a human face skin color region of the first human face image through a skin color detection algorithm, and converting the color space of the human face skin color region;
separating the human face skin color region from the first human face image according to the human face skin color region after the color space conversion and a preset color attribute range;
extracting a human face skin color value of the second human face image, carrying out skin color adjustment on the separated human face skin color region according to the human face skin color value, and synthesizing the skin-color-adjusted human face skin color region with the other regions in the first human face image after separation, to obtain a third human face image;
and obtaining a target face image through a trained generative adversarial network according to the third face image and the second face image.
2. The face image fusion method of claim 1, wherein the preset color attribute range includes a preset hue range and a preset saturation range;
the separating the face skin color region from the first face image according to the face skin color region after the color space conversion and the preset color attribute range comprises the following steps:
selecting target pixels in the face skin color region after the color space conversion according to the preset hue range and the preset saturation range;
obtaining a target face skin color region according to the target pixel;
and separating the target face skin color area from the first face image.
3. The method according to claim 2, wherein the selecting target pixels in the face skin color region after the color space conversion according to the preset hue range and the preset saturation range comprises:
generating a mask according to the preset tone range and the preset saturation range;
and carrying out a bit operation on the mask and the first face image to obtain target pixels in the face skin color region after the color space conversion.
4. The method of claim 1, wherein the extracting the face skin color value of the second face image, carrying out skin color adjustment on the separated face skin color region according to the face skin color value, and synthesizing the skin-color-adjusted face skin color region with other regions in the first face image to obtain a third face image comprises:
extracting a face skin color value of the second face image through a color analysis tool;
adjusting the color components of the separated human face skin color region according to the human face skin color value, so as to perform the skin color adjustment;
and synthesizing the skin-color-adjusted human face skin color region with the other regions in the first human face image after separation by an image reconstruction method, to obtain a third human face image.
5. The method according to claim 1, wherein the determining a face skin color region of the first face image by a skin color detection algorithm and converting a color space of the face skin color region comprises:
separating a foreground person from a background in the first face image through an image segmentation algorithm;
determining a face skin color region in the foreground person through a skin color detection algorithm;
and converting the color space of the human face skin color region.
6. The face image fusion method of claim 1, wherein the obtaining the target face image through the trained generative adversarial network according to the third face image and the second face image comprises:
training an initial generative adversarial network, so that the trained generative adversarial network can learn the characteristics of the face;
extracting the face features of the third face image and the face features of the second face image;
inputting the face features of the third face image and the face features of the second face image into the generator of the generative adversarial network, and fusing the face features of the third face image and the face features of the second face image by the generator to obtain a target face image.
7. The method according to claim 1, wherein the determining a face skin color region of the first face image by a skin color detection algorithm and converting a color space of the face skin color region comprises:
converting the first face image into a gray scale image;
carrying out noise reduction and normalization treatment on the gray level image to obtain a preprocessed image;
and determining a human face skin color region of the preprocessed image through a skin color detection algorithm, and converting the color space of the human face skin color region.
8. A face image fusion apparatus, characterized in that the face image fusion apparatus comprises:
the image acquisition module is used for acquiring a first face image and a second face image, wherein the first face image is an unprocessed original face image, and the second face image is a face image referenced by the change of the first face image;
the color space conversion module is used for determining a human face skin color region of the first human face image through a skin color detection algorithm and converting the color space of the human face skin color region;
the image separation module is used for separating the human face skin color region from the first human face image according to the human face skin color region after the color space conversion and a preset color attribute range;
the image synthesis module is used for extracting a human face skin color value of the second human face image, carrying out skin color adjustment on the separated human face skin color region according to the human face skin color value, and synthesizing the human face skin color region with the skin color adjusted with other regions in the first human face image after separation to obtain a third human face image;
and the target image acquisition module is used for obtaining a target face image through a trained generative adversarial network according to the third face image and the second face image.
9. A face image fusion apparatus, characterized in that the face image fusion apparatus comprises: a memory and at least one processor, the memory having instructions stored therein, the memory and the at least one processor being interconnected by a line;
the at least one processor invokes the instructions in the memory to cause the face image fusion apparatus to perform the face image fusion method of any one of claims 1-7.
10. A computer readable storage medium having a computer program stored thereon, wherein the computer program when executed by a processor implements the face image fusion method according to any of claims 1-7.
CN202311506854.2A 2023-11-10 2023-11-10 Face image fusion method, device, equipment and storage medium Active CN117611460B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311506854.2A CN117611460B (en) 2023-11-10 2023-11-10 Face image fusion method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN117611460A true CN117611460A (en) 2024-02-27
CN117611460B CN117611460B (en) 2025-02-14

Family

ID=89945339

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311506854.2A Active CN117611460B (en) 2023-11-10 2023-11-10 Face image fusion method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117611460B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110503601A (en) * 2019-08-28 2019-11-26 上海交通大学 Face generation image replacement method and system based on confrontation network
CN111046763A (en) * 2019-11-29 2020-04-21 广州久邦世纪科技有限公司 Portrait cartoon method and device
CN111598818A (en) * 2020-04-17 2020-08-28 北京百度网讯科技有限公司 Face fusion model training method, device and electronic device
WO2021068487A1 (en) * 2019-10-12 2021-04-15 深圳壹账通智能科技有限公司 Face recognition model construction method, apparatus, computer device, and storage medium
WO2023040679A1 (en) * 2021-09-16 2023-03-23 百果园技术(新加坡)有限公司 Fusion method and apparatus for facial images, and device and storage medium
CN116977464A (en) * 2023-07-10 2023-10-31 深圳伯德睿捷健康科技有限公司 Detection method, system, equipment and medium for skin sensitivity of human face

Also Published As

Publication number Publication date
CN117611460B (en) 2025-02-14

Similar Documents

Publication Publication Date Title
JP6330385B2 (en) Image processing apparatus, image processing method, and program
WO2022078041A1 (en) Occlusion detection model training method and facial image beautification method
CN107993209B (en) Image processing method, apparatus, computer-readable storage medium and electronic device
CN111667400B (en) Human face contour feature stylization generation method based on unsupervised learning
JP4708909B2 (en) Method, apparatus and program for detecting object of digital image
CN110969631B (en) Method and system for dyeing hair by refined photos
CN107730444A (en) Image processing method, device, readable storage medium storing program for executing and computer equipment
CN116648733A (en) Method and system for extracting color from facial image
CN114565508A (en) Virtual dressing method and device
CN113610720B (en) Video denoising method and device, computer readable medium and electronic device
WO2010043771A1 (en) Detecting and tracking objects in digital images
CN114049290A (en) Image processing method, device, device and storage medium
WO2023273247A1 (en) Face image processing method and device, computer readable storage medium, terminal
CN114049262A (en) Image processing method, image processing device and storage medium
CN107909542A (en) Image processing method, device, computer-readable recording medium and electronic equipment
CN114663951B (en) Low-illumination face detection method and device, computer equipment and storage medium
CN114155569B (en) Cosmetic progress detection method, device, equipment and storage medium
CN116778212A (en) An image processing method and device
CN116580445B (en) Large language model face feature analysis method, system and electronic equipment
CN118822863A (en) Infrared and visible light image fusion method combining overexposure prior and attention mechanism
CN117611460A (en) Face image fusion method, device, equipment and storage medium
CN111062862A (en) Color-based data enhancement method and system, computer device and storage medium
CN113239867B (en) A Face Recognition Method Based on Mask Area Adaptive Enhancement for Illumination Changes
CN113781330A (en) Image processing method, device and electronic system
Yuan et al. Full convolutional color constancy with adding pooling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant