
WO2016107259A1 - Image processing method and device therefor - Google Patents

Image processing method and device therefor

Info

Publication number
WO2016107259A1
Authority
WO
WIPO (PCT)
Prior art keywords
template
limb region
image
limb
state parameter
Application number
PCT/CN2015/093026
Other languages
French (fr)
Chinese (zh)
Inventor
李嵩
Original Assignee
努比亚技术有限公司 (Nubia Technology Co., Ltd.)
Application filed by 努比亚技术有限公司 (Nubia Technology Co., Ltd.)
Publication of WO2016107259A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/10 - Selection of transformation methods according to the characteristics of the input images
    • G06T7/00 - Image analysis
    • G06T7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/344 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models

Definitions

  • the present invention relates to the field of information technology, and in particular, to a method and an apparatus for image processing.
  • the invention provides a method and a device for image processing, so as to automatically beautify the figure of a person in a photo and overcome the defect that the existing image beautification methods described above are complicated to operate and inconvenient for the user to use.
  • a method of image processing is provided, the method being applied to an apparatus capable of performing image processing, the apparatus pre-storing at least one template map for at least one limb region of a human body, each template map including at least one set of body state parameters, and the method including the steps of decomposing the input human body photo by limb region, setting a target body state parameter for each limb region, selecting a matching template map for each region, and:
  • combining, one by one, the template map matching the target body state parameter in each of the at least one limb region with the image of that limb region obtained by decomposing the human body photo, and outputting the result.
  • an apparatus for image processing comprising:
  • a photo decomposition module configured to decompose the input human body photo according to the limb region to obtain an image of at least one limb region
  • a parameter receiving module configured to set a target body state parameter for each of the at least one limb region
  • a matching module configured to select, according to the target body state parameter of each limb region, a template map that matches the target body state parameter from a template map of the limb region;
  • the synthesizing module is configured to synthesize and output the template image matching the target body state parameter in the at least one limb region and the image of the limb region obtained by decomposing the human body photo.
  • the invention provides a method and a device for image processing that automatically synthesize a pre-stored template map with a human body photo input by the user, so that the user can produce the desired body-shape effect without cumbersome operations, satisfying the need for automatic body-shaping beautification of human body photos.
  • since the synthesized effect can be adjusted at any time according to the target body state parameters selected by the user, and the template library can also include obese or funny template maps, the user can synthesize healthy, attractive photos as well as obese or quirky ones, which adds interest.
  • FIG. 1 is a flow chart of a method of image processing in accordance with one embodiment of the present invention.
  • FIG. 2 is a flow chart of a method of image processing in accordance with another embodiment of the present invention.
  • FIG. 3 is a flow chart showing a detailed implementation of a method of image processing in accordance with an embodiment of the present invention.
  • FIG. 4 is a block diagram showing an exemplary structure of an apparatus for image processing according to an embodiment of the present invention.
  • FIG. 1 is a flow chart of a method of image processing according to an embodiment of the present invention.
  • a method of image processing according to an embodiment of the present invention which is applied to an apparatus capable of image processing, including but not limited to a mobile phone, a digital camera, a tablet, a computer, etc., is described below with reference to FIG.
  • the human body photo is divided into an arm region and a trunk region, and the method of decomposing can decompose the limb region in the human body photograph according to different characteristic values corresponding to different limb regions of the human body to obtain an image of at least one limb region.
  • the characteristic value may be a slope of the contour line
  • the decomposition may be performed by evaluating the range of the slope of the outer contour line of the human body region; for example, contour segments whose slopes fall within the same or a similar range may be assigned to the same limb region.
  • S200 Set a target posture parameter for each of the at least one limb region.
  • the specific implementation manner may be: receiving target posture parameters of different limb regions input by the user.
  • the template map with the highest degree of matching with the target body state parameter is obtained from the template map of the pre-stored corresponding limb region.
  • selecting, according to the target body state parameter of each limb region, the template map matching that parameter from the template maps of the limb region includes: obtaining the pixel values that meet a first preset condition, computing the matching degree between the limb region and each candidate template map from the counts of such pixels, and, if the matching degree is within a preset threshold range, selecting the corresponding template map as the template map matching the target body state parameter of the limb region.
  • in the matching formula, B_s is the value of a pixel in the human body photo that belongs to the limb region (pixels inside the limb region are recorded as 1 in the photo, others as 0), B_t is the value of the corresponding pixel in the template map (likewise 1 inside the limb region, 0 otherwise), the sum function counts the number of 1-valued points in the human body photo, and s is the computed matching degree.
  • the above synthesis may include: synthesis of contour lines, and/or fusion of brightness.
  • mapping correspondence before and after the synthesis is determined by a grid-based contour method.
  • the above fusion may be performed by obtaining a first average brightness value of the pixels of the limb region in the human body photo and a second average brightness value of the corresponding pixels in the template map, and then computing the brightness value of the fused image from a preset fusion ratio together with the first and second average brightness values.
  • V_dst = r*V_t + (1-r)*V_s;
  • where r is the fusion ratio determined according to the user's input, V_t is the brightness value of a pixel in the matched template map, V_s is the brightness value of the corresponding pixel in the human body photo, and V_dst is the fused brightness value.
  • the above target posture parameters include, but are not limited to, at least one of the following: the degree of obesity of the human body after deformation, the degree of muscle display, and the like.
  • in this embodiment, the target body state parameters input by the user are mapped to the body state parameters pre-stored in the system, the corresponding group of template maps is filtered out, the template map with the highest matching degree is then obtained from that group, and this template map is synthesized with the human body photo input by the user to achieve the desired effect.
  • the embodiment is further capable of re-synthesis.
  • the method further includes:
  • the above steps are repeated: according to the received target body state parameter, the template map with the highest matching degree is obtained from the pre-stored template maps of the corresponding limb region, and this template map is synthesized with the human body photo input by the user and output.
  • the instruction may be entered by displaying a prompt box that asks the user whether to re-synthesize or to end the operation; the specified input is then determined according to the user's selection.
  • since the template gallery can store robust and slim body types as well as obese and funny ones, the user has more choices when entering target body state parameters and can synthesize healthy, attractive photos as well as obese or quirky ones, which adds interest.
  • FIG. 2 is a flowchart of a method of image processing according to another embodiment of the present invention. As shown in FIG. 2, in addition to steps S100, S200, S300 and S400, the method further includes:
  • S210: identify the skin color region in the human body photo.
  • S220: perform a similarity transformation on the graphic of the skin color region, so that the skin color region is consistent in size and position with the limb region in the template map.
  • the order of the steps S210 and S220 is not limited to between the steps S300 and S400, and may be between S200 and S300.
  • the method further includes:
  • the above steps are repeated: according to the received target body state parameter, the template map with the highest matching degree is obtained from the pre-stored template maps of the corresponding limb region, and this template map is synthesized with the human body photo input by the user and output.
  • the similarity transformation includes, but is not limited to, rotating, translating and scaling the region map according to the feature points and the centroid.
  • the centroid is the average coordinate point of all pixel coordinates of the connected region; its coordinates are the arithmetic means of the pixel coordinates,
  • where n is the number of pixels in the region map.
  • in this embodiment, the photo input by the user is put through a similarity transformation before being matched against the template library, so that the limb regions in the photo are similar in size to those in the template maps and corresponding limb regions can be matched more accurately.
  • FIG. 3 is a flow chart showing a detailed implementation of a method of image processing in accordance with an embodiment of the present invention. As shown in Figure 3, its implementation details include the following steps:
  • S001 performs skin color area detection.
  • a confidence interval for skin color is obtained by collecting statistics on human skin color; the user input image is segmented using this interval, and pixels falling within the interval are regarded as skin color region points.
  • S002 performs a preliminary posture estimation on the skin color region, using the aspect ratio and centroid of the detected skin color region to determine the orientation of the human body in the image and, if necessary, rotating the image so that the body is upright.
  • Use face detection technology to determine whether the photo of the human body is front or back.
  • S003 roughly divides the regions using the geometric distribution of the human body, given that the front/back orientation and uprightness of the body are known.
  • S004 detects feature points in the estimated regions using edge, texture and color information. The limb regions treated in this embodiment include the arms, the torso and both legs.
  • the feature points detected for the arm region are the shoulder, armpit and wrist.
  • the feature points of the torso region are the point of maximum curvature at the neck and shoulder junction, the nipples and the navel; the feature points of the legs are the groin, knees and ankles.
  • S005 uses the feature detection result to judge the correctness of the posture estimation. If the feature points are detected correctly and the matching degree is high, the posture detection is considered correct; otherwise posture detection is performed again, looping through S002 to S005. If feature points cannot be detected correctly for any posture, synthesis is abandoned and the user is notified that processing failed.
  • S006, based on the detected feature points, segments the human body region into the part above the neck, the torso, the arms and the legs; the part above the neck, the hands and the feet are not processed.
  • S007 performs a similarity transformation on each segmented region, using the region centroid and the feature point positions to bring it into a form consistent with the template.
  • S009 collects the synthetic parameters selected by the user, including but not limited to: gender, fatness, muscle level, shadow depth.
  • the template gallery 041 is an image library that stores image templates of different limbs in different postures, classified by region (arm, torso, etc.), gender and body state parameters (degree of fatness, muscle strength, etc.).
  • the template image may be a grayscale image, the non-human body part gray level is 0, and the human body part gray level is represented as 1.
  • the 042 template screening unit finds a plurality of templates that meet the conditions according to the input limb region, gender, and target body state parameters.
  • S011 filters out from the template library a subset of templates that match the synthesis parameters selected by the user.
  • S012 uses the region map transformed in S007 to match, in turn, the template maps screened in S011; the matching formula is described in Embodiment 1 and is not repeated here.
  • S013 applies the inverse of the S007 similarity transformation to the matched template to obtain a template map corresponding to the position in the user input image.
  • S014 uses an image deformation algorithm to compute the transformation mapping between the limb region map and the template map.
  • grid-based and line-segment-pair-based warping algorithms can be used.
  • S015 transforms the user input graph by using the transform mapping relationship, and uses the interpolation method to draw the pixel color.
  • S016 converts the color of the limb region to the HSV color space, and fuses the Value (luminance) channel and the template grayscale image.
  • the fusion formula is described in Embodiment 1 and is not repeated here.
  • S017 fills the background vacancy part of the transformation using an image restoration algorithm.
  • the final generated image is displayed on the display device.
  • in this embodiment, the posture of the human body photo input by the user is detected before the matching step, and a preliminary judgment is made as to whether the photo can be fused with the photos in the template library; this increases the subsequent matching success rate and avoids wasting system resources on unnecessary processing.
  • FIG. 4 is a block diagram showing an exemplary structure of an apparatus for image processing according to an embodiment of the present invention; an apparatus for image processing according to an embodiment of the present invention is described below with reference to FIG. 4. As shown in the figure, the device includes:
  • the photo decomposition module 01 is configured to decompose the input human body photo according to the limb region to obtain an image of at least one limb region;
  • the parameter receiving module 02 is configured to set a target body state parameter for each of the at least one limb region
  • the matching module 03 is configured to select, according to the target body state parameter of each limb region, a template map that matches the target body state parameter from a template map of the limb region;
  • the synthesizing module 04 is configured to synthesize and output the template image matching the target body state parameter in at least one limb region and the image of the limb region obtained by decomposing the human body photo.
  • the photo decomposition module is specifically configured to decompose the limb region in the human body photograph according to different feature values corresponding to different limb regions of the human body to obtain an image of at least one limb region.
  • the matching module is configured to acquire, respectively, a pixel value that meets a first preset condition in a target body state parameter of each limb region; and calculate a first total number of pixel values that meet the first preset condition in the target body state parameter of the limb region. And calculating, according to the first total number, a second total number of pixel values that meet the first preset condition in the body state parameter of the preset template image, and calculating a matching degree between the target body state parameter of the limb region and the template image And if the matching degree of the target body state parameter of the limb region with the template image is within a preset threshold range, selecting a corresponding template image as a template map matching the target body state parameter of the limb region.
  • the matching module computes the matching degree using a formula in which B_s is the value of a pixel in the human body photo that belongs to the limb region, B_t is the value of the corresponding pixel in the template map, the sum function counts the number of 1-valued points in the human body photo, and s is the computed matching degree.
  • the synthesis module is configured to synthesize contour lines, with the mapping correspondence before and after synthesis determined by a grid-based contour method, and/or to perform brightness fusion.
  • the synthesis module is configured to obtain a first average brightness value of the pixels of the limb region in the human body photo and a second average brightness value of the pixels in the template map, and to compute the brightness value of the fused image from a preset fusion ratio together with the first and second average brightness values.
  • the synthesis module uses the formula:
  • V_dst = r*V_t + (1-r)*V_s
  • where r is the fusion ratio determined according to the user's input, V_t is the brightness value of a pixel in the matched template map, V_s is the brightness value of the corresponding pixel in the human body photo, and V_dst is the fused brightness value.
  • the apparatus for image processing further includes:
  • a skin color region identification module for identifying a skin color region in a human body photo
  • the similarity transformation module is configured to perform a similarity transformation on the graphic of the skin color region so that the skin color region is consistent in size and position with the limb region in the template map.
  • a prompting module configured to prompt the user to input the target body state parameter again when receiving the rematching instruction
  • the repeating module is configured to repeat the above steps after the user inputs the target body state parameter again: according to the received target body state parameter, obtain the template map with the highest matching degree from the pre-stored template maps of the corresponding limb region, synthesize it with the human body photo input by the user, and output the result.
  • the invention automatically synthesizes a pre-stored template map with the human body photo input by the user, so that the user can produce the desired body-shape effect without cumbersome operations, satisfying the need for automatic body-shaping beautification of human body photos. Since the synthesized effect can be adjusted at any time according to the target body state parameter selected by the user, and the template library can also include obese or funny template maps, the user can synthesize healthy, attractive photos as well as obese or quirky ones, which adds interest.
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division into units is only a division by logical function; in actual implementation there may be other divisions, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the couplings, direct couplings or communication connections between the components shown or discussed may be indirect couplings or communication connections through interfaces, devices or units, and may be electrical, mechanical or of other forms.
  • the units described above as separate components may or may not be physically separated, and the components displayed as the unit may or may not be physical units, that is, may be located in one place or distributed to multiple network units; Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in the embodiments of the present invention may be integrated into one processing module, or each unit may serve as a unit on its own, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of hardware plus software functional units.
  • a person of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be carried out by hardware under the control of program instructions; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments.
  • the storage medium includes any medium that can store program code, such as a removable storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Disclosed are an image processing method and a device therefor. The method comprises: decomposing an input human body photograph into limb areas to obtain an image of at least one limb area; setting a target posture parameter for each limb area within the at least one limb area; selecting, according to the target posture parameter of each limb area, a template drawing matching the target posture parameter from the template drawings of the limb area; and synthesizing, one by one, the template drawings matching the target posture parameters in the at least one limb area with the images of the limb areas obtained by decomposing the human body photograph, and then outputting the result.

Description

Image processing method and device thereof

Technical Field

The present invention relates to the field of information technology, and in particular to a method and a device for image processing.

Background

As the saying goes, everyone loves beauty. With the advance of image processing technology, more and more photo beautification software for facial retouching and skin smoothing has appeared. However, such software concentrates on processing the face and does not serve users who care about how their figure looks. Existing general-purpose image processing software such as Photoshop can achieve a body-shaping effect through the professional work of a skilled operator, but it demands a high level of expertise, the operations are complicated, and it is not convenient for ordinary users.

At present there is no method or application that can automatically beautify the figure of a person in a picture to make it look slim or robust.
Summary of the Invention

The present invention provides a method and a device for image processing, so as to automatically beautify the figure of a person in a photo and overcome the drawback that the existing image beautification approaches described above are complicated to operate and inconvenient to use.

The technical solution of the present invention to the above technical problem is as follows.

According to one aspect of the present invention, a method of image processing is provided. The method is applied to a device capable of image processing, the device pre-stores at least one template map for at least one limb region of a human body, and each template map includes at least one set of posture parameters. The method includes:

decomposing the input human body photo by limb region to obtain an image of at least one limb region;

setting a target posture parameter for each of the at least one limb region;

selecting, according to the target posture parameter of each limb region, a template map matching the target posture parameter from the template maps of that limb region; and

synthesizing, one by one, the template map matching the target posture parameter in each of the at least one limb region with the image of that limb region obtained by decomposing the human body photo, and outputting the result.
According to another aspect of the present invention, a device for image processing is provided. The device includes:

a photo decomposition module, configured to decompose the input human body photo by limb region to obtain an image of at least one limb region;

a parameter receiving module, configured to set a target posture parameter for each of the at least one limb region;

a matching module, configured to select, according to the target posture parameter of each limb region, a template map matching the target posture parameter from the template maps of that limb region; and

a synthesis module, configured to synthesize, one by one, the template map matching the target posture parameter in each of the at least one limb region with the image of that limb region obtained by decomposing the human body photo, and to output the result.
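For illustration only (this sketch is not part of the original disclosure), the four modules could be wired together roughly as follows; all class and function names are assumptions introduced for the example, and the parameter receiving module is represented here simply by the target_params argument.

```python
from dataclasses import dataclass
from typing import Dict, List

import numpy as np


@dataclass
class LimbRegion:
    name: str            # e.g. "arm", "torso", "leg"
    mask: np.ndarray     # binary mask of the region in the input photo
    image: np.ndarray    # cropped pixels of the region


class BodyReshapePipeline:
    """Illustrative wiring of the modules described above (names are assumed)."""

    def __init__(self, decomposer, matcher, synthesizer):
        self.decomposer = decomposer      # photo decomposition module
        self.matcher = matcher            # matching module (backed by the template library)
        self.synthesizer = synthesizer    # synthesis module

    def run(self, photo: np.ndarray, target_params: Dict[str, dict]) -> np.ndarray:
        # S100: decompose the input photo into limb regions
        regions: List[LimbRegion] = self.decomposer.decompose(photo)
        result = photo.copy()
        for region in regions:
            # S200/S300: pick the template matching the target posture parameters of this region
            params = target_params.get(region.name, {})
            template = self.matcher.best_template(region, params)
            # S400: synthesize the matched template with the region and write it back
            result = self.synthesizer.blend(result, region, template)
        return result
```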
The present invention provides a method and a device for image processing that automatically synthesize a pre-stored template map with a human body photo input by the user, so that the user can produce the desired body-shape effect without cumbersome operations, satisfying the need for automatic body-shaping beautification of human body photos. Since the synthesized effect can be adjusted at any time according to the target posture parameters selected by the user, and the template library can also include obese or funny template maps, the user can synthesize healthy, attractive photos as well as obese or quirky ones, which adds interest.

Brief Description of the Drawings

FIG. 1 is a flowchart of a method of image processing according to one embodiment of the present invention;

FIG. 2 is a flowchart of a method of image processing according to another embodiment of the present invention;

FIG. 3 is a flowchart of a detailed implementation of a method of image processing according to an embodiment of the present invention;

FIG. 4 is a block diagram of an exemplary structure of a device for image processing according to an embodiment of the present invention.

Detailed Description

The principles and features of the present invention are described below with reference to the accompanying drawings; the examples given are intended only to explain the present invention and not to limit its scope.

Embodiment 1
FIG. 1 is a flowchart of a method of image processing according to an embodiment of the present invention. The method is described below with reference to FIG. 1. It is applied to a device capable of image processing, including but not limited to a mobile phone, a digital camera, a tablet or a computer; the device pre-stores at least one template map for at least one limb region of a human body, and each template map includes at least one set of posture parameters. The method includes the following steps.

S100: decompose the input human body photo by limb region to obtain an image of at least one limb region.

The human body photo is divided, for example, into an arm region and a torso region. The decomposition may be performed according to the different feature values corresponding to the different limb regions of the human body, yielding an image of at least one limb region.

The feature value may be the slope of the contour line.

The decomposition may be carried out by evaluating the range of the slope of the outer contour line of the human body region; for example, contour segments whose slopes fall within the same or a similar range may be assigned to the same limb region.
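As a rough illustration of such slope-based decomposition (the patent gives no code, and the bin edges below are invented for the example), consecutive contour points can be grouped by the range their local slope falls into:

```python
import numpy as np


def split_contour_by_slope(contour: np.ndarray,
                           bins=(-np.inf, -2.0, -0.5, 0.5, 2.0, np.inf)):
    """Group consecutive contour points whose local slope falls in the same range.

    `contour` is an (N, 2) array of (x, y) points along the outer body contour;
    the bin edges are illustrative values, not taken from the patent.
    """
    dx = np.gradient(contour[:, 0].astype(float))
    dy = np.gradient(contour[:, 1].astype(float))
    # avoid division by zero on vertical segments
    slope = dy / np.where(np.abs(dx) < 1e-6, 1e-6, dx)
    labels = np.digitize(slope, bins)

    segments, start = [], 0
    for i in range(1, len(labels)):
        if labels[i] != labels[i - 1]:
            segments.append((start, i, labels[start]))   # [start, i) shares one slope range
            start = i
    segments.append((start, len(labels), labels[start]))
    return segments  # candidate limb-region boundaries along the contour
```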
S200: set a target posture parameter for each of the at least one limb region.

A specific implementation is to receive the target posture parameters for the different limb regions as input by the user.

S300: select, according to the target posture parameter of each limb region, a template map matching the target posture parameter from the template maps of that limb region.

That is, according to the target posture parameter, the template map with the highest degree of matching is obtained from the pre-stored template maps of the corresponding limb region.

The system determines which pre-stored set of posture parameters the user-input target posture parameter corresponds to, queries the corresponding group of template maps, and takes the template map with the highest matching degree from that group as the object of synthesis.

S400: synthesize, one by one, the template map matching the target posture parameter in each of the at least one limb region with the image of that limb region obtained by decomposing the human body photo, and output the result.

Selecting, according to the target posture parameter of each limb region, the template map matching the target posture parameter from the template maps of the limb region includes:

obtaining, one region at a time, the pixel values in the target posture parameter of each limb region that meet a first preset condition;

calculating a first total of the pixel values in the target posture parameter of the limb region that meet the first preset condition;

calculating, based on the first total and a second total of the pixel values in the posture parameter of the preset template map that meet the first preset condition, the matching degree between the target posture parameter of the limb region and the template map; and

if the matching degree between the target posture parameter of the limb region and the template map is within a preset threshold range, selecting the corresponding template map as the template map matching the target posture parameter of the limb region.

The formula for calculating the above matching degree is given as image PCTCN2015093026-appb-000001 in the original publication.

In that formula, B_s is the value of a pixel in the human body photo that belongs to the limb region (pixels inside the limb region are recorded as 1 in the photo, others as 0), B_t is the value of the corresponding pixel in the template map that belongs to the limb region (likewise 1 inside the limb region, 0 otherwise), the sum function counts the number of 1-valued points in the human body photo, and s is the computed matching degree.
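Because the matching formula itself only appears as an image, the following sketch assumes the natural reading of the text above: count the 1-valued pixels shared by the photo mask B_s and the template mask B_t and normalize by the number of 1-valued pixels in the photo. The normalization and the threshold value are assumptions.

```python
import numpy as np


def matching_degree(photo_mask: np.ndarray, template_mask: np.ndarray) -> float:
    """Matching degree s between a limb-region mask B_s (photo) and B_t (template).

    Both masks are binary arrays of the same shape: 1 inside the limb region,
    0 elsewhere.  Normalizing by sum(B_s) is an assumption; the original formula
    is only shown as an image in the publication.
    """
    b_s = photo_mask.astype(bool)
    b_t = template_mask.astype(bool)
    overlap = np.logical_and(b_s, b_t).sum()
    total = b_s.sum()
    return float(overlap) / float(total) if total else 0.0


def best_template(photo_mask, templates, threshold=0.8):
    """Pick the template whose mask matches best, if it clears a preset threshold."""
    if not templates:
        return None
    scored = [(matching_degree(photo_mask, t), t) for t in templates]
    s, best = max(scored, key=lambda p: p[0])
    return best if s >= threshold else None
```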
Preferably, the above synthesis may include synthesis of contour lines and/or fusion of brightness.

Further, the mapping correspondence before and after synthesis is determined by a grid-based contour (isoline) method.

The above fusion may be performed as follows: obtain a first average brightness value of the pixels of the limb region in the human body photo and a second average brightness value of the pixels in the template map, and then compute the brightness value of the fused image from a preset fusion ratio together with the first and second average brightness values.

The following formula can be used:

V_dst = r*V_t + (1-r)*V_s;

In this formula, r is the fusion ratio determined according to the user's input, V_t is the brightness value of a pixel in the matched template map, V_s is the brightness value of the corresponding pixel in the human body photo, and V_dst is the fused brightness value.
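A minimal sketch of this brightness fusion applied to the HSV Value channel; restricting the blend to the limb-region mask is an assumption, since the patent only states the formula.

```python
import numpy as np


def fuse_value_channel(photo_v: np.ndarray, template_gray: np.ndarray,
                       region_mask: np.ndarray, r: float) -> np.ndarray:
    """Blend the photo's HSV Value channel with the template grayscale map.

    Implements V_dst = r * V_t + (1 - r) * V_s inside the limb region only;
    r is the user-chosen fusion ratio in [0, 1], channels are 8-bit.
    """
    v_s = photo_v.astype(np.float32)
    v_t = template_gray.astype(np.float32)
    v_dst = r * v_t + (1.0 - r) * v_s
    out = v_s.copy()
    out[region_mask > 0] = v_dst[region_mask > 0]
    return np.clip(out, 0, 255).astype(np.uint8)
```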
The above target posture parameters include, but are not limited to, at least one of the following: the degree of fatness of the deformed body, the degree of muscle definition, and so on.

In this embodiment, the target posture parameters input by the user are mapped to the posture parameters pre-stored in the system, the corresponding group of template maps is filtered out, the template map with the highest matching degree is obtained from that group, and this template map is synthesized with the human body photo input by the user to achieve the desired effect.

Preferably, this embodiment also supports re-synthesis. Specifically, the method further includes:

when a re-matching instruction is received, prompting the user to input the target posture parameter again; and

after receiving the target posture parameter input by the user again, repeating the above steps: according to the received target posture parameter, obtaining the template map with the highest matching degree from the pre-stored template maps of the corresponding limb region, synthesizing it with the human body photo input by the user, and outputting the result.

The instruction may be entered by displaying a prompt box that asks the user whether to re-synthesize or to end the operation; the specified input is then determined according to the user's selection.

Since the template gallery can store robust and slim body types as well as obese and funny ones, the user has more choices when entering target posture parameters and can synthesize healthy, attractive photos as well as obese or quirky ones, which adds interest.
Embodiment 2

FIG. 2 is a flowchart of a method of image processing according to another embodiment of the present invention. As shown in FIG. 2, in addition to steps S100, S200, S300 and S400 described above, the method further includes:

S210: identify the skin color region in the human body photo.

S220: perform a similarity transformation on the graphic of the skin color region, so that the skin color region is consistent in size and position with the limb region in the template map.

The order of steps S210 and S220 is not limited to between steps S300 and S400; they may also be performed between S200 and S300.

Optionally, the method further includes:

when a re-matching instruction is received, prompting the user to input the target posture parameter again; and

after receiving the target posture parameter input by the user again, repeating the above steps: according to the received target posture parameter, obtaining the template map with the highest matching degree from the pre-stored template maps of the corresponding limb region, synthesizing it with the human body photo input by the user, and outputting the result.

The above similarity transformation includes, but is not limited to, rotating, translating and scaling the region map according to the feature points and the centroid.

The centroid is the average coordinate point of all pixel coordinates of the connected region. Its coordinate formula is given as image PCTCN2015093026-appb-000002 in the original publication and amounts to the arithmetic mean of the pixel coordinates,

where n is the number of pixels in the region map.
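A small sketch of the centroid computation and of a similarity transformation built around it; how the scale and rotation angle are derived from matched feature points is not specified in the patent, so they are passed in explicitly here.

```python
import numpy as np


def centroid(mask: np.ndarray) -> np.ndarray:
    """Mean (x, y) coordinate of all pixels belonging to the connected region."""
    ys, xs = np.nonzero(mask)
    n = xs.size
    return np.array([xs.sum() / n, ys.sum() / n])


def similarity_transform(points: np.ndarray, scale: float, angle_rad: float,
                         src_centroid: np.ndarray, dst_centroid: np.ndarray) -> np.ndarray:
    """Rotate, scale and translate region points so their centroid lands on the template's.

    `points` is an (N, 2) array of (x, y) coordinates.  In practice scale and
    angle would come from matched feature points (e.g. shoulder and wrist).
    """
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rot = np.array([[c, -s], [s, c]])
    centered = points - src_centroid
    return centered @ rot.T * scale + dst_centroid
```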
In this embodiment, the photo input by the user is put through a similarity transformation before being matched against the template library, so that the limb regions in the photo are similar in size to the limb regions in the template maps and corresponding limb regions can be matched more accurately.

Embodiment 3

FIG. 3 is a flowchart of a detailed implementation of a method of image processing according to an embodiment of the present invention. As shown in FIG. 3, the implementation includes the following steps.

S001: perform skin color region detection. A confidence interval for skin color is obtained by collecting statistics on human skin color; the user input image is segmented using this interval, and pixels falling within the interval are regarded as skin color region points.
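A minimal sketch of such interval-based skin segmentation, assuming the confidence interval is expressed as Cr/Cb bounds in the YCrCb color space; the numeric bounds are commonly used illustrative values, not taken from the patent.

```python
import numpy as np


def skin_mask(image_ycrcb: np.ndarray,
              cr_range=(133, 173), cb_range=(77, 127)) -> np.ndarray:
    """Mark pixels whose Cr/Cb values fall inside the skin-color confidence interval.

    `image_ycrcb` is an H x W x 3 array in Y, Cr, Cb channel order.  The bounds
    are illustrative; the patent only says the interval comes from statistics
    collected on human skin color.
    """
    cr = image_ycrcb[..., 1]
    cb = image_ycrcb[..., 2]
    mask = ((cr >= cr_range[0]) & (cr <= cr_range[1]) &
            (cb >= cb_range[0]) & (cb <= cb_range[1]))
    return mask.astype(np.uint8)
```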
S002: perform a preliminary posture estimation on the skin color region. The aspect ratio and centroid of the detected skin color region are used to determine the orientation of the human body in the image, and if necessary the image is rotated so that the body is upright. Face detection is used to determine whether the human body photo is a front or a back view.

S003: given that the front/back orientation and uprightness of the body are known, roughly divide the regions using the geometric distribution of the human body.

S004: detect feature points in the estimated regions using edge, texture and color information. The limb regions treated in this embodiment include the arms, the torso and both legs. The feature points detected for the arm region are the shoulder, armpit and wrist; the feature points of the torso region are the point of maximum curvature at the neck and shoulder junction, the nipples and the navel; the feature points of the legs are the groin, knees and ankles.

S005: use the feature detection result to judge the correctness of the posture estimation. If the feature points are detected correctly and the matching degree is high, the posture detection is considered correct; otherwise posture detection is performed again, looping through S002 to S005. If feature points cannot be detected correctly for any posture, synthesis is abandoned and the user is notified that processing failed.

S006: on the basis of the detected feature points, segment the human body region into the part above the neck, the torso, the arms and the legs; the part above the neck, the hands and the feet are not processed.

S007: perform a similarity transformation on each segmented region, using the region centroid and the feature point positions to bring it into a form consistent with the template.

S008: temporarily store the transformed region maps.

S009: collect the synthesis parameters selected by the user, including but not limited to gender, degree of fatness, degree of muscle definition and shadow depth.

S010: template gallery. The template gallery 041 is an image library that stores image templates of different limbs in different postures, classified by region (arm, torso, etc.), gender and posture parameters (degree of fatness, muscle strength, etc.). A template map may be a grayscale map in which non-body pixels have gray level 0 and body pixels are represented as 1. The template screening unit 042 finds the templates that meet the conditions according to the input limb region, gender and target posture parameters.

S011: according to the synthesis parameters selected by the user, filter out from the template library a subset of templates that match those parameters.
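An illustrative sketch of the template records and of the S011 screening step; the field names and the tolerance on the numeric parameters are assumptions.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Template:
    region: str        # "arm", "torso", "leg", ...
    gender: str        # "male" / "female"
    fatness: int       # e.g. 1 (slim) .. 5 (obese)
    muscle: int        # e.g. 0 (none) .. 5 (very defined)
    gray_map: object   # grayscale mask: 1 for body pixels, 0 elsewhere


def screen_templates(library: List[Template], region: str, gender: str,
                     fatness: int, muscle: int, tol: int = 1) -> List[Template]:
    """Step S011: keep only templates whose stored parameters match the user's choices.

    The +/- tol tolerance on the numeric parameters is an assumption; the patent
    only says a subset matching the parameters is selected.
    """
    return [t for t in library
            if t.region == region and t.gender == gender
            and abs(t.fatness - fatness) <= tol
            and abs(t.muscle - muscle) <= tol]
```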
S012: use the region map transformed in S007 to match, in turn, the template maps screened in S011; the matching formula has been given in Embodiment 1 and is not repeated here.

S013: apply the inverse of the S007 similarity transformation to the matched template to obtain a template map corresponding to the position in the user input image.

S014: use an image deformation algorithm to compute the transformation mapping between the limb region map and the template map; grid-based and line-segment-pair-based warping algorithms can be used.

S015: transform the user input image using the transformation mapping, and draw the pixel colors using interpolation.
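A sketch of the interpolation step, assuming the warp from S014 is available as per-pixel source coordinates; bilinear sampling is one common choice, not something the patent prescribes.

```python
import numpy as np


def bilinear_sample(image: np.ndarray, map_x: np.ndarray, map_y: np.ndarray) -> np.ndarray:
    """Resample a color image (H x W x 3) at fractional source coordinates.

    `map_x`/`map_y` give, for every output pixel, where to read in the source
    image; they would come from the grid- or line-segment-based warp of S014.
    """
    h, w = image.shape[:2]
    x0 = np.clip(np.floor(map_x).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(map_y).astype(int), 0, h - 2)
    fx = np.clip(map_x - x0, 0.0, 1.0)[..., None]
    fy = np.clip(map_y - y0, 0.0, 1.0)[..., None]

    top = image[y0, x0] * (1 - fx) + image[y0, x0 + 1] * fx
    bottom = image[y0 + 1, x0] * (1 - fx) + image[y0 + 1, x0 + 1] * fx
    return (top * (1 - fy) + bottom * fy).astype(image.dtype)
```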
S016: convert the colors of the limb region to the HSV color space, and fuse the Value (brightness) channel with the template grayscale map; the fusion formula has been given in Embodiment 1 and is not repeated here.

S017: fill the background holes caused by the transformation using an image restoration (inpainting) algorithm. The final generated image is displayed on the display device.
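The patent does not name a specific restoration algorithm; as one possibility, OpenCV's inpainting can fill the background pixels left uncovered by the warp.

```python
import cv2
import numpy as np


def fill_background_holes(image_bgr: np.ndarray, hole_mask: np.ndarray) -> np.ndarray:
    """Fill pixels uncovered by the warp (hole_mask > 0) using image inpainting.

    Telea inpainting is used here as one possible repair algorithm; the patent
    only says "an image restoration algorithm" is used.
    """
    mask = (hole_mask > 0).astype(np.uint8) * 255
    return cv2.inpaint(image_bgr, mask, 3, cv2.INPAINT_TELEA)
```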
In this embodiment, the posture of the human body photo input by the user is detected before the matching step, and a preliminary judgment is made as to whether the photo can be fused with the photos in the template library; this increases the subsequent matching success rate and avoids wasting system resources on unnecessary processing.

Embodiment 4

FIG. 4 is a block diagram of an exemplary structure of a device for image processing according to an embodiment of the present invention. The device is described below with reference to FIG. 4. As shown in the figure, the device includes:

a photo decomposition module 01, configured to decompose the input human body photo by limb region to obtain an image of at least one limb region;

a parameter receiving module 02, configured to set a target posture parameter for each of the at least one limb region;

a matching module 03, configured to select, according to the target posture parameter of each limb region, a template map matching the target posture parameter from the template maps of that limb region; and

a synthesis module 04, configured to synthesize, one by one, the template map matching the target posture parameter in each of the at least one limb region with the image of that limb region obtained by decomposing the human body photo, and to output the result.
The photo decomposition module is specifically configured to decompose the limb regions in the human body photo according to the different feature values corresponding to the different limb regions of the human body, obtaining an image of at least one limb region.

The matching module is configured to obtain, one region at a time, the pixel values in the target posture parameter of each limb region that meet a first preset condition; to calculate a first total of the pixel values in the target posture parameter of the limb region that meet the first preset condition; to calculate, based on the first total and a second total of the pixel values in the posture parameter of the preset template map that meet the first preset condition, the matching degree between the target posture parameter of the limb region and the template map; and, if that matching degree is within a preset threshold range, to select the corresponding template map as the template map matching the target posture parameter of the limb region.

The matching module calculates the matching degree using a formula given as image PCTCN2015093026-appb-000003 in the original publication. In that formula, B_s is the value of a pixel in the human body photo that belongs to the limb region, B_t is the value of the corresponding pixel in the template map, the sum function counts the number of 1-valued points in the human body photo, and s is the computed matching degree.

The synthesis module is configured to synthesize contour lines, with the mapping correspondence before and after synthesis determined by a grid-based contour method, and/or to perform brightness fusion.

The synthesis module is configured to obtain a first average brightness value of the pixels of the limb region in the human body photo and a second average brightness value of the pixels in the template map, and to compute the brightness value of the fused image from a preset fusion ratio together with the first and second average brightness values. The synthesis module uses the formula:

V_dst = r*V_t + (1-r)*V_s

to calculate the brightness of each pixel after fusion, so as to highlight the contours of the muscles and make the synthesized photo look more realistic. In this formula, r is the fusion ratio determined according to the user's input, V_t is the brightness value of a pixel in the matched template map, V_s is the brightness value of the corresponding pixel in the human body photo, and V_dst is the fused brightness value.
Optionally, the device for image processing further includes:

a skin color region identification module, configured to identify the skin color region in the human body photo;

a similarity transformation module, configured to perform a similarity transformation on the graphic of the skin color region so that the skin color region is consistent in size and position with the limb region in the template map;

a prompting module, configured to prompt the user to input the target posture parameter again when a re-matching instruction is received; and

a repeating module, configured to repeat the above steps after the user inputs the target posture parameter again: according to the received target posture parameter, obtain the template map with the highest matching degree from the pre-stored template maps of the corresponding limb region, synthesize it with the human body photo input by the user, and output the result.

The present invention automatically synthesizes a pre-stored template map with the human body photo input by the user, so that the user can produce the desired body-shape effect without cumbersome operations, satisfying the need for automatic body-shaping beautification of human body photos. Since the synthesized effect can be adjusted at any time according to the target posture parameter selected by the user, and the template library can also include obese or funny template maps, the user can synthesize healthy, attractive photos as well as obese or quirky ones, which adds interest.
In the several embodiments provided by this application, it should be understood that the disclosed device and method may be implemented in other ways. The device embodiments described above are merely illustrative. For example, the division into units is only a division by logical function; in actual implementation there may be other divisions, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the couplings, direct couplings or communication connections between the components shown or discussed may be indirect couplings or communication connections through interfaces, devices or units, and may be electrical, mechanical or of other forms.

The units described above as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

In addition, the functional units in the embodiments of the present invention may all be integrated into one processing module, or each unit may serve as a unit on its own, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of hardware plus software functional units.

A person of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be carried out by hardware under the control of program instructions; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The storage medium includes any medium that can store program code, such as a removable storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.

The above are only specific embodiments of the present invention, but the scope of protection of the present invention is not limited thereto. Any variation or replacement that a person skilled in the art could readily conceive of within the technical scope disclosed by the present invention shall fall within the scope of protection of the present invention. Therefore, the scope of protection of the present invention shall be subject to the scope of protection of the claims.

Claims (20)

  1. A method of image processing, applied to an apparatus capable of image processing, wherein the apparatus pre-stores at least one template map for at least one limb region of a human body, and each template map includes at least one set of body state parameters, the method comprising:
    decomposing an input human body photograph by limb region to obtain an image of at least one limb region;
    setting a target body state parameter for each limb region of the at least one limb region;
    selecting, according to the target body state parameter of each limb region, a template map matching the target body state parameter from the template maps of that limb region;
    combining, one by one, the template map matching the target body state parameter for the at least one limb region with the image of that limb region obtained by decomposing the human body photograph, and outputting the result.
  2. The image processing method according to claim 1, wherein decomposing the input human body photograph by limb region to obtain an image of at least one limb region comprises:
    decomposing the limb regions in the human body photograph according to the different feature values corresponding to different limb regions of the human body, to obtain an image of at least one limb region.
  3. The image processing method according to claim 1, wherein, before combining the template map with the highest matching degree for each of the at least one limb region with the image of that limb region obtained by decomposing the human body photograph and outputting the result, the method further comprises:
    identifying a skin color region in the human body photograph;
    applying a similarity transformation to the figure of the skin color region, so that the skin color region matches the size and position of the limb region in the template map.
  4. The image processing method according to claim 1, wherein selecting, according to the target body state parameter of each limb region, a template map matching the target body state parameter from the template maps of the limb region comprises:
    obtaining, one by one, the pixel values in the target body state parameter of each limb region that satisfy a first preset condition;
    calculating a first total number of pixel values in the target body state parameter of the limb region that satisfy the first preset condition;
    calculating the matching degree between the target body state parameter of the limb region and the template map, based on the first total number and a second total number of pixel values in the body state parameter of a preset template map that satisfy the first preset condition;
    if the matching degree between the target body state parameter of the limb region and the template map is within a preset threshold range, selecting the corresponding template map as the template map matching the target body state parameter of the limb region.
  5. The image processing method according to claim 4, wherein calculating the matching degree between the target body state parameter of the limb region and the template map, based on the first total number and the second total number of pixel values in the body state parameter of the preset template map that satisfy the first preset condition, is performed according to the following formula:
    [Formula image PCTCN2015093026-appb-100001]
    wherein Bs is the value 1 for a pixel of the human body photograph that belongs to the limb region, Bt is the value 1 for a pixel of the template map that belongs to the limb region, the sum function computes the total number of 1-valued points in the human body photograph, and s is the calculated matching degree.
  6. The image processing method according to claim 1, wherein the combining comprises:
    combining contour lines, wherein the mapping correspondence before and after combination is determined by a grid-based isoline (contour) method.
  7. The image processing method according to claim 1 or 6, wherein the combining comprises fusion of luminance.
  8. The image processing method according to claim 7, wherein the fusion of luminance comprises: obtaining a first average luminance value of the pixels of the limb region part in the human body photograph, and obtaining a second average luminance value of the pixels in the template map; and calculating the luminance value of the fused image based on a preset fusion ratio, the first average luminance value, and the second average luminance value.
  9. The image processing method according to claim 8, wherein the calculation formula is:
    Vdst = r*Vt + (1-r)*Vs;
    wherein r is a fusion ratio determined according to the user's input, Vt is the luminance value at a given pixel in the matched template map, Vs is the luminance value of the human body photograph at the corresponding pixel, and Vdst is the fused luminance value.
  10. The image processing method according to claim 1, wherein the method further comprises:
    upon receiving a re-matching instruction, prompting the user to input the target body state parameter again;
    after receiving the target body state parameter input by the user again, repeating the above steps: obtaining, according to the received target body state parameter, the template map with the highest matching degree from the pre-stored template maps of the corresponding limb region, and combining that template map with the human body photograph input by the user and outputting the result.
  11. An apparatus for image processing, the apparatus comprising:
    a photograph decomposition module, configured to decompose an input human body photograph by limb region to obtain an image of at least one limb region;
    a parameter receiving module, configured to set a target body state parameter for each limb region of the at least one limb region;
    a matching module, configured to select, according to the target body state parameter of each limb region, a template map matching the target body state parameter from the template maps of that limb region;
    a combining module, configured to combine, one by one, the template map matching the target body state parameter for the at least one limb region with the image of that limb region obtained by decomposing the human body photograph, and output the result.
  12. The apparatus according to claim 11, wherein the photograph decomposition module is configured to decompose the limb regions in the human body photograph according to the different feature values corresponding to different limb regions of the human body, to obtain an image of at least one limb region.
  13. The apparatus according to claim 11, wherein the apparatus further comprises:
    a skin color region identification module, configured to identify a skin color region in the human body photograph;
    a similarity transformation module, configured to apply a similarity transformation to the figure of the skin color region, so that the skin color region matches the size and position of the limb region in the template map.
  14. The apparatus according to claim 11, wherein
    the matching module is configured to: obtain, one by one, the pixel values in the target body state parameter of each limb region that satisfy a first preset condition; calculate a first total number of pixel values in the target body state parameter of the limb region that satisfy the first preset condition; calculate, based on the first total number and a second total number of pixel values in the body state parameter of a preset template map that satisfy the first preset condition, the matching degree between the target body state parameter of the limb region and the template map; and, if the matching degree between the target body state parameter of the limb region and the template map is within a preset threshold range, select the corresponding template map as the template map matching the target body state parameter of the limb region.
  15. The apparatus according to claim 14, wherein
    the matching module is configured to perform the calculation according to the following formula:
    [Formula image PCTCN2015093026-appb-100002]
    wherein Bs is the value 1 for a pixel of the human body photograph that belongs to the limb region, Bt is the value 1 for a pixel of the template map that belongs to the limb region, the sum function computes the total number of 1-valued points in the human body photograph, and s is the calculated matching degree.
  16. The apparatus according to claim 11, wherein the combining module is configured to combine contour lines, and the mapping correspondence before and after combination is determined by a grid-based isoline (contour) method.
  17. The apparatus according to claim 11 or 16, wherein the combining module is configured to perform fusion of luminance.
  18. The apparatus according to claim 17, wherein the combining module is configured to obtain a first average luminance value of the pixels of the limb region part in the human body photograph, obtain a second average luminance value of the pixels in the template map, and calculate the luminance value of the fused image based on a preset fusion ratio, the first average luminance value, and the second average luminance value.
  19. The apparatus according to claim 18, wherein the combining module is configured to perform the calculation using the following formula:
    Vdst = r*Vt + (1-r)*Vs;
    wherein r is a fusion ratio determined according to the user's input, Vt is the luminance value at a given pixel in the matched template map, Vs is the luminance value of the human body photograph at the corresponding pixel, and Vdst is the fused luminance value.
  20. The apparatus according to claim 11, wherein the apparatus further comprises:
    a prompting module, configured to prompt the user to input the target body state parameter again upon receiving a re-matching instruction;
    a repeating module, configured to, after receiving the target body state parameter input by the user again, repeat the above steps: obtain, according to the received target body state parameter, the template map with the highest matching degree from the pre-stored template maps of the corresponding limb region, and combine that template map with the human body photograph input by the user and output the result.
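
The claims above describe the processing steps only in functional terms; the short Python sketches that follow are illustrative readings of individual claims, not implementations disclosed by the patent. First, claims 3 and 13 call for a similarity transformation that brings the detected skin color region to the size and position of the limb region in the template map. The sketch below assumes the transform is found by aligning bounding boxes with a uniform scale; the claims do not specify how the transform is obtained.

```python
# Sketch of the similarity transformation of claims 3 / 13 (assumed concretisation:
# bounding-box alignment with a uniform scale).
import numpy as np
import cv2

def align_skin_region(photo, skin_mask, template_mask):
    """Scale and translate `photo` so that the skin-colour region (skin_mask)
    lands on the limb region of the template (template_mask).
    photo: H x W x 3 image; skin_mask, template_mask: binary H x W arrays."""
    ys, xs = np.nonzero(skin_mask)
    yt, xt = np.nonzero(template_mask)
    # A uniform scale from the bounding-box heights keeps the transform a
    # similarity: scale plus translation, no shear.
    scale = (yt.max() - yt.min() + 1) / float(ys.max() - ys.min() + 1)
    tx = xt.min() - scale * xs.min()
    ty = yt.min() - scale * ys.min()
    m = np.float32([[scale, 0.0, tx], [0.0, scale, ty]])
    h, w = template_mask.shape
    return cv2.warpAffine(photo, m, (w, h))
```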
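Claims 4-5 (and 14-15) compute a matching degree s from the 1-valued limb-region pixels of the photograph (Bs) and of the template map (Bt). The formula itself is referenced only as an image in this text, so the expression below, the overlap of the two binary masks normalised by the number of 1-valued points in the photograph, is an assumed reading of the variable definitions, and the 0.8 threshold is merely a placeholder for the "preset threshold range".

```python
# Sketch of the matching-degree computation of claims 4-5 / 14-15 under the
# assumptions stated above.
import numpy as np

def matching_degree(photo_mask, template_mask):
    """Matching degree s between two binary limb-region masks of equal shape:
    1 where a pixel belongs to the limb region, 0 elsewhere."""
    b_s = photo_mask.astype(bool)
    b_t = template_mask.astype(bool)
    total = int(b_s.sum())  # number of 1-valued points in the photograph
    if total == 0:
        return 0.0
    return float(np.logical_and(b_s, b_t).sum()) / total

def select_template(photo_mask, template_masks, threshold=0.8):
    """Return the key of the first template whose matching degree lies within
    the preset threshold range (here taken as >= threshold), as in claims 4 / 14."""
    for name, t_mask in template_masks.items():
        if matching_degree(photo_mask, t_mask) >= threshold:
            return name
    return None
```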
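Claims 6 and 16 determine the pre/post-synthesis mapping correspondence from contour lines obtained with a grid-based isoline method. Marching squares, as implemented by skimage.measure.find_contours, is one such grid-based method; using it here, and pairing contour points by their normalised position along the curve, is an illustrative choice rather than the method fixed by the claims.

```python
# Sketch of a contour correspondence built with a grid-based isoline method
# (marching squares), as one possible reading of claims 6 / 16.
import numpy as np
from skimage import measure

def contour_correspondence(mask_before, mask_after, n_points=100):
    """Sample n_points along the longest 0.5-level contour of each binary mask
    and pair them by their normalised position along the curve."""
    def sampled_contour(mask):
        contours = measure.find_contours(mask.astype(float), 0.5)
        longest = max(contours, key=len)
        idx = np.linspace(0, len(longest) - 1, n_points).astype(int)
        return longest[idx]  # (row, col) points along the contour
    return list(zip(sampled_contour(mask_before), sampled_contour(mask_after)))
```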
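Claims 9 and 19 state the luminance fusion explicitly as Vdst = r*Vt + (1-r)*Vs. The sketch below applies it per pixel; the claims speak only of luminance values, so reading them as, for example, the V channel of an HSV image, and the mask-based average used for the first and second average luminance values of claims 8 / 18, are assumptions.

```python
# Sketch of the per-pixel luminance fusion of claims 8-9 / 18-19.
import numpy as np

def fuse_luminance(v_photo, v_template, r):
    """Per-pixel fusion Vdst = r*Vt + (1-r)*Vs with a user-chosen ratio r in [0, 1].
    v_photo (Vs) and v_template (Vt) are arrays of per-pixel luminance."""
    if not 0.0 <= r <= 1.0:
        raise ValueError("fusion ratio r must lie in [0, 1]")
    v_s = np.asarray(v_photo, dtype=float)
    v_t = np.asarray(v_template, dtype=float)
    return r * v_t + (1.0 - r) * v_s

def mean_luminance(v, mask):
    """Average luminance over a limb-region mask (the first / second average
    luminance values mentioned in claims 8 and 18)."""
    return float(np.asarray(v, dtype=float)[mask.astype(bool)].mean())
```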
PCT/CN2015/093026 2014-12-31 2015-10-28 Image processing method and device therefor WO2016107259A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410856265.1 2014-12-31
CN201410856265.1A CN104537608A (en) 2014-12-31 2014-12-31 Image processing method and device

Publications (1)

Publication Number Publication Date
WO2016107259A1 true WO2016107259A1 (en) 2016-07-07

Family

ID=52853127

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/093026 WO2016107259A1 (en) 2014-12-31 2015-10-28 Image processing method and device therefor

Country Status (2)

Country Link
CN (1) CN104537608A (en)
WO (1) WO2016107259A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108875451A (en) * 2017-05-10 2018-11-23 腾讯科技(深圳)有限公司 A kind of method, apparatus, storage medium and program product positioning image
WO2022174554A1 (en) * 2021-02-18 2022-08-25 深圳市慧鲤科技有限公司 Image display method and apparatus, device, storage medium, program and program product

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104537608A (en) * 2014-12-31 2015-04-22 深圳市中兴移动通信有限公司 Image processing method and device
CN105049959B (en) * 2015-07-08 2019-09-06 广州酷狗计算机科技有限公司 Method for broadcasting multimedia file and device
CN106558039B (en) * 2015-09-23 2019-11-19 腾讯科技(深圳)有限公司 A kind of facial image processing method and device
CN106558043B (en) * 2015-09-29 2019-07-23 阿里巴巴集团控股有限公司 A kind of method and apparatus of determining fusion coefficients
CN106846240B (en) * 2015-12-03 2021-05-07 斑马智行网络(香港)有限公司 A method, device and device for adjusting fusion material
CN107507158A (en) * 2016-06-14 2017-12-22 中兴通讯股份有限公司 A kind of image processing method and device
CN106548133B (en) * 2016-10-17 2019-04-23 歌尔科技有限公司 Template matching method and device and gesture recognition method and device
TWI724092B (en) * 2017-01-19 2021-04-11 香港商斑馬智行網絡(香港)有限公司 Method and device for determining fusion coefficient
TWI731923B (en) * 2017-01-23 2021-07-01 香港商斑馬智行網絡(香港)有限公司 Method, device and equipment for adjusting fusion materials
CN107169262B (en) * 2017-03-31 2021-11-23 百度在线网络技术(北京)有限公司 Method, device, equipment and computer storage medium for recommending body shaping scheme
CN110059522B (en) 2018-01-19 2021-06-25 北京市商汤科技开发有限公司 Human body contour key point detection method, image processing method, device and equipment
CN108830783B (en) * 2018-05-31 2021-07-02 北京市商汤科技开发有限公司 Image processing method and device and computer storage medium
CN108765274A (en) * 2018-05-31 2018-11-06 北京市商汤科技开发有限公司 A kind of image processing method, device and computer storage media
CN108830784A (en) * 2018-05-31 2018-11-16 北京市商汤科技开发有限公司 A kind of image processing method, device and computer storage medium
CN110766607A (en) * 2018-07-25 2020-02-07 北京市商汤科技开发有限公司 An image processing method, device and computer storage medium
CN109146772B (en) * 2018-08-03 2019-08-23 深圳市飘飘宝贝有限公司 A kind of image processing method, terminal and computer readable storage medium
CN109166082A (en) * 2018-08-22 2019-01-08 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN109035177B (en) * 2018-08-27 2020-11-20 三星电子(中国)研发中心 A photo processing method and device
CN109173263B (en) * 2018-08-31 2021-08-24 腾讯科技(深圳)有限公司 Image data processing method and device
CN109447896B (en) * 2018-09-21 2023-07-25 维沃移动通信(杭州)有限公司 Image processing method and terminal equipment
CN109461124A (en) * 2018-09-21 2019-03-12 维沃移动通信(杭州)有限公司 A kind of image processing method and terminal device
CN110705448B (en) * 2019-09-27 2023-01-20 北京市商汤科技开发有限公司 Human body detection method and device
CN111062868B (en) * 2019-12-03 2021-04-02 广州云从鼎望科技有限公司 Image processing method, device, machine readable medium and equipment
CN113837056A (en) * 2021-09-18 2021-12-24 深圳市商汤科技有限公司 Method for determining form information, related device, equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101184143A (en) * 2006-11-09 2008-05-21 松下电器产业株式会社 Image processor and image processing method
CN101324961A (en) * 2008-07-25 2008-12-17 上海久游网络科技有限公司 Human face portion three-dimensional picture pasting method in computer virtual world
JP2011076596A (en) * 2009-09-01 2011-04-14 Neu Musik Kk Fashion check system using portable terminal
CN102982581A (en) * 2011-09-05 2013-03-20 北京三星通信技术研究有限公司 Virtual try-on system and method based on images
CN103413270A (en) * 2013-08-15 2013-11-27 北京小米科技有限责任公司 Method and device for image processing and terminal device
US20140307075A1 (en) * 2013-04-12 2014-10-16 Postech Academy-Industry Foundation Imaging apparatus and control method thereof
CN104156912A (en) * 2014-08-18 2014-11-19 厦门美图之家科技有限公司 Portrait heightening image processing method
CN104537608A (en) * 2014-12-31 2015-04-22 深圳市中兴移动通信有限公司 Image processing method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101183450A (en) * 2006-11-14 2008-05-21 朱滨 Virtual costume real man try-on system and constructing method thereof
US20090245691A1 (en) * 2008-03-31 2009-10-01 University Of Southern California Estimating pose of photographic images in 3d earth model using human assistance
CN103236066A (en) * 2013-05-10 2013-08-07 苏州华漫信息服务有限公司 Virtual trial make-up method based on human face feature analysis
CN103218838A (en) * 2013-05-11 2013-07-24 苏州华漫信息服务有限公司 Automatic hair drawing method for human face cartoonlization

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108875451A (en) * 2017-05-10 2018-11-23 腾讯科技(深圳)有限公司 A kind of method, apparatus, storage medium and program product positioning image
CN108875451B (en) * 2017-05-10 2023-04-07 腾讯科技(深圳)有限公司 Method, device, storage medium and program product for positioning image
WO2022174554A1 (en) * 2021-02-18 2022-08-25 深圳市慧鲤科技有限公司 Image display method and apparatus, device, storage medium, program and program product

Also Published As

Publication number Publication date
CN104537608A (en) 2015-04-22

Similar Documents

Publication Publication Date Title
WO2016107259A1 (en) Image processing method and device therefor
JP7520713B2 (en) Systems and methods for image de-identification - Patents.com
US9911220B2 (en) Automatically determining correspondences between three-dimensional models
US12223541B2 (en) Virtual try-on system, virtual try-on method, computer program product, and information processing device
WO2019228473A1 (en) Method and apparatus for beautifying face image
CN111435433B (en) Information processing device, information processing method, and storage medium
US20180204052A1 (en) A method and apparatus for human face image processing
CN108874145B (en) Image processing method, computing device and storage medium
CN109815776B (en) Action prompting method and device, storage medium and electronic device
EP3847628A1 (en) Marker-less augmented reality system for mammoplasty pre-visualization
CN107808373A (en) Sample image synthetic method, device and computing device based on posture
WO2019023402A1 (en) Method and apparatus to generate and track standardized anatomical regions automatically
CN111062891A (en) Image processing method, device, terminal and computer readable storage medium
JP7278724B2 (en) Information processing device, information processing method, and information processing program
JP2009020761A (en) Image processing apparatus and method thereof
CN105096353B (en) Image processing method and device
WO2015017687A2 (en) Systems and methods for producing predictive images
WO2023273247A1 (en) Face image processing method and device, computer readable storage medium, terminal
KR101141643B1 (en) Apparatus and Method for caricature function in mobile terminal using basis of detection feature-point
US20220277586A1 (en) Modeling method, device, and system for three-dimensional head model, and storage medium
CN110852934A (en) Image processing method and apparatus, image device, and storage medium
JP2022111704A5 (en)
CN110766631A (en) Face image modification method and device, electronic equipment and computer readable medium
CN112070815B (en) Automatic weight reducing method based on human body outline deformation
CN112785683B (en) Face image adjusting method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 15874944; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 28/11/2017))
122 Ep: pct application non-entry in european phase (Ref document number: 15874944; Country of ref document: EP; Kind code of ref document: A1)
