WO2016107259A1 - Image processing method and associated device - Google Patents
Image processing method and associated device
- Publication number
- WO2016107259A1 WO2016107259A1 PCT/CN2015/093026 CN2015093026W WO2016107259A1 WO 2016107259 A1 WO2016107259 A1 WO 2016107259A1 CN 2015093026 W CN2015093026 W CN 2015093026W WO 2016107259 A1 WO2016107259 A1 WO 2016107259A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- template
- limb region
- image
- limb
- state parameter
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/10—Selection of transformation methods according to the characteristics of the input images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06T7/344—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
Definitions
- the present invention relates to the field of information technology, and in particular, to a method and an apparatus for image processing.
- the invention provides a method and a device for image processing, so as to achieve the purpose of automatically beautifying the figure of a person in a photo and to overcome the defect that the existing image beautification methods mentioned above are complicated and inconvenient for the user to use.
- a method of image processing is provided, the method being applied to an apparatus capable of performing image processing, the apparatus pre-storing at least one template map for at least one limb region of a human body, each template map including at least one set of posture parameters, and the method includes:
- the template map matching the target body state parameter in the at least one limb region and the image of the limb region obtained by decomposing the human body photograph are synthesized and output.
- an apparatus for image processing comprising:
- a photo decomposition module configured to decompose the input human body photo according to the limb region to obtain an image of at least one limb region
- a parameter receiving module configured to set a target body state parameter for each of the at least one limb region
- a matching module configured to select, according to the target body state parameter of each limb region, a template map that matches the target body state parameter from a template map of the limb region;
- the synthesizing module is configured to synthesize and output the template image matching the target body state parameter in the at least one limb region and the image of the limb region obtained by decomposing the human body photo.
- the invention provides a method and a device for image processing, which automatically synthesize a pre-stored template image and a human body photo input by a user, so that the user can create the desired body shape effect without cumbersome operations, and the need to automatically shape and beautify the human body photo is satisfied. Since the synthesis effect can be adjusted at any time according to the target posture parameters selected by the user, and the template maps can also include obese and funny templates, the user can synthesize healthy and beautiful photos as well as obese and funny photos, which increases the fun.
- FIG. 1 is a flow chart of a method of image processing in accordance with one embodiment of the present invention.
- FIG. 2 is a flow chart of a method of image processing in accordance with another embodiment of the present invention.
- FIG. 3 is a flow chart showing a detailed implementation of a method of image processing in accordance with an embodiment of the present invention.
- FIG. 4 is a block diagram showing an exemplary structure of an apparatus for image processing according to an embodiment of the present invention.
- FIG. 1 is a flow chart of a method of image processing according to an embodiment of the present invention.
- a method of image processing according to an embodiment of the present invention, which is applied to an apparatus capable of image processing, including but not limited to a mobile phone, a digital camera, a tablet, or a computer, is described below with reference to FIG. 1.
- for example, the human body photo is divided into an arm region and a trunk region; the decomposition may separate the limb regions in the human body photograph according to the different characteristic values corresponding to the different limb regions of the human body, to obtain an image of at least one limb region.
- the characteristic value may be a slope of the contour line.
- the decomposition may be performed by calculating the range of slopes along the outer contour line of the human body region; for example, contour segments with the same or similar slope may be assigned to the same limb region.
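- As an illustration of the slope-based decomposition idea, the sketch below groups points of the body silhouette's outer contour into runs of similar local slope. It is a minimal, hedged example assuming an OpenCV/NumPy environment and a binary body mask; the grouping tolerance is an arbitrary illustrative value, not the patented rule.

```python
# Illustrative sketch (not the patented algorithm): split the outer contour of a
# binary body mask into segments whose local slope stays within a tolerance, so
# that runs of similar slope can be treated as candidate limb regions.
import cv2
import numpy as np

def contour_slope_segments(body_mask: np.ndarray, slope_tol_deg: float = 15.0):
    """Split the largest outer contour of a binary mask into similar-slope segments."""
    contours, _ = cv2.findContours(body_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(np.float32)

    # Local slope (as an angle in degrees) between consecutive contour points.
    diffs = np.diff(contour, axis=0)
    angles = np.degrees(np.arctan2(diffs[:, 1], diffs[:, 0]))

    segments, start = [], 0
    for i in range(1, len(angles)):
        # Start a new segment whenever the slope drifts beyond the tolerance.
        if abs(angles[i] - angles[start]) > slope_tol_deg:
            segments.append(contour[start:i + 1])
            start = i
    segments.append(contour[start:])
    return segments
```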
- S200 Set a target posture parameter for each of the at least one limb region.
- the specific implementation manner may be: receiving target posture parameters of different limb regions input by the user.
- the template map with the highest degree of matching with the target body state parameter is obtained from the template map of the pre-stored corresponding limb region.
- the template image matching the target body state parameter is selected from the template image of the limb region according to the target body state parameter of each limb region, including:
- if the matching degree between the target body state parameter of the limb region and a template map is within a preset threshold range, the corresponding template map is selected as the template map matching the target body state parameter of the limb region.
- B_s is the value of a pixel belonging to the limb region in the human body photograph (in the photo, a pixel in the limb region is recorded as 1, otherwise as 0); B_t is the value of a pixel belonging to the limb region in the template map (in the template map, a pixel in the limb region is recorded as 1, otherwise as 0); the sum function represents the count of 1-valued points in the human body photograph; and s is the calculated matching degree.
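- The matching formula itself is not reproduced in this text, so the sketch below uses an assumed overlap-over-area ratio that is merely consistent with the description of B_s, B_t, and the sum function; it is not necessarily the exact formula of the patent.

```python
# Hedged sketch of a matching score between a binary limb mask from the photo
# (B_s) and a binary template mask (B_t). The overlap/area ratio is an assumption.
import numpy as np

def matching_degree(b_s: np.ndarray, b_t: np.ndarray) -> float:
    """b_s, b_t: binary masks (1 inside the limb region, 0 outside), same shape."""
    overlap = np.sum((b_s == 1) & (b_t == 1))   # 1-valued points shared by both masks
    area = np.sum(b_s == 1)                      # 1-valued points in the photo mask
    return float(overlap) / float(area) if area else 0.0
```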
- the above synthesis may include: synthesis of contour lines, and/or fusion of brightness.
- mapping correspondence before and after the synthesis is determined by a grid-based contour method.
- the above fusion may be: obtaining a first average brightness value of the pixels of the limb region in the human body photo, and obtaining a second average brightness value of the pixels in the template image; and calculating the brightness value of the fused image based on a preset fusion ratio, the first average brightness value, and the second average brightness value.
- V_dst = r*V_t + (1-r)*V_s;
- r is the fusion ratio determined according to the user's input;
- V_t is the brightness value of a pixel in the matched template image;
- V_s is the brightness value of the corresponding pixel in the human body photo;
- V_dst is the fused brightness value.
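- A minimal sketch of the fusion formula V_dst = r*V_t + (1-r)*V_s follows. The per-pixel blend of the V channel in HSV space, restricted to a limb mask, and the use of OpenCV are assumptions; only the blending formula itself comes from the text.

```python
# Blend the photo's brightness (V channel) with a grayscale template inside the
# limb region, using V_dst = r*V_t + (1-r)*V_s. photo_bgr is assumed uint8 BGR;
# template_gray and limb_mask are assumed to have the same height/width.
import cv2
import numpy as np

def fuse_brightness(photo_bgr, template_gray, limb_mask, r=0.5):
    hsv = cv2.cvtColor(photo_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    v_s = hsv[:, :, 2]                                  # brightness of the photo
    v_t = template_gray.astype(np.float32)              # brightness of the template
    v_dst = r * v_t + (1.0 - r) * v_s                   # V_dst = r*V_t + (1-r)*V_s
    hsv[:, :, 2] = np.where(limb_mask > 0, v_dst, v_s)  # fuse only inside the limb
    return cv2.cvtColor(np.clip(hsv, 0, 255).astype(np.uint8), cv2.COLOR_HSV2BGR)
```

- In practice the ratio r would come from the user's input, matching the description of the fusion ratio above.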
- the above target posture parameters include, but are not limited to, at least one of the following: the degree of obesity of the human body after deformation, the degree of muscle display, and the like.
- the template images of the corresponding group are filtered, the template image with the highest matching degree among them is obtained, and that template image is synthesized with the human body image input by the user to achieve the desired effect.
- the embodiment is further capable of re-synthesis.
- the method further includes:
- the above steps are repeated: according to the received target body state parameter, the template image with the highest matching degree is obtained from the pre-stored template images of the corresponding limb region, and the template image with the highest matching degree and the human body photo input by the user are synthesized and output.
- the input of the instruction may be implemented by displaying a prompt box to the user, prompting the user through the prompt box to choose re-synthesis or to end the operation, and determining the specified input according to the user's selection.
- since the template gallery can store both robust and slim body types as well as obese and funny body types, the user has more choices when inputting the target body state parameters: healthy and beautiful photos can be synthesized, and obese and funny photos can be synthesized as well, which adds to the fun.
- FIG. 2 is a flowchart of a method of image processing according to another embodiment of the present invention. As shown in FIG. 2, on the basis of steps S100, S200, S300, and S400, the method further includes:
- S210 Identify a skin color region in the human body photo.
- S220 Perform a similarity transformation on the graphic of the skin color region, so that the size and position of the skin color region are consistent with those of the limb region in the template image.
- the order of steps S210 and S220 is not limited to being between steps S300 and S400; they may also be performed between S200 and S300.
- the method further includes:
- the above steps are repeated: according to the received target body state parameter, the template image with the highest matching degree is obtained from the pre-stored template images of the corresponding limb region, and the template image with the highest matching degree and the human body photo input by the user are synthesized and output.
- the similarity transformation includes, but is not limited to, rotating, translating, and scaling the image according to the feature points and the centroid.
- the centroid is the average coordinate point of all pixel coordinates of the connected region, and its coordinates are calculated as x_c = (1/n) * Σ x_i and y_c = (1/n) * Σ y_i, where (x_i, y_i) are the pixel coordinates and n is the number of pixels in the region map.
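- As an illustration of the centroid definition and the similarity transformation it supports, the sketch below computes the centroid of a binary region and builds a scale-plus-translation alignment onto a template region. Rotation is omitted for brevity, and using the square root of the area ratio as the scale is an assumption, not the patented procedure.

```python
# Centroid of a binary region as the mean of its pixel coordinates, and a simple
# similarity (scale + translation) alignment of that region onto a template region.
import numpy as np

def centroid(region_mask: np.ndarray):
    """Mean (x, y) of all pixels where the mask is non-zero."""
    ys, xs = np.nonzero(region_mask)
    return xs.mean(), ys.mean()

def align_to_template(region_mask: np.ndarray, template_mask: np.ndarray):
    """2x3 affine matrix scaling/translating the region onto the template region."""
    cx_r, cy_r = centroid(region_mask)
    cx_t, cy_t = centroid(template_mask)
    # Isotropic scale estimated from the area ratio (an assumption for this sketch).
    scale = np.sqrt(np.count_nonzero(template_mask) / np.count_nonzero(region_mask))
    return np.array([[scale, 0.0, cx_t - scale * cx_r],
                     [0.0, scale, cy_t - scale * cy_r]], dtype=np.float32)
```

- The returned 2x3 matrix could then be applied with, for example, cv2.warpAffine.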
- in this way, the photo input by the user is made similar to the photos in the template library, and the limb region in the photo is similar in size to the limb region in the template image, so that the limb region in the photo is better matched with the corresponding limb region in the template image.
- FIG. 3 is a flow chart showing a detailed implementation of a method of image processing in accordance with an embodiment of the present invention. As shown in Figure 3, its implementation details include the following steps:
- S001 performs skin color area detection.
- a confidence interval of human skin color is obtained by collecting statistics on human skin colors; the user input image is segmented using this interval, and the pixels falling within the interval are regarded as skin color region points.
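- The text does not specify the color space or the interval bounds, so the sketch below uses commonly cited YCrCb skin-tone ranges as assumed illustrative values rather than the patented thresholds.

```python
# Skin-color segmentation by a fixed confidence interval in YCrCb space.
# The lower/upper bounds are assumed illustrative values.
import cv2
import numpy as np

def skin_mask(image_bgr: np.ndarray) -> np.ndarray:
    """Binary mask of pixels whose Y/Cr/Cb values fall in an assumed skin-tone interval."""
    ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)     # assumed lower bound (Y, Cr, Cb)
    upper = np.array([255, 173, 127], dtype=np.uint8)  # assumed upper bound
    mask = cv2.inRange(ycrcb, lower, upper)
    # Remove isolated pixels so downstream region decomposition is more stable.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```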
- S002 performs preliminary posture estimation on the skin color region: it determines the aspect ratio of the detected skin color region and the direction of the human body in the image, and, if necessary, rotates the image so that the human body is upright.
- Face detection technology is used to determine whether the photo shows the front or the back of the human body.
- S003 roughly divides the regions using the geometric distribution of the human body, on the premise that the human body faces front or back and is upright.
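- A rough sketch of the S002 orientation check follows. It assumes the decision is made from the bounding box aspect ratio of the skin mask and that a single 90-degree rotation suffices; both are assumptions, since the text only requires that the body end up upright.

```python
# If the detected skin region is wider than it is tall, assume the body is lying
# sideways and rotate the image by 90 degrees so the body is upright.
import cv2
import numpy as np

def make_upright(image_bgr: np.ndarray, skin: np.ndarray) -> np.ndarray:
    ys, xs = np.nonzero(skin)
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    if width > height:  # body appears to be horizontal
        return cv2.rotate(image_bgr, cv2.ROTATE_90_CLOCKWISE)
    return image_bgr
```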
- the limb area treated in this embodiment includes an arm, a torso, and both legs.
- specifically, the detected feature points of the arm region include the shoulders, armpits, and wrists.
- the feature points of the trunk region include the points of maximum curvature of the neck and shoulder joints, the nipples, and the navel, and the feature points of the legs are the groin, knees, and ankles.
- S005 uses the feature detection result to judge the correctness of the posture estimation. If the feature points are detected correctly and the matching degree is high, the posture detection is considered correct; otherwise, posture detection is performed again and steps S002 to S005 are repeated in a loop. If the feature points cannot be detected correctly for any posture, the synthesis is abandoned and the user is notified of the failure.
- S006 divides the human body region into the part above the neck, the trunk, the arms, and the legs. The part above the neck, the hands, and the feet are not processed.
- S007 performs a similarity transformation on the divided regions and, using the regional centroid, transforms the position information of the feature points into a form unified with the template.
- S009 collects the synthetic parameters selected by the user, including but not limited to: gender, fatness, muscle level, shadow depth.
- the template gallery 041 is an image library in which image templates of different limbs in different postures are classified by limb region (arm, torso, etc.), gender, and body state parameters (degree of fatness or thinness, muscle strength, etc.).
- the template image may be a grayscale image, the non-human body part gray level is 0, and the human body part gray level is represented as 1.
- the 042 template screening unit finds a plurality of templates that meet the conditions according to the input limb region, gender, and target body state parameters.
- S011 selects, from the template library, the subset of templates that match the synthesis parameters selected by the user.
- S012 uses the region map transformed by S007 to perform matching on the template images selected by S011 in sequence, and the matching formula is described in the embodiment, and details are not described herein again.
- S013 applies, to the matched template, the inverse of the similarity transformation of step S007 to obtain a template map corresponding to the position in the user input image.
- S014 uses the image deformation algorithm to calculate the transformation mapping relationship between the limb region map and the template graph.
- the deformation algorithm may be a grid-based algorithm, a line-segment-based algorithm, or an equivalent algorithm.
- S015 transforms the user input image using the transformation mapping relationship, and draws the pixel colors using interpolation.
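- The sketch below shows how a dense transformation mapping can be applied with bilinear interpolation, as in S015. The identity mapping built in the usage example is only a placeholder for whatever grid- or line-segment-based deformation S014 would produce.

```python
# Resample an image at per-pixel coordinates (map_x, map_y) with bilinear interpolation.
import cv2
import numpy as np

def warp_with_mapping(image_bgr, map_x, map_y):
    """Sample image_bgr at (map_x, map_y), interpolating pixel colors bilinearly."""
    return cv2.remap(image_bgr, map_x.astype(np.float32), map_y.astype(np.float32),
                     cv2.INTER_LINEAR)

# Usage example with an identity mapping (each output pixel samples itself); a real
# deformation would perturb these coordinates according to the limb/template fit.
h, w = 480, 640
map_x, map_y = np.meshgrid(np.arange(w, dtype=np.float32),
                           np.arange(h, dtype=np.float32))
warped = warp_with_mapping(np.zeros((h, w, 3), np.uint8), map_x, map_y)
```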
- S016 converts the color of the limb region to the HSV color space, and fuses the Value (luminance) channel and the template grayscale image.
- the fusion formula is described in the embodiment and will not be described here.
- S017 fills the background vacancy part of the transformation using an image restoration algorithm.
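- Filling the exposed background as in S017 can be done with an off-the-shelf inpainting routine; choosing OpenCV's Telea inpainting below is an assumption, since the text only says "an image restoration algorithm".

```python
# Fill background vacancies left by the body deformation using inpainting.
import cv2
import numpy as np

def fill_vacancies(warped_bgr: np.ndarray, vacancy_mask: np.ndarray) -> np.ndarray:
    """vacancy_mask: 8-bit mask, non-zero where the background was exposed."""
    # Inpainting radius of 3 pixels and the Telea method are illustrative choices.
    return cv2.inpaint(warped_bgr, vacancy_mask, 3, cv2.INPAINT_TELEA)
```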
- the final generated image is displayed on the display device.
- the posture of the human body input by the user is detected before the matching operation, and it is preliminarily determined whether the photo can be fused with the photos in the template library, which increases the subsequent matching success rate and avoids unnecessary processing that would occupy system resources.
- FIG. 4 is a block diagram showing an exemplary structure of an apparatus for image processing according to an embodiment of the present invention; the apparatus is described below with reference to FIG. 4. As shown in FIG. 4, the device includes:
- the photo decomposition module 01 is configured to decompose the input human body photo according to the limb region to obtain an image of at least one limb region;
- the parameter receiving module 02 is configured to set a target body state parameter for each of the at least one limb region
- the matching module 03 is configured to select, according to the target body state parameter of each limb region, a template map that matches the target body state parameter from a template map of the limb region;
- the synthesizing module 04 is configured to synthesize and output the template image matching the target body state parameter in at least one limb region and the image of the limb region obtained by decomposing the human body photo.
- the photo decomposition module is specifically configured to decompose the limb region in the human body photograph according to different feature values corresponding to different limb regions of the human body to obtain an image of at least one limb region.
- the matching module is configured to: acquire, for each limb region, the pixel values that meet a first preset condition in the target body state parameter of the limb region; calculate a first total number of pixel values meeting the first preset condition in the target body state parameter of the limb region and a second total number of pixel values meeting the first preset condition in the body state parameter of a preset template image; calculate, based on the first total number and the second total number, the matching degree between the target body state parameter of the limb region and the template image; and, if the matching degree between the target body state parameter of the limb region and the template image is within a preset threshold range, select the corresponding template image as the template map matching the target body state parameter of the limb region.
- the matching module uses the formula:
- B_s is the value of the pixel in the human body photo belonging to the limb region;
- B_t is the value of the pixel in the template image belonging to the limb region;
- the sum function is used to calculate the number of 1-valued points, and s is the calculated matching degree.
- the synthesis module is used for synthesizing contour lines, where the mapping correspondence before and after synthesis is determined by a grid-based contour method; and/or the synthesizing module is configured to perform fusion of brightness.
- the synthesizing module is configured to acquire a first average brightness value of the pixels of the limb region in the human body photo and a second average brightness value of the pixels in the template image, and to calculate the brightness value of the fused image based on a preset fusion ratio, the first average brightness value, and the second average brightness value.
- the synthesis module uses the formula:
- V_dst = r*V_t + (1-r)*V_s
- r is the fusion ratio determined according to the user's input;
- V_t is the brightness value of a pixel in the matched template image;
- V_s is the brightness value of the corresponding pixel in the human body photo;
- V_dst is the fused brightness value.
- the apparatus for image processing further includes:
- a skin color region identification module for identifying a skin color region in a human body photo
- the similarity transformation module is configured to similarly transform the graphics of the skin color region so that the skin color region is consistent with the size and position of the limb region in the template image.
- a prompting module configured to prompt the user to input the target body state parameter again when receiving the rematching instruction
- the repeating module is configured to repeat the above steps after receiving the target body state parameter input again by the user: according to the received target body state parameter, the template image with the highest matching degree is obtained from the pre-stored template images of the corresponding limb region, and the template image with the highest matching degree and the human body photo input by the user are synthesized and output.
- the invention automatically synthesizes the pre-stored template image and the human body photo input by the user, so that the user can create the desired body shape effect without cumbersome operations, satisfying the need for automatic body shaping and beautification of human body photos. Since the synthesis effect can be adjusted at any time according to the target posture parameters selected by the user, and the template maps can also include obese and funny templates, the user can synthesize healthy and beautiful photos as well as obese and funny photos, which adds to the fun.
- the disclosed apparatus and method may be implemented in other manners.
- the device embodiments described above are merely illustrative.
- the division of the units is only a logical function division; in actual implementation there may be other division manners, for example: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
- the coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
- the units described above as separate components may or may not be physically separated, and the components displayed as the unit may or may not be physical units, that is, may be located in one place or distributed to multiple network units; Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
- each functional unit in each embodiment of the present invention may be integrated into one processing module, or each unit may serve as a separate unit, or two or more units may be integrated into one unit; the above integrated unit can be implemented in the form of hardware, or in the form of hardware plus software functional units.
- the foregoing program may be stored in a computer readable storage medium, and when the program is executed, the steps of the foregoing method embodiments are performed; the foregoing storage medium includes: a removable storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium that can store program code.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention relates to an image processing method and an associated device. The method comprises: decomposing an input human body photograph according to limb regions to obtain an image of at least one limb region; setting a target posture parameter for each limb region of the at least one limb region; selecting, according to the target posture parameter of each limb region, a template map matching the target posture parameter from among the template maps of that limb region; and synthesizing, one by one, the template maps matching the target posture parameters in the at least one limb region with the images of the limb regions obtained by decomposing the human body photograph, and then outputting the result.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410856265.1 | 2014-12-31 | ||
CN201410856265.1A CN104537608A (zh) | 2014-12-31 | 2014-12-31 | 一种图像处理的方法及其装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016107259A1 true WO2016107259A1 (fr) | 2016-07-07 |
Family
ID=52853127
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2015/093026 WO2016107259A1 (fr) | 2014-12-31 | 2015-10-28 | Procédé de traitement d'image et dispositif associé |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN104537608A (fr) |
WO (1) | WO2016107259A1 (fr) |
Families Citing this family (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104537608A (zh) * | 2014-12-31 | 2015-04-22 | 深圳市中兴移动通信有限公司 | 一种图像处理的方法及其装置 |
CN105049959B (zh) * | 2015-07-08 | 2019-09-06 | 广州酷狗计算机科技有限公司 | 多媒体文件播放方法及装置 |
CN106558039B (zh) * | 2015-09-23 | 2019-11-19 | 腾讯科技(深圳)有限公司 | 一种人像处理方法及装置 |
CN106558043B (zh) * | 2015-09-29 | 2019-07-23 | 阿里巴巴集团控股有限公司 | 一种确定融合系数的方法和装置 |
CN106846240B (zh) * | 2015-12-03 | 2021-05-07 | 斑马智行网络(香港)有限公司 | 一种调整融合素材的方法、装置和设备 |
CN107507158A (zh) * | 2016-06-14 | 2017-12-22 | 中兴通讯股份有限公司 | 一种图像处理方法和装置 |
CN106548133B (zh) * | 2016-10-17 | 2019-04-23 | 歌尔科技有限公司 | 一种模板匹配方法和装置以及手势识别方法和装置 |
TWI724092B (zh) * | 2017-01-19 | 2021-04-11 | 香港商斑馬智行網絡(香港)有限公司 | 確定融合係數的方法和裝置 |
TWI731923B (zh) * | 2017-01-23 | 2021-07-01 | 香港商斑馬智行網絡(香港)有限公司 | 調整融合材料的方法、裝置和設備 |
CN107169262B (zh) * | 2017-03-31 | 2021-11-23 | 百度在线网络技术(北京)有限公司 | 推荐塑身方案的方法、装置、设备和计算机存储介质 |
CN110059522B (zh) | 2018-01-19 | 2021-06-25 | 北京市商汤科技开发有限公司 | 人体轮廓关键点检测方法、图像处理方法、装置及设备 |
CN108830783B (zh) * | 2018-05-31 | 2021-07-02 | 北京市商汤科技开发有限公司 | 一种图像处理方法、装置和计算机存储介质 |
CN108765274A (zh) * | 2018-05-31 | 2018-11-06 | 北京市商汤科技开发有限公司 | 一种图像处理方法、装置和计算机存储介质 |
CN108830784A (zh) * | 2018-05-31 | 2018-11-16 | 北京市商汤科技开发有限公司 | 一种图像处理方法、装置和计算机存储介质 |
CN110766607A (zh) * | 2018-07-25 | 2020-02-07 | 北京市商汤科技开发有限公司 | 一种图像处理方法、装置和计算机存储介质 |
CN109146772B (zh) * | 2018-08-03 | 2019-08-23 | 深圳市飘飘宝贝有限公司 | 一种图片处理方法、终端和计算机可读存储介质 |
CN109166082A (zh) * | 2018-08-22 | 2019-01-08 | Oppo广东移动通信有限公司 | 图像处理方法、装置、电子设备和计算机可读存储介质 |
CN109035177B (zh) * | 2018-08-27 | 2020-11-20 | 三星电子(中国)研发中心 | 一种照片处理方法和装置 |
CN109173263B (zh) * | 2018-08-31 | 2021-08-24 | 腾讯科技(深圳)有限公司 | 一种图像数据处理方法和装置 |
CN109447896B (zh) * | 2018-09-21 | 2023-07-25 | 维沃移动通信(杭州)有限公司 | 一种图像处理方法及终端设备 |
CN109461124A (zh) * | 2018-09-21 | 2019-03-12 | 维沃移动通信(杭州)有限公司 | 一种图像处理方法及终端设备 |
CN110705448B (zh) * | 2019-09-27 | 2023-01-20 | 北京市商汤科技开发有限公司 | 一种人体检测方法及装置 |
CN111062868B (zh) * | 2019-12-03 | 2021-04-02 | 广州云从鼎望科技有限公司 | 一种图像处理方法、装置、机器可读介质及设备 |
CN113837056A (zh) * | 2021-09-18 | 2021-12-24 | 深圳市商汤科技有限公司 | 形体信息的确定方法及相关装置、设备和存储介质 |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101183450A (zh) * | 2006-11-14 | 2008-05-21 | 朱滨 | 虚拟服装真人试穿系统及其构建方法 |
US20090245691A1 (en) * | 2008-03-31 | 2009-10-01 | University Of Southern California | Estimating pose of photographic images in 3d earth model using human assistance |
CN103236066A (zh) * | 2013-05-10 | 2013-08-07 | 苏州华漫信息服务有限公司 | 一种基于人脸特征分析的虚拟试妆方法 |
CN103218838A (zh) * | 2013-05-11 | 2013-07-24 | 苏州华漫信息服务有限公司 | 一种用于人脸卡通化的自动头发绘制方法 |
- 2014-12-31: CN CN201410856265.1A patent/CN104537608A/zh active Pending
- 2015-10-28: WO PCT/CN2015/093026 patent/WO2016107259A1/fr active Application Filing
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101184143A (zh) * | 2006-11-09 | 2008-05-21 | 松下电器产业株式会社 | 图像处理器和图像处理方法 |
CN101324961A (zh) * | 2008-07-25 | 2008-12-17 | 上海久游网络科技有限公司 | 计算机虚拟世界中人脸部三维贴图方法 |
JP2011076596A (ja) * | 2009-09-01 | 2011-04-14 | Neu Musik Kk | 携帯端末を用いたファッションチェックシステム |
CN102982581A (zh) * | 2011-09-05 | 2013-03-20 | 北京三星通信技术研究有限公司 | 基于图像的虚拟试穿系统和方法 |
US20140307075A1 (en) * | 2013-04-12 | 2014-10-16 | Postech Academy-Industry Foundation | Imaging apparatus and control method thereof |
CN103413270A (zh) * | 2013-08-15 | 2013-11-27 | 北京小米科技有限责任公司 | 一种图像的处理方法、装置和终端设备 |
CN104156912A (zh) * | 2014-08-18 | 2014-11-19 | 厦门美图之家科技有限公司 | 一种人像增高的图像处理的方法 |
CN104537608A (zh) * | 2014-12-31 | 2015-04-22 | 深圳市中兴移动通信有限公司 | 一种图像处理的方法及其装置 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108875451A (zh) * | 2017-05-10 | 2018-11-23 | 腾讯科技(深圳)有限公司 | 一种定位图像的方法、装置、存储介质和程序产品 |
CN108875451B (zh) * | 2017-05-10 | 2023-04-07 | 腾讯科技(深圳)有限公司 | 一种定位图像的方法、装置、存储介质和程序产品 |
WO2022174554A1 (fr) * | 2021-02-18 | 2022-08-25 | 深圳市慧鲤科技有限公司 | Procédé et appareil d'affichage d'image, dispositif, support de stockage, programme et produit-programme |
Also Published As
Publication number | Publication date |
---|---|
CN104537608A (zh) | 2015-04-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2016107259A1 (fr) | Procédé de traitement d'image et dispositif associé | |
JP7520713B2 (ja) | 画像の非識別化のためのシステムおよび方法 | |
US9911220B2 (en) | Automatically determining correspondences between three-dimensional models | |
US12223541B2 (en) | Virtual try-on system, virtual try-on method, computer program product, and information processing device | |
WO2019228473A1 (fr) | Procédé et appareil pour embellir une image de visage | |
CN111435433B (zh) | 信息处理装置、信息处理方法以及存储介质 | |
US20180204052A1 (en) | A method and apparatus for human face image processing | |
CN108874145B (zh) | 一种图像处理方法、计算设备及存储介质 | |
CN109815776B (zh) | 动作提示方法和装置、存储介质及电子装置 | |
EP3847628A1 (fr) | Système de réalité augmentée sans marqueur pour pré-visualisation de mammoplastie | |
CN107808373A (zh) | 基于姿态的样本图像合成方法、装置及计算设备 | |
WO2019023402A1 (fr) | Procédé et appareil de génération et suivi automatiques de régions anatomiques normalisées | |
CN111062891A (zh) | 图像处理方法、装置、终端及计算机可读存储介质 | |
JP7278724B2 (ja) | 情報処理装置、情報処理方法、および情報処理プログラム | |
JP2009020761A (ja) | 画像処理装置及びその方法 | |
CN105096353B (zh) | 一种图像处理方法及装置 | |
WO2015017687A2 (fr) | Systèmes et procédés de production d'images prévisionnelles | |
WO2023273247A1 (fr) | Procédé et dispositif de traitement d'image de visage, support de stockage lisible par ordinateur, terminal | |
KR101141643B1 (ko) | 캐리커쳐 생성 기능을 갖는 이동통신 단말기 및 이를 이용한 생성 방법 | |
US20220277586A1 (en) | Modeling method, device, and system for three-dimensional head model, and storage medium | |
CN110852934A (zh) | 图像处理方法及装置、图像设备及存储介质 | |
JP2022111704A5 (fr) | ||
CN110766631A (zh) | 人脸图像的修饰方法、装置、电子设备和计算机可读介质 | |
CN112070815B (zh) | 一种基于人体外轮廓变形的自动瘦身方法 | |
CN112785683B (zh) | 一种人脸图像调整方法及装置 |
Legal Events
Code | Title | Details
---|---|---
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 15874944; Country of ref document: EP; Kind code of ref document: A1
NENP | Non-entry into the national phase | Ref country code: DE
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 28/11/2017)
122 | Ep: pct application non-entry in european phase | Ref document number: 15874944; Country of ref document: EP; Kind code of ref document: A1