
CN110458916A - Expression packet automatic generation method, device, computer equipment and storage medium - Google Patents


Info

Publication number
CN110458916A
CN110458916A (application CN201910602401.7A)
Authority
CN
China
Prior art keywords
expression
facial image
packet
face
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910602401.7A
Other languages
Chinese (zh)
Inventor
向纯玉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
OneConnect Smart Technology Co Ltd
Original Assignee
OneConnect Smart Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by OneConnect Smart Technology Co Ltd
Priority to CN201910602401.7A
Publication of CN110458916A
Priority to PCT/CN2020/085573 (WO2021004114A1)
Current legal status: Pending


Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/60: Editing figures and text; Combining figures or text
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174: Facial expression recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an expression packet automatic generation method, apparatus, computer device, and storage medium. The method comprises: extracting the facial micro-expressions from a facial image, and obtaining an expression label for the facial image according to those micro-expressions; matching an expression packet picture from a preset expression packet library according to the expression label of the facial image, and determining the location of the face region of the matched picture; and extracting the facial features from the facial image and overlaying them onto the location of the face region of the matched expression packet picture, thereby generating a personalized expression packet. The invention is convenient to operate, and because the facial image in the generated personalized expression packet is consistent with the expression of the original expression packet picture, the facial features of the facial image blend into the original picture with better effect and consistency, which improves user experience as well as user activity and engagement.

Description

Expression packet automatic generation method, apparatus, computer device and storage medium
Technical field
The present invention relates to the field of micro-expression recognition, and in particular to an expression packet automatic generation method, apparatus, computer device, and storage medium.
Background
With the development of communication technology, mobile phones have come into ever wider use and have greatly extended people's social reach. With this expansion of social circles, users increasingly communicate through mobile instant-messaging software. To facilitate exchange and communication between users, many social applications provide a chat function: users can converse through a chat box or send each other various expression packets to express moods that are difficult to put into words.
In practice, the expression packets users send are mostly obtained from third parties specializing in the production of expression packets; that is, the third party generates expression packets from the material it collects and publishes them online, and users pick the expression packets they are interested in from those the third party provides. In this case, however, the user passively receives or passively selects expression packets, and often cannot achieve the exact effect they want.
Summary
Embodiments of the present invention provide an expression packet automatic generation method, apparatus, computer device, and storage medium. The invention is simple to operate, and the blending effect and consistency of the generated personalized expression packet are better, improving user experience as well as user activity and engagement.
An expression packet automatic generation method, comprising:
obtaining a facial image;
extracting the facial micro-expressions from the facial image, and obtaining the expression label of the facial image according to the facial micro-expressions;
matching an expression packet picture from a preset expression packet library according to the expression label of the facial image, and determining the location of the face region of the matched expression packet picture; wherein every expression packet picture in the preset expression packet library has at least one face region, and each expression packet picture is associated with at least one expression label;
extracting the facial features from the facial image, and overlaying the facial features onto the location of the face region of the expression packet picture matched from the preset expression packet library, to generate a personalized expression packet.
An expression packet automatic generation apparatus, comprising:
an obtaining module, configured to obtain a facial image;
an extraction module, configured to extract the facial micro-expressions from the facial image and obtain the expression label of the facial image according to the facial micro-expressions;
a matching module, configured to match an expression packet picture from a preset expression packet library according to the expression label of the facial image, and determine the location of the face region of the matched expression packet picture; wherein every expression packet picture in the preset expression packet library has at least one face region, and each expression packet picture is associated with at least one expression label;
an overlay module, configured to extract the facial features from the facial image and overlay them onto the location of the face region of the expression packet picture matched from the preset expression packet library, to generate a personalized expression packet.
A computer device, comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, wherein the processor implements the above expression packet automatic generation method when executing the computer-readable instructions.
A computer-readable storage medium storing computer-readable instructions which, when executed by a processor, implement the above expression packet automatic generation method.
With the expression packet automatic generation method, apparatus, computer device, and storage medium provided by the invention, only an arbitrarily captured facial image is needed: an expression label corresponding to the facial image (each expression label corresponds to one expression) is determined from the facial micro-expressions in the image; an expression packet picture whose expression is consistent with that of the facial image is determined from the expression label; and the facial features of the facial image are overlaid onto that expression-consistent picture (the uncovered parts of which also match the expression of the label), ultimately generating a personalized expression packet in which all content is consistent with the expression corresponding to the label. The invention only requires capturing a facial image to automatically generate a personalized expression packet whose expression is consistent with it (and whose other content, if any, also matches the expression of the label), so operation is convenient. Because the facial image in the generated personalized expression packet is consistent with the expression of the original expression packet picture, the facial features of the facial image blend into the original picture with better effect and consistency, which improves user experience as well as user activity and engagement.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the drawings required by the description of the embodiments are briefly introduced below. Apparently, the drawings in the following description are only some embodiments of the present invention; a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of the application environment of the expression packet automatic generation method in an embodiment of the invention;
Fig. 2 is a flowchart of the expression packet automatic generation method in an embodiment of the invention;
Fig. 3 is a flowchart of step S20 of the expression packet automatic generation method in an embodiment of the invention;
Fig. 4 is a flowchart of step S30 of the expression packet automatic generation method in an embodiment of the invention;
Fig. 5 is a flowchart of step S40 of the expression packet automatic generation method in an embodiment of the invention;
Fig. 6 is a flowchart of step S407 of the expression packet automatic generation method in an embodiment of the invention;
Fig. 7 is a functional block diagram of the expression packet automatic generation apparatus in an embodiment of the invention;
Fig. 8 is a functional block diagram of the extraction module of the expression packet automatic generation apparatus in an embodiment of the invention;
Fig. 9 is a schematic diagram of a computer device in an embodiment of the invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The expression packet automatic generation method provided by the invention can be applied in an application environment such as that of Fig. 1, in which a client (computer device) communicates with a server over a network. The client (computer device) includes, but is not limited to, personal computers, laptops, smartphones, tablets, cameras, and portable wearable devices. The server may be implemented as an independent server or as a server cluster composed of multiple servers.
In one embodiment, as shown in Fig. 2, an expression packet automatic generation method is provided. Taking its application to the server in Fig. 1 as an example, the method includes the following steps S10-S40:
S10: obtain a facial image.
The facial image is an image containing part or all of the facial features of a human face. It may be captured by the user with an imaging device and uploaded to the server, or stored in a database in advance so that the server can retrieve it from the database on demand at any time.
S20: extract the facial micro-expressions from the facial image, and obtain the expression label of the facial image according to the micro-expressions.
That is, in this embodiment, the facial micro-expressions in the facial image are extracted first, the corresponding expression label is determined from them, and that label is recorded as the expression label of the facial image.
In one embodiment, as shown in Fig. 3, step S20, namely extracting the facial micro-expressions from the facial image and obtaining the expression label of the facial image according to them, comprises:
S201: extract all action unit types of the facial micro-expressions from the facial image.
The action unit types may include, but are not limited to, the internationally standard action units (AUs) listed in Table 1 below, as well as eyeball movements. Eyeball movements are the different motions and gaze directions of the eyeball, for example looking left, right, up, down, or to the upper right, and the action units corresponding to the different motions and gaze directions may also include a judgment of the amplitude of the eyeball movement.
S202: confirm the micro-expression type of the facial image according to all the action unit types extracted from the facial image.
That is, the database stores in advance the action unit types corresponding to various micro-expression types (for example crying, laughing, or anger), and each micro-expression type corresponds to combinations of action unit types. For instance, the micro-expression type "laughing" corresponds to at least the following combinations of action unit types: lip corner puller (AU12 in Table 1); lip corner puller (AU12) + outer brow raiser (AU2 in Table 1); lip corner puller (AU12) + lip stretcher (AU20 in Table 1) + lips part (AU25 in Table 1); and so on. Therefore, the micro-expression type can be confirmed simply by comparing all the action unit types extracted in step S201 with the action unit types corresponding to each micro-expression type stored in the database. Understandably, in one aspect of this embodiment, as long as the action unit types extracted in step S201 include all the action unit types of a micro-expression type stored in the database (that is, the extracted action unit types may also contain other action units), the micro-expression type of the facial image can be considered to be that micro-expression type. In another aspect of this embodiment, the micro-expression type of the facial image may be confirmed only when the action unit types extracted in step S201 correspond one-to-one in type and order with those of a micro-expression type stored in the database (with no action unit more or fewer).
Table 1 (partial AU list)
AU1: Inner brow raiser
AU2: Outer brow raiser
AU4: Brow lowerer
AU5: Upper lid raiser
AU6: Cheek raiser
AU7: Lid tightener
AU9: Nose wrinkler
AU10: Upper lip raiser
AU12: Lip corner puller (mouth corners raise)
AU14: Dimpler (mouth corners tighten)
AU15: Lip corner depressor
AU16: Lower lip depressor
AU17: Chin raiser
AU18: Lip pucker
AU20: Lip stretcher
AU23: Lip tightener
AU24: Lip pressor
AU25: Lips part
AU26: Jaw drop
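The AU matching of step S202 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the stored AU combinations and the type names are assumptions for demonstration, and the lenient/strict policies correspond to the two aspects of the embodiment described above.

```python
# Hypothetical sketch: confirming a micro-expression type by comparing the
# action units (AUs) extracted from a facial image against stored AU
# combinations. AU codes follow Table 1; the combinations are illustrative.

MICRO_EXPRESSION_COMBINATIONS = {
    "laughing": [
        {"AU12"},                  # lip corner puller alone
        {"AU12", "AU2"},           # + outer brow raiser
        {"AU12", "AU20", "AU25"},  # + lip stretcher + lips part
    ],
    "angry": [
        {"AU4", "AU7", "AU23"},    # brow lowerer + lid tightener + lip tightener
    ],
}

def confirm_micro_expression(extracted_aus, strict=False):
    """Return the first micro-expression type whose stored AU combination
    matches the extracted AUs.

    Lenient mode (default): the extracted set may contain extra AUs, as in
    the first aspect of the embodiment. Strict mode: the sets must be equal,
    as in the second aspect (no action unit more or fewer)."""
    extracted = set(extracted_aus)
    for micro_type, combinations in MICRO_EXPRESSION_COMBINATIONS.items():
        for combo in combinations:
            if (combo == extracted) if strict else combo.issubset(extracted):
                return micro_type
    return None

print(confirm_micro_expression({"AU12", "AU2", "AU6"}))               # lenient match
print(confirm_micro_expression({"AU12", "AU2", "AU6"}, strict=True))  # no exact match
```

In lenient mode the extra cheek raiser (AU6) does not prevent a match; in strict mode the same input matches nothing, mirroring the trade-off between the two aspects.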
S203: obtain all expression labels associated with the micro-expression type, and at the same time obtain the characteristic action units associated with each expression label.
That is, the database stores in advance the expression labels associated with each micro-expression type, and each micro-expression type corresponds to multiple expression labels. For example, the micro-expression type "laughing" may be associated with the labels: laughing out loud, smiling, smirking, forced smile, and silly grin. Understandably, each expression label associated with a micro-expression type has at least one corresponding characteristic action unit.
S204: match all the action unit types extracted from the facial image against the characteristic action units associated with each expression label, and when the extracted action unit types include all the characteristic action units associated with an expression label, record that expression label as the expression label of the facial image.
In this embodiment, only when all the action unit types extracted from the facial image in step S201 include all the characteristic action units associated with an expression label (that is, the extracted action unit types may also contain other action units beyond those characteristic of the label) is that expression label considered the expression label of the facial image. In the above embodiments, the micro-expression type is first confirmed from the action unit types extracted from the facial image (the number of micro-expression types is far smaller than the number of expression labels), and only then are the extracted action unit types matched against the characteristic action units of the expression labels associated with that micro-expression type. In this way, the extracted action unit types need not be compared with all expression labels, but only with the characteristic action units of the expression labels corresponding to a few micro-expression types; when the number of expression labels is huge, this greatly reduces computation and relieves server load.
Understandably, in one embodiment, after all action unit types of the facial micro-expressions have been extracted from the facial image in step S201, all expression labels and the characteristic action units associated with each may instead be obtained directly, proceeding straight to step S204: matching the extracted action unit types against the characteristic action units associated with each expression label, and recording an expression label as the expression label of the facial image when the extracted action unit types include all its associated characteristic action units. In this embodiment there is no need to first confirm the micro-expression type from the extracted action unit types; the extracted action unit types are matched directly against the characteristic action units of the expression labels, which simplifies the comparison. When the number of expression labels is relatively small, this scheme may be preferred.
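The superset test of step S204 can be sketched similarly. The label names and their characteristic AUs below are illustrative assumptions; only the subset-inclusion rule comes from the description above.

```python
# Hypothetical sketch: an expression label is assigned to the facial image
# when the extracted action units include ALL characteristic action units
# of that label. Labels and their characteristic AUs are illustrative.

LABEL_CHARACTERISTIC_AUS = {
    "guffaw": {"AU12", "AU25", "AU26"},  # lip corner puller + lips part + jaw drop
    "smile": {"AU12"},
    "smirk": {"AU12", "AU14"},
}

def match_expression_labels(extracted_aus, candidate_labels=None):
    """Return every candidate label whose characteristic AUs are a subset
    of the extracted AUs. Restricting candidate_labels to the labels
    associated with a previously confirmed micro-expression type (step
    S203) keeps the comparison small when the full label set is huge."""
    extracted = set(extracted_aus)
    labels = candidate_labels if candidate_labels is not None else LABEL_CHARACTERISTIC_AUS
    return [label for label in labels
            if LABEL_CHARACTERISTIC_AUS[label].issubset(extracted)]

# Extra AUs (AU6, cheek raiser) are allowed alongside the characteristic ones.
print(match_expression_labels({"AU12", "AU14", "AU6"}))
```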
S30: match an expression packet picture from the preset expression packet library according to the expression label of the facial image, and determine the location of the face region of the matched expression packet picture; wherein every expression packet picture in the preset expression packet library has at least one face region, and each expression packet picture is associated with at least one expression label.
That is, in this embodiment, the expression packet pictures associated with the expression label obtained in step S20 are first retrieved from the preset expression packet library (one expression packet picture may correspond to one or more expression labels). At this point more than one expression packet picture may correspond to the label, in which case one of them is chosen according to requirements and recorded as the expression packet picture matched from the preset expression packet library. After the unique expression packet picture matching the expression label of the facial image has been chosen from the preset library, the location of the face region in that picture must be determined, so that the facial features extracted from the facial image can be overlaid onto that face region, replacing the image of the original face region in the expression packet picture, to generate a new personalized expression packet.
In one embodiment, as shown in Fig. 4, step S30, namely matching an expression packet picture from the preset expression packet library according to the expression label of the facial image and determining the location of the face region of the matched picture, comprises:
S301: obtain the facial contour extracted from the facial image. In this embodiment, the facial contour is the edge contour of the face in the facial image.
S302: choose from the preset expression packet library all expression packet pictures whose expression label is identical to that of the facial image.
S303: determine, for each chosen expression packet picture, the location of its face region and the contour of that location. The location of the face region is the position of the face of any subject with a face (such as a person, an animal, or a cartoon character) in an expression packet picture; the location contour is the contour of the corresponding position (for example the facial contour of the face).
S304: obtain the similarity between the facial contour and each location contour. The similarity may be computed by comparing similarity parameters such as the sizes occupied by the two contours and the variation in curvature of their contour lines. Each similarity parameter may be given a different weight; after each parameter is normalized, it is multiplied by its corresponding weight, and the sum of the products of the normalized parameters and their weights is taken as the similarity measure: the larger the sum, the higher the similarity.
S305: record the expression packet picture corresponding to the location contour with the highest similarity as the expression packet picture uniquely matching the facial image, and at the same time obtain the location of the face region of that uniquely matching picture.
That is, in this embodiment, among all the expression packet pictures in the preset expression packet library whose expression label corresponds to that of the facial image, the picture whose face contour has the highest similarity to the facial contour of the facial image can be chosen and recorded as the expression packet picture matched from the preset library. This makes it convenient later to adapt the facial features extracted from the facial image to the face region to the greatest extent, replacing the image of the original face region in the expression packet picture to generate the new personalized expression packet.
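The weighted, normalized similarity of step S304 might look like the following minimal sketch. The choice of parameters (area and curvature), the normalization to [0, 1], and the weight values are assumptions; the patent only specifies normalizing each parameter, multiplying by a weight, and summing.

```python
# Hypothetical sketch of the weighted contour-similarity score in step S304.
# Parameter names, weights, and candidate values are illustrative.

def contour_similarity(params, weights):
    """params: dict of similarity parameters already normalized to [0, 1],
    where 1 means identical (e.g. 1 - |area_a - area_b| / max(area_a, area_b)).
    weights: matching dict of weights, assumed here to sum to 1.
    Returns the weighted sum; larger means more similar."""
    return sum(params[name] * weights[name] for name in params)

weights = {"area": 0.6, "curvature": 0.4}
candidates = {
    "meme_cat.png": {"area": 0.9, "curvature": 0.7},
    "meme_dog.png": {"area": 0.5, "curvature": 0.95},
}
best = max(candidates, key=lambda pic: contour_similarity(candidates[pic], weights))
print(best)  # the picture with the highest weighted similarity is matched
```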
In one embodiment, matching an expression packet picture from the preset expression packet library according to the expression label of the facial image, and determining the location of the face region of the matched picture, comprises:
choosing from the preset expression packet library all expression packet pictures whose expression label is identical to that of the facial image; the pictures in the preset expression packet library may be legally obtained from third parties specializing in the production of expression packets, and every picture in the library is required to have a face region (which may be a partial or a complete face);
determining, according to a preset screening rule, the expression packet picture uniquely matching the facial image from all the chosen pictures; understandably, the screening rule may be random selection, or selection by usage frequency: for example, the picture the user personally uses most often may be chosen, i.e. the more often the user uses an expression packet picture, the higher its probability of being selected. Similarly, the screening rule may first count, over all users, the total number of times each of the chosen pictures corresponding to the expression label in the preset library has been used, and select the picture with the highest total usage. Further, the total usage may be converted into a popularity through a preset conversion rule (a conversion rule contains the associations between ranges of total usage and different popularities, and one total usage value maps to exactly one popularity); likewise, the higher the popularity, the higher the probability of selection;
recording the picture determined by the screening rule as the expression packet picture uniquely matching the facial image, and at the same time extracting from it the location of its face region.
In this embodiment, the expression packet picture uniquely matching the facial image can be confirmed according to a screening rule based on, for example, user usage counts, so the template for the user's personalized expression packet is chosen according to the user's preferences. The generated personalized expression packet therefore better fits the user's habits and gives the user a better experience.
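The screening rules described above (random choice, usage frequency, and popularity conversion) can be sketched as follows. All counts, ranges, grades, and file names are illustrative assumptions.

```python
# Hypothetical sketch of the preset screening rule: choose the matched
# picture by total usage count, optionally converting counts to a
# popularity grade via range-based conversion rules.

import random

CONVERSION_RULE = [
    (0, 99, 1),        # 0-99 total uses   -> popularity 1
    (100, 999, 2),     # 100-999           -> popularity 2
    (1000, 10**9, 3),  # 1000 and above    -> popularity 3
]

def popularity(total_uses):
    # One total usage value maps to exactly one popularity grade.
    for low, high, grade in CONVERSION_RULE:
        if low <= total_uses <= high:
            return grade
    return 0

def screen(pictures, rule="usage"):
    """pictures: dict mapping picture name -> total usage count."""
    if rule == "random":
        return random.choice(list(pictures))
    if rule == "usage":
        return max(pictures, key=pictures.get)
    if rule == "popularity":
        return max(pictures, key=lambda p: popularity(pictures[p]))
    raise ValueError(rule)

usage = {"laugh_01.png": 42, "laugh_02.png": 1500, "laugh_03.png": 730}
print(screen(usage))  # highest total usage wins
```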
S40: extract the facial features from the facial image, and overlay them onto the location of the face region of the expression packet picture matched from the preset expression packet library, to generate a personalized expression packet.
In this embodiment, after the unique expression packet picture matching the expression label of the facial image has been chosen from the preset library, the location of the face region in that picture must be determined; the facial features extracted from the facial image then replace the image content inside the location contour of the original face region (that is, the contour of the position corresponding to the face region), and the expression packet picture now carrying the facial features of the facial image is composited into a new personalized expression packet. (For example, if the subject of an expression packet picture is a cute kitten, replacing the cat's face region with the facial features of the facial image yields a personalized expression packet of a cute kitten bearing the facial features of the facial image.)
In one embodiment, as shown in Fig. 5, in step S40, extracting the facial features from the facial image and overlaying them onto the location of the face region of the expression packet picture matched from the preset expression packet library comprises:
S401: obtain the location contour of the face region of the expression packet picture matched from the preset expression packet library, the overall placement angle of the face region, and the contour area of the face region. The location contour of the face region is the edge contour of the area occupied by the face region in the expression packet picture. The overall placement angle covers the tilt angle of the face region and whether it is upright or inverted; the angle may be determined with reference to one or more of the facial features. For example, the tilt angle may be determined from the angle between the horizontal and the straight line connecting the two opposite corners of one eye, and upright versus inverted from conditions such as whether the nose or mouth lies below the eyes (similarly, whether the mouth lies below the nose): if the nose or mouth is below the eyes, the face is upright; otherwise it is inverted. The contour area of the face region is the total area of the location contour.
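The placement-angle determination in step S401 can be sketched with a little coordinate geometry. The single-eye tilt heuristic and the mouth-below-eyes test follow the example in the text; the coordinate convention and all point values are assumptions.

```python
# Hypothetical sketch: tilt from the line through the two corners of one
# eye, upright vs inverted from whether the mouth lies below the eyes.
# Image coordinates are used (y grows downward); point values are made up.

import math

def placement_angle(eye_corner_a, eye_corner_b, eye_centre, mouth_centre):
    dx = eye_corner_b[0] - eye_corner_a[0]
    dy = eye_corner_b[1] - eye_corner_a[1]
    tilt_degrees = math.degrees(math.atan2(dy, dx))
    upright = mouth_centre[1] > eye_centre[1]  # mouth below eyes -> upright
    return tilt_degrees, upright

tilt, upright = placement_angle((100, 120), (140, 110), (120, 115), (125, 180))
print(round(tilt, 1), upright)
```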
S402: extract all facial features lying within the facial contour of the facial image, and determine the positional relationships between the centre points of the facial features and the straight-line distances between those centre points. The facial features include, but are not limited to, the ears, eyebrows, eyes, nose, and mouth; the positional relationship between centre points refers to their mutual distances, relative bearings, and the like.
S403: create a canvas whose contour is consistent with the location contour of the face region, and pre-process the facial features according to a preset image-processing method. That is, the canvas contour can coincide exactly with the location contour of the face region. The preset image processing includes, but is not limited to, transparency adjustment and colour correction of the facial features, so that the generated personalized expression packet looks more natural and attractive.
S404: while keeping the positional relationships between the centre points of the facial features, place all the pre-processed facial features into the canvas contour at the overall placement angle. That is, when the facial features are placed into the canvas contour, their relative positions must be preserved so that the expression of the facial image remains unchanged (if the relative positions between the facial features changed, the expression they form afterwards might no longer match the original expression of the facial image). Understandably, the features may be placed into the canvas contour by aligning the centre of the whole group of facial features with the centre of the canvas contour; and if the overall orientation formed by the facial features is inconsistent with the overall placement angle, it is first adjusted to match the overall placement angle before the features are placed into the canvas contour.
S405: adjust the straight-line distance between each pair of facial-feature center points by the same ratio, so that the ratio between the graphic area enclosed by the outermost facial features after the same-ratio adjustment and the contour area of the face region is within a preset ratio range. That is, the size of each facial feature is adjusted as a whole by uniformly scaling the distances between the feature center points by the same ratio; when the ratio between the area enclosed by the outermost facial features and the contour area of the face region is within the preset ratio range (which can be set as required), the sizes of the facial features on the face region are coordinated; otherwise, the facial features may be too large or too small for the canvas contour and look unbalanced. In other words, the same ratio must be chosen to satisfy the condition that the ratio between the graphic area enclosed by the outermost facial features after adjustment and the contour area of the face region is within the preset ratio range (candidate ratios can be arranged in advance by priority level and stored in a database, and the server automatically screens the database for a ratio meeting the condition); if several candidate ratios put the ratio within the preset range, one can be chosen at random or in order of priority.
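The screening of candidate scale factors in S405 can be sketched as a simple scan: apply the same ratio to every centre-to-centre distance, then accept the first ratio whose outermost-feature area over the contour area lands in the preset range. The bounding-box area, the scan bounds, and the preset range are all assumptions made for the example:

```python
def scale_to_fit(points, contour_area, lo=0.5, hi=1.5, step=0.05,
                 ratio_range=(0.4, 0.6)):
    """Scan uniform scale factors (the 'same ratio' applied to every
    centre-to-centre distance from the centroid) and return the first
    one for which the bounding-box area of the outermost features,
    divided by the face region's contour area, falls in ratio_range."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    s = lo
    while s <= hi + 1e-9:
        scaled = [(cx + (x - cx) * s, cy + (y - cy) * s) for x, y in points]
        xs = [x for x, _ in scaled]
        ys = [y for _, y in scaled]
        area = (max(xs) - min(xs)) * (max(ys) - min(ys))
        if ratio_range[0] <= area / contour_area <= ratio_range[1]:
            return s
        s += step
    return None  # no candidate ratio satisfies the condition

# Outermost features forming a 10 x 10 box, contour area 200:
corners = [(0, 0), (10, 0), (10, 10), (0, 10)]
ratio = scale_to_fit(corners, contour_area=200)
```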
S406: cover the canvas containing the facial features onto the contour of the position occupied by the face region of the expression packet picture matched from the preset expression packet set. That is, since the canvas contour can completely coincide with the contour of the position occupied by the face region, this step directly replaces the picture material within that contour with the canvas.
S407: perform image synthesis processing on the expression packet picture covered with the canvas contour to generate the personalized expression packet. The image synthesis processing includes, but is not limited to, merging the expression packet picture covered with the facial features into a single picture and applying unified exposure and color toning so that the result looks more natural.
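The covering-and-merging of S406/S407 is essentially alpha compositing. A dependency-free sketch over nested lists of RGBA tuples, using the standard source-over rule (a real implementation would use an imaging library; the pixel data here is invented):

```python
def alpha_over(base, overlay, ox, oy):
    """Source-over compositing of an RGBA overlay (the canvas) onto
    an RGBA base picture, both nested lists of (r, g, b, a) tuples,
    placed at offset (ox, oy). Mutates and returns base."""
    for y, row in enumerate(overlay):
        for x, (r, g, b, a) in enumerate(row):
            by, bx = oy + y, ox + x
            if not (0 <= by < len(base) and 0 <= bx < len(base[0])):
                continue  # overlay pixel falls outside the picture
            br, bg, bb, ba = base[by][bx]
            af = a / 255
            base[by][bx] = (round(r * af + br * (1 - af)),
                            round(g * af + bg * (1 - af)),
                            round(b * af + bb * (1 - af)),
                            max(a, ba))
    return base

picture = [[(255, 255, 255, 255), (255, 255, 255, 255)]]  # white strip
canvas = [[(0, 0, 0, 128)]]                  # one half-transparent pixel
merged = alpha_over(picture, canvas, 0, 0)
```

Compositing the half-transparent black pixel over white yields mid-grey at that position, while the uncovered pixel is left untouched.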
In one embodiment, as shown in Fig. 6, generating the personalized expression packet in step S407 comprises:
S4071: receive a text-addition instruction, and obtain the expression text entered by the user and the number of the text box chosen by the user. The text-addition instruction means that, after the image synthesis processing in step S407, if the user also wants to enter expression text into the personalized expression packet, a text-addition instruction can be sent to the server after the user triggers a programmable button by clicking, sliding or the like. The expression text is the text the user intends to place in the personalized expression packet. The text box number is the unique identifier of a text box that can be added to the personalized expression packet; each text box number corresponds to one text box style.
S4072: obtain the text box size and default text format associated with the text box number. That is, each text box number has a text box size that determines how much expression text the box can hold, and each text box corresponds to a default text format; if the user does not modify the default text format, the expression text is inserted into the text box in that format.
S4073: obtain the character count of the expression text, and adjust the character size in the default text format according to the character count and the text box size. That is, the character size can be adjusted automatically based on the character count (i.e., character length) of the expression text. Understandably, items of the default text format other than the character size can also be adjusted as required.
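The automatic character-size adjustment of S4073 can be sketched as shrinking the default size until the text wraps into the box. The monospace layout model (glyph width equals size, line height equals 1.2 times size) and the step/limit values are assumptions made for the example:

```python
def fit_char_size(char_count, box_width, box_height,
                  default_size=32, min_size=10):
    """Shrink the default character size in steps of 2 until
    char_count characters wrap into the box, under a rough monospace
    model: glyph width == size, line height == 1.2 * size."""
    size = default_size
    while size > min_size:
        per_line = max(1, box_width // size)   # characters per line
        lines = -(-char_count // per_line)     # ceiling division
        if lines * size * 1.2 <= box_height:
            return size
        size -= 2
    return min_size

size_short = fit_char_size(4, box_width=200, box_height=60)
size_long = fit_char_size(100, box_width=200, box_height=60)
```

A short caption keeps the default size, while a long one is driven down to the minimum, mirroring how a longer expression text forces smaller characters.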
S4074: generate, at a preset position or a user-selected position in the expression packet picture, a text box corresponding to the text box number, and insert the expression text into the text box according to the adjusted default text format. That is, after the default text format is adjusted, the expression text is inserted into the text box in the adjusted format.
S4075: assemble the expression packet picture and the text box to generate the personalized expression packet. The assembling refers to merging the text box with the image-synthesized expression packet picture into the same personalized expression packet.
That is, the above embodiment also supports user-defined expression text: the character size is adjusted automatically based on the character count (i.e., character length) of the expression text, and the text box filled with the expression text is automatically assembled with the expression packet picture into the personalized expression packet. Understandably, prop effects can likewise be added to the personalized expression packet, for example props such as hearts, hats, stars and other effects.
In one embodiment, as shown in Fig. 7, an expression packet automatic generation apparatus is provided, which corresponds one-to-one with the expression packet automatic generation method in the above embodiments. The expression packet automatic generation apparatus comprises:
an acquisition module 11 for acquiring a facial image;
an extraction module 12 for extracting facial micro-expressions from the facial image, and obtaining the expression label of the facial image according to the facial micro-expressions;
a matching module 13 for matching an expression packet picture from a preset expression packet set according to the expression label of the facial image, and determining the position occupied by the face region of the matched expression packet picture; wherein each expression packet picture in the preset expression packet set has at least one face region, and each expression packet picture is associated with at least one expression label;
an overlay module 14 for extracting the facial features in the facial image, and covering the facial features onto the position occupied by the face region of the expression packet picture matched from the preset expression packet set, to generate a personalized expression packet.
In one embodiment, as shown in Fig. 8, the extraction module 12 comprises:
an extraction unit 121 for extracting all action unit types of the facial micro-expressions from the facial image;
a confirmation unit 122 for confirming the micro-expression type of the facial image according to all the action unit types extracted from the facial image;
an acquisition unit 123 for obtaining all expression labels associated with the micro-expression type, and obtaining the characteristic action units associated with each expression label;
a matching unit 124 for matching all the action unit types extracted from the facial image against the characteristic action units associated with each expression label, and, when all the action unit types extracted from the facial image contain all the characteristic action units associated with an expression label, recording that expression label as the expression label of the facial image.
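The test applied by the matching unit, that every characteristic action unit of a label appears among the extracted action unit types, is set containment. A minimal sketch; the FACS-style unit codes and labels are illustrative, not taken from the patent:

```python
def match_expression_labels(extracted_au_types, label_to_feature_aus):
    """Record every expression label whose characteristic action
    units are all contained in the action unit types extracted from
    the facial image (set containment)."""
    extracted = set(extracted_au_types)
    return [label for label, aus in label_to_feature_aus.items()
            if set(aus) <= extracted]

# Hypothetical extracted units and label-to-characteristic-unit map.
labels = match_expression_labels(
    ["AU6", "AU12", "AU25"],
    {"happy": ["AU6", "AU12"], "sad": ["AU1", "AU15"]},
)
```

Only "happy" is recorded here, because both of its characteristic units were extracted while "sad" is missing both of its own.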
For specific limitations of the expression packet automatic generation apparatus, reference may be made to the limitations of the expression packet automatic generation method above, which are not repeated here. The modules in the above apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor in a computer device in hardware form, or stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided. The computer device may be a server, and its internal structure may be as shown in Fig. 9. The computer device comprises a processor, a memory, a network interface and a database connected through a system bus. The processor of the computer device provides computing and control capability. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer-readable instructions and a database. The internal memory provides an environment for running the operating system and the computer-readable instructions in the non-volatile storage medium. The computer-readable instructions, when executed by the processor, implement an expression packet automatic generation method.
In one embodiment, a computer device is provided, comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, wherein the processor implements the following steps when executing the computer-readable instructions:
acquiring a facial image;
extracting facial micro-expressions from the facial image, and obtaining the expression label of the facial image according to the facial micro-expressions;
matching an expression packet picture from a preset expression packet set according to the expression label of the facial image, and determining the position occupied by the face region of the matched expression packet picture; wherein each expression packet picture in the preset expression packet set has at least one face region, and each expression packet picture is associated with at least one expression label;
extracting the facial features in the facial image, and covering the facial features onto the position occupied by the face region of the expression packet picture matched from the preset expression packet set, to generate a personalized expression packet.
In one embodiment, a computer-readable storage medium is provided, on which computer-readable instructions are stored; the computer-readable instructions, when executed by a processor, implement the following steps:
acquiring a facial image;
extracting facial micro-expressions from the facial image, and obtaining the expression label of the facial image according to the facial micro-expressions;
matching an expression packet picture from a preset expression packet set according to the expression label of the facial image, and determining the position occupied by the face region of the matched expression packet picture; wherein each expression packet picture in the preset expression packet set has at least one face region, and each expression packet picture is associated with at least one expression label;
extracting the facial features in the facial image, and covering the facial features onto the position occupied by the face region of the expression packet picture matched from the preset expression packet set, to generate a personalized expression packet.
Those of ordinary skill in the art will appreciate that all or part of the processes in the above method embodiments can be completed by computer-readable instructions instructing relevant hardware; the computer-readable instructions can be stored in a non-volatile computer-readable storage medium, and, when executed, may include the processes of the above method embodiments. Any reference to memory, storage, a database or other media used in the embodiments provided by the present invention may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the division of the above functional units or modules is illustrated; in practical applications, the above functions may be allocated to different functional units or modules as required, i.e., the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above.
The above embodiments are merely illustrative of the technical solutions of the present invention and are not limiting; although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some of their technical features can be equivalently replaced; and such modifications or replacements, which do not depart the essence of the corresponding technical solutions from the spirit and scope of the technical solutions of the embodiments of the present invention, shall all be included within the protection scope of the present invention.

Claims (10)

1. An expression packet automatic generation method, characterized by comprising:
acquiring a facial image;
extracting facial micro-expressions from the facial image, and obtaining the expression label of the facial image according to the facial micro-expressions;
matching an expression packet picture from a preset expression packet set according to the expression label of the facial image, and determining the position occupied by the face region of the matched expression packet picture; wherein each expression packet picture in the preset expression packet set has at least one face region, and each expression packet picture is associated with at least one expression label;
extracting the facial features in the facial image, and covering the facial features onto the position occupied by the face region of the expression packet picture matched from the preset expression packet set, to generate a personalized expression packet.
2. The expression packet automatic generation method of claim 1, wherein the extracting facial micro-expressions from the facial image and obtaining the expression label of the facial image according to the facial micro-expressions comprises:
extracting all action unit types of the facial micro-expressions from the facial image;
confirming the micro-expression type of the facial image according to all the action unit types extracted from the facial image;
obtaining all expression labels associated with the micro-expression type, and obtaining the characteristic action units associated with each expression label;
matching all the action unit types extracted from the facial image against the characteristic action units associated with each expression label, and, when all the action unit types extracted from the facial image contain all the characteristic action units associated with one expression label, recording that expression label as the expression label of the facial image.
3. The expression packet automatic generation method of claim 1, wherein the matching an expression packet picture from a preset expression packet set according to the expression label of the facial image and determining the position occupied by the face region of the matched expression packet picture comprises:
extracting the facial contour from the facial image;
choosing, from the preset expression packet set, all expression packet pictures whose expression label is identical to that of the facial image;
determining the position occupied by the face region of each chosen expression packet picture and the contour of that position;
obtaining the similarity between the facial contour and each position contour;
recording the expression packet picture corresponding to the position contour with the highest similarity as the expression packet picture uniquely matching the facial image, and obtaining the position occupied by the face region of the expression packet picture uniquely matching the facial image.
4. The expression packet automatic generation method of claim 1, wherein the matching an expression packet picture from a preset expression packet set according to the expression label of the facial image and determining the position occupied by the face region of the matched expression packet picture comprises:
choosing, from the preset expression packet set, all expression packet pictures whose expression label is identical to that of the facial image;
determining, from all the chosen expression packet pictures and according to a preset screening rule, the expression packet picture uniquely matching the facial image;
recording the expression packet picture determined according to the screening rule as the expression packet picture uniquely matching the facial image, and extracting the position occupied by its face region from the expression packet picture uniquely matching the facial image.
5. The expression packet automatic generation method of claim 1, wherein the extracting the facial features in the facial image and covering the facial features onto the position occupied by the face region of the expression packet picture matched from the preset expression packet set comprises:
obtaining, from the expression packet picture matched from the preset expression packet set, the contour of the position occupied by the face region, the overall placement angle of the face region, and the contour area of the face region;
extracting all facial features located within the facial contour of the facial image, and determining the positional relationship between the facial-feature center points and the straight-line distance between each pair of facial-feature center points;
creating a canvas whose canvas contour is consistent with the contour of the position occupied by the face region, and pre-processing the facial features according to a preset image processing mode;
while maintaining the positional relationship between the facial-feature center points, placing all the pre-processed facial features into the canvas contour according to the overall placement angle;
adjusting the straight-line distance between each pair of facial-feature center points by the same ratio, so that the ratio between the graphic area enclosed by the outermost facial features after the same-ratio adjustment and the contour area of the face region is within a preset ratio range;
covering the canvas containing the facial features onto the contour of the position occupied by the face region of the expression packet picture matched from the preset expression packet set;
performing image synthesis processing on the expression packet picture covered with the canvas contour to generate the personalized expression packet.
6. The expression packet automatic generation method of claim 1, wherein the generating a personalized expression packet comprises:
receiving a text-addition instruction, and obtaining the expression text entered by the user and the number of the text box chosen by the user;
obtaining the text box size and default text format associated with the text box number;
obtaining the character count of the expression text, and adjusting the character size in the default text format according to the character count and the text box size;
generating, at a preset position or a user-selected position in the expression packet picture, a text box corresponding to the text box number, and inserting the expression text into the text box according to the adjusted default text format;
assembling the expression packet picture and the text box to generate the personalized expression packet.
7. An expression packet automatic generation apparatus, characterized by comprising:
an acquisition module for acquiring a facial image;
an extraction module for extracting facial micro-expressions from the facial image, and obtaining the expression label of the facial image according to the facial micro-expressions;
a matching module for matching an expression packet picture from a preset expression packet set according to the expression label of the facial image, and determining the position occupied by the face region of the matched expression packet picture; wherein each expression packet picture in the preset expression packet set has at least one face region, and each expression packet picture is associated with at least one expression label;
an overlay module for extracting the facial features in the facial image, and covering the facial features onto the position occupied by the face region of the expression packet picture matched from the preset expression packet set, to generate a personalized expression packet.
8. The expression packet automatic generation apparatus of claim 7, wherein the extraction module comprises:
an extraction unit for extracting all action unit types of the facial micro-expressions from the facial image;
a confirmation unit for confirming the micro-expression type of the facial image according to all the action unit types extracted from the facial image;
an acquisition unit for obtaining all expression labels associated with the micro-expression type, and obtaining the characteristic action units associated with each expression label;
a matching unit for matching all the action unit types extracted from the facial image against the characteristic action units associated with each expression label, and, when all the action unit types extracted from the facial image contain all the characteristic action units associated with an expression label, recording that expression label as the expression label of the facial image.
9. A computer device comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, characterized in that the processor, when executing the computer-readable instructions, implements the expression packet automatic generation method of any one of claims 1 to 6.
10. A computer-readable storage medium storing computer-readable instructions, characterized in that the computer-readable instructions, when executed by a processor, implement the expression packet automatic generation method of any one of claims 1 to 6.
CN201910602401.7A 2019-07-05 2019-07-05 Expression packet automatic generation method, device, computer equipment and storage medium Pending CN110458916A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910602401.7A CN110458916A (en) 2019-07-05 2019-07-05 Expression packet automatic generation method, device, computer equipment and storage medium
PCT/CN2020/085573 WO2021004114A1 (en) 2019-07-05 2020-04-20 Automatic meme generation method and apparatus, computer device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910602401.7A CN110458916A (en) 2019-07-05 2019-07-05 Expression packet automatic generation method, device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN110458916A true CN110458916A (en) 2019-11-15

Family

ID=68482133

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910602401.7A Pending CN110458916A (en) 2019-07-05 2019-07-05 Expression packet automatic generation method, device, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN110458916A (en)
WO (1) WO2021004114A1 (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110889379A (en) * 2019-11-29 2020-03-17 深圳先进技术研究院 Expression package generation method and device and terminal equipment
CN111046814A (en) * 2019-12-18 2020-04-21 维沃移动通信有限公司 Image processing method and electronic device
CN111145283A (en) * 2019-12-13 2020-05-12 北京智慧章鱼科技有限公司 Expression personalized generation method and device for input method
CN111368127A (en) * 2020-03-06 2020-07-03 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer equipment and storage medium
CN111476154A (en) * 2020-04-03 2020-07-31 深圳传音控股股份有限公司 Expression package generation method, device, device, and computer-readable storage medium
CN111860372A (en) * 2020-07-24 2020-10-30 中国平安人寿保险股份有限公司 Artificial intelligence-based expression package generation method, device, device and storage medium
CN112102157A (en) * 2020-09-09 2020-12-18 咪咕文化科技有限公司 Video face changing method, electronic device and computer readable storage medium
CN112214632A (en) * 2020-11-03 2021-01-12 虎博网络技术(上海)有限公司 File retrieval method and device and electronic equipment
WO2021004114A1 (en) * 2019-07-05 2021-01-14 深圳壹账通智能科技有限公司 Automatic meme generation method and apparatus, computer device and storage medium
CN112270733A (en) * 2020-09-29 2021-01-26 北京五八信息技术有限公司 AR expression package generation method and device, electronic equipment and storage medium
CN112905791A (en) * 2021-02-20 2021-06-04 北京小米松果电子有限公司 Expression package generation method and device and storage medium
CN113727024A (en) * 2021-08-30 2021-11-30 北京达佳互联信息技术有限公司 Multimedia information generation method, apparatus, electronic device, storage medium, and program product
CN113781666A (en) * 2021-09-15 2021-12-10 广州虎牙科技有限公司 Image generation method, device and electronic device
CN114419177A (en) * 2022-01-07 2022-04-29 上海序言泽网络科技有限公司 Personalized expression package generation method and system, electronic equipment and readable medium
CN114816599A (en) * 2021-01-22 2022-07-29 北京字跳网络技术有限公司 Image display method, apparatus, device and medium
CN115589453A (en) * 2022-09-27 2023-01-10 维沃移动通信有限公司 Video processing method and device, electronic equipment and storage medium
CN117974853A (en) * 2024-03-29 2024-05-03 成都工业学院 Self-adaptive switching generation method, system, terminal and medium for homologous micro-expression image

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113177994B (en) * 2021-03-25 2022-09-06 云南大学 Network social emoticon synthesis method based on image-text semantics, electronic equipment and computer readable storage medium
CN113485596B (en) * 2021-07-07 2023-12-22 游艺星际(北京)科技有限公司 Virtual model processing method and device, electronic equipment and storage medium
CN114693827B (en) * 2022-04-07 2025-03-25 深圳云之家网络有限公司 Expression generation method, device, computer equipment and storage medium
CN115294388A (en) * 2022-07-25 2022-11-04 深圳市百川数安科技有限公司 Expression package identification method and device based on Internet community and storage medium
CN117150063B (en) * 2023-10-26 2024-02-06 深圳慢云智能科技有限公司 Image generation method and system based on scene recognition

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104063683A (en) * 2014-06-06 2014-09-24 北京搜狗科技发展有限公司 Expression input method and device based on face identification
US20180024726A1 (en) * 2016-07-21 2018-01-25 Cives Consulting AS Personified Emoji
CN108573527A (en) * 2018-04-18 2018-09-25 腾讯科技(深圳)有限公司 A kind of expression picture generation method and its equipment, storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107219917A (en) * 2017-04-28 2017-09-29 北京百度网讯科技有限公司 Emoticon generation method and device, computer equipment and computer-readable recording medium
CN108197206A (en) * 2017-12-28 2018-06-22 努比亚技术有限公司 Expression packet generation method, mobile terminal and computer readable storage medium
CN110458916A (en) * 2019-07-05 2019-11-15 深圳壹账通智能科技有限公司 Expression packet automatic generation method, device, computer equipment and storage medium


Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021004114A1 (en) * 2019-07-05 2021-01-14 深圳壹账通智能科技有限公司 Automatic meme generation method and apparatus, computer device and storage medium
CN110889379A (en) * 2019-11-29 2020-03-17 深圳先进技术研究院 Expression package generation method and device and terminal equipment
CN110889379B (en) * 2019-11-29 2024-02-20 深圳先进技术研究院 Expression package generation method and device and terminal equipment
CN111145283A (en) * 2019-12-13 2020-05-12 北京智慧章鱼科技有限公司 Expression personalized generation method and device for input method
CN111046814A (en) * 2019-12-18 2020-04-21 维沃移动通信有限公司 Image processing method and electronic device
CN111368127B (en) * 2020-03-06 2023-03-24 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer equipment and storage medium
CN111368127A (en) * 2020-03-06 2020-07-03 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer equipment and storage medium
CN111476154A (en) * 2020-04-03 2020-07-31 深圳传音控股股份有限公司 Expression package generation method, device, device, and computer-readable storage medium
CN111860372A (en) * 2020-07-24 2020-10-30 中国平安人寿保险股份有限公司 Artificial intelligence-based expression package generation method, device, device and storage medium
CN111860372B (en) * 2020-07-24 2024-11-19 中国平安人寿保险股份有限公司 Artificial intelligence-based expression package generation method, device, equipment and storage medium
CN112102157A (en) * 2020-09-09 2020-12-18 咪咕文化科技有限公司 Video face changing method, electronic device and computer readable storage medium
CN112102157B (en) * 2020-09-09 2024-07-09 咪咕文化科技有限公司 Video face changing method, electronic device and computer readable storage medium
CN112270733A (en) * 2020-09-29 2021-01-26 北京五八信息技术有限公司 AR expression package generation method and device, electronic equipment and storage medium
CN112214632B (en) * 2020-11-03 2023-11-17 虎博网络技术(上海)有限公司 Text retrieval method and device and electronic equipment
CN112214632A (en) * 2020-11-03 2021-01-12 虎博网络技术(上海)有限公司 Text retrieval method and device and electronic equipment
CN114816599B (en) * 2021-01-22 2024-02-27 北京字跳网络技术有限公司 Image display method, device, equipment and medium
CN114816599A (en) * 2021-01-22 2022-07-29 北京字跳网络技术有限公司 Image display method, apparatus, device and medium
US12106410B2 (en) 2021-01-22 2024-10-01 Beijing Zitiao Network Technology Co., Ltd. Customizing emojis for users in chat applications
CN112905791A (en) * 2021-02-20 2021-06-04 北京小米松果电子有限公司 Expression package generation method and device and storage medium
US11922725B2 (en) 2021-02-20 2024-03-05 Beijing Xiaomi Pinecone Electronics Co., Ltd. Method and device for generating emoticon, and storage medium
CN113727024A (en) * 2021-08-30 2021-11-30 北京达佳互联信息技术有限公司 Multimedia information generation method, apparatus, electronic device, storage medium, and program product
CN113781666A (en) * 2021-09-15 2021-12-10 广州虎牙科技有限公司 Image generation method, apparatus and electronic device
CN114419177A (en) * 2022-01-07 2022-04-29 上海序言泽网络科技有限公司 Personalized expression package generation method and system, electronic equipment and readable medium
CN114419177B (en) * 2022-01-07 2025-06-17 上海序言泽网络科技有限公司 Personalized expression package generation method, system, electronic device and readable medium
CN115589453A (en) * 2022-09-27 2023-01-10 维沃移动通信有限公司 Video processing method and device, electronic equipment and storage medium
CN117974853A (en) * 2024-03-29 2024-05-03 成都工业学院 Self-adaptive switching generation method, system, terminal and medium for homologous micro-expression image
CN117974853B (en) * 2024-03-29 2024-06-11 成都工业学院 Method, system, terminal and medium for adaptive switching generation of homologous micro-expression images

Also Published As

Publication number Publication date
WO2021004114A1 (en) 2021-01-14

Similar Documents

Publication Publication Date Title
CN110458916A (en) Expression packet automatic generation method, device, computer equipment and storage medium
US11889230B2 (en) Video conferencing method
KR102241153B1 (en) Method, apparatus, and system generating 3d avartar from 2d image
JP6972043B2 (en) Information processing equipment, information processing methods and programs
JP2021519995A (en) Image processing methods, devices, computer devices and computer programs
CN110390632B (en) Image processing method and device based on dressing template, storage medium and terminal
JP6956389B2 (en) Makeup support device and makeup support method
KR101635730B1 (en) Apparatus and method for generating montage, recording medium for performing the method
JPWO2008102440A1 (en) Makeup face image generating apparatus and method
JP7278724B2 (en) Information processing device, information processing method, and information processing program
US11145091B2 (en) Makeup simulation device, method, and non-transitory recording medium
CN104951770B (en) Construction method, application process and the related device of face database
CN110755847B (en) Virtual operation object generation method and device, storage medium and electronic device
EP3370207B1 (en) Makeup parts generating device and makeup parts generating method
JP2022505746A (en) Digital character blending and generation systems and methods
CN109886144A (en) Virtual examination forwarding method, device, computer equipment and storage medium
CN109509141A (en) Image processing method, head portrait setting method and device
WO2025066458A1 (en) Try-on image generation method and generation system, electronic device, and storage medium
US10152827B2 (en) Three-dimensional modeling method and electronic apparatus thereof
CN114841851B (en) Image generation method, device, electronic device and storage medium
WO2022146766A1 (en) Digital makeup artist
KR102136137B1 (en) Customized LED mask pack manufacturing apparatus thereof
US20180181110A1 (en) System and method of generating a custom eyebrow stencil
CN115826835A (en) Data processing method and device and readable storage medium
CN104715505A (en) Three-dimensional avatar generation system, device and method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20191115
