
CN114549696B - Method, device, storage medium, and processor for generating emoticon package


Info

Publication number
CN114549696B
Authority
CN
China
Prior art keywords
time period
target
expression
expression package
user interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210139223.0A
Other languages
Chinese (zh)
Other versions
CN114549696A (en)
Inventor
辛一
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202210139223.0A
Publication of CN114549696A
Application granted
Publication of CN114549696B
Legal status: Active


Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 - 2D [Two Dimensional] image generation
    • G06T 11/60 - Editing figures and text; Combining figures or text
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 - Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 - Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N 21/23418 - Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics
    • H04N 21/235 - Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/435 - Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/44008 - Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N 21/47 - End-user applications
    • H04N 21/488 - Data services, e.g. news ticker
    • H04N 21/4884 - Data services for displaying subtitles

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a method and a device for generating an expression package, together with a storage medium and a processor. A graphical user interface is provided through a terminal device, and the method comprises: detecting the motion change amplitude of a target object in the graphical user interface; in response to the motion change amplitude meeting a first preset condition, capturing a target picture from the graphical user interface; detecting the bullet screen rate of the graphical user interface in a first time period, where the first time period is determined by the moment the target picture is captured; in response to the bullet screen rate meeting a second preset condition, determining target text from the bullet screen content displayed in the first time period; and generating the expression package using the target picture and the target text. The method solves the technical problem in the related art that complex production steps make expression package generation inefficient.

Description

Expression package generation method and device, storage medium and processor
Technical Field
The invention relates to the field of generation of expression packages, in particular to a method, a device, a storage medium and a processor for generating an expression package.
Background
At present, making an expression package generally involves background selection, text editing, compositing, and similar operations. The procedure is complex, so a user spends a long time producing an expression package for a real-time scene, and the generation efficiency of expression packages is therefore low.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The embodiments of the invention provide a method, a device, a storage medium, and a processor for generating an expression package, which at least solve the technical problem in the related art that complex production steps make expression package generation inefficient.
According to one aspect of the embodiments of the invention, there is provided an expression package generation method in which a graphical user interface is provided through a terminal device. The method comprises: detecting the motion change amplitude of a target object in the graphical user interface; in response to the motion change amplitude meeting a first preset condition, capturing a target picture from the graphical user interface, where the content displayed in the target picture comprises at least one of a face area and at least a partial limb area of the target object; detecting the bullet screen rate of the graphical user interface in a first time period, where the first time period is determined by the moment the target picture is captured; in response to the bullet screen rate meeting a second preset condition, determining target text from the bullet screen content displayed in the first time period; and generating the expression package using the target picture and the target text.
Optionally, detecting the motion change amplitude of the target object comprises: obtaining a first motion of the target object in a second time period and a second motion of the target object in a third time period, where the second time period is a historical time period, the third time period is the current time period, the first motion is the reference motion, and the second motion is the motion to be compared; and determining the motion change amplitude based on the first motion and the second motion.
Optionally, the motion change amplitude meeting the first preset condition comprises the motion change amplitude exceeding a preset amplitude range.
Optionally, detecting the bullet screen rate in the first time period comprises: obtaining the number of bullet screens appearing in the first time period and the duration corresponding to the first time period, where the starting moment of the first time period is the moment the target picture is captured; and determining the bullet screen rate based on the number of bullet screens and the duration.
Optionally, the bullet screen rate meeting the second preset condition comprises the bullet screen rate exceeding a preset multiple of the average bullet screen rate.
Optionally, determining the target text from the bullet screen content displayed in the first time period comprises: obtaining the bullet screen with the highest frequency of occurrence from the bullet screen content displayed in the first time period; and determining the target text using that bullet screen.
Optionally, determining the target text from the bullet screen content displayed in the first time period comprises: obtaining the keyword with the highest frequency of occurrence from the bullet screen content displayed in the first time period, where that keyword is the word repeated the greatest number of times in the bullet screen content; and determining the target text using that keyword.
Optionally, generating the expression package using the target picture and the target text comprises: determining the display position of the target object in the target picture; determining a first area and a second area in the target picture using the display position, where the first area is the display area for the target text and the second area is the area to be removed, determined from the first area; and filling the target text into any position in the first area and removing the second area to generate the expression package.
Optionally, the expression package generation method further comprises: displaying the expression package in a popup window; and, in response to an editing operation performed on the expression package, adjusting the display content and/or display position of the target text.
Optionally, the expression package generation method further comprises: displaying the expression package in a chat interface provided by the graphical user interface, or displaying the expression package in the form of a bullet screen in the graphical user interface.
According to another embodiment of the invention, there is also provided an expression package generation apparatus in which a graphical user interface is provided through a terminal device. The apparatus comprises: a first detection module, used for detecting the motion change amplitude of a target object in the graphical user interface; an interception module, used for capturing a target picture from the graphical user interface in response to the motion change amplitude meeting a first preset condition, where the content displayed in the target picture comprises a face area and at least a partial limb area of the target object; a second detection module, used for detecting the bullet screen rate of the graphical user interface in a first time period, where the first time period is determined by the moment the target picture is captured; a determination module, used for determining target text from the bullet screen content displayed in the first time period in response to the bullet screen rate meeting a second preset condition; and a generation module, used for generating an expression package using the target picture and the target text.
According to an embodiment of the present invention, there is also provided a nonvolatile storage medium in which a computer program is stored, wherein the computer program is configured to execute, when run, the expression package generation method of any one of the above.
According to an embodiment of the present invention, there is further provided a processor for running a program, wherein the program is configured to execute, when run, the expression package generation method of any one of the above.
According to an embodiment of the present invention, there is also provided an electronic device including a memory in which a computer program is stored, and a processor configured to run the computer program to perform the expression package generation method of any one of the above.
In the embodiments of the invention, the motion change amplitude of a target object in a graphical user interface is first detected, and a target picture can be captured from the graphical user interface in response to the motion change amplitude meeting a first preset condition, where the content displayed in the target picture comprises a face area and at least a partial limb area of the target object. The bullet screen rate of the graphical user interface in a first time period is then detected, where the first time period is determined by the moment the target picture is captured; target text is determined from the bullet screen content displayed in the first time period in response to the bullet screen rate meeting a second preset condition; and an expression package is generated using the target picture and the target text, thereby achieving the aim of quickly generating an expression package. Notably, the target picture and the target text used to make the expression package come from the content displayed in real time in the graphical user interface, so the produced expression package is closely tied to the content currently displayed and to the current scene. The user does not need to carry out the production manually; the whole process is performed based on the content displayed in the graphical user interface, which improves the efficiency of producing expression packages and solves the technical problem in the related art that complex production steps make expression package generation inefficient.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
Fig. 1 is a block diagram of the hardware structure of a mobile terminal for an expression package generation method according to an embodiment of the present invention;
Fig. 2 is a flowchart of an expression package generation method according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of an expression package editing process according to an embodiment of the present invention;
Fig. 4 is a flowchart of another expression package generation method according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of a preset expression package template according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of removing invalid portions from a picture according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of an expression package generation apparatus according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The method embodiments may be performed in a mobile terminal, a computer terminal, or a similar computing device. Taking a mobile terminal as an example, the mobile terminal can be a terminal device such as a smartphone (e.g. an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile internet device (MID), a PAD, or a game console. Fig. 1 is a block diagram of the hardware structure of a mobile terminal according to an embodiment of the present invention. As shown in Fig. 1, a mobile terminal may include one or more processors 102 (only one is shown in Fig. 1; the processors 102 may include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processing (DSP) chip, a microcontroller unit (MCU), a field-programmable gate array (FPGA), a neural processing unit (NPU), a tensor processing unit (TPU), or an artificial intelligence (AI) processor) and a memory 104 for storing data. Optionally, the mobile terminal may further include a transmission device 106, an input-output device 108, and a display device 110 for communication functions. It will be appreciated by those skilled in the art that the structure shown in Fig. 1 is merely illustrative and does not limit the structure of the mobile terminal described above. For example, the mobile terminal may also include more or fewer components than shown in Fig. 1, or have a different configuration than shown in Fig. 1.
The memory 104 may be used to store a computer program, for example, a software program of an application software and a module, such as a computer program corresponding to the expression pack generation method in the embodiment of the present invention, and the processor 102 executes the computer program stored in the memory 104, thereby performing various functional applications and data processing, that is, implementing the expression pack generation method described above. Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located relative to the processor 102, which may be connected to the mobile terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, simply referred to as a NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is configured to communicate with the internet wirelessly.
The input to the input-output device 108 may come from a plurality of human interface devices (HIDs), such as a keyboard and mouse, a gamepad, or other special game controllers (e.g. a steering wheel, fishing rod, dance mat, or remote control). Some human interface devices may provide output functions in addition to input functions, such as the force feedback and vibration of a gamepad or the audio output of a controller.
The display device 110 may be, for example, a head-up display (HUD), a touch-screen liquid crystal display (LCD), or a touch display (also referred to as a "touch screen"). The liquid crystal display may enable the user to interact with the user interface of the mobile terminal. In some embodiments, the mobile terminal has a graphical user interface (GUI) with which the user may interact through finger contacts and/or gestures on a touch-sensitive surface. The human-machine interaction functionality here optionally includes interactions such as creating web pages, drawing, word processing, making electronic documents, games, video conferencing, instant messaging, sending and receiving e-mail, call interfaces, playing digital video, playing digital music, and/or web browsing; the executable instructions for performing these human-machine interaction functions are configured/stored in one or more processor-executable computer program products or readable storage media.
According to an embodiment of the present invention, there is provided a method embodiment of expression package generation. It should be noted that the steps shown in the flowcharts of the accompanying drawings may be performed in a computer system, such as one executing a set of computer-executable instructions, and, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the one shown or described herein.
Fig. 2 is a flowchart of an expression package generation method according to an embodiment of the present invention. As shown in Fig. 2, a graphical user interface is provided through a terminal device, and the method comprises the following steps:
Step S202, detecting the motion change amplitude of the target object in the graphical user interface.
The graphical user interface may be a graphical user interface provided by an electronic device including a display screen, such as a terminal device, a computer device, or the like.
The graphical user interface may display a live broadcast picture, a game scene picture, a real-time news picture, and the like. The target object can be a host (anchor) in a live broadcast, a virtual character in a game scene, or a person in a real-time news picture, and can also be an actor in a TV series, a guest in a variety show, a cartoon character in an animation, a virtual character, and so on. For illustration, this application takes the graphical user interface to be a live broadcast interface and the target object to be a host.
The change amplitude of the target object can be the expression change amplitude of the host, the limb change amplitude of the host, and the like.
In an alternative embodiment, the expression change amplitude of the host in the live broadcast scene can be detected; when the host's expression changes greatly, for example when the host laughs out loud, opens the mouth wide, or makes another exaggerated expression, an expression package corresponding to the host's expression is generated by capturing that expression, so that expression packages related to the host's current state can be quickly produced, adding to the fun of interaction in the live broadcast room.
Step S204, in response to the motion change amplitude meeting the first preset condition, capturing a target picture from the graphical user interface.
The content displayed in the target picture comprises at least one of a face area and at least a partial limb area of the target object.
The first preset condition can be set as needed.
In an alternative embodiment, the motion is considered to have changed substantially when the motion change amplitude satisfies the first preset condition. Specifically, taking facial expression as an example, facial organs such as the eyes, mouth, and nose are detected, and the state that remains unchanged over a long period is taken as the reference state; when the change amplitude of an organ exceeds a certain range, it is regarded as an amplitude change. For example, if the host does not smile in the normal state and then laughs in a particular moment, the detected change of the mouth exceeds the set range, and the host's motion change amplitude at that moment is considered a large change.
In another alternative embodiment, the first preset condition may be set according to the degree of change of the host's face, for example according to the degree of change of the host's mouth shape, eyebrows, or eyes. The degree of change of the mouth shape can be determined by the angle by which the mouth corners turn up or by how wide the mouth opens; the degree of change of the eyebrows by the angle they are raised; and the degree of change of the eyes by how wide they open.
In another alternative embodiment, the first preset condition may also be set according to the degree of change of the host's limbs, for example according to the degree of change of the host's arm, body, or head. The degree of change of the arm can be determined by the angle through which the arm is raised or rotated; the degree of change of the body by the body's displacement or torsion angle; and the degree of change of the head by the head's displacement or torsion angle.
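As a concrete illustration of such threshold checks, the following is a minimal sketch that compares current facial landmark positions against a baseline; the landmark layout, the organ grouping, and the threshold values are assumptions made for illustration and are not specified by this embodiment (the same pattern applies to limb landmarks).

```python
import numpy as np

# Hypothetical per-organ landmark indices and thresholds (illustrative only;
# a real system would take these from its face/pose tracking model).
ORGAN_POINTS = {"mouth": [0, 1, 2, 3], "left_eyebrow": [4, 5], "right_eyebrow": [6, 7]}
THRESHOLDS = {"mouth": 0.08, "left_eyebrow": 0.05, "right_eyebrow": 0.05}  # fractions of face size

def change_amplitude(baseline: np.ndarray, current: np.ndarray, face_size: float) -> dict:
    """Mean landmark displacement per organ, normalised by face size."""
    return {
        organ: float(np.linalg.norm(current[idx] - baseline[idx], axis=1).mean()) / face_size
        for organ, idx in ORGAN_POINTS.items()
    }

def meets_first_condition(baseline, current, face_size) -> bool:
    """First preset condition: some organ's change amplitude exceeds its preset range."""
    amplitudes = change_amplitude(baseline, current, face_size)
    return any(amplitudes[organ] > THRESHOLDS[organ] for organ in amplitudes)

# Usage with stand-in 2D landmarks: an exaggerated mouth movement trips the condition.
rng = np.random.default_rng(0)
baseline = rng.random((8, 2))
current = baseline.copy()
current[ORGAN_POINTS["mouth"]] += 0.15
print(meets_first_condition(baseline, current, face_size=1.0))  # True
```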
In another alternative embodiment, the host's motion change amplitude can be monitored in real time, and when it meets the first preset condition a target picture is captured from the graphical user interface, so that a corresponding expression package is produced from the target picture, adding to the fun of interaction during the live broadcast.
In another alternative embodiment, a target picture containing the face area and at least a partial limb area of the target object may be cut from the graphical user interface according to a preset template, so that the captured target picture is one suited to making an expression package, which improves the efficiency of expression package production.
Step S206, detecting the bullet screen rate of the graphical user interface in the first time period.
The first time period is determined by the moment the target picture is captured. The duration corresponding to the first time period may be preset by the user, for example 3 seconds.
In an alternative embodiment, the first time period covers the few seconds before the moment the target picture is captured; for example, the screenshot is taken at time point T1, but the first time period starts T seconds before T1.
In an alternative embodiment, if in the first time period the bullet screen rate in the graphical user interface is high or the number of bullet screens is large, the host's current action has probably sparked discussion; user demand for sending an expression package of the host is then high, and quickly producing one adds to the fun of the discussion. If in the first time period the bullet screen rate is low or the number of bullet screens is small, the host's current action has not sparked discussion and may be an ordinary movement; user demand for sending expression packages is then low, and no expression package need be produced from the target picture, which reduces the consumption of computing resources.
Therefore, whether an expression package needs to be produced can be decided by detecting the bullet screen rate of the graphical user interface in the first time period: when an expression package is needed, producing it adds to the fun of the discussion; when it is not needed, production is skipped to save computing resources.
Step S208, in response to the bullet screen rate meeting the second preset condition, determining the target text from the bullet screen content displayed in the first time period.
The second preset condition can be set as needed.
In an alternative embodiment, the bullet screen rate in the graphical user interface may be monitored in real time, and the target text determined from the bullet screen content displayed in the first time period once the bullet screen rate meets the second preset condition. Specifically, the bullet screen content displayed most frequently in the first time period may be taken as the target text, or the keyword with the highest frequency of occurrence in the bullet screen content may be extracted and used as the target text.
As an example of the keyword case: suppose three bullet screens appear, one suggesting the host play the hero Li Bai, one saying the hero Li Bai is very elegant, and one saying the host's Li Bai is very skilled. The keyword with the highest frequency of occurrence is "Li Bai", so "Li Bai" is extracted as the target text.
Step S210, generating an expression package by using the target picture and the target text.
In an alternative embodiment, the target picture and the target text may be placed into a preset template to generate the expression package. The preset template may define the position of the target picture and the position of the target text.
In another alternative embodiment, after the expression package is generated, it may be displayed in the live broadcast scene; the user may select the displayed expression package to send, or edit it and send the edited expression package to the comment area or as a bullet screen in the live broadcast room.
In a further alternative embodiment, in a scene where multiple hosts are streaming together, expression/limb detection is performed separately on each host in their respective pictures; when any host shows a large change, that host's picture is captured and an expression package is produced based on it.
With the above method, the motion change amplitude of a target object in a graphical user interface is first detected, and a target picture can be captured from the graphical user interface in response to the motion change amplitude meeting a first preset condition, where the content displayed in the target picture comprises a face area and at least a partial limb area of the target object. The bullet screen rate of the graphical user interface in a first time period is then detected, where the first time period is determined by the moment the target picture is captured; target text is determined from the bullet screen content displayed in the first time period in response to the bullet screen rate meeting a second preset condition; and an expression package is generated using the target picture and the target text, achieving the aim of quickly generating an expression package. Notably, the target picture and target text used to make the expression package come from the content displayed in real time in the graphical user interface, so the produced expression package is closely tied to the content currently displayed and to the current scene. The user does not need to carry out the production manually; the whole process is performed based on the content displayed in the graphical user interface, which improves the efficiency of producing expression packages and solves the technical problem in the related art that complex production steps make expression package generation inefficient.
Optionally, detecting the motion change amplitude of the target object comprises: obtaining a first motion of the target object in a second time period and a second motion of the target object in a third time period, where the second time period is a historical time period, the third time period is the current time period, the first motion is the reference motion, and the second motion is the motion to be compared; and determining the motion change amplitude based on the first motion and the second motion.
In an alternative embodiment, a first motion of the host in a historical time period can be acquired and taken as the reference motion; a second motion of the host in the current time period is then acquired and taken as the motion to be compared; and the host's change amplitude can be obtained by comparing the positional relationship between the first motion and the second motion.
For example, if the host's arm rests on the desktop in the historical time period and is lifted off the desktop in the current time period, the positional relationship between the arm in the two periods has changed greatly; the host's motion change amplitude can then be determined from the positional relationship of the arm.
In another alternative embodiment, the non-overlapping region between the first motion and the second motion may be obtained and its area taken as the motion change amplitude: the larger the area of the non-overlapping region, the larger the host's motion change amplitude; the smaller the area, the smaller the amplitude.
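A minimal sketch of this non-overlap measure, assuming the two motions are available as binary silhouette masks of the same size (how the silhouettes are obtained is not specified here and would come from the tracking step):

```python
import numpy as np

def non_overlap_amplitude(mask_ref: np.ndarray, mask_cur: np.ndarray) -> float:
    """Area covered by exactly one of the two poses, normalised by the
    reference area so the measure does not depend on image resolution."""
    non_overlap = np.logical_xor(mask_ref, mask_cur)
    return non_overlap.sum() / max(mask_ref.sum(), 1)

# Stand-in silhouettes: an arm resting on the desktop vs. the arm lifted off it.
ref = np.zeros((100, 100), dtype=bool)
ref[70:90, 20:80] = True
cur = np.zeros((100, 100), dtype=bool)
cur[30:50, 20:80] = True
print(non_overlap_amplitude(ref, cur))  # 2.0: the poses are completely disjoint
```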
Optionally, the motion change amplitude meeting the first preset condition comprises the motion change amplitude exceeding a preset amplitude range.
The preset amplitude range may be determined according to the type of motion.
In an alternative embodiment, if the motion change amplitude is the change amplitude of the mouth corners, the corresponding preset amplitude range may be the range of upward movement of the mouth corners; if it is the change amplitude of a limb, the range of positional change of the limb; and if it is the change amplitude of the eyebrows, the range by which the eyebrows are raised.
Optionally, detecting the bullet screen rate in the first time period comprises: obtaining the number of bullet screens appearing in the first time period and the duration corresponding to the first time period, where the starting moment of the first time period is the moment the target picture is captured; and determining the bullet screen rate based on the number of bullet screens and the duration.
The number of bullet screens indicates the current heat of discussion around the host: the higher the current heat of discussion, the larger the number of bullet screens; the lower the heat, the smaller the number.
In an alternative embodiment, detection of the bullet screen frequency may start when the target picture is captured. Specifically, the bullet screen rate is obtained by dividing the number of bullet screens in the first time period by the duration of the first time period: if the first time period lasts N minutes, bullet screen rate = (number of bullet screens appearing in those N minutes) / N.
Optionally, the bullet screen rate meeting the second preset condition comprises the bullet screen rate exceeding a preset multiple of the average bullet screen rate.
The average bullet screen rate can be calculated in real time from the current bullet screen rate and the historical bullet screen rate.
In an alternative embodiment, when the bullet screen rate exceeds the preset multiple of the average bullet screen rate, discussion in the bullet screens is highly active; to add to the fun of bullet screen interaction or of the comments, the target text can be determined from the bullet screen content and an expression package produced from the target picture and the target text, thereby improving the user experience.
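A small sketch of this rate gate, assuming bullet screens arrive as timestamped messages and that the preset multiple is a tunable constant (both are assumptions for illustration):

```python
from dataclasses import dataclass

@dataclass
class Danmaku:
    timestamp: float  # seconds since the stream started
    text: str

def bullet_screen_rate(messages: list[Danmaku], t_start: float, duration: float) -> float:
    """Bullet screen rate over the first time period [t_start, t_start + duration]."""
    count = sum(1 for m in messages if t_start <= m.timestamp <= t_start + duration)
    return count / duration

def meets_second_condition(rate: float, average_rate: float, preset_multiple: float = 2.0) -> bool:
    """Second preset condition: the rate exceeds a preset multiple of the average rate."""
    return rate > preset_multiple * average_rate

msgs = [Danmaku(10.2, "666"), Danmaku(10.5, "hahaha"), Danmaku(11.1, "666"), Danmaku(30.0, "hi")]
rate = bullet_screen_rate(msgs, t_start=10.0, duration=3.0)  # 1.0 message per second
print(meets_second_condition(rate, average_rate=0.2))        # True
```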
Optionally, determining the target text from the bullet screen content displayed in the first time period comprises: obtaining the bullet screen with the highest frequency of occurrence from the bullet screen content displayed in the first time period; and determining the target text using that bullet screen.
In an alternative embodiment, the bullet screen with the highest frequency of occurrence can be obtained from the bullet screen content displayed in the first time period and used to determine the target text, so that the expression package made from the target text matches the current live broadcast scene and users can interact with the produced expression package.
Optionally, determining the target text from the bullet screen content displayed in the first time period comprises: obtaining the keyword with the highest frequency of occurrence from the bullet screen content displayed in the first time period, where that keyword is the word repeated the greatest number of times in the bullet screen content; and determining the target text using that keyword.
In an alternative embodiment, if there is no repeated bullet screen in the bullet screen content, the keyword with the highest frequency of occurrence can be extracted from the bullet screen content as the target text. For a live broadcast scene with heated discussion but no repeated bullet screens, target text related to the discussion can still be obtained, so the expression package made from it retains real-time interactivity, as sketched below.
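The selection logic can be sketched as follows: first look for the most frequent identical bullet screen, and fall back to the most frequent word when nothing repeats. Splitting on whitespace is a stand-in; real Chinese bullet screens would need a word segmenter (e.g. jieba), which is an assumption beyond this embodiment.

```python
from collections import Counter

def pick_target_text(messages: list[str]) -> str:
    """Most frequent bullet screen; if none repeats, the most frequent keyword."""
    msg_counts = Counter(messages)
    text, count = msg_counts.most_common(1)[0]
    if count > 1:
        return text
    # No identical bullet screen repeats: fall back to the most repeated word.
    word_counts = Counter(word for m in messages for word in m.split())
    return word_counts.most_common(1)[0][0]

print(pick_target_text(["so funny", "so funny", "lol"]))            # "so funny"
print(pick_target_text(["play Li Bai", "Li Bai is very elegant"]))  # "Li"
```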
Optionally, generating the expression package using the target picture and the target text comprises: determining the display position of the target object in the target picture; determining a first area and a second area in the target picture using the display position, where the first area is the display area for the target text and the second area is the area to be removed, determined from the first area; and filling the target text into any position in the first area and removing the second area to generate the expression package.
In an alternative embodiment, the display position of the target object in the target picture may be identified by a preset recognition model, and the display area of the target text determined from that position; for example, once the display position is determined, the text display area may be set above it, or in any other direction relative to it. The way the text display area is set is not limited here.
Further, after the first area is determined, the areas other than the first area and the area where the target object is displayed may be taken as the second area; the target text is filled into any position in the first area, and the second area is removed to generate the expression package.
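A minimal compositing sketch using Pillow, assuming the target object's bounding box has already been detected; the detector, the fixed caption band above the box, and the default font are all illustrative assumptions:

```python
from PIL import Image, ImageDraw, ImageFont

def make_expression_package(frame: Image.Image, subject_box: tuple, caption: str) -> Image.Image:
    """subject_box = (left, top, right, bottom) of the detected target object."""
    left, top, right, bottom = subject_box
    band = 40  # assumed height of the text band (the "first area") above the subject
    # Keep the subject plus the text band; everything else (the "second area") is cropped away.
    package = frame.crop((left, max(0, top - band), right, bottom))
    draw = ImageDraw.Draw(package)
    font = ImageFont.load_default()  # a real system would load a sized TTF font
    draw.text((5, 5), caption, fill="black", font=font)  # any position inside the first area
    return package

frame = Image.new("RGB", (640, 360), "white")  # stand-in for a captured live frame
package = make_expression_package(frame, subject_box=(200, 80, 440, 320), caption="666")
package.save("expression_package.png")
```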
Optionally, the expression package generation method further comprises: displaying the expression package in a popup window; and, in response to an editing operation performed on the expression package, adjusting the display content and/or display position of the target text.
In an alternative embodiment, the expression package may be displayed above the send box of the comment area in the live broadcast room, together with an edit control and a send control, so that the user can edit it: the user may adjust the display content of the target text or the display position of the target object to obtain an adjusted expression package, and send the adjusted expression package to the comment area by clicking the send control.
Fig. 3 is a schematic diagram of the expression package editing process: the expression package to be sent may be displayed above the send box, the user may edit it through the edit button, and the generated or edited expression package may be sent to the comment area through the send button.
In another alternative embodiment, after the expression package is edited or sent, it is saved to a local expression library, from which the user can reuse it.
Optionally, the expression package generation method further comprises: displaying the expression package in a chat interface provided by the graphical user interface, or displaying the expression package in the form of a bullet screen in the graphical user interface.
In an alternative embodiment, after the expression package is generated, it may be shown directly in the chat interface of the graphical user interface, or sent in the form of a bullet screen in the graphical user interface; when sent as a bullet screen, the expression package may be reduced in size to prevent an over-large expression package from blocking the host.
For example, when the expression package is displayed as a bullet screen, it may be sized at 1/N of the host's image in the live view. If the host's image occupies 375 x 375 px, the expression package may be displayed at 1/3 of that size, i.e. 125 x 125 px.
A preferred embodiment of the present invention will be described in detail below with reference to Fig. 4. As shown in Fig. 4, the method may include the following steps:
Step S401, monitor the host's expression changes in real time;
Optionally, changes in the host's expression, e.g. the eyes, mouth shape, and eyebrows, may be determined by analyzing the host's facial organs.
Optionally, a preset expression package template may be set before the expression package is produced, to determine the approximate positions of the host image and the text in the picture. The preset expression package template is shown in Fig. 5. After the human-body features in the host's image are identified, the specific position of the host image can be obtained, and the corresponding text is placed within range N of the host image according to that position; if range N extends beyond the picture, it is clipped to the picture edge, as sketched below.
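That clipping rule can be sketched as a rectangle clamp, assuming the text band is an axis-aligned rectangle of height n placed above the detected subject box (the band geometry is an assumption for illustration):

```python
def place_text_band(subject_box: tuple, n: int, img_w: int, img_h: int) -> tuple:
    """Return the text band rectangle above the subject, clipped to the picture."""
    left, top, right, _bottom = subject_box
    band = (left, top - n, right, top)  # the range N above the host image
    # Clip the band to the picture edges if it extends beyond them.
    return (max(0, band[0]), max(0, band[1]), min(img_w, band[2]), min(img_h, band[3]))

print(place_text_band((200, 20, 440, 320), n=40, img_w=640, img_h=360))
# (200, 0, 440, 20): the band is clipped at the top edge of the picture
```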
Step S402, judge from the host's expression; when the host's expression changes greatly, capture the current host picture to obtain the target picture;
Step S403, at the moment the picture is captured, start detecting the current frequency of bullet screen occurrence and compare it with the live broadcast room's average bullet screen rate; when the bullet screen rate exceeds the preset multiple of the average rate, confirm that the expression package will be produced;
Step S404, examine all bullet screen content in the first time period, obtain the bullet screen with the highest frequency of occurrence, and take it as the target text;
Step S405, if no identical bullet screen content is detected, extract the keyword with the highest frequency of occurrence in the corresponding time and take it as the target text;
Step S406, based on the target text and the target picture, place the target text into the target picture according to the preset template, and remove the invalid parts of the picture to generate the expression package;
Fig. 6 is a schematic diagram of removing invalid portions from a picture.
Step S407, after the expression package is generated, a bubble popup window can appear in the live broadcast room displaying the generated expression package; the user can edit the expression package and send the edited expression package to the live broadcast room;
Optionally, after clicking Edit, the user enters an expression package editing page that supports adjusting the text position and content to obtain the modified expression package.
Step S408, after the user sends the expression package, it may be automatically saved locally to support reuse.
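Putting steps S401 to S408 together, the following loop wires the earlier sketches into one pipeline; the frame source, landmark/box detector, and danmaku feed are placeholders, and the helper functions are assumed to be the ones sketched above:

```python
def expression_package_pipeline(frame_source, danmaku_feed, average_rate):
    """Illustrative end-to-end loop over steps S401-S408."""
    baseline = None
    for t, frame, landmarks, subject_box in frame_source:      # S401: monitor the host
        if baseline is None:
            baseline = landmarks
            continue
        if not meets_first_condition(baseline, landmarks, face_size=1.0):
            continue                                           # no large expression change
        target_picture = frame                                 # S402: capture the frame
        rate = bullet_screen_rate(danmaku_feed, t, duration=3.0)
        if not meets_second_condition(rate, average_rate):     # S403: gate on the rate
            continue
        texts = [m.text for m in danmaku_feed if t <= m.timestamp <= t + 3.0]
        caption = pick_target_text(texts)                      # S404/S405: target text
        yield make_expression_package(target_picture, subject_box, caption)  # S406
        # S407/S408: display in a popup for editing, send, and save locally.
```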
From the description of the above embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus the necessary general-purpose hardware platform, or by hardware, though in many cases the former is the preferred implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) and comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods of the embodiments of the present invention.
This embodiment also provides an expression package generation apparatus, which is used to implement the above embodiments and preferred implementations; what has already been described is not repeated. As used below, the terms "unit" and "module" may be implemented as a combination of software and/or hardware that performs the intended function. Although the apparatus described in the following embodiments is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
Fig. 7 is a schematic diagram of an expression package generation apparatus according to an embodiment of the present invention. A graphical user interface is provided through a terminal device, and the apparatus includes a first detection module 702, an interception module 704, a second detection module 706, a determination module 708, and a generation module 710.
The first detection module is used for detecting the motion change amplitude of a target object in the graphical user interface; the interception module is used for capturing a target picture from the graphical user interface in response to the motion change amplitude meeting a first preset condition, where the content displayed in the target picture comprises at least one of a face area and at least a partial limb area of the target object; the second detection module is used for detecting the bullet screen rate of the graphical user interface in a first time period, where the first time period is determined by the moment the target picture is captured; the determination module is used for determining target text from the bullet screen content displayed in the first time period in response to the bullet screen rate meeting a second preset condition; and the generation module is used for generating an expression package using the target picture and the target text.
Optionally, the first detection module comprises a first acquisition unit and a first determination unit.
The first acquisition unit is used for acquiring a first motion of the target object in a second time period and a second motion of the target object in a third time period, where the second time period is a historical time period, the third time period is the current time period, the first motion is the reference motion, and the second motion is the motion to be compared; the first determination unit is used for determining the motion change amplitude based on the first motion and the second motion.
Optionally, the motion change amplitude meeting the first preset condition comprises the motion change amplitude exceeding a preset amplitude range.
Optionally, the second detection module comprises a second acquisition unit and a second determination unit.
The second acquisition unit is used for acquiring the number of bullet screens appearing in the first time period and the duration corresponding to the first time period, where the starting moment of the first time period is the moment the target picture is captured; the second determination unit is used for determining the bullet screen rate based on the number of bullet screens and the duration.
Optionally, the bullet screen rate meeting the second preset condition comprises the bullet screen rate exceeding a preset multiple of the average bullet screen rate.
Optionally, the determination module comprises a third acquisition unit and a third determination unit.
The third acquisition unit is used for acquiring the bullet screen with the highest frequency of occurrence from the bullet screen content displayed in the first time period; the third determination unit is used for determining the target text using that bullet screen.
Optionally, the third acquisition unit comprises an acquisition subunit and a determination subunit.
The acquisition subunit is used for acquiring the keyword with the highest frequency of occurrence from the bullet screen content displayed in the first time period, where that keyword is the word repeated the greatest number of times in the bullet screen content; the determination subunit is used for determining the target text using that keyword.
Optionally, the generation module comprises a fourth determination unit, a fifth determination unit and a removal unit.
The fourth determination unit is used for determining the display position of the target object in the target picture; the fifth determination unit is used for determining a first area and a second area in the target picture using the display position, where the first area is the display area for the target text and the second area is the area to be removed, determined from the first area; the removal unit is used for filling the target text into any position in the first area and removing the second area to generate the expression package.
Optionally, the apparatus further comprises a display module and an adjustment module.
The display module is used for displaying the expression package in a popup window; the adjustment module is used for adjusting the display content and/or display position of the target text in response to an editing operation performed on the expression package.
Optionally, the apparatus further comprises a presentation module.
The presentation module is used for displaying the expression package in a chat interface provided by the graphical user interface, or displaying the expression package in the form of a bullet screen in the graphical user interface.
It should be noted that the above units and modules may be implemented by software or by hardware. For the latter, this may be achieved by, but is not limited to: all of the units and modules being located in the same processor, or the units and modules being distributed across different processors in any combination.
Embodiments of the present invention also provide a non-volatile storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
Alternatively, in the present embodiment, the above-described nonvolatile storage medium may be configured to store a computer program for performing the steps of:
S1, detecting the motion change amplitude of a target object in a graphical user interface;
S2, in response to the motion change amplitude meeting a first preset condition, capturing a target picture from the graphical user interface, wherein the content displayed in the target picture comprises a face area and at least a partial limb area of the target object;
S3, detecting the bullet screen rate of the graphical user interface in a first time period, wherein the first time period is determined by the moment the target picture is captured;
S4, in response to the bullet screen rate meeting a second preset condition, determining target text from the bullet screen content displayed in the first time period;
S5, generating an expression package by using the target picture and the target text.
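The patent leaves open how the action change amplitude of step S1 is measured. One plausible reading, sketched below, assumes the target object is tracked as a set of 2D body keypoints and compares the pose from the historical (reference) time period with the pose from the current time period; the keypoint representation and the threshold value are illustrative assumptions, not part of the disclosure.

import math
from typing import List, Tuple

Point = Tuple[float, float]

def action_change_amplitude(reference_pose: List[Point], current_pose: List[Point]) -> float:
    # Mean displacement of tracked keypoints between the action in the
    # historical (second) time period and the action in the current (third)
    # time period; larger values mean a bigger change in posture.
    distances = [math.dist(p, q) for p, q in zip(reference_pose, current_pose)]
    if not distances:
        return 0.0
    return sum(distances) / len(distances)

def exceeds_preset_range(amplitude: float, preset_max: float = 20.0) -> bool:
    # The first preset condition: the amplitude exceeds a preset amplitude range.
    return amplitude > preset_max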
Optionally, in this embodiment, the above non-volatile storage medium may include, but is not limited to: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or any other medium capable of storing a computer program.
An embodiment of the invention also provides an electronic device comprising a memory and a processor, wherein a computer program is stored in the memory and the processor is arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic device may further include a transmission device and an input/output device, where both the transmission device and the input/output device are connected to the processor.
Optionally, in this embodiment, the above processor may be configured to execute the following steps through a computer program:
S1, detecting the action change amplitude of a target object in a graphical user interface;
S2, capturing a target picture from the graphical user interface in response to the action change amplitude meeting a first preset condition, wherein the content displayed in the target picture comprises a face area and at least part of a limb area of the target object;
S3, detecting the bullet screen rate of the graphical user interface in a first time period, wherein the first time period is determined by the moment at which the target picture is captured;
S4, determining target text from the bullet screen content displayed in the first time period in response to the bullet screen rate meeting a second preset condition;
S5, generating an expression package by using the target picture and the target text.
The above embodiment numbers of the present invention are for description only and do not indicate the relative merits of the embodiments.
In the above embodiments of the present invention, each embodiment is described with its own emphasis; for any part not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners. The device embodiments described above are merely illustrative. For example, the division of the units may be a division by logical function, and other divisions are possible in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, units, or modules, and may be electrical or take other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If the integrated units are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium, including a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods according to the embodiments of the present invention. The storage medium includes any medium that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The above is merely a preferred embodiment of the present invention. It should be noted that a person skilled in the art may make several improvements and modifications without departing from the principles of the present invention, and such improvements and modifications shall also be regarded as falling within the scope of protection of the present invention.

Claims (14)

1. An expression package generation method, characterized in that a graphical user interface is provided through a terminal device, the expression package generation method comprising the following steps:
detecting the action change amplitude of a target object in the graphical user interface;
capturing a target picture from the graphical user interface in response to the action change amplitude meeting a first preset condition, wherein the content displayed in the target picture comprises at least one of a face area and at least a part of a limb area of the target object;
detecting the bullet screen rate of the graphical user interface in a first time period, wherein the first time period is determined by the moment at which the target picture is captured;
determining target text from the bullet screen content displayed in the first time period in response to the bullet screen rate meeting a second preset condition; and
generating an expression package by using the target picture and the target text.
2. The expression package generation method according to claim 1, wherein detecting the action change amplitude of the target object comprises:
acquiring a first action of the target object in a second time period and a second action of the target object in a third time period, wherein the second time period is a historical time period, the third time period is the current time period, the first action is the action to be used as a reference, and the second action is the action to be compared; and
determining the action change amplitude based on the first action and the second action.
3. The expression package generation method according to claim 2, wherein the action change amplitude meeting the first preset condition comprises:
the action change amplitude exceeding a preset amplitude range.
4. The expression package generation method according to claim 1, wherein detecting the bullet screen rate in the first time period comprises:
acquiring the number of bullet screens appearing in the first time period and the duration corresponding to the first time period, wherein the starting moment of the first time period is the moment at which the target picture is captured; and
determining the bullet screen rate based on the number of bullet screens and the duration.
5. The expression package generation method according to claim 4, wherein the bullet screen rate meeting the second preset condition comprises:
the bullet screen rate exceeding a preset multiple of the average bullet screen rate.
6. The expression package generation method according to claim 1, wherein determining the target text from the bullet screen content displayed in the first time period comprises:
acquiring the bullet screen with the highest occurrence frequency from the bullet screen content displayed in the first time period; and
determining the target text by using the bullet screen with the highest occurrence frequency.
7. The expression package generation method according to claim 1, wherein determining the target text from the bullet screen content displayed in the first time period comprises:
acquiring the keyword with the highest occurrence frequency from the bullet screen content displayed in the first time period, wherein the keyword with the highest occurrence frequency is the word repeated most often in the bullet screen content; and
determining the target text by using the keyword with the highest occurrence frequency.
8. The expression package generation method according to claim 1, wherein generating the expression package by using the target picture and the target text comprises:
determining the display position of the target object in the target picture;
determining a first area and a second area in the target picture by using the display position, wherein the first area is the display area of the target text, and the second area is an area to be removed that is determined from the first area; and
filling the target text into any position in the first area and removing the second area to generate the expression package.
9. The expression package generation method according to claim 1, characterized in that the expression package generation method further comprises:
displaying the expression package in a popup window; and
adjusting the display content and/or the display position of the target text in response to an editing operation performed on the expression package.
10. The expression package generation method according to claim 1, characterized in that the expression package generation method further comprises:
displaying the expression package in a chat interface provided by the graphical user interface; or
displaying the expression package in the graphical user interface in the form of a bullet screen.
11. An expression package generation device, characterized in that a graphical user interface is provided through a terminal device, the expression package generation device comprising:
a first detection module, used for detecting the action change amplitude of a target object in the graphical user interface;
a capturing module, used for capturing a target picture from the graphical user interface in response to the action change amplitude meeting a first preset condition, wherein the content displayed in the target picture comprises a face area and at least a part of a limb area of the target object;
a second detection module, used for detecting the bullet screen rate of the graphical user interface in a first time period, wherein the first time period is determined by the moment at which the target picture is captured;
a determining module, used for determining target text from the bullet screen content displayed in the first time period in response to the bullet screen rate meeting a second preset condition; and
a generating module, used for generating an expression package by using the target picture and the target text.
12. A non-volatile storage medium, characterized in that a computer program is stored in the non-volatile storage medium, wherein the computer program is arranged to perform the expression package generation method of any one of claims 1 to 10 when run.
13. A processor, characterized in that the processor is configured to run a program, wherein the program is arranged to execute the expression package generation method of any one of claims 1 to 10 when run.
14. An electronic device comprising a memory and a processor, characterized in that a computer program is stored in the memory, and the processor is arranged to run the computer program to perform the expression package generation method of any one of claims 1 to 10.
CN202210139223.0A 2022-02-15 2022-02-15 Method, device, storage medium, and processor for generating emoticon package Active CN114549696B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210139223.0A CN114549696B (en) 2022-02-15 2022-02-15 Method, device, storage medium, and processor for generating emoticon package

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210139223.0A CN114549696B (en) 2022-02-15 2022-02-15 Method, device, storage medium, and processor for generating emoticon package

Publications (2)

Publication Number Publication Date
CN114549696A CN114549696A (en) 2022-05-27
CN114549696B true CN114549696B (en) 2024-12-20

Family

ID=81675161

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210139223.0A Active CN114549696B (en) 2022-02-15 2022-02-15 Method, device, storage medium, and processor for generating emoticon package

Country Status (1)

Country Link
CN (1) CN114549696B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107370887A (en) * 2017-08-30 2017-11-21 维沃移动通信有限公司 A kind of expression generation method and mobile terminal
CN108038892A (en) * 2017-11-28 2018-05-15 北京川上科技有限公司 Expression, which packs, makees method, apparatus, electronic equipment and computer-readable recording medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108200463B (en) * 2018-01-19 2020-11-03 上海哔哩哔哩科技有限公司 Bullet screen expression package generation method, server and bullet screen expression package generation system
US10726603B1 (en) * 2018-02-28 2020-07-28 Snap Inc. Animated expressive icon
CN110049377B (en) * 2019-03-12 2021-06-22 北京奇艺世纪科技有限公司 Expression package generation method and device, electronic equipment and computer readable storage medium
CN110414404A (en) * 2019-07-22 2019-11-05 腾讯科技(深圳)有限公司 Image data processing method, device and storage medium based on instant messaging
CN111372141B (en) * 2020-03-18 2024-01-05 腾讯科技(深圳)有限公司 Expression image generation method and device and electronic equipment

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107370887A (en) * 2017-08-30 2017-11-21 维沃移动通信有限公司 A kind of expression generation method and mobile terminal
CN108038892A (en) * 2017-11-28 2018-05-15 北京川上科技有限公司 Expression, which packs, makees method, apparatus, electronic equipment and computer-readable recording medium

Also Published As

Publication number Publication date
CN114549696A (en) 2022-05-27

Similar Documents

Publication Publication Date Title
US11679334B2 (en) Dynamic gameplay session content generation system
CN111324253B (en) Virtual article interaction method and device, computer equipment and storage medium
CN113422977B (en) Live broadcast method and device, computer equipment and storage medium
CN102571633B (en) Show the method for User Status, displaying terminal and server
WO2014094199A1 (en) Facial movement based avatar animation
EP4300431A1 (en) Action processing method and apparatus for virtual object, and storage medium
CN108211352A (en) A kind of method and terminal for adjusting image quality
CN112652041B (en) Virtual image generation method, device, storage medium and electronic equipment
CN106774852B (en) Message processing method and device based on virtual reality
WO2022252866A1 (en) Interaction processing method and apparatus, terminal and medium
CN104461718A (en) Frame playing method and client end for game application
CN114095744A (en) Video live broadcast method, apparatus, electronic device and readable storage medium
CN112601098A (en) Live broadcast interaction method and content recommendation method and device
CN112258240B (en) Content display method, device, terminal, server and storage medium
CN109388737A (en) A kind of sending method, device and the storage medium of the exposure data of content item
CN114549696B (en) Method, device, storage medium, and processor for generating emoticon package
CN116843802A (en) Virtual image processing method and related product
CN113448466B (en) Animation display method, device, electronic equipment and storage medium
CN116957671A (en) Interactive content display method, interactive popularization page configuration method and device
CN112354184A (en) Role offline control method, system and device in virtual world
CN114816629B (en) Method and device for drawing display object, storage medium and electronic device
CN113360343B (en) Method and device for analyzing memory occupation condition, storage medium and computer equipment
CN114842884B (en) Information recording method, device, electronic device and storage medium
CN115025495B (en) Method and device for synchronizing character model, electronic equipment and storage medium
HK40052286A (en) Animation display method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant