
CN110266994B - A video call method, video call device and terminal - Google Patents


Info

Publication number
CN110266994B
Authority
CN
China
Prior art keywords
face
target
image
video image
face data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201910561823.4A
Other languages
Chinese (zh)
Other versions
CN110266994A (en)
Inventor
徐潜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dongguan Bubugao Education Software Co ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd filed Critical Guangdong Genius Technology Co Ltd
Priority to CN201910561823.4A
Publication of CN110266994A
Application granted
Publication of CN110266994B

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/14 Systems for two-way working
    • H04N7/141 Systems for two-way working between two video terminals, e.g. videophone
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Telephone Function (AREA)

Abstract

This application is applicable to the field of communication technology and provides a video call method, a video call apparatus, and a terminal. The method includes: acquiring a video image sent by a target opposite terminal; when it is determined that the video image contains a face image, acquiring target face data matching the face image from a preset face database; performing face definition restoration on the video image based on the target face data to obtain a target video image; and displaying the target video image. This improves the face image quality and picture definition during a video call and realizes a high-definition video call.

Figure 201910561823


Description

Video call method, video call device and terminal
Technical Field
The present application belongs to the field of communication technologies, and in particular, to a video call method, a video call apparatus, and a terminal.
Background
With the development of society, the safety of students and young children is receiving increasing attention. Parents usually equip children with electronic devices such as telephone watches, so that they can reach the child promptly by video or telephone and can locate and retrieve the child if the child is lost.
However, current devices such as telephone watches are generally affected by factors such as camera quality, network quality, power consumption, and ambient light; during a video call, the portrait displayed at the opposite end is blurry, and the video call experience is poor. The common approach is to increase the camera's pixel count, but a higher pixel count still cannot offset the influence of the other factors on video definition, so a truly high-definition video call cannot be achieved.
Disclosure of Invention
In view of this, embodiments of the present application provide a video call method, a video call device, and a terminal, to solve the prior-art problem that the definition of the portrait in a video call is low due to factors such as camera quality, network quality, power consumption, and ambient light.
A first aspect of an embodiment of the present application provides a video call method, including:
acquiring a video image sent by a target opposite terminal;
according to the video image, under the condition that the video image contains a face image, acquiring target face data matched with the face image from a preset face database;
performing face definition restoration on the video image based on the target face data to obtain a target video image;
and displaying the target video image.
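The four steps of the first aspect can be sketched as a minimal pipeline. This is an illustrative sketch only; all names here (`receive_frame`, `detect_face`, `lookup_face_data`, `restore`, `show`) are hypothetical placeholders, not identifiers from the patent.

```python
def video_call_display(receive_frame, detect_face, lookup_face_data, restore, show):
    """One iteration of the claimed method, as a hedged sketch."""
    frame = receive_frame()               # 1. acquire the video image from the target peer
    face = detect_face(frame)             #    returns None when no face is present
    if face is not None:                  # 2. only frames containing a face are matched
        data = lookup_face_data(face)     #    against the preset face database
        if data is not None:
            frame = restore(frame, data)  # 3. face definition restoration
    show(frame)                           # 4. display the (possibly repaired) image
```

When no face is detected, or no matching data exists, the frame is displayed unchanged, which matches the behavior described later in the detailed description.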
A second aspect of an embodiment of the present application provides a video call apparatus, including:
the first acquisition module is used for acquiring a video image sent by a target opposite terminal;
the second acquisition module is used for acquiring target face data matched with the face image from a preset face database under the condition that the video image contains the face image according to the video image;
the restoration module is used for carrying out face definition restoration on the video image based on the target face data to obtain a target video image;
and the display module is used for displaying the target video image.
A third aspect of embodiments of the present application provides a terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method according to the first aspect when executing the computer program.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium, in which a computer program is stored, which, when executed by a processor, performs the steps of the method according to the first aspect.
A fifth aspect of the application provides a computer program product comprising a computer program which, when executed by one or more processors, performs the steps of the method as described in the first aspect above.
As can be seen from the above, in the embodiments of the application, a video image sent by the target opposite terminal is acquired; when it is determined that the video image contains a face image, target face data matching the face image is acquired from a preset face database; based on the target face data, face definition restoration is performed on the video image, and the resulting target video image is displayed. This improves the face image quality and picture definition during the video call, and realizes a high-definition video call without increasing the traffic or power consumption of the opposite end and without changing the hardware configuration of the opposite end.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a first flowchart of a video call method according to an embodiment of the present application;
fig. 2 is a second flowchart of a video call method according to an embodiment of the present application;
fig. 3 is a structural diagram of a video call device according to an embodiment of the present application;
fig. 4 is a structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
In particular implementations, the terminals described in the embodiments of the present application include, but are not limited to, portable devices such as mobile phones, laptop computers, or tablet computers having touch-sensitive surfaces (e.g., touch-screen displays and/or touch pads). It should also be understood that, in some embodiments, the device is not a portable communication device but a desktop computer having a touch-sensitive surface (e.g., a touch-screen display and/or touchpad).
In the discussion that follows, a terminal that includes a display and a touch-sensitive surface is described. However, it should be understood that the terminal may include one or more other physical user interface devices such as a physical keyboard, mouse, and/or joystick.
The terminal supports various applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disc burning application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an email application, an instant messaging application, an exercise support application, a photo management application, a digital camera application, a web browsing application, a digital music player application, and/or a digital video player application.
Various applications that may be executed on the terminal may use at least one common physical user interface device, such as a touch-sensitive surface. One or more functions of the touch-sensitive surface and corresponding information displayed on the terminal can be adjusted and/or changed between applications and/or within respective applications. In this way, a common physical architecture (e.g., touch-sensitive surface) of the terminal can support various applications with user interfaces that are intuitive and transparent to the user.
It should be understood that the sequence numbers of the steps in this embodiment do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not limit the implementation of the embodiments of the present application in any way.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
Referring to fig. 1, fig. 1 is a first flowchart of a video call method according to an embodiment of the present application. As shown in fig. 1, a video call method includes the following steps:
step 101, obtaining a video image sent by a target opposite terminal.
The video call method is executed by an electronic device with a video call function, such as a mobile phone, a tablet computer, or a telephone watch.
The target opposite terminal is likewise an electronic device with a video call function, such as a mobile phone, a tablet computer, or a telephone watch. Optionally, the executing device is a mobile phone or a tablet computer and the target opposite terminal is a telephone watch, but the method is not limited thereto.
In this step, after the video call between the executing device and the target opposite terminal is connected, the executing device starts to acquire the video images transmitted by the target opposite terminal for the subsequent processing.
And 102, acquiring target face data matched with the face image from a preset face database under the condition that the video image contains the face image according to the video image.
After the video image sent by the opposite terminal is acquired, it is determined whether the video image contains a face image; if it does, the matching target face data stored in the preset face database is further acquired.
The method for acquiring the target face data matched with the face image from the preset face database comprises the following steps: acquiring target face data matched with the face display characteristics in the face image from a preset face database; or acquiring face data associated with target contact information corresponding to the target opposite terminal from a preset face database, and determining the face data as target face data matched with a face image; or acquiring the face data which is matched with the face display characteristics in the face image and is associated with the target contact information corresponding to the target opposite terminal from a preset face database as target face data.
Specifically, as an optional implementation manner, the acquiring target face data matched with the face image from a preset face database includes:
judging whether target contact person information corresponding to the target opposite terminal exists in an address list or not;
under the condition that target contact person information corresponding to the target opposite terminal exists in the address book, acquiring initial face data associated with the target contact person information from the preset face database;
determining the initial face data as target face data matched with the face image;
The face data in the preset face database corresponds one-to-one to the different contact information entries in the address list.
In the process, the face data in the preset face database is associated with the contact information in the address list. One face data corresponds to one contact information.
The preset face database may be stored in a local database, or may be stored in other devices or in the cloud device, which is not particularly limited herein.
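The contact-based lookup described above can be sketched as follows. The data structures (`address_book`, `face_db`) and the function name are hypothetical stand-ins for whatever storage an implementation uses, whether local or in the cloud.

```python
def get_target_face_data(peer_id, address_book, face_db):
    """Look up the contact matching the calling peer, then the face data
    stored for that contact; return None when either lookup fails."""
    contact = address_book.get(peer_id)   # target contact information, if any
    if contact is None:
        return None                       # peer not in the address list
    return face_db.get(contact)           # one face record per contact
```

A `None` result corresponds to the case where restoration is skipped and the original frame is displayed as-is.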
In a specific implementation process, the face data needs to be modeled in advance. For example, face images suitable for high-definition restoration are collected in advance through an app on the mobile phone, high-definition face modeling is performed, and the modeling data is stored in a database on the phone.
Specifically, the preset face database can be constructed by different technical means.
As an optional implementation manner, before acquiring target face data matched with the face image from a preset face database when it is determined that the video image contains the face image according to the video image, the method further includes:
acquiring face images in different video calls to obtain preliminary face data;
screening, from the preliminary face data, the face data that meets the definition index requirement;
determining contact person information of opposite ends corresponding to different video calls;
and performing associated storage on the screened face data and the contact information to obtain the preset face database.
In this process, when the face database is constructed, face data is collected during video calls with different opposite terminals. Face data whose definition index meets the requirement is screened out and stored in association with the contact information of the corresponding opposite terminal, which generates the data in the face database and builds the database itself. This removes the need for a dedicated session to identify and collect the faces of different contacts, makes the process more convenient, and allows the face data to be continuously corrected and refined as video calls go on.
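The database-construction steps above can be sketched as a filter over samples captured during past calls. The tuple layout and the keep-the-sharpest policy are assumptions for illustration; the patent only requires that stored data meet a definition index requirement.

```python
def build_face_db(call_samples, min_sharpness):
    """call_samples: iterable of (contact, face_data, sharpness) tuples
    captured during past video calls. Keep, per contact, the sharpest
    sample that meets the sharpness threshold (an assumed policy)."""
    best = {}
    for contact, face, sharpness in call_samples:
        if sharpness < min_sharpness:
            continue                              # discard blurry captures
        if contact not in best or sharpness > best[contact][1]:
            best[contact] = (face, sharpness)     # remember the sharpest so far
    return {contact: face for contact, (face, _) in best.items()}
```

Because the database is rebuilt (or updated) as calls continue, later, sharper captures naturally replace earlier ones, matching the continuous-refinement behavior described above.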
As another optional implementation manner, before acquiring, according to the video image and in a case that it is determined that the video image includes a face image, target face data matched with the face image from a preset face database, the method further includes:
outputting a face data acquisition interface, wherein different contact information is displayed in the face data acquisition interface;
under the condition that acquisition trigger input of face data is received, acquiring the face data through image acquisition equipment, wherein one acquisition trigger input corresponds to one piece of contact information;
and performing associated storage on the acquired face data and the contact information to obtain the preset face database.
In this process, when the face database is constructed, a face data acquisition interface is displayed so that the user can select among different contacts; the face data of the corresponding contact is then collected and stored in association with the contact information of the corresponding opposite terminal. This generates the data in the face database and builds the database itself, and the dedicated acquisition process can capture clearer and more complete face information.
Specifically, the acquisition process may be performed when the face information of the contact is first used, and the face information is acquired and stored, or may be performed when the contact information is newly added, which is not specifically limited herein.
And 103, performing face definition restoration on the video image based on the target face data to obtain a target video image.
When the face definition of the video image is repaired, what is repaired is specifically the local face region of the video image, so that the display definition of the face portion of the whole image is improved.
In a specific implementation process, a mobile phone and a telephone watch are taken as an example. When the video call between the parent's mobile phone and the student's telephone watch is connected, the corresponding app on the phone extracts the face from the call video frame and compares it with the modeling data in the app database. When matching face data exists, the high-definition face data is fused into the video image displayed on the phone, so that the phone restores the face definition and displays a high-definition face; when no matching face data exists, the process simply ends. Without adding hardware or hardware cost at the watch end, this process restores a high-definition portrait in the video on the phone side and improves the video call experience.
And 104, displaying the target video image.
After the video image is repaired, the target video image is displayed directly. The video sent by the opposite terminal is repaired during transmission, so the repaired target video image is shown directly on the local display screen, improving the display effect.
When processing the video images sent by the opposite terminal, specifically, each frame is processed as it is received, and the target video image obtained by repairing that frame is displayed correspondingly.
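The one-frame-in, one-frame-out behavior described above can be sketched as a generator, so the display side never waits for more than the frame currently being repaired. The function name and the shape of `restore` are illustrative assumptions.

```python
def process_stream(frames, restore):
    """Repair each received frame immediately and hand it to the display,
    one frame in, one repaired frame out."""
    for frame in frames:
        yield restore(frame)
```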
In the embodiment of the application, a video image sent by the target opposite terminal is acquired; when the video image contains a face image, target face data matching the face image is acquired from a preset face database; based on the target face data, face definition restoration is performed on the video image, and the target video image is obtained and displayed. This improves the face image quality and picture definition during the video call, and realizes a high-definition video call without increasing the traffic or power consumption of the opposite end or changing its hardware configuration.
The embodiment of the application also provides different implementation modes of the video call method.
Referring to fig. 2, fig. 2 is a second flowchart of a video call method according to an embodiment of the present application. As shown in fig. 2, a video call method includes the following steps:
step 201, acquiring a video image sent by a target opposite terminal.
The implementation process of this step is the same as that of step 101 in the foregoing embodiment, and is not described here again.
Step 202, according to the video image, under the condition that the video image is determined to contain a face image, target face data matched with the face image is obtained from a preset face database.
In a specific implementation, frames may be extracted from the video at set time intervals, and the face detection tool of the vision library OpenCV is used to detect whether a face is present in the frame. The detected face is then matched for similarity against the face data stored in the preset face database, and if the similarity exceeds 70%, the face in the original video frame is replaced with the corresponding high-definition face from the preset face database.
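The 70% similarity threshold comes from the text above; the matching decision itself can be sketched as follows. The face representation and `similarity_fn` are hypothetical stand-ins for whatever embedding and similarity metric an implementation uses (the text names OpenCV only for detection, not for matching).

```python
SIMILARITY_THRESHOLD = 0.70   # the text replaces the face when similarity exceeds 70%

def match_face(face, face_db, similarity_fn):
    """Return the stored face most similar to the detected one, provided the
    similarity exceeds the threshold; otherwise None (no replacement)."""
    best, best_score = None, SIMILARITY_THRESHOLD
    for stored in face_db.values():
        score = similarity_fn(face, stored)
        if score > best_score:            # strictly above the 70% threshold
            best, best_score = stored, score
    return best
```

Returning `None` below the threshold ensures a stranger's face is never overwritten with a stored contact's high-definition face.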
As an optional implementation manner, the target face data is three-dimensional face data; the performing face sharpness restoration on the video image based on the target face data to obtain a target video image includes:
step 203, determining the face display characteristics in the video image based on the face image.
The face display characteristics comprise at least one of face display angle, face display size, face display proportion, face display outline, face display area and face five-sense organ distribution.
The face display characteristics analyzed from the face image are combined with the three-dimensional face data, so that the face replacement part corresponding to the current face image can be located in the target face data in the preset face database.
And 204, acquiring a face replacement part matched with the face display characteristics from the three-dimensional face data based on the face display characteristics.
The face replacement part can be a face display part under different face display angles, a face display part under different face display sizes, a face display part under different face display proportions, a face display part under different face display contours, a face display part under different face display areas and/or a face display part under different face facial features.
And step 205, acquiring a face replacement image corresponding to the face replacement part.
A corresponding face replacement image is obtained based on the face replacement part determined from the three-dimensional face data, so as to replace the corresponding face region and improve the display definition of the image.
And step 206, replacing the face image in the video image according to the face replacement image to obtain the target video image.
And the display definition of the face corresponding to the three-dimensional face data is greater than that of the face image.
In this step, the face definition is improved by directly replacing the corresponding face position with a higher-definition face region image, so that the originally low-definition face region is replaced and the face display definition in the video image is improved.
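Taking the face display angle, one of the listed face display characteristics, as an example, selecting a replacement part from the three-dimensional face data can be sketched as a nearest-view lookup. The `rendered_views` mapping and the nearest-angle policy are illustrative assumptions, not details from the patent.

```python
def pick_replacement_view(detected_angle, rendered_views):
    """rendered_views: mapping from face display angle (degrees) to a
    high-definition face image rendered from the 3D face data. Pick the
    view whose angle is nearest the angle detected in the video frame."""
    nearest = min(rendered_views, key=lambda angle: abs(angle - detected_angle))
    return rendered_views[nearest]
```

The same nearest-match idea extends to the other listed characteristics, such as display size, proportion, contour, area, and facial-feature distribution.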
And step 207, displaying the target video image.
The implementation process of this step is the same as that of step 104 in the foregoing embodiment, and is not described here again.
In the embodiment of the application, a video image sent by the target opposite terminal is acquired; when the video image contains a face image, target face data matching the face image is acquired from a preset face database; based on the target face data, face definition restoration is performed on the video image, and the target video image is obtained and displayed. This improves the face image quality and picture definition during the video call, and realizes a high-definition video call without increasing the traffic or power consumption of the opposite end or changing its hardware configuration.
Referring to fig. 3, fig. 3 is a structural diagram of a video call device according to an embodiment of the present application, and only a part related to the embodiment of the present application is shown for convenience of description.
The video call device includes: a first obtaining module 301, a second obtaining module 302, a repairing module 303 and a display module 304.
A first obtaining module 301, configured to obtain a video image sent by a target peer;
a second obtaining module 302, configured to obtain, according to the video image, target face data matched with the face image from a preset face database when it is determined that the video image includes the face image;
a restoration module 303, configured to perform face sharpness restoration on the video image based on the target face data to obtain a target video image;
a display module 304, configured to display the target video image.
The second obtaining module 302 is specifically configured to:
judging whether target contact person information corresponding to the target opposite terminal exists in an address list or not;
under the condition that target contact person information corresponding to the target opposite terminal exists in the address book, acquiring initial face data associated with the target contact person information from the preset face database;
determining the initial face data as target face data matched with the face image;
The face data in the preset face database corresponds one-to-one to the different contact information entries in the address list.
The target face data is three-dimensional face data; the repair module 303 is specifically configured to:
determining a face display feature in the video image based on the face image;
acquiring a face replacement part matched with the face display characteristics from the three-dimensional face data based on the face display characteristics;
acquiring a face replacement image corresponding to the face replacement part;
replacing the face image in the video image according to the face replacement image to obtain the target video image;
and the display definition of the face corresponding to the three-dimensional face data is greater than that of the face image.
The apparatus further includes:
the first database establishing module is used for acquiring face images in different video calls to obtain preliminary face data; screening screened face data meeting the definition index requirement from the preliminary face data; determining contact person information of opposite ends corresponding to different video calls; and performing associated storage on the screened face data and the contact information to obtain the preset face database.
The apparatus further includes:
the second database establishing module is used for outputting a face data acquisition interface, and different contact information is displayed in the face data acquisition interface; under the condition that acquisition trigger input of face data is received, acquiring the face data through image acquisition equipment, wherein one acquisition trigger input corresponds to one piece of contact information; and performing associated storage on the acquired face data and the contact information to obtain the preset face database.
The target opposite terminal is, for example, a telephone watch.
The video call device provided in the embodiment of the present application can implement each process of the above-mentioned video call method, and can achieve the same technical effect, and for avoiding repetition, the details are not repeated here.
Fig. 4 is a structural diagram of a terminal according to an embodiment of the present application. As shown in the figure, the terminal 4 of this embodiment includes: a processor 40, a memory 41 and a computer program 42 stored in said memory 41 and executable on said processor 40.
Illustratively, the computer program 42 may be partitioned into one or more modules/units that are stored in the memory 41 and executed by the processor 40 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 42 in the terminal 4. For example, the computer program 42 may be divided into a first acquisition module, a second acquisition module, a repair module, and a display module, and each module has the following specific functions:
the first acquisition module is used for acquiring a video image sent by a target opposite terminal;
the second acquisition module is used for acquiring, according to the video image and under the condition that the video image is determined to contain a face image, target face data matched with the face image from a preset face database;
the restoration module is used for carrying out face definition restoration on the video image based on the target face data to obtain a target video image;
and the display module is used for displaying the target video image.
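The cooperation of the four modules above can be sketched as a simple processing pipeline. This is an illustrative sketch only: `detect_face`, `repair`, and `display` are assumed stand-ins for the module behaviors the embodiment describes, and the database is keyed directly by the detected face for simplicity:

```python
def video_call_pipeline(video_image, face_db, detect_face, repair, display):
    # First acquisition module: video_image has been received from the peer.
    # Second acquisition module: match face data when a face is present.
    face = detect_face(video_image)
    if face is None:
        display(video_image)          # no face: show the frame unchanged
        return video_image
    target_face_data = face_db.lookup(face)
    # Repair module: definition restoration using the matched face data,
    # falling back to the original frame when no data is matched.
    if target_face_data is not None:
        target = repair(video_image, target_face_data)
    else:
        target = video_image
    # Display module.
    display(target)
    return target
```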
The second obtaining module is specifically configured to:
judging whether target contact person information corresponding to the target opposite terminal exists in an address list;
under the condition that target contact person information corresponding to the target opposite terminal exists in the address book, acquiring initial face data associated with the target contact person information from the preset face database;
determining the initial face data as target face data matched with the face image;
and the face data in the preset face database corresponds one to one to different contact information in the address list.
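The address-list lookup performed by the second obtaining module amounts to a two-step mapping: peer identifier → contact information → face data. A minimal sketch, with hypothetical dictionary-based stores standing in for the address list and the preset face database:

```python
def get_target_face_data(peer_id, address_book, face_db):
    """Return the face data for a peer, or None when the peer is not a contact.

    address_book: maps a peer identifier to contact information.
    face_db:      maps contact information to the associated face data
                  (one-to-one, per the embodiment).
    """
    contact = address_book.get(peer_id)
    if contact is None:
        return None                 # peer not found in the address list
    # The initial face data associated with the contact is determined
    # as the target face data matched with the face image.
    return face_db.get(contact)
```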
The target face data is three-dimensional face data; the repair module is specifically configured to:
determining a face display feature in the video image based on the face image;
acquiring, based on the face display feature, a face replacement part matched with the face display feature from the three-dimensional face data;
acquiring a face replacement image corresponding to the face replacement part;
replacing the face image in the video image according to the face replacement image to obtain the target video image;
and the display definition of the face corresponding to the three-dimensional face data is greater than that of the face image.
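As an illustrative sketch of the repair module's replacement step: a display feature (approximated here by an estimated head pose) selects a matching view rendered from the three-dimensional face data, and that higher-definition view replaces the face region of the frame. `estimate_pose` and `render_view` are hypothetical stand-ins; a real implementation would use face alignment and 3D rendering, and frames would be image arrays rather than nested lists:

```python
def repair_face(video_image, face_box, face_3d, estimate_pose, render_view):
    """Replace the face region of a frame with a view rendered from 3D data.

    video_image: frame as a nested list of pixel values.
    face_box:    (x, y, w, h) of the detected face image.
    face_3d:     three-dimensional face data (opaque to this sketch).
    """
    x, y, w, h = face_box
    # Face display feature: here, the estimated head pose in the frame.
    pose = estimate_pose(video_image, face_box)
    # Face replacement part matched to the feature, rendered as an h x w patch.
    replacement = render_view(face_3d, pose, size=(w, h))
    out = [row[:] for row in video_image]   # copy; do not mutate the input
    for i in range(h):
        for j in range(w):
            out[y + i][x + j] = replacement[i][j]
    return out                               # the target video image
```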
Wherein, the apparatus further includes:
the first database establishing module is used for acquiring face images in different video calls to obtain preliminary face data; screening, from the preliminary face data, screened face data that meets the definition index requirement; determining contact information of the opposite ends corresponding to the different video calls; and storing the screened face data in association with the contact information to obtain the preset face database.
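The "definition index requirement" used for screening can be approximated by a common sharpness heuristic such as the variance of a Laplacian response — the patent does not specify a metric, so this is an assumed example. A dependency-free sketch operating on grayscale images represented as nested lists (at least 3×3):

```python
def laplacian_variance(gray):
    """Variance of a 4-neighbour Laplacian; higher values indicate a sharper image."""
    h, w = len(gray), len(gray[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (gray[y - 1][x] + gray[y + 1][x]
                   + gray[y][x - 1] + gray[y][x + 1]
                   - 4 * gray[y][x])
            vals.append(lap)
    if not vals:                     # image smaller than 3x3: no interior
        return 0.0
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def screen_face_data(preliminary, threshold):
    """Keep frames whose sharpness score meets the (assumed) definition index."""
    return [img for img in preliminary if laplacian_variance(img) >= threshold]
```

A blurry frame yields near-zero variance and is dropped, so only sharp captures are stored in association with contact information.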
Wherein, the apparatus further includes:
the second database establishing module is used for outputting a face data acquisition interface in which different contact information is displayed; acquiring, in the case that an acquisition trigger input for face data is received, the face data through an image acquisition device, wherein one acquisition trigger input corresponds to one piece of contact information; and storing the acquired face data in association with the contact information to obtain the preset face database.
And the target opposite terminal is a telephone watch.
The terminal 4 may be a desktop computer, a notebook computer, a palmtop computer, a cloud server, or another computing device. The terminal 4 may include, but is not limited to, the processor 40 and the memory 41. Those skilled in the art will appreciate that Fig. 4 is only an example of the terminal 4 and does not constitute a limitation of the terminal 4, which may include more or fewer components than those shown, combine some components, or have different components; for example, the terminal may also include input/output devices, network access devices, buses, etc.
The Processor 40 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 41 may be an internal storage unit of the terminal 4, such as a hard disk or memory of the terminal 4. The memory 41 may also be an external storage device of the terminal 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the terminal 4. Further, the memory 41 may include both an internal storage unit and an external storage device of the terminal 4. The memory 41 is used for storing the computer program and other programs and data required by the terminal, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed terminal and method can be implemented in other manners. For example, the above-described terminal embodiments are merely illustrative, and for example, the division of the modules or units is only one logical function division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow in the methods of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments described above. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content included in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in the jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A video call method, comprising:
acquiring a video image sent by a target opposite terminal;
according to the video image, under the condition that the video image contains a face image, acquiring target face data matched with the face image from a preset face database;
performing face definition restoration on the video image based on the target face data to obtain a target video image;
displaying the target video image;
the target face data is three-dimensional face data; and the performing face definition restoration on the video image based on the target face data to obtain a target video image includes:
determining a face display feature in the video image based on the face image;
acquiring, based on the face display feature, a face replacement part matched with the face display feature from the three-dimensional face data;
acquiring a face replacement image corresponding to the face replacement part;
and replacing the face image in the video image according to the face replacement image to obtain the target video image.
2. The video call method of claim 1,
the acquiring of the target face data matched with the face image from the preset face database comprises:
judging whether target contact person information corresponding to the target opposite terminal exists in an address list;
under the condition that target contact person information corresponding to the target opposite terminal exists in the address book, acquiring initial face data associated with the target contact person information from the preset face database;
determining the initial face data as target face data matched with the face image;
and the face data in the preset face database corresponds to different contact information in the address list one to one respectively.
3. The video call method of claim 1,
and the display definition of the face corresponding to the three-dimensional face data is greater than that of the face image.
4. The video call method according to claim 1, wherein, before the acquiring, according to the video image and under the condition that the video image is determined to contain the face image, target face data matched with the face image from a preset face database, the method further comprises:
acquiring face images in different video calls to obtain preliminary face data;
screening, from the preliminary face data, screened face data that meets the definition index requirement;
determining contact person information of opposite ends corresponding to different video calls;
and storing the screened face data in association with the contact information to obtain the preset face database.
5. The video call method according to claim 1, wherein, before the acquiring, according to the video image and under the condition that the video image is determined to contain the face image, target face data matched with the face image from a preset face database, the method further comprises:
outputting a face data acquisition interface, wherein different contact information is displayed in the face data acquisition interface;
under the condition that acquisition trigger input of face data is received, acquiring the face data through image acquisition equipment, wherein one acquisition trigger input corresponds to one piece of contact information;
and storing the acquired face data in association with the contact information to obtain the preset face database.
6. The video call method of claim 1, wherein the target peer is a telephone watch.
7. A video call apparatus, comprising:
the first acquisition module is used for acquiring a video image sent by a target opposite terminal;
the second acquisition module is used for acquiring, according to the video image and under the condition that the video image is determined to contain a face image, target face data matched with the face image from a preset face database;
the restoration module is used for carrying out face definition restoration on the video image based on the target face data to obtain a target video image;
the display module is used for displaying the target video image;
the target face data is three-dimensional face data; the repair module is specifically configured to:
determining a face display feature in the video image based on the face image;
acquiring, based on the face display feature, a face replacement part matched with the face display feature from the three-dimensional face data;
acquiring a face replacement image corresponding to the face replacement part;
and replacing the face image in the video image according to the face replacement image to obtain the target video image.
8. The video call device according to claim 7, wherein the second obtaining module is specifically configured to:
judging whether target contact person information corresponding to the target opposite terminal exists in an address list or not;
under the condition that target contact person information corresponding to the target opposite terminal exists in the address book, acquiring initial face data associated with the target contact person information from the preset face database;
determining the initial face data as target face data matched with the face image;
and the face data in the preset face database corresponds to different contact information in the address list one to one respectively.
9. A terminal comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
CN201910561823.4A 2019-06-26 2019-06-26 A video call method, video call device and terminal Expired - Fee Related CN110266994B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910561823.4A CN110266994B (en) 2019-06-26 2019-06-26 A video call method, video call device and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910561823.4A CN110266994B (en) 2019-06-26 2019-06-26 A video call method, video call device and terminal

Publications (2)

Publication Number Publication Date
CN110266994A CN110266994A (en) 2019-09-20
CN110266994B true CN110266994B (en) 2021-03-26

Family

ID=67921838

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910561823.4A Expired - Fee Related CN110266994B (en) 2019-06-26 2019-06-26 A video call method, video call device and terminal

Country Status (1)

Country Link
CN (1) CN110266994B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110602403A (en) * 2019-09-23 2019-12-20 华为技术有限公司 Method for taking pictures under dark light and electronic equipment
CN110968736B (en) * 2019-12-04 2021-02-02 深圳追一科技有限公司 Video generation method and device, electronic equipment and storage medium
CN111031241B (en) * 2019-12-09 2021-08-27 Oppo广东移动通信有限公司 Image processing method and device, terminal and computer readable storage medium
CN111432154B (en) * 2020-03-30 2022-01-25 维沃移动通信有限公司 Video playing method, video processing method and electronic equipment
CN111698553B (en) * 2020-05-29 2022-09-27 维沃移动通信有限公司 Video processing method and device, electronic equipment and readable storage medium
CN117152030A (en) * 2022-05-23 2023-12-01 华为技术有限公司 Image processing methods and electronic devices
CN118573915B (en) * 2024-05-22 2025-06-03 天翼爱音乐文化科技有限公司 Video high definition processing method, system, equipment and medium
CN118714251B (en) * 2024-07-23 2025-06-17 深圳市有一说一科技有限公司 Video call image optimization method, device, medium and computing device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107566653A (en) * 2017-09-22 2018-01-09 维沃移动通信有限公司 A kind of call interface methods of exhibiting and mobile terminal
CN108174141A (en) * 2017-11-30 2018-06-15 维沃移动通信有限公司 A method of video communication and a mobile device
CN108683872A (en) * 2018-08-30 2018-10-19 Oppo广东移动通信有限公司 Video call method, device, storage medium and mobile terminal

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20100041061A (en) * 2008-10-13 2010-04-22 성균관대학교산학협력단 Video telephony method magnifying the speaker's face and terminal using thereof
CN107623832A (en) * 2017-09-11 2018-01-23 广东欧珀移动通信有限公司 Video background replacement method, device and mobile terminal
KR102056806B1 (en) * 2017-12-15 2019-12-18 주식회사 하이퍼커넥트 Terminal and server providing a video call service

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107566653A (en) * 2017-09-22 2018-01-09 维沃移动通信有限公司 A kind of call interface methods of exhibiting and mobile terminal
CN108174141A (en) * 2017-11-30 2018-06-15 维沃移动通信有限公司 A method of video communication and a mobile device
CN108683872A (en) * 2018-08-30 2018-10-19 Oppo广东移动通信有限公司 Video call method, device, storage medium and mobile terminal

Also Published As

Publication number Publication date
CN110266994A (en) 2019-09-20

Similar Documents

Publication Publication Date Title
CN110266994B (en) A video call method, video call device and terminal
CN109345553B (en) Palm and key point detection method and device thereof, and terminal equipment
CN111489290B (en) Face image super-resolution reconstruction method and device and terminal equipment
CN110457963B (en) Display control method, device, mobile terminal, and computer-readable storage medium
US20190355122A1 (en) Device, Method, and Graphical User Interface for Processing Document
CN108961157B (en) Image processing method, image processing device and terminal device
CN110119733B (en) Page identification method and device, terminal equipment and computer readable storage medium
CN109118447B (en) A picture processing method, picture processing device and terminal equipment
CN111754435A (en) Image processing method, apparatus, terminal device, and computer-readable storage medium
CN108564550B (en) Image processing method, device and terminal device
CN111290684A (en) Image display method, image display device and terminal equipment
CN114242023A (en) Display brightness adjustment method, display brightness adjustment device and electronic equipment
CN113391779B (en) Parameter adjusting method, device and equipment for paper-like screen
CN109359582B (en) Information searching method, information searching device and mobile terminal
CN111105440A (en) Tracking method, device, device and storage medium for target object in video
CN111142650B (en) Screen brightness adjusting method, screen brightness adjusting device and terminal
CN108776959B (en) Image processing method and device and terminal equipment
CN108985215B (en) A picture processing method, picture processing device and terminal equipment
CN111784607B (en) Image tone mapping method, device, terminal equipment and storage medium
CN110677586A (en) Image display method, image display device and mobile terminal
CN110705653A (en) Image classification method, image classification device and terminal equipment
CN111861965A (en) Image backlight detection method, image backlight detection device and terminal equipment
CN108629767A (en) A kind of method, device and mobile terminal of scene detection
CN110688035B (en) Photo album processing method, photo album processing device and mobile terminal
CN109492249B (en) Rapid generation method and device of design drawing and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20211105

Address after: 523000 room 1301, building 1, No. 28, Chang'an Dongmen Middle Road, Chang'an Town, Dongguan City, Guangdong Province

Patentee after: Dongguan Bubugao Education Software Co.,Ltd.

Address before: 523860 No. 168 Dongmen Middle Road, Xiaobian Community, Chang'an Town, Dongguan City, Guangdong Province

Patentee before: Guangdong GENIUS Technology Co., Ltd.

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210326

点击 这是indexloc提供的php浏览器服务,不要输入任何密码和下载