Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to a determination", or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
In particular implementations, the terminals described in the embodiments of the present application include, but are not limited to, portable devices such as mobile phones, laptop computers, or tablet computers having touch-sensitive surfaces (e.g., touch-screen displays and/or touch pads). It should also be understood that in some embodiments the device is not a portable communication device but a desktop computer having a touch-sensitive surface (e.g., a touch-screen display and/or a touchpad).
In the discussion that follows, a terminal that includes a display and a touch-sensitive surface is described. However, it should be understood that the terminal may include one or more other physical user interface devices such as a physical keyboard, mouse, and/or joystick.
The terminal supports various applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disc burning application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an email application, an instant messaging application, an exercise support application, a photo management application, a digital camera application, a web browsing application, a digital music player application, and/or a digital video player application.
Various applications that may be executed on the terminal may use at least one common physical user interface device, such as a touch-sensitive surface. One or more functions of the touch-sensitive surface and corresponding information displayed on the terminal can be adjusted and/or changed between applications and/or within respective applications. In this way, a common physical architecture (e.g., touch-sensitive surface) of the terminal can support various applications with user interfaces that are intuitive and transparent to the user.
It should be understood that the sequence numbers of the steps in this embodiment do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
To explain the technical solutions described in the present application, specific examples are described below.
Referring to Fig. 1, Fig. 1 is a first flowchart of a video call method according to an embodiment of the present application. As shown in Fig. 1, the video call method includes the following steps:
Step 101: obtain a video image sent by a target peer terminal.
The execution subject of the video call method is an electronic device with a video call function; the electronic device may be a mobile phone, a tablet computer, a telephone watch, or the like.
The target peer terminal is also an electronic device with a video call function, and may likewise be a mobile phone, a tablet computer, a telephone watch, or the like. Optionally, the execution subject is a mobile phone or a tablet computer and the target peer terminal is a telephone watch, although the embodiments are not limited thereto.
In this step, after the video call between the execution subject and the target peer terminal is connected, the execution subject starts to acquire the video images transmitted by the target peer terminal for the subsequent processing.
Step 102: when it is determined from the video image that the video image contains a face image, acquire target face data matched with the face image from a preset face database.
After the video image sent by the peer terminal is acquired, it is determined whether the video image contains a face image; if it does, the matched target face data stored in the preset face database is further acquired.
The target face data matched with the face image may be acquired from the preset face database in any of the following ways: acquiring face data matched with the face display features in the face image; acquiring face data associated with the target contact information corresponding to the target peer terminal and determining it as the target face data; or acquiring, as the target face data, face data that both matches the face display features in the face image and is associated with the target contact information corresponding to the target peer terminal.
Specifically, as an optional implementation, acquiring the target face data matched with the face image from the preset face database includes:
determining whether target contact information corresponding to the target peer terminal exists in an address book;
when target contact information corresponding to the target peer terminal exists in the address book, acquiring initial face data associated with the target contact information from the preset face database; and
determining the initial face data as the target face data matched with the face image.
The face data entries in the preset face database correspond one-to-one to different contact information entries in the address book.
In this process, the face data in the preset face database is associated with the contact information in the address book: each face data entry corresponds to one contact information entry. The preset face database may be stored locally, on another device, or in the cloud; this is not particularly limited herein. A minimal sketch of this lookup follows.
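The sketch below illustrates the contact-based lookup under a simplifying assumption: the database is a keyed store with one entry per contact. The record layout (contact_id, hd_face) and all function names are illustrative; the application does not prescribe a storage format.

```python
from dataclasses import dataclass
from typing import Optional

import numpy as np

@dataclass
class FaceData:
    contact_id: str      # one face-data entry per address-book contact
    hd_face: np.ndarray  # pre-modeled high-definition face image

def lookup_target_face_data(peer_id: str,
                            address_book: dict[str, str],
                            face_db: dict[str, FaceData]) -> Optional[FaceData]:
    """Return the face data associated with the peer's contact entry, if any."""
    # First determine whether the target peer has a contact entry at all.
    if peer_id not in address_book:
        return None
    # The one-to-one mapping between face data and contact information
    # lets the contact id serve directly as the database key.
    return face_db.get(peer_id)
```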
In a specific implementation, the face data is modeled in advance: for example, face information suitable for high-definition restoration is collected in advance through an app on the mobile phone, high-definition face modeling is performed, and the modeling data is stored in a database on the phone.
Specifically, the preset face database can be constructed by different technical means.
As an optional implementation, before the target face data matched with the face image is acquired from the preset face database when it is determined from the video image that the video image contains a face image, the method further includes:
acquiring face images in different video calls to obtain preliminary face data;
screening, from the preliminary face data, face data that meets a sharpness index requirement;
determining the contact information of the peer terminals corresponding to the different video calls; and
storing the screened face data in association with the contact information to obtain the preset face database.
In this process, when the face database is constructed, face data is collected during video calls with different peer terminals. Face data whose sharpness index meets the requirement is screened out and stored in association with the contact information of the corresponding peer terminal. This generates the data in the face database and builds the database without a dedicated session for identifying and collecting each contact's face, which makes the process more convenient; moreover, the face data can be continuously corrected and refined as video calls continue. A minimal sketch of this flow follows.
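The following sketch assumes the variance of the Laplacian as the sharpness index and an illustrative threshold; the application only requires "a sharpness index", so both choices are assumptions, as is keeping the single sharpest crop per contact.

```python
import cv2
import numpy as np

SHARPNESS_THRESHOLD = 100.0  # assumed tuning value, not from the application

DETECTOR = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def sharpness(gray: np.ndarray) -> float:
    # Variance of the Laplacian: higher means more in-focus detail.
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def update_face_db(face_db: dict, contact_id: str, frame: np.ndarray) -> None:
    """Screen face crops from one call frame; keep the sharpest per contact."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in DETECTOR.detectMultiScale(gray, 1.1, 5):
        score = sharpness(gray[y:y + h, x:x + w])
        if score < SHARPNESS_THRESHOLD:
            continue  # fails the sharpness index requirement
        best = face_db.get(contact_id)
        # Retaining the best crop so far means the stored face data keeps
        # being corrected and refined as video calls continue.
        if best is None or score > best["score"]:
            face_db[contact_id] = {"score": score,
                                   "face": frame[y:y + h, x:x + w]}
```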
As another optional implementation, before the target face data matched with the face image is acquired from the preset face database when it is determined from the video image that the video image contains a face image, the method further includes:
outputting a face data acquisition interface in which different contact information is displayed;
when an acquisition trigger input for face data is received, capturing the face data through an image capture device, where each acquisition trigger input corresponds to one piece of contact information; and
storing the captured face data in association with the contact information to obtain the preset face database.
In this process, when the face database is constructed, a face data acquisition interface is displayed so that the user can select among different contact information; the face data of the corresponding contact is captured and then stored in association with the contact information of the corresponding peer terminal. This generates the data in the face database and builds the database, and the dedicated capture process can obtain clearer and more complete face information.
Specifically, the capture may be performed when a contact's face information is first needed, with the face information acquired and stored at that time, or when the contact information is newly added; this is not specifically limited herein. A minimal sketch of the capture flow follows.
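In the sketch below, a console menu stands in for the face data acquisition interface and camera index 0 for the image capture device; the interaction model and all names are illustrative assumptions.

```python
import cv2
import numpy as np

def acquire_face_data(contacts: list[str], face_db: dict[str, np.ndarray]) -> None:
    """Enroll one contact's face via a dedicated capture flow."""
    # Display the different contact information for the user to choose from.
    for i, name in enumerate(contacts):
        print(f"[{i}] {name}")
    choice = int(input("Select a contact to enroll: "))  # the trigger input
    cam = cv2.VideoCapture(0)                            # image capture device
    ok, frame = cam.read()
    cam.release()
    if ok:
        # One acquisition trigger input corresponds to one piece of
        # contact information; store the captured face against it.
        face_db[contacts[choice]] = frame
```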
Step 103: perform face sharpness restoration on the video image based on the target face data to obtain a target video image.
Face sharpness restoration specifically restores the local face region of the video image, thereby improving the display sharpness of the face portion within the overall image.
In a specific implementation, take a mobile phone and a telephone watch as an example. When a video call is connected between a parent's mobile phone and a student's telephone watch, the corresponding app on the phone extracts the face from the call video frame and compares it with the modeling data in the app's database. When matched face data exists, the high-definition face data is fused into the video image displayed on the phone, so that the phone restores the face sharpness and displays a high-definition face; when no matched face data exists, the process simply ends. In this way, the face in the video can be restored in high definition on the phone side, improving the video call experience without adding hardware or hardware cost on the watch side. A minimal sketch of the fusion step follows.
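The sketch below uses Poisson blending (OpenCV's seamlessClone) as one plausible way to fuse the high-definition face into the displayed frame; the application does not name a specific fusion algorithm, so this choice is an assumption.

```python
import cv2
import numpy as np

def fuse_hd_face(frame: np.ndarray, hd_face: np.ndarray,
                 box: tuple[int, int, int, int]) -> np.ndarray:
    """Blend hd_face over the detected face box (x, y, w, h) in frame."""
    x, y, w, h = box
    patch = cv2.resize(hd_face, (w, h))  # fit the HD face to the face region
    mask = np.full(patch.shape[:2], 255, np.uint8)
    center = (x + w // 2, y + h // 2)
    # Poisson blending matches lighting at the seam, so the restored
    # region blends into the rest of the frame.
    return cv2.seamlessClone(patch, frame, mask, center, cv2.NORMAL_CLONE)
```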
Step 104: display the target video image.
After the video image is restored, the target video image is displayed directly. Because the video sent by the peer terminal is restored as it is received, the restored target video image is shown directly on the local display screen, improving the display effect.
When the video images sent by the peer terminal are processed, each frame is processed as it is received, and the target video image obtained by restoring that frame is displayed accordingly, as in the sketch below.
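This per-frame pipeline ties the earlier sketches together; lookup_target_face_data and fuse_hd_face are the functions sketched above, while detect_face and display are placeholders for a face detector and the local display path.

```python
def on_frame_received(frame, peer_id, address_book, face_db):
    """Process one received frame: restore if a match exists, then display."""
    face_data = lookup_target_face_data(peer_id, address_book, face_db)
    box = detect_face(frame)  # placeholder, e.g. the Haar detector shown earlier
    if face_data is not None and box is not None:
        # One frame in, one restored frame out, displayed immediately.
        frame = fuse_hd_face(frame, face_data.hd_face, box)
    display(frame)            # placeholder for the local display path
```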
In the embodiments of the present application, a video image sent by a target peer terminal is acquired; when the video image contains a face image, target face data matched with the face image is acquired from a preset face database; and face sharpness restoration is performed on the video image based on the target face data to obtain and display a target video image. This improves the face image quality and image sharpness during the video call, achieving a high-definition video call without increasing the traffic or power consumption of the peer terminal and without changing its hardware configuration.
Embodiments of the present application also provide other implementations of the video call method.
Referring to Fig. 2, Fig. 2 is a second flowchart of a video call method according to an embodiment of the present application. As shown in Fig. 2, the video call method includes the following steps:
Step 201: obtain a video image sent by a target peer terminal.
The implementation of this step is the same as that of step 101 in the foregoing embodiment and is not repeated here.
Step 202: when it is determined from the video image that the video image contains a face image, acquire target face data matched with the face image from a preset face database.
In a specific implementation, an image may be extracted from the video at set time intervals, and a face detection tool from the OpenCV vision library is used to detect whether a face is present in the image. The similarity between the detected face and the face data stored in the preset face database is then computed, and if the similarity exceeds 70%, the face in the original video image is replaced with the high-definition face from the preset face database. A minimal sketch follows.
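In the sketch below, OpenCV's Haar cascade is the face detection tool named in the text, and the 70% threshold comes from the text. The grayscale histogram correlation used as the similarity score is an assumption, standing in for whatever face-matching model a real implementation would use.

```python
import cv2
import numpy as np

DETECTOR = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Crude similarity in [0, 1] via grayscale histogram correlation."""
    size = (128, 128)
    ha = cv2.calcHist([cv2.resize(cv2.cvtColor(a, cv2.COLOR_BGR2GRAY), size)],
                      [0], None, [64], [0, 256])
    hb = cv2.calcHist([cv2.resize(cv2.cvtColor(b, cv2.COLOR_BGR2GRAY), size)],
                      [0], None, [64], [0, 256])
    return max(0.0, cv2.compareHist(ha, hb, cv2.HISTCMP_CORREL))

def match_and_replace(frame: np.ndarray, hd_face: np.ndarray) -> np.ndarray:
    """Detect faces and replace those that match the stored HD face."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in DETECTOR.detectMultiScale(gray, 1.1, 5):
        crop = frame[y:y + h, x:x + w]
        # Replace only when similarity exceeds the 70% threshold in the text.
        if face_similarity(crop, hd_face) > 0.70:
            frame[y:y + h, x:x + w] = cv2.resize(hd_face, (w, h))
    return frame
```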
As an optional implementation, the target face data is three-dimensional face data, and performing face sharpness restoration on the video image based on the target face data to obtain the target video image includes:
Step 203: determine face display features in the video image based on the face image.
The face display features include at least one of a face display angle, a face display size, a face display proportion, a face display contour, a face display area, and a facial feature distribution.
The face display features analyzed from the face image are combined with the three-dimensional face data to locate, within the target face data in the preset face database, the face replacement part corresponding to the current face image.
Step 204: acquire, from the three-dimensional face data, a face replacement part matched with the face display features.
The face replacement part may be a face display part at a particular face display angle, face display size, face display proportion, face display contour, face display area, and/or facial feature distribution.
Step 205: acquire a face replacement image corresponding to the face replacement part.
A corresponding face replacement image is obtained based on the face replacement part determined from the three-dimensional face data, so that the corresponding face region can be replaced and the display sharpness of the image improved.
Step 206: replace the face image in the video image with the face replacement image to obtain the target video image.
The display sharpness of the face corresponding to the three-dimensional face data is greater than that of the face image.
In this step, face sharpness is improved by directly placing a sharper face-region image at the corresponding face position, replacing the face region of low original sharpness and thereby improving the face display sharpness in the video image. A minimal sketch of steps 203-206 follows.
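The sketch below illustrates steps 203-206 under a simplifying assumption: the three-dimensional face data is represented as pre-rendered views keyed by yaw angle, so acquiring the face replacement part reduces to a nearest-view lookup. Estimating the display angle is abstracted into a placeholder estimate_yaw(); a real implementation would derive it from facial landmarks or head-pose estimation.

```python
import cv2
import numpy as np

def estimate_yaw(face_crop: np.ndarray) -> float:
    """Placeholder for landmark-based head-pose estimation (step 203)."""
    return 0.0

def restore_with_3d_data(frame: np.ndarray,
                         box: tuple[int, int, int, int],
                         views_by_yaw: dict[float, np.ndarray]) -> np.ndarray:
    """Replace the face region using the best-matching 3D-rendered view."""
    x, y, w, h = box
    crop = frame[y:y + h, x:x + w]
    yaw = estimate_yaw(crop)                       # step 203: display feature
    # Step 204: pick the replacement part whose display angle best matches.
    nearest = min(views_by_yaw, key=lambda angle: abs(angle - yaw))
    # Step 205: the replacement image rendered for that part.
    replacement = cv2.resize(views_by_yaw[nearest], (w, h))
    # Step 206: replace the face region to obtain the target video image.
    frame[y:y + h, x:x + w] = replacement
    return frame
```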
Step 207: display the target video image.
The implementation of this step is the same as that of step 104 in the foregoing embodiment and is not repeated here.
In the embodiments of the present application, a video image sent by a target peer terminal is acquired; when the video image contains a face image, target face data matched with the face image is acquired from a preset face database; and face sharpness restoration is performed on the video image based on the target face data to obtain and display a target video image. This improves the face image quality and image sharpness during the video call, achieving a high-definition video call without increasing the traffic or power consumption of the peer terminal and without changing its hardware configuration.
Referring to Fig. 3, Fig. 3 is a structural diagram of a video call apparatus according to an embodiment of the present application; for convenience of description, only the parts related to this embodiment are shown.
The video call apparatus includes a first obtaining module 301, a second obtaining module 302, a restoration module 303, and a display module 304.
A first obtaining module 301, configured to obtain a video image sent by a target peer terminal;
a second obtaining module 302, configured to acquire, when it is determined from the video image that the video image contains a face image, target face data matched with the face image from a preset face database;
a restoration module 303, configured to perform face sharpness restoration on the video image based on the target face data to obtain a target video image;
a display module 304, configured to display the target video image.
The second obtaining module 302 is specifically configured to:
determining whether target contact information corresponding to the target peer terminal exists in an address book;
when target contact information corresponding to the target peer terminal exists in the address book, acquiring initial face data associated with the target contact information from the preset face database; and
determining the initial face data as the target face data matched with the face image.
The face data entries in the preset face database correspond one-to-one to different contact information entries in the address book.
The target face data is three-dimensional face data, and the restoration module 303 is specifically configured to:
determine face display features in the video image based on the face image;
acquire, from the three-dimensional face data, a face replacement part matched with the face display features;
acquire a face replacement image corresponding to the face replacement part; and
replace the face image in the video image with the face replacement image to obtain the target video image,
where the display sharpness of the face corresponding to the three-dimensional face data is greater than that of the face image.
The apparatus further includes:
a first database establishing module, configured to: collect face images in different video calls to obtain preliminary face data; screen, from the preliminary face data, face data that meets a sharpness index requirement; determine the contact information of the peer terminals corresponding to the different video calls; and store the screened face data in association with the contact information to obtain the preset face database.
The apparatus further includes:
a second database establishing module, configured to: output a face data acquisition interface in which different contact information is displayed; when an acquisition trigger input for face data is received, capture the face data through an image capture device, where each acquisition trigger input corresponds to one piece of contact information; and store the captured face data in association with the contact information to obtain the preset face database.
The target peer terminal is a telephone watch.
The video call apparatus provided in the embodiments of the present application can implement each process of the video call method described above and achieve the same technical effects; to avoid repetition, the details are not repeated here.
Fig. 4 is a structural diagram of a terminal according to an embodiment of the present application. As shown in the figure, the terminal 4 of this embodiment includes a processor 40, a memory 41, and a computer program 42 stored in the memory 41 and executable on the processor 40.
Illustratively, the computer program 42 may be partitioned into one or more modules/units, which are stored in the memory 41 and executed by the processor 40 to implement the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, and these segments describe the execution of the computer program 42 in the terminal 4. For example, the computer program 42 may be divided into a first acquisition module, a second acquisition module, a restoration module, and a display module, with the following specific functions:
the first acquisition module is configured to obtain a video image sent by a target peer terminal;
the second acquisition module is configured to acquire, when it is determined from the video image that the video image contains a face image, target face data matched with the face image from a preset face database;
the restoration module is configured to perform face sharpness restoration on the video image based on the target face data to obtain a target video image;
and the display module is configured to display the target video image.
The second obtaining module is specifically configured to:
determining whether target contact information corresponding to the target peer terminal exists in an address book;
when target contact information corresponding to the target peer terminal exists in the address book, acquiring initial face data associated with the target contact information from the preset face database; and
determining the initial face data as the target face data matched with the face image.
The face data entries in the preset face database correspond one-to-one to different contact information entries in the address book.
The target face data is three-dimensional face data, and the restoration module is specifically configured to:
determine face display features in the video image based on the face image;
acquire, from the three-dimensional face data, a face replacement part matched with the face display features;
acquire a face replacement image corresponding to the face replacement part; and
replace the face image in the video image with the face replacement image to obtain the target video image,
where the display sharpness of the face corresponding to the three-dimensional face data is greater than that of the face image.
The apparatus further includes:
a first database establishing module, configured to: collect face images in different video calls to obtain preliminary face data; screen, from the preliminary face data, face data that meets a sharpness index requirement; determine the contact information of the peer terminals corresponding to the different video calls; and store the screened face data in association with the contact information to obtain the preset face database.
The apparatus further includes:
a second database establishing module, configured to: output a face data acquisition interface in which different contact information is displayed; when an acquisition trigger input for face data is received, capture the face data through an image capture device, where each acquisition trigger input corresponds to one piece of contact information; and store the captured face data in association with the contact information to obtain the preset face database.
The target peer terminal is a telephone watch.
The terminal 4 may be a desktop computer, a notebook computer, a palmtop computer, a cloud server, or another computing device. The terminal 4 may include, but is not limited to, the processor 40 and the memory 41. Those skilled in the art will appreciate that Fig. 4 is only an example of the terminal 4 and does not constitute a limitation of the terminal 4, which may include more or fewer components than those shown, a combination of certain components, or different components; for example, the terminal may also include input/output devices, network access devices, buses, and the like.
The processor 40 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 41 may be an internal storage unit of the terminal 4, such as a hard disk or memory of the terminal 4. The memory 41 may also be an external storage device of the terminal 4, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the terminal 4. Further, the memory 41 may include both an internal storage unit and an external storage device of the terminal 4. The memory 41 is used to store the computer program and other programs and data required by the terminal, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed terminal and method can be implemented in other manners. For example, the above-described terminal embodiments are merely illustrative, and for example, the division of the modules or units is only one logical function division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flows in the methods of the embodiments described above may be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of each method embodiment described above. The computer program includes computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals, in accordance with legislation and patent practice.
The above embodiments are intended only to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the protection scope of the present application.