CN113763233B - Image processing method, server and photographing equipment - Google Patents
Image processing method, server and photographing equipment
- Publication number
- CN113763233B (grant publication); CN202110890846.7A / CN202110890846A (application)
- Authority
- CN
- China
- Prior art keywords
- image
- stylized
- target image
- target
- processed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
- Studio Devices (AREA)
Abstract
The application provides an image processing method, a server, and a photographing device, and relates to the technical field of image processing. It can effectively avoid an obvious sense of splitting, and a severe mismatch of styles, between the original image and the stylized image during image stylization, thereby improving the quality of the stylized image. The method comprises the following steps: the server acquires an image to be processed and an image stylization identifier; a first target image is obtained after target matting is performed on the image to be processed; a second target image is then obtained after the first target image is stylized with the feature model corresponding to the image stylization identifier; finally, the second target image and the stylized background image corresponding to the image stylization identifier are superimposed and fused to obtain the final stylized target image.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method, a server, and a photographing device.
Background
Image stylization is increasingly popular with users, and plenty of image processing software is available on the market, among which Photoshop has the largest and most professional user base. A user can stylize an image with filters in Photoshop such as watercolor, poster edges, spatter and texture. With the spread of Android and iOS mobile phones, image processing software for phones has also become popular.
However, existing mobile phone photographing apps are limited by the computing power of the phone and the closed app environment: photo processing is mainly concentrated on beautifying effects such as skin whitening, and whole-image stylization of a photo cannot be achieved. As a result, the stylization effect often fails to meet expectations; the original image and the stylized result may show an obvious sense of splitting and a severe mismatch of styles, so that the quality of the stylized image cannot satisfy actual aesthetic requirements and the user experience is poor.
Disclosure of Invention
The embodiments of the application provide an image processing method, a server and a photographing device, which can effectively avoid an obvious sense of splitting and a severe mismatch of styles between the original image and the stylized image during image stylization, and improve the quality of the stylized image.
In a first aspect, the present application provides an image processing method applied to a server, including: acquiring an image to be processed and an image stylization identifier sent by a photographing device; adaptively segmenting a target and a background of the image to be processed based on an image matting method to obtain a first target image; inputting the first target image into a feature model corresponding to the image stylization identifier to perform stylization processing and obtain a second target image; acquiring a stylized background image corresponding to the image stylization identifier according to the image stylization identifier; and superimposing and fusing the second target image and the stylized background image to obtain the final stylized target image.
In the embodiments of the application, the server acquires the image to be processed and the image stylization identifier. A first target image is obtained after target matting is performed on the image to be processed, and a second target image is obtained after the first target image is stylized with the feature model corresponding to the image stylization identifier. The second target image is then superimposed and fused with the stylized background image corresponding to the image stylization identifier to obtain the final stylized target image. A high-quality stylized image can thus be obtained, an obvious sense of splitting and a severe mismatch of styles between the original image and the stylized image are effectively avoided, and the quality of the stylized image is improved.
Exemplarily, inputting the first target image into the feature model corresponding to the image stylization identifier to perform stylization processing and obtain a second target image includes:
Acquiring an alpha channel of a first target image;
And performing stylization processing on an image area with the alpha channel larger than a preset value in the first target image through the feature model to obtain a second target image.
In the embodiments of the application, the transparent and opaque regions of the first target image can be determined by acquiring its alpha channel and comparing it with the preset value. By stylizing only the image area whose alpha channel is larger than the preset value, the opaque region of the first target image (i.e. the target in the image to be processed) is stylized while the background region (i.e. the transparent region) is left untouched, so that a second target image with a transparency channel is obtained and the subsequent stylization steps are better supported.
Exemplarily, when the target and the background of the image to be processed are adaptively segmented based on the image matting method to obtain the first target image, the method further includes: analyzing and determining image noise in the image to be processed, and removing the determined image noise.
In the embodiment of the application, the image noise in the image to be processed is analyzed and determined, so that the interference of the image noise on the stylized image can be effectively reduced, the fusion effect of the stylized image is further improved, and the high-quality stylized image is obtained.
Exemplarily, after the second target image and the stylized background image are superimposed and fused to obtain the final stylized target image, the method includes: generating and sending an image link of the final target image.
In the embodiments of the application, after the server obtains the final stylized target image, it generates an image link of the final target image and sends the link to the terminal device used to display the final target image, such as a smartphone or a photographing experience machine, so that the user can browse the final stylized target image on a suitable device, which improves the user experience.
Exemplarily, when the target and the background of the image to be processed are adaptively segmented based on the image matting method to obtain the first target image, the method further includes: extending the image edge of the first target image by a predetermined number of pixels.
In the embodiments of the application, to prevent an overly sharp edge of the first target image from causing an obvious sense of splitting between the stylized background image and the final target image, the image edge of the first target image can be extended by a predetermined number of pixels, so that the stylized first target image (i.e. the second target image) fuses better with the stylized background image, yielding a stylized image of higher quality.
In a second aspect, the present application provides another image processing method applied to a photographing apparatus, including:
Starting a shooting function to acquire an image to be processed; acquiring an image stylized identifier; and sending the image to be processed and the image stylization identification to a server, wherein the image to be processed and the image stylization identification are used for indicating the server to perform stylization processing on the image to be processed according to the image stylization identification so as to obtain a final target image.
Illustratively, the image processing method further comprises:
Receiving an image link sent by a server; and displaying the final target image and the image link on a screen terminal of the photographing equipment according to the image link.
In a third aspect, the present application provides an image processing apparatus comprising:
the image and identification acquisition unit is used for acquiring the image to be processed and the image stylized identification sent by the photographing equipment;
The image segmentation unit is used for adaptively segmenting a target and a background of an image to be processed based on an image matting method to obtain a first target image;
the stylized processing unit is used for inputting the first target image into the characteristic model corresponding to the image stylized identifier to perform stylized processing to obtain a second target image;
The stylized background image acquisition unit is used for acquiring the stylized background image corresponding to the image stylized identifier according to the image stylized identifier;
And the image fusion unit is used for carrying out superposition fusion on the second target image and the stylized background image to obtain a final target image after stylization.
Specifically, the stylized processing unit includes:
An alpha channel acquisition subunit, configured to acquire an alpha channel of the first target image;
And the stylizing processing subunit is used for performing stylizing processing on the image area of which the alpha channel is larger than a preset value in the first target image through the feature model to obtain a second target image.
Specifically, the image segmentation unit is further configured to:
And analyzing and determining image noise in the image to be processed, and removing the determined image noise.
Specifically, the image processing apparatus further includes:
an image link generating and sending unit, configured to generate and send an image link of the final target image.
Specifically, the image segmentation unit is further configured to:
the image edges of the first target image are extended by predetermined pixels.
In a fourth aspect, the present application provides another image processing apparatus applied to a photographing device, comprising:
the image and identification acquisition unit is used for starting a shooting function to acquire an image to be processed; acquiring an image stylized identifier;
The image and identification sending unit is used for sending the image to be processed and the image stylized identification to a server, wherein the image to be processed and the image stylized identification are used for indicating the server to perform stylized processing on the image to be processed according to the image stylized identification so as to obtain a final target image.
Illustratively, the image processing apparatus further includes:
an image link receiving unit for receiving the image link sent by the server;
and the image display unit is used for displaying the final target image and the image link on a screen terminal of the photographing equipment according to the image link.
In a fifth aspect, the present application provides a server comprising a processor, a memory and a computer program stored in the memory and executable on the processor, the processor implementing the method as in the first aspect or any of the alternatives of the first aspect when executing the computer program.
In a sixth aspect, the present application provides a photographing apparatus comprising a processor, a memory and a computer program stored in the memory and executable on the processor, the processor implementing a method as in the second aspect or any alternative of the second aspect when the computer program is executed.
In a seventh aspect, the present application provides a computer readable storage medium storing a computer program which when executed by a processor implements a method as in the first aspect or any of the alternatives of the first aspect.
In an eighth aspect, an embodiment of the present application provides a computer program product for, when run on an image processing apparatus, causing the image processing apparatus to perform the steps of the image processing method of the first or second aspects described above.
It will be appreciated that the advantageous effects of the second to eighth aspects may be found in the relevant description of the first aspect and are not repeated here.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and a person of ordinary skill in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the present application;
Fig. 2 is a set of image diagrams provided by the embodiment of the present application, where (1) in fig. 2 is an image diagram of an image to be processed provided by the embodiment of the present application, (2) in fig. 2 is an image diagram of a first target image obtained by adaptively dividing (1) in fig. 2, and (3) in fig. 2 is an image diagram of a second target image obtained by performing a stylized process on (2) in fig. 2;
FIG. 3 is a flowchart of another image processing method according to an embodiment of the present application;
FIG. 4 is a schematic view of an image of a final target image obtained by performing a stylization process on the image shown in FIG. 2 according to an embodiment of the present application;
FIG. 5 is a flowchart of another image processing method according to an embodiment of the present application;
fig. 6 is a schematic structural view of an image processing apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural view of another image processing apparatus according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a server according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a photographing apparatus according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations. Furthermore, the terms "first," "second," "third," and the like in the description of the present specification and in the appended claims, are used for distinguishing between descriptions and not necessarily for indicating or implying a relative importance.
It should also be appreciated that references to "one embodiment" or "some embodiments" or the like described in this specification mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
In the embodiments of the application, after a photographing device such as a smartphone or a photographing experience machine takes an image, the device itself cannot perform high-quality stylization on the image because of its limitations, such as low computing power and a closed application environment. The photographing device can therefore upload the photographed image to a server for stylization, and the server returns the stylized image to the photographing device so that the user can view it. Because the server has high computing power and an open environment, the image processing is faster and an image of higher quality can be obtained.
In the embodiments of the application, the photographing device sends the photographed image and the stylization identifier to the server. After receiving the image to be processed (i.e. the image taken by the photographing device) and the image stylization identifier, the server can select a suitable feature model for stylization according to them, and it sends the stylized image to the screen of the photographing device for display, so that the user can preview the stylized image. It should be noted that different image stylization identifiers correspond to different feature models. The server stores trained feature models of different styles, and each feature model can stylize images in the style corresponding to its image stylization identifier.
Referring to fig. 1, fig. 1 is a flowchart of an image processing method according to an embodiment of the present application, which is described in detail below:
in step S101, the server acquires the image to be processed and the image stylized identifier sent by the photographing device.
In the embodiments of the application, after the photographing device takes an image, it acquires the image stylization identifier corresponding to the stylized background image that the user has selected from the stylized background images displayed on the device, and then sends the photographed image (i.e. the image to be processed) and the acquired image stylization identifier to the server. The server completes the stylization of the image according to the received image to be processed and the image stylization identifier.
Specifically, a plurality of different style background images are displayed on a screen of the photographing device for selection by a user, and when the user selects a certain style background image and photographs, the photographing device can obtain an image stylized identifier corresponding to the selected style background image.
In some embodiments of the present application, before sending the image to be processed and the image stylization identifier to the server, the photographing device needs to preprocess the image to be processed, for example by cropping its edges to a preset size. The preset size depends on the resolution of the photographed image; for example, the image to be processed may be cropped to an aspect ratio of 4:3, or to an aspect ratio of 1:1. This effectively avoids problems such as an undersized portrait near the edge or defocus blur caused by an excessive aspect ratio of the image to be processed. Alternatively, the image to be processed may be cropped to the same size as the stylized background image so that, after stylization, it can be superimposed and fused with that background image.
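The crop itself can be done as a simple center crop. The sketch below, using Pillow, is only one way such preprocessing could be realized; the 4:3 target ratio is the example value from this paragraph, not a fixed requirement.

```python
from PIL import Image

def center_crop_to_ratio(img: Image.Image, ratio_w: int = 4, ratio_h: int = 3) -> Image.Image:
    """Crop the edges of an image so that its aspect ratio becomes ratio_w:ratio_h."""
    w, h = img.size
    target = ratio_w / ratio_h
    if w / h > target:            # too wide: trim the left/right edges
        new_w = int(h * target)
        left = (w - new_w) // 2
        return img.crop((left, 0, left + new_w, h))
    else:                         # too tall: trim the top/bottom edges
        new_h = int(w / target)
        top = (h - new_h) // 2
        return img.crop((0, top, w, top + new_h))

# Example: crop the image to be processed to 4:3 before uploading it.
# to_process = center_crop_to_ratio(Image.open("photo.jpg"), 4, 3)
```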
In other embodiments of the present application, the server preprocesses the image to be processed after receiving it together with the image stylization identifier from the photographing device, for example by cropping its edges to a preset size.
Step S102, the server performs self-adaptive segmentation on a target and a background of an image to be processed based on an image matting method to obtain a first target image.
In the embodiment of the application, the target of the image to be processed specifically refers to a target image, such as a person image, a face image, and the like, which needs to be stylized in the image to be processed; the background of the image to be processed specifically refers to other images than the target image in the image to be processed, such as other images than the person image in one image.
In the embodiments of the application, stylizing the background of the image to be processed tends to produce a poor visual effect: the original image and the stylized image show an obvious sense of splitting, the styles are severely mismatched, and the quality of the stylized image is low. To avoid this, the target and the background of the image to be processed can be adaptively segmented with an image matting method to obtain a first target image that retains only the person image. The stylization is then concentrated on the first target image, which ensures a good fusion of the person image with the stylized background image and improves the quality of the stylized image.
Because the first target image is stylized into an image with the same colors, textures and other characteristics as the stylized background image corresponding to the image stylization identifier, the accuracy requirement on matting is not high in practice, and an existing image matting method can be used to adaptively segment the target and the background of the image to be processed.
In some embodiments of the present application, there are uncontrollable factors in the image to be processed, for example passers-by appearing in the frame or a human body occluded by other objects. When the target and the background of the image to be processed are adaptively segmented based on the image matting method, such factors introduce large interference into the first target image and lower the quality of the stylized result.
Therefore, when the target and the background of the image to be processed are adaptively segmented with the image matting method, the image noise in the image to be processed needs to be analyzed and determined, and after the determined image noise is removed, the denoised image to be processed is adaptively segmented to obtain a first target image of high reliability.
Specifically, image noise such as incomplete portraits, portraits occupying too small a proportion of the frame, or portraits located too close to the edge is determined by analyzing information such as the number, positions and confidence of the portraits in the image to be processed.
In some embodiments of the application, portraits in the image to be processed are detected with a portrait detection algorithm and their positions in the image are marked; the image to be processed is segmented according to the marked portrait positions to obtain a plurality of portrait regions; the aspect ratio and area of each portrait region are calculated, and portrait regions whose area is smaller than a preset area are removed; and the remaining portrait regions are subjected to pixel classification and contour detection by an image segmentation algorithm to obtain the first target image.
After pixel classification and contour detection are performed on the portrait regions that remain once the regions smaller than the preset area have been removed, their positions, sizes and so on are adjusted to obtain the first target image.
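The area-based filtering step described above can be illustrated with a minimal OpenCV sketch; the contour-based region extraction and the 2% area threshold are assumptions for illustration, not the specific detector or thresholds of the patent.

```python
import cv2
import numpy as np

def filter_small_regions(mask: np.ndarray, min_area_ratio: float = 0.02) -> np.ndarray:
    """Keep only candidate portrait regions whose area exceeds a preset fraction of the frame.

    `mask` is a binary segmentation mask (uint8, 0 or 255) of candidate portraits.
    """
    h, w = mask.shape[:2]
    min_area = min_area_ratio * h * w
    cleaned = np.zeros_like(mask)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) >= min_area:          # drop regions that are too small
            cv2.drawContours(cleaned, [c], -1, 255, thickness=cv2.FILLED)
    return cleaned
```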
In other embodiments of the present application, to prevent an overly sharp edge of the first target image from causing an obvious sense of splitting between the stylized background image and the final target image, the image edge of the first target image, for example of the person image, can be extended by a predetermined number of pixels after the first target image is obtained, so that the stylized first target image (i.e. the second target image) fuses better with the stylized background image, yielding a stylized image of higher quality.
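One straightforward way to extend the matte edge by a few pixels is a morphological dilation of the alpha mask, sketched below; the 5-pixel kernel and the added blur are only illustrative choices for the "predetermined pixels" mentioned above.

```python
import cv2
import numpy as np

def expand_matte_edge(alpha: np.ndarray, pixels: int = 5) -> np.ndarray:
    """Dilate the alpha matte outward by roughly `pixels` pixels to soften the cut edge."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2 * pixels + 1, 2 * pixels + 1))
    dilated = cv2.dilate(alpha, kernel)
    # A small blur keeps the expanded edge soft so the later fusion shows no hard seam.
    return cv2.GaussianBlur(dilated, (2 * pixels + 1, 2 * pixels + 1), 0)
```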
Step S103, the server inputs the first target image into a feature model corresponding to the image stylization identifier to perform stylization processing, and a second target image is obtained.
In the embodiments of the application, trained feature models for various styles are stored in the server. Feature models of different styles correspond to different image stylization identifiers; once the image stylization identifier is confirmed, the corresponding feature model can be looked up according to it.
It should be noted that the feature model provided in the embodiments of the present application is obtained by training an adversarial neural network on images of different styles. What the model learns is the transformation between the style of data domain A and the style of data domain B rather than a specific one-to-one mapping between data A and data B; in other words, the transformation from the image to be processed to the final stylized image is not a pixel-by-pixel correspondence but a transformation of the image as a whole. The choice of training images is therefore not restricted, and the size, proportion, etc. of the images can be adapted to photographing devices of various models and to various photographing settings.
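The identifier-to-model mapping on the server side might look like the PyTorch sketch below; the registry, the file paths, the example identifiers and the TorchScript format are assumptions, since the paragraph above only states that one trained feature model is stored per style identifier.

```python
import torch

# Hypothetical registry: each image stylization identifier maps to a trained
# generator exported as a TorchScript file on the server.
STYLE_MODEL_PATHS = {
    "watercolor": "/models/watercolor_generator.pt",
    "oil_paint":  "/models/oil_paint_generator.pt",
}

def load_style_model(style_id: str) -> torch.jit.ScriptModule:
    """Look up and load the feature model corresponding to the stylization identifier."""
    path = STYLE_MODEL_PATHS[style_id]          # raises KeyError if the style is unknown
    model = torch.jit.load(path, map_location="cpu")
    model.eval()
    return model
```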
The second target image is a stylized image of the person image.
As shown in fig. 2, fig. 2 is a set of image diagrams provided in the embodiment of the present application, where (1) in fig. 2 is an image diagram of an image to be processed provided in the embodiment of the present application, (2) in fig. 2 is an image diagram of a first target image obtained by adaptively dividing (1) in fig. 2, and (3) in fig. 2 is an image diagram of a second target image obtained by performing a stylization process on (2) in fig. 2.
In the embodiments of the application, after the first target image is obtained, the server obtains the corresponding feature model according to the image stylization identifier and inputs the first target image into that feature model for stylization processing, obtaining the second target image shown in (3) in fig. 2.
Referring to fig. 3, fig. 3 is a flowchart of another image processing method according to an embodiment of the present application, which is described in detail below:
step S301, an alpha channel of a first target image is acquired.
In this embodiment, the alpha channel (α channel) describes the transparency and translucency of an image. While the first target image is stylized by the feature model, its alpha channel is checked so as not to stylize the transparent area of the image, which would otherwise introduce uncontrollable image noise and reduce the quality of the stylized image.
Step S302, performing stylization processing on an image area with an alpha channel larger than a preset value in the first target image through the feature model to obtain a second target image.
In the embodiments of the present application, after the alpha channel of the first target image is acquired, stylization processing is applied to the image area whose alpha channel is larger than the predetermined value, while the image area whose alpha channel is smaller than or equal to the predetermined value is not stylized. A second target image with a transparency channel and the same resolution as the first target image is thus obtained, as shown in (3) in fig. 2.
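A minimal sketch of applying a style model only where the alpha channel exceeds the preset value is shown below. It assumes `model` is a callable mapping an RGB array to a stylized RGB array of the same shape, and it runs the model on the whole crop but keeps the result only in the opaque area; the 0 threshold is an illustrative choice.

```python
import numpy as np

def stylize_opaque_area(rgba: np.ndarray, model, alpha_threshold: int = 0) -> np.ndarray:
    """Stylize only pixels whose alpha value exceeds the threshold; keep the rest transparent.

    `rgba` is an H x W x 4 uint8 array; `model` maps an RGB array to a stylized RGB array.
    """
    rgb, alpha = rgba[..., :3], rgba[..., 3]
    stylized_rgb = model(rgb)                        # stylize the whole crop in one pass
    mask = alpha > alpha_threshold
    out = rgba.copy()
    out[..., :3][mask] = stylized_rgb[mask]          # keep stylized pixels only where opaque
    # Pixels with alpha <= threshold keep their original values and stay transparent.
    return out
```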
Step S104, the server acquires the stylized background image corresponding to the image stylized identifier according to the image stylized identifier.
In the embodiments of the application, a large number of stylized background images are stored in a stylized background gallery on the server, and each stylized background image corresponds to a unique image stylization identifier, so the corresponding stylized background image can be obtained from the gallery according to the image stylization identifier.
In some embodiments of the present application, when no stylized background image corresponding to the image stylization identifier exists in the stylized background gallery, the server sends image request information containing the image stylization identifier to the photographing device or to a third-party server. After the photographing device or the third-party server returns the stylized background image corresponding to the identifier, the returned image and the identifier are stored in the gallery in association with each other, to facilitate the processing of subsequent stylized images.
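A sketch of this gallery lookup with a fallback request is given below; the directory layout, the request URL and the use of an HTTP GET via `requests` are assumptions, since the paragraph above does not fix how the background images are stored or fetched.

```python
import os
import requests

STYLE_GALLERY_DIR = "/data/style_backgrounds"                            # hypothetical local gallery
FALLBACK_URL = "https://example.com/style-backgrounds/{style_id}.png"    # hypothetical remote source

def get_style_background(style_id: str) -> bytes:
    """Return the stylized background for `style_id`, fetching and caching it if it is missing."""
    path = os.path.join(STYLE_GALLERY_DIR, f"{style_id}.png")
    if os.path.exists(path):                          # found in the stylized background gallery
        with open(path, "rb") as f:
            return f.read()
    # Not in the gallery: request it from the photographing device / third-party server.
    resp = requests.get(FALLBACK_URL.format(style_id=style_id), timeout=10)
    resp.raise_for_status()
    data = resp.content
    os.makedirs(STYLE_GALLERY_DIR, exist_ok=True)
    with open(path, "wb") as f:                       # store it in association with style_id
        f.write(data)
    return data
```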
Step S105, the server carries out superposition fusion on the second target image and the stylized background image to obtain a final target image after stylization.
In the embodiments of the application, the server adds the pixel values at corresponding positions of the second target image and the stylized background image, and determines the final stylized target image from the summed pixel values.
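The superposition fusion can be realized, for example, as an alpha-weighted blend of corresponding pixels. The sketch below is one such realization and is an assumption insofar as the paragraph above only speaks of adding corresponding pixel values; weighting by the transparency channel keeps the background visible where the target is transparent.

```python
import numpy as np

def fuse(second_target_rgba: np.ndarray, background_rgb: np.ndarray) -> np.ndarray:
    """Superimpose the stylized target (RGBA) onto the stylized background (RGB) of the same size."""
    alpha = second_target_rgba[..., 3:4].astype(np.float32) / 255.0
    fg = second_target_rgba[..., :3].astype(np.float32)
    bg = background_rgb.astype(np.float32)
    # Where the target is opaque its pixels dominate; where it is transparent the background shows.
    out = alpha * fg + (1.0 - alpha) * bg
    return np.clip(out, 0, 255).astype(np.uint8)
```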
As shown in fig. 4, fig. 4 is an image schematic diagram of a final target image obtained by performing a stylization process on the image shown in fig. 2 according to an embodiment of the present application.
In the embodiments of the application, after the final stylized target image is obtained, the server generates an image link and/or a two-dimensional code for it, so that the photographing device can display the corresponding final target image on its screen via the image link and/or the two-dimensional code.
After generating the image link and/or the two-dimensional code of the final target image, the server sends them to the photographing device so that the user can preview the final target image on the photographing device.
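Generating the image link and a matching two-dimensional code can be as simple as the following sketch; the storage URL pattern and the use of the third-party `qrcode` package are assumptions used for illustration.

```python
import qrcode  # third-party package that generates QR ("two-dimensional") codes

BASE_URL = "https://example.com/stylized/{image_id}.jpg"   # hypothetical public URL pattern

def make_link_and_qr(image_id: str, qr_path: str) -> str:
    """Build the image link for the final target image and save a QR code pointing to it."""
    link = BASE_URL.format(image_id=image_id)
    qrcode.make(link).save(qr_path)     # the photographing device can show this QR on its screen
    return link

# Example: link = make_link_and_qr("20210804-0001", "/tmp/result_qr.png")
```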
In the embodiments of the application, the server acquires the image to be processed and the image stylization identifier; a first target image is obtained after target matting is performed on the image to be processed; the first target image is stylized with the feature model corresponding to the image stylization identifier to obtain a second target image; and the second target image is superimposed and fused with the stylized background image corresponding to the image stylization identifier to obtain the final stylized target image. An obvious sense of splitting and a severe mismatch of styles between the original image and the stylized image are thus effectively avoided during image stylization, and the quality of the stylized image is improved.
Referring to fig. 5, fig. 5 is a flowchart of another image processing method according to an embodiment of the present application, which is described in detail below:
In step S501, the photographing apparatus starts a photographing function to acquire a to-be-processed image.
In the embodiment of the application, when a user uses the photographing device to photograph, the photographing device starts a photographing function to photograph so as to obtain an image to be processed.
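As a rough illustration only, starting the shooting function and grabbing one frame might look like the OpenCV sketch below; the camera index, single-frame capture and JPEG output are assumptions, not the photographing device's actual implementation.

```python
import cv2

def capture_image_to_process(out_path: str = "to_process.jpg") -> str:
    """Start the camera, grab one frame as the image to be processed, and save it."""
    cam = cv2.VideoCapture(0)            # open the default camera
    ok, frame = cam.read()
    cam.release()
    if not ok:
        raise RuntimeError("camera did not return a frame")
    cv2.imwrite(out_path, frame)
    return out_path
```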
In step S502, the photographing apparatus acquires an image stylized identifier.
In the embodiment of the application, after the photographing device photographs the image, according to a selected one of the stylized background images displayed on the photographing device by the user, the image style identification corresponding to the stylized background image selected by the user is obtained.
In step S503, the photographing device sends an image to be processed and the image stylized identifier to the server, where the image to be processed and the image stylized identifier are used to instruct the server to perform stylized processing on the image to be processed according to the image stylized identifier so as to obtain a final target image.
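Sending the image to be processed together with the stylization identifier can be done with a single multipart request; the endpoint, the field names and the response shape in the sketch below are assumptions used for illustration.

```python
import requests

SERVER_URL = "https://example.com/api/stylize"    # hypothetical server endpoint

def upload_for_stylization(image_path: str, style_id: str) -> dict:
    """Send the image to be processed and the image stylization identifier to the server."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            SERVER_URL,
            files={"image": f},                   # the image to be processed
            data={"style_id": style_id},          # the image stylization identifier
            timeout=30,
        )
    resp.raise_for_status()
    return resp.json()      # e.g. {"image_link": "...", "qr_code": "..."} returned by the server
```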
In the embodiment of the application, a plurality of different style background images are displayed on the screen of the photographing device for selection by a user, and when the user selects one style background image and photographs, the photographing device can acquire the image stylized identifier corresponding to the selected style background image.
In some embodiments of the present application, before sending the image to be processed and the image stylization identifier to the server, the photographing device needs to preprocess the image to be processed, for example by cropping its edges to a preset size. The preset size depends on the resolution of the photographed image; for example, the image to be processed may be cropped to an aspect ratio of 4:3, or to an aspect ratio of 1:1. This effectively avoids problems such as an undersized portrait near the edge or defocus blur caused by an excessive aspect ratio of the image to be processed. Alternatively, the image to be processed may be cropped to the same size as the stylized background image so that, after stylization, it can be superimposed and fused with that background image.
In some embodiments of the present application, the photographing apparatus receives the image link transmitted from the server; and displaying the final target image and the image link on a screen terminal of the photographing equipment according to the image link.
Where the performance of the photographing device permits, the stylization steps described above as performed by the server may also be performed on the photographing device side.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present application.
Based on the image processing method provided by the above embodiment, the embodiment of the present application further provides an embodiment of a device for implementing the above embodiment of the method.
Referring to fig. 6, fig. 6 is a schematic diagram of an image processing apparatus according to an embodiment of the application. The units included are for performing the steps in the corresponding embodiment of fig. 1. Refer specifically to the description of the corresponding embodiment in fig. 1. For convenience of explanation, only the portions related to the present embodiment are shown. Referring to fig. 6, the image processing apparatus 6 includes:
An image and identifier obtaining unit 61, configured to obtain an image to be processed and an image stylized identifier sent by the photographing apparatus;
The image segmentation unit 62 is configured to adaptively segment a target and a background of an image to be processed based on an image matting method, so as to obtain a first target image;
A stylizing processing unit 63, configured to input the first target image into a feature model corresponding to the image stylizing identifier to perform stylizing processing, so as to obtain a second target image;
a stylized background image obtaining unit 64, configured to obtain, according to the image stylized identifier, a stylized background image corresponding to the image stylized identifier;
And the image fusion unit 65 is configured to superimpose and fuse the second target image and the stylized background image, so as to obtain a final target image after stylization.
Specifically, the stylized processing unit 63 includes:
An alpha channel acquisition subunit, configured to acquire an alpha channel of the first target image;
And the stylizing processing subunit is used for performing stylizing processing on the image area of which the alpha channel is larger than a preset value in the first target image through the feature model to obtain a second target image.
Specifically, the image segmentation unit 62 is further configured to:
And analyzing and determining image noise in the image to be processed, and removing the determined image noise.
Specifically, the image processing apparatus further includes:
an image link generating and sending unit, configured to generate and send an image link of the final target image.
Specifically, the image segmentation unit 62 is further configured to:
the image edges of the first target image are extended by predetermined pixels.
Referring to fig. 7, fig. 7 is a schematic diagram of another image processing apparatus according to an embodiment of the application. The units included are configured to perform the steps in the corresponding embodiment of fig. 5; refer specifically to the description of that embodiment. For convenience of explanation, only the portions related to the present embodiment are shown. Referring to fig. 7, the image processing apparatus 7 includes:
a to-be-processed image acquisition unit 71 for starting a shooting function to acquire a to-be-processed image;
an image stylized identifier acquisition unit 72 for acquiring an image stylized identifier;
The image and identifier sending unit 73 is configured to send an image to be processed and an image stylized identifier to the server, where the image to be processed and the image stylized identifier are used to instruct the server to perform stylized processing on the image to be processed according to the image stylized identifier so as to obtain a final target image.
Illustratively, the image processing apparatus further includes:
an image link receiving unit for receiving the image link sent by the server;
And the image display unit is used for displaying the final target image and the image link on the screen terminal of the photographing device according to the image link.
It should be noted that, because the content of information interaction and execution process between the modules and the embodiment of the method of the present application are based on the same concept, specific functions and technical effects thereof may be referred to in the method embodiment section, and details thereof are not repeated herein.
Fig. 8 is a schematic diagram of a server according to an embodiment of the present application. As shown in fig. 8, the server 8 of this embodiment includes: a processor 80, a memory 81, and a computer program 82, such as an image processing program, stored in the memory 81 and executable on the processor 80. When executing the computer program 82, the processor 80 implements the steps of the image processing method embodiments described above, such as steps S101 to S105 shown in fig. 1; alternatively, when executing the computer program 82, the processor 80 implements the functions of the modules/units in the apparatus embodiments described above, such as the functions of the units 61 to 65 shown in fig. 6.
By way of example, the computer program 82 may be partitioned into one or more modules/units that are stored in the memory 81 and executed by the processor 80 to complete the present application. One or more of the modules/units may be a series of computer program instruction segments capable of performing a specific function, which instruction segments describe the execution of the computer program 82 in the server 8. For example, the computer program 82 may be divided into an image and identifier obtaining unit 61, an image dividing unit 62, a stylized processing unit 63, a stylized background image obtaining unit 64, and an image fusing unit 65, and the specific functions of each unit are described in the corresponding embodiment of fig. 1, which is not repeated here.
The server may include, but is not limited to, a processor 80, a memory 81. It will be appreciated by those skilled in the art that fig. 8 is merely an example of server 8 and is not limiting of server 8, and may include more or fewer components than shown, or may combine certain components, or different components, e.g., a server may also include input and output devices, network access devices, buses, etc.
The processor 80 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 81 may be an internal storage unit of the server 8, such as a hard disk or memory of the server 8. The memory 81 may also be an external storage device of the server 8, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card equipped on the server 8. Further, the memory 81 may include both an internal storage unit and an external storage device of the server 8. The memory 81 is used to store the computer program and other programs and data required by the server. The memory 81 may also be used to temporarily store data that has been output or is to be output.
Fig. 9 is a schematic diagram of a photographing device according to an embodiment of the present application. As shown in fig. 9, the photographing device 9 of this embodiment includes: a processor 90, a memory 91, and a computer program 92, such as an image processing program, stored in the memory 91 and executable on the processor 90. When executing the computer program 92, the processor 90 implements the steps of the image processing method embodiments described above, such as steps S501 to S503 shown in fig. 5; alternatively, when executing the computer program 92, the processor 90 implements the functions of the modules/units in the apparatus embodiments described above, such as the functions of the units 71 to 73 shown in fig. 7.
By way of example, the computer program 92 may be divided into one or more modules/units, which are stored in the memory 91 and executed by the processor 90 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of accomplishing specific functions, and the instruction segments are used to describe the execution of the computer program 92 in the photographing device 9. For example, the computer program 92 may be divided into the to-be-processed image acquisition unit 71, the image stylized identifier acquisition unit 72 and the image and identifier sending unit 73; the specific functions of each unit are described in the corresponding embodiment of fig. 5 and are not repeated here.
The photographing apparatus may include, but is not limited to, a processor 90, a memory 91. It will be appreciated by those skilled in the art that fig. 9 is merely an example of photographing apparatus 9 and is not meant to be limiting of photographing apparatus 9, and may include more or fewer components than shown, or may combine certain components, or different components, such as photographing apparatus may also include input and output devices, network access devices, buses, etc.
The processor 90 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 91 may be an internal storage unit of the photographing device 9, such as a hard disk or memory of the photographing device 9. The memory 91 may also be an external storage device of the photographing device 9, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card equipped on the photographing device 9. Further, the memory 91 may include both an internal storage unit and an external storage device of the photographing device 9. The memory 91 is used to store the computer program and other programs and data required by the photographing device. The memory 91 may also be used to temporarily store data that has been output or is to be output.
The embodiment of the application also provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program, and the computer program can realize the image processing method when being executed by a processor.
An embodiment of the application provides a computer program product which, when run on a photographing device, enables the photographing device to implement the image processing method described above.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, the description of each embodiment has its own emphasis. For a part that is not described or detailed in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.
Claims (9)
1. An image processing method applied to a server, the image processing method comprising:
Acquiring an image to be processed and an image stylized identifier sent by photographing equipment;
Based on an image matting method, adaptively dividing a target and a background of the image to be processed to obtain a first target image;
inputting the first target image into a feature model corresponding to the image stylization identifier to perform stylization processing to obtain a second target image;
acquiring a stylized background image corresponding to the image stylized identifier according to the image stylized identifier;
Superposing and fusing the second target image and the stylized background image to obtain a stylized final target image;
inputting the first target image into a feature model corresponding to the image stylization identifier to perform stylization processing, so as to obtain a second target image, wherein the method comprises the following steps:
acquiring an alpha channel of the first target image;
no stylizing processing is performed on an image area of which the alpha channel is smaller than or equal to a preset value in the first target image;
And performing stylization processing on an image area with an alpha channel larger than a preset value in the first target image through the feature model to obtain a second target image.
2. The image processing method as claimed in claim 1, wherein when adaptively segmenting the object and the background of the image to be processed based on the image matting method to obtain a first object image, the method further comprises:
and analyzing and determining image noise in the image to be processed, and removing the determined image noise.
3. The image processing method according to claim 1, wherein after the second target image and the stylized background image are superimposed and fused to obtain a final target image after being stylized, comprising:
and generating an image link of the final target image and sending the image link to the photographing equipment.
4. An image processing method according to any one of claims 1 to 3, wherein the adaptively segmenting the object and the background of the image to be processed based on the image matting method, when obtaining the first object image, includes:
And expanding the image edge of the first target image according to the preset pixels.
5. An image processing method applied to a photographing apparatus, the image processing method comprising:
starting a shooting function to acquire an image to be processed;
Acquiring an image stylized identifier;
sending the image to be processed and the image stylization identification to a server, wherein the image to be processed and the image stylization identification are used for instructing the server to perform stylization processing on the image to be processed according to the image stylization identification so as to obtain a final target image;
instructing the server to adaptively segment a target and a background of the image to be processed based on an image matting method to obtain a first target image;
instructing the server to input the first target image into a feature model corresponding to the image stylization identifier for stylization processing to obtain a second target image;
inputting the first target image into a feature model corresponding to the image stylization identifier to perform stylization processing, so as to obtain a second target image, wherein the method comprises the following steps:
acquiring an alpha channel of the first target image;
no stylizing processing is performed on an image area of which the alpha channel is smaller than or equal to a preset value in the first target image;
And performing stylization processing on an image area with an alpha channel larger than a preset value in the first target image through the feature model to obtain a second target image.
6. The image processing method according to claim 5, wherein the image processing method further comprises:
receiving an image link sent by a server;
and displaying the final target image and the image link on a screen terminal of the photographing equipment according to the image link.
7. A server comprising a processor, a memory and a computer program stored in the memory and executable on the processor, wherein the processor implements the image processing method according to any one of claims 1 to 4 when executing the computer program.
8. A photographing apparatus comprising a processor, a memory and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the image processing method according to claim 5 or 6 when executing the computer program.
9. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the image processing method according to any one of claims 1 to 4 or 5 to 6.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110890846.7A CN113763233B (en) | 2021-08-04 | 2021-08-04 | Image processing method, server and photographing equipment |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110890846.7A CN113763233B (en) | 2021-08-04 | 2021-08-04 | Image processing method, server and photographing equipment |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN113763233A CN113763233A (en) | 2021-12-07 |
| CN113763233B true CN113763233B (en) | 2024-06-21 |
Family
ID=78788520
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202110890846.7A Active CN113763233B (en) | 2021-08-04 | 2021-08-04 | Image processing method, server and photographing equipment |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN113763233B (en) |
Families Citing this family (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115272146B (en) * | 2022-07-27 | 2023-04-07 | 天翼爱音乐文化科技有限公司 | Stylized image generation method, system, device and medium |
| CN115908116A (en) * | 2022-11-29 | 2023-04-04 | 北京达佳互联信息技术有限公司 | Image processing method, device, equipment and storage medium |
| CN116578226A (en) * | 2023-06-06 | 2023-08-11 | 北京字跳网络技术有限公司 | Image processing method, device, device, storage medium and program product |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110956679A (en) * | 2018-09-26 | 2020-04-03 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment and computer readable storage medium |
| CN111210487A (en) * | 2020-02-28 | 2020-05-29 | 深圳壹账通智能科技有限公司 | Pattern generation method and system |
Family Cites Families (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8970583B1 (en) * | 2012-10-01 | 2015-03-03 | Google Inc. | Image space stylization of level of detail artifacts in a real-time rendering engine |
| US9076258B2 (en) * | 2013-03-14 | 2015-07-07 | Pixar | Stylizing animation by example |
| CN110830706A (en) * | 2018-08-08 | 2020-02-21 | Oppo广东移动通信有限公司 | Image processing method and device, storage medium and electronic equipment |
| CN109308679B (en) * | 2018-08-13 | 2022-08-30 | 深圳市商汤科技有限公司 | Image style conversion method and device, equipment and storage medium |
| CN110222722A (en) * | 2019-05-14 | 2019-09-10 | 华南理工大学 | Interactive image stylization processing method, calculates equipment and storage medium at system |
| CN112399196B (en) * | 2019-08-16 | 2022-09-02 | 阿里巴巴集团控股有限公司 | Image processing method and device |
| CN111340905B (en) * | 2020-02-13 | 2023-08-04 | 北京百度网讯科技有限公司 | Image stylization method, device, equipment and medium |
| CN111986076A (en) * | 2020-08-21 | 2020-11-24 | 深圳市慧鲤科技有限公司 | Image processing method and device, interactive display device and electronic equipment |
| CN112215854B (en) * | 2020-10-19 | 2024-07-12 | 珠海金山数字网络科技有限公司 | Image processing method and device |
- 2021-08-04: Application CN202110890846.7A filed in China; granted as patent CN113763233B/en (status: Active)
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110956679A (en) * | 2018-09-26 | 2020-04-03 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment and computer readable storage medium |
| CN111210487A (en) * | 2020-02-28 | 2020-05-29 | 深圳壹账通智能科技有限公司 | Pattern generation method and system |
Also Published As
| Publication number | Publication date |
|---|---|
| CN113763233A (en) | 2021-12-07 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |