
CN110769313A - Video processing method and device and storage medium - Google Patents

Video processing method and device and storage medium

Info

Publication number
CN110769313A
Authority
CN
China
Prior art keywords
special effect
video
special
processing
parameter set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911135734.XA
Other languages
Chinese (zh)
Other versions
CN110769313B (en)
Inventor
刘春宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Kugou Computer Technology Co Ltd
Original Assignee
Guangzhou Kugou Computer Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Kugou Computer Technology Co Ltd
Priority to CN201911135734.XA
Publication of CN110769313A
Application granted
Publication of CN110769313B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/458Scheduling content for creating a personalised stream, e.g. by combining a locally stored advertisement with an incoming stream; Updating operations, e.g. for OS modules ; time-related management operations
    • H04N21/4586Content update operation triggered locally, e.g. by comparing the version of software modules in a DVB carousel to the version stored locally
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8547Content authoring involving timestamps for synchronizing content

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a video processing method, a video processing device, and a storage medium, belonging to the field of video processing. The method comprises the following steps: acquiring a test video; performing first special effect processing on the test video according to a first special effect parameter set to obtain a first special effect video, wherein the first special effect parameter set comprises at least one type of special effect information, and each type of special effect information comprises a special effect parameter; when a determination instruction triggered for the first special effect video is received, generating a special effect configuration file according to the first special effect parameter set, wherein the special effect configuration file comprises the first special effect parameter set; and uploading the special effect configuration file to a server, so that a user terminal can acquire the special effect configuration file from the server and perform special effect processing on a video to be processed according to the first special effect parameter set in the special effect configuration file. The method and the device help simplify the process by which the user terminal performs special effect processing on a video.

Description

Video processing method and device and storage medium
Technical Field
The present application relates to the field of video processing, and in particular, to a video processing method and apparatus, and a storage medium.
Background
With the popularization of user terminals such as smart phones and tablet computers, video processing applications (Apps) based on such user terminals are becoming more and more popular. A video processing App usually has built-in special effect parameters, according to which it can perform special effect processing on a video. For example, the video processing App can apply a filter to the video according to filter parameters.
However, in the current scheme, if the user terminal needs to perform special effect processing on a video with updated special effect parameters, the video processing App must first be updated to obtain the updated special effect parameters, and only then can the video be processed with them. The process of performing special effect processing on the video at the user terminal is therefore complex.
Disclosure of Invention
The application provides a video processing method and device and a storage medium, which are beneficial to simplifying the process of special effect processing of a user terminal on a video. The technical scheme is as follows:
in a first aspect, a video processing method is provided, and the method includes:
acquiring a test video;
performing first special effect processing on the test video according to a first special effect parameter set to obtain a first special effect video, wherein the first special effect parameter set comprises at least one type of special effect information, and each type of special effect information comprises a special effect parameter;
when a determination instruction triggered for the first special effect video is received, generating a special effect configuration file according to the first special effect parameter set, wherein the special effect configuration file comprises the first special effect parameter set;
uploading the special effect configuration file to a server so that a user terminal can conveniently acquire the special effect configuration file from the server and perform special effect processing on the video to be processed according to the first special effect parameter set in the special effect configuration file.
Optionally, after performing a first special effect process on the test video according to a first special effect parameter set to obtain a first special effect video, the method further includes:
when an adjusting instruction triggered for the first special effect video is received, adjusting the special effect information in the first special effect parameter set to obtain a second special effect parameter set;
performing second special effect processing on the test video according to the second special effect parameter set to obtain a second special effect video;
when a determination instruction triggered for the second special effect video is received, generating a special effect configuration file according to the second special effect parameter set, wherein the special effect configuration file comprises the second special effect parameter set;
uploading the special effect configuration file to a server so that a user terminal can conveniently acquire the special effect configuration file from the server and perform special effect processing on the video to be processed according to the second special effect parameter set in the special effect configuration file.
Optionally, before generating a special effects profile according to the first special effects parameter set, the method further comprises: when a preview instruction of the first special effect video is received, previewing the first special effect video;
before generating a special effects profile according to the second special effects parameter set, the method further comprises: when a preview instruction of the second special-effect video is received, previewing the second special-effect video.
Optionally, the special effect information includes a special effect parameter, and a start timestamp and an end timestamp corresponding to the special effect parameter, where the start timestamp indicates a start processing time of the special effect parameter, and the end timestamp indicates an end processing time of the special effect parameter.
Optionally, the special effect parameter includes at least one of a filter parameter, a transition parameter, a split screen parameter, or a template parameter.
Optionally, the acquiring the test video includes:
acquiring at least one test video segment and/or at least one test picture;
and generating the test video according to the at least one test video segment and/or the at least one test picture.
In a second aspect, a video processing apparatus is provided, the apparatus comprising:
the acquisition module is used for acquiring a test video;
a first processing module, configured to perform a first special effect process on the test video according to a first special effect parameter set to obtain a first special effect video, where the first special effect parameter set includes at least one type of special effect information, and each type of special effect information includes a special effect parameter;
a first generating module, configured to generate a special effect configuration file according to the first special effect parameter set when a determination instruction triggered for the first special effect video is received, where the special effect configuration file includes the first special effect parameter set;
the first uploading module is used for uploading the special effect configuration file to a server so that a user terminal can conveniently acquire the special effect configuration file from the server and perform special effect processing on the video to be processed according to the first special effect parameter set in the special effect configuration file.
Optionally, the apparatus further comprises:
the adjusting module is used for adjusting the special effect information in the first special effect parameter set to obtain a second special effect parameter set when an adjusting instruction triggered for the first special effect video is received;
the second processing module is used for carrying out second special effect processing on the test video according to the second special effect parameter set to obtain a second special effect video;
a second generating module, configured to generate a special effect configuration file according to the second special effect parameter set when a determination instruction triggered for the second special effect video is received, where the special effect configuration file includes the second special effect parameter set;
and the second uploading module is used for uploading the special effect configuration file to a server so that the user terminal can conveniently acquire the special effect configuration file from the server and perform special effect processing on the video to be processed according to the second special effect parameter set in the special effect configuration file.
Optionally, the apparatus further comprises:
the first preview module is used for previewing the first special-effect video when a preview instruction of the first special-effect video is received;
and the second preview module is used for previewing the second special-effect video when a preview instruction of the second special-effect video is received.
Optionally, the special effect information includes a special effect parameter, and a start timestamp and an end timestamp corresponding to the special effect parameter, where the start timestamp indicates a start processing time of the special effect parameter, and the end timestamp indicates an end processing time of the special effect parameter.
Optionally, the special effect parameter includes at least one of a filter parameter, a transition parameter, a split screen parameter, or a template parameter.
Optionally, the obtaining module is configured to:
acquiring at least one test video segment and/or at least one test picture;
and generating the test video according to the at least one test video segment and/or the at least one test picture.
In a third aspect, a video processing apparatus is provided, including a processor and a memory, wherein:
the memory for storing a computer program;
the processor is configured to execute the computer program stored in the memory to implement the video processing method according to the first aspect or any optional manner of the first aspect.
In a fourth aspect, there is provided a storage medium storing a program which, when executed by a processor, implements the video processing method according to the first aspect or any optional manner of the first aspect.
The technical solutions provided in the present application bring at least the following beneficial effects:
in the video processing method and apparatus and the storage medium provided in the embodiments of the present application, after the test terminal performs special effect processing on the test video according to the special effect parameter set to obtain the special effect video, when receiving the determination instruction triggered by the special effect video, generating a special effect configuration file according to the special effect parameter set, uploading the special effect configuration file to a server, the effect profile comprises a set of effect parameters comprising at least one type of effect information, each type of effect information comprising effect parameters, such that the user terminal can retrieve the effect profile from the server, the video to be processed is subjected to special effect processing according to the special effect parameter set in the special effect configuration file, the user terminal can carry out special effect processing on the video to be processed according to the updated special effect parameter without updating the video processing App, and the process of carrying out special effect processing on the video by the user terminal is simplified.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic illustration of an implementation environment to which various embodiments of the present application relate;
fig. 2 is a flowchart of a video processing method according to an embodiment of the present application;
fig. 3 is a flowchart of another video processing method provided in the embodiment of the present application;
FIG. 4 is a schematic diagram of a user interface of a test terminal according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a user interface of another test terminal provided in an embodiment of the present application;
fig. 6 is a block diagram of a video processing apparatus according to an embodiment of the present application;
fig. 7 is a block diagram of another video processing apparatus provided in an embodiment of the present application;
fig. 8 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application clearer, the present application will be described in further detail with reference to the accompanying drawings, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, a schematic diagram of an implementation environment according to various embodiments of the present application is shown, the implementation environment including: the test terminal 120, the server 140 and the user terminal 160, each of the test terminal 120 and the user terminal 160 being communicatively connected to the server 140 via a wired network or a wireless network. The wireless network may include, but is not limited to: a Wireless Fidelity (WIFI) network, a bluetooth network, an infrared network, a Zigbee (Zigbee) network, or a data network, and the wired network may be a Universal Serial Bus (USB) network.
The test terminal 120 may be an electronic device capable of performing video processing; the electronic device may be a Personal Computer (PC), and the operating system of the test terminal 120 may be Microsoft Windows or the Macintosh operating system (macOS). The test terminal 120 may be installed with a video processing tool (also called a video editing tool) in which a special effect processing code is embedded, and the test terminal 120 may perform special effect processing on the test video by executing the special effect processing code embedded in the video processing tool, so as to test whether the special effect parameters meet the expected effect. As shown in fig. 1, the test terminal 120 is a desktop computer in the embodiment of the present application.
The user terminal 160 may be an electronic device capable of performing special effect processing on a video, such as a smart phone, a tablet computer, a smart television, a smart watch, a Moving Picture Experts Group Audio Layer V (MP5) player, a laptop portable computer, or a desktop computer. The user terminal 160 may be installed with a video processing App, through which it performs special effect processing on the video. As shown in fig. 1, the embodiment of the present application is described by taking an example in which the user terminal 160 is a smart phone.
The server 140 may be a server, a server cluster composed of several servers, or a cloud computing service center. The test terminal 120 may upload the special effect parameters satisfying the expected effect to the server 140, and the user terminal 160 may obtain the special effect parameters from the server 140 and perform special effect processing on the video to be processed according to the obtained special effect parameters.
In this embodiment of the application, the test terminal 120 may obtain a test video, perform special effect processing on the test video according to a special effect parameter set to obtain a special effect video, generate a special effect configuration file according to a special effect parameter set when a determination instruction triggered for the special effect video is received, and upload the special effect configuration file to the server 140, where the special effect configuration file includes the special effect parameter set, the special effect parameter set includes at least one type of special effect information, and each type of special effect information includes a special effect parameter; the user terminal 160 can obtain the special effect configuration file from the server 140, and update the special effect parameters in the user terminal 160 according to the special effect parameter set in the special effect configuration file, so that the user terminal can perform special effect processing on the video to be processed by using the updated special effect parameters without updating the video processing App, and the process of performing special effect processing on the video by the user terminal is simplified.
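The round trip just described (the test terminal 120 publishing a confirmed parameter set to the server 140, and the user terminal 160 fetching and applying it) can be sketched as follows. This is a minimal illustrative model, not the patent's implementation: the server is an in-memory object, and all class, method, and key names here are assumptions made for illustration.

```python
# Hypothetical stand-in for the implementation environment of Fig. 1.
# The "server" is an in-memory store; names and structure are illustrative.

class EffectServer:
    """Holds the latest special effect configuration uploaded by the test terminal."""

    def __init__(self):
        self._profile = None

    def upload(self, profile):   # called by the test terminal 120
        self._profile = profile

    def fetch(self):             # called by the user terminal 160
        return self._profile


def user_terminal_process(server, frames):
    """User terminal: fetch the profile and apply its filter parameter to each frame.

    Applying a filter is modeled as tagging each frame with the parameter name.
    """
    param_set = server.fetch()["effect_parameter_set"]
    filter_param = param_set["filter"]["parameter"]
    return [(frame, filter_param) for frame in frames]


server = EffectServer()
# Test terminal: upload a profile containing the confirmed parameter set.
server.upload({"effect_parameter_set": {"filter": {"parameter": "L1"}}})
# User terminal: process a video (two frames) without updating any App.
processed = user_terminal_process(server, ["frame0", "frame1"])
```

The key point of the scheme survives even in this toy model: the user terminal's processing logic is unchanged when new parameters are published; only the fetched configuration differs.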
Referring to fig. 2, a flowchart of a method for processing a video according to an embodiment of the present application is shown, where the method for processing a video can be applied to the test terminal 120 in the implementation environment shown in fig. 1. Referring to fig. 2, the method may include the steps of:
step 201, obtaining a test video.
Step 202, performing a first special effect process on the test video according to a first special effect parameter set to obtain a first special effect video, where the first special effect parameter set includes at least one type of special effect information, and each type of special effect information includes a special effect parameter.
Step 203, when a determination instruction triggered for the first special effect video is received, generating a special effect configuration file according to the first special effect parameter set, where the special effect configuration file includes the first special effect parameter set.
And 204, uploading the special effect configuration file to a server so that the user terminal can conveniently acquire the special effect configuration file from the server and perform special effect processing on the video to be processed according to the first special effect parameter set in the special effect configuration file.
In summary, in the video processing method provided in the embodiment of the present application, the test terminal performs special effect processing on the test video according to the special effect parameter set to obtain the special effect video. When a determination instruction triggered for the special effect video is received, the test terminal generates a special effect configuration file according to the special effect parameter set and uploads it to a server; the special effect configuration file includes the special effect parameter set, which includes at least one type of special effect information, each type of special effect information including a special effect parameter. The user terminal can then retrieve the special effect configuration file from the server and perform special effect processing on the video to be processed according to the special effect parameter set in the file, so that the user terminal can apply updated special effect parameters without updating the video processing App, which simplifies the process of performing special effect processing on the video.
Referring to fig. 3, a flowchart of another video processing method provided in the embodiment of the present application is shown, where the video processing method can be applied to the test terminal 120 in the implementation environment shown in fig. 1. Referring to fig. 3, the method may include the steps of:
step 301, obtaining a test video.
Alternatively, the test terminal may directly acquire a piece of video as the test video. Alternatively, the test terminal may obtain at least one test video segment and/or at least one test picture, and generate a test video according to the at least one test video segment and/or the at least one test picture, which is not limited in this embodiment of the present application.
The step of directly acquiring a section of video as a test video by the test terminal may include: the test terminal collects a section of video as a test video, or the test terminal loads a section of video from a local or server as the test video, which is not limited in the embodiments of the present application.
The obtaining, by the test terminal, of at least one test video segment and/or at least one test picture, and generating the test video accordingly, may include: the test terminal collects at least one test video segment and/or at least one test picture, or loads them from a local source or a server; the test terminal may then combine the at least one test video segment and/or the at least one test picture in series (splicing) to obtain the test video. For example, assuming that the test terminal acquires video segment 1, video segment 2, picture 1, and picture 2, it may combine them in series in the sequential order "video segment 1 -> picture 1 -> video segment 2 -> picture 2" to obtain the test video.
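The serial (splicing) combination of step 301 can be sketched as building a timeline from ordered clips. This is a schematic model only: clips are represented as (name, duration) pairs, the durations are invented for illustration, and a still picture is treated as a clip shown for a fixed duration.

```python
# Minimal sketch of step 301's serial (splicing) combination.
# Each clip is modeled as (name, duration_seconds); values are illustrative.

def splice(clips):
    """Concatenate clips in order, returning (timeline, total_duration).

    The timeline maps each clip to its [start, end) interval in the
    combined test video.
    """
    timeline, t = [], 0.0
    for name, duration in clips:
        timeline.append((name, t, t + duration))
        t += duration
    return timeline, t


# "video segment 1 -> picture 1 -> video segment 2 -> picture 2"
timeline, total = splice([
    ("video segment 1", 4.0),
    ("picture 1", 2.0),        # a still picture shown for a fixed duration
    ("video segment 2", 3.0),
    ("picture 2", 2.0),
])
```

The resulting timeline is what later steps need: each source clip's position in the combined test video, so that timestamped special effect parameters can be lined up against it.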
Step 302, performing a first special effect process on the test video according to a first special effect parameter set to obtain a first special effect video, where the first special effect parameter set includes at least one type of special effect information, and each type of special effect information includes a special effect parameter.
The first special effect parameter set may include at least one type of special effect information. Each type of special effect information may include a special effect parameter, and may further include a start timestamp and an end timestamp corresponding to that special effect parameter; the start timestamp indicates the start processing time of the special effect parameter, and the end timestamp indicates the end processing time. In the first special effect parameter set, the special effect information may include at least one of filter information, transition information, split screen information, or template information, and accordingly the special effect parameter may include at least one of a filter parameter, a transition parameter, a split screen parameter, or a template parameter. For example, in the embodiment of the present application, the first special effect parameter set may be as shown in Table 1 below:
TABLE 1
Special effect information | Special effect parameter | Start timestamp | End timestamp
Filter information         | L1                       | T1_L1           | T2_L1
Transition information     | Z1                       | T1_Z1           | T2_Z1
Split screen information   | F1                       | T1_F1           | T2_F1
Template information       | M1                       | T1_M1           | T2_M1
As shown in Table 1, the first special effect parameter set includes filter information, transition information, split screen information, template information, and the like. The filter information includes a filter parameter L1, a start timestamp T1_L1, and an end timestamp T2_L1; the transition information includes a transition parameter Z1, a start timestamp T1_Z1, and an end timestamp T2_Z1; the split screen information includes a split screen parameter F1, a start timestamp T1_F1, and an end timestamp T2_F1; and the template information includes a template parameter M1, a start timestamp T1_M1, and an end timestamp T2_M1.
In the embodiment of the application, a video processing tool (or video editing tool) is installed in the test terminal, and a special effect processing code is embedded in the video processing tool; the test terminal can perform the first special effect processing on the test video by executing the special effect processing code according to the first special effect parameter set, obtaining the first special effect video. Illustratively, taking the first special effect parameter set shown in Table 1 as an example, the special effect processing code starts filter processing of the test video with the filter parameter L1 at T1_L1 and continues until T2_L1; starts transition processing with the transition parameter Z1 at T1_Z1 until T2_Z1; starts split screen processing with the split screen parameter F1 at T1_F1 until T2_F1; and starts template processing (i.e., adding templates to the video) with the template parameter M1 at T1_M1 until T2_M1, finally obtaining the first special effect video.
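The timestamp-driven application of parameters in step 302 can be sketched as a lookup of which effects are active at a given playback time. This is an illustrative model only: the dictionary mirrors the structure of Table 1, but the concrete timestamp values are invented, and the patent does not specify how its special effect processing code is organized.

```python
# Sketch of how the special effect processing code could decide, for a
# playback time t, which parameters of the first special effect parameter
# set are active. Structure mirrors Table 1; the numbers are assumptions.

PARAM_SET = {
    "filter":       {"parameter": "L1", "start": 0.0, "end": 3.0},  # T1_L1..T2_L1
    "transition":   {"parameter": "Z1", "start": 3.0, "end": 4.0},  # T1_Z1..T2_Z1
    "split_screen": {"parameter": "F1", "start": 4.0, "end": 8.0},  # T1_F1..T2_F1
    "template":     {"parameter": "M1", "start": 0.0, "end": 8.0},  # T1_M1..T2_M1
}


def active_effects(param_set, t):
    """Return, sorted, the effect parameters whose [start, end) interval covers t."""
    return sorted(
        info["parameter"]
        for info in param_set.values()
        if info["start"] <= t < info["end"]
    )
```

A renderer built this way would call `active_effects` per frame and apply each returned parameter, which matches the behavior described above: each effect runs only between its start and end timestamps, and effects with overlapping intervals are applied together.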
Step 303, previewing the first special effect video when receiving a preview instruction of the first special effect video.
Optionally, after the test terminal performs the first special effect processing on the test video according to the first special effect parameter set to obtain the first special effect video, a preview button may be displayed in a user interface of the test terminal, and a test user may trigger a preview instruction for the first special effect video through the preview button.
For example, please refer to fig. 4, which shows a schematic diagram of a user interface 400 of a test terminal according to an embodiment of the present application, where the user interface 400 may be a video editing interface, as shown in fig. 4, a first special effect video and a preview button are displayed in the user interface 400, and a test user may trigger a preview instruction for the first special effect video by clicking the preview button with a mouse.
Step 304, when a determination instruction triggered for the first special effect video is received, generating a special effect configuration file according to the first special effect parameter set, where the special effect configuration file includes the first special effect parameter set.
Optionally, the test user may trigger a determination instruction for the first special effect video in the user interface of the test terminal. When the test terminal receives the determination instruction triggered for the first special effect video, the test terminal generates a special effect configuration file according to the first special effect parameter set, where the special effect configuration file includes the first special effect parameter set. The format of the special effect configuration file may be the JavaScript Object Notation (json) format, that is, the special effect configuration file may be a json file. Optionally, a determination button may be displayed in the user interface of the test terminal, and the test user may trigger the determination instruction for the first special effect video through the determination button.
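Since the embodiment allows the configuration file to be a json file, its generation can be sketched as below; the field names and nesting are assumptions chosen for illustration, not the patent's actual schema:

```python
import json

# First special effect parameter set in an assumed json-friendly layout:
# each type of special effect information carries its parameter plus the
# start and end timestamps (written symbolically here).
first_effect_parameter_set = {
    "filter":     {"param": "L1", "start": "T1_L1", "end": "T2_L1"},
    "transition": {"param": "Z1", "start": "T1_Z1", "end": "T2_Z1"},
    "split":      {"param": "F1", "start": "T1_F1", "end": "T2_F1"},
    "template":   {"param": "M1", "start": "T1_M1", "end": "T2_M1"},
}

# Serialize the set into the special effect configuration file's contents.
config_json = json.dumps({"effects": first_effect_parameter_set},
                         ensure_ascii=False, indent=2)
```

The test terminal would then write `config_json` to the json file that is uploaded to the server in step 305.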
For example, please refer to fig. 5, which illustrates a schematic diagram of another user interface 500 of the test terminal provided in the embodiment of the present application, where the user interface 500 may be a video preview interface, as shown in fig. 5, a first special effect video and a determination button are displayed in the user interface 500, and a test user may trigger a determination instruction for the first special effect video by clicking the determination button with a mouse.
It should be noted that, in the process of previewing the first special effect video, the test user may observe whether the special effect of the first special effect video meets the expected effect, and if it does, the test user may trigger a determination instruction for the first special effect video.
Step 305, uploading the special effect configuration file to a server so that the user terminal can conveniently acquire the special effect configuration file from the server, and performing special effect processing on the video to be processed according to the first special effect parameter set in the special effect configuration file.
Optionally, the test terminal may upload the special effect configuration file to the server through its communication connection with the server, and the server may store the special effect configuration file after receiving it. If the user terminal needs to perform special effect processing on a video to be processed, the user terminal can acquire the special effect configuration file from the server and perform special effect processing on the video to be processed according to the first special effect parameter set in the special effect configuration file.
Optionally, after obtaining the special effect configuration file, the user terminal may read the special effect information in the special effect configuration file and initialize the video processing App in the user terminal according to that information, so that each piece of special effect information in the special effect configuration file is initialized into the video processing App. The user terminal may then customize the video to be processed according to the user's operations and perform special effect processing on it according to each piece of initialized special effect information in the video processing App. That is, the user terminal performs special effect processing on the video to be processed according to the first special effect parameter set.
For example, the special effect configuration file acquired by the user terminal may include the first special effect parameter set shown in table 1. The user terminal may perform filter processing on the video to be processed according to the filter information (L1, T1_L1, T2_L1) in the first special effect parameter set (that is, the user terminal performs filter processing on the video to be processed with the filter parameter L1 from T1_L1 to T2_L1 of the video to be processed), perform transition processing according to the transition information (Z1, T1_Z1, T2_Z1) (that is, transition processing with the transition parameter Z1 from T1_Z1 to T2_Z1), perform split screen processing according to the split screen information (F1, T1_F1, T2_F1) (that is, split screen processing with the split screen parameter F1 from T1_F1 to T2_F1), and so on.
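On the consumer side, the reading-and-initialization step described above might look like the following sketch; the schema is an assumption, and the registry dict merely stands in for the video processing App's internal state:

```python
import json

# Illustrative downloaded configuration file contents (assumed schema).
SAMPLE_CONFIG = """
{"effects": {
   "filter":     {"param": "L1", "start": "T1_L1", "end": "T2_L1"},
   "transition": {"param": "Z1", "start": "T1_Z1", "end": "T2_Z1"}
}}
"""

def initialize_app(config_text):
    """Parse the configuration file and return the initialized effect registry."""
    effects = json.loads(config_text)["effects"]
    registry = {}
    for kind, info in effects.items():
        # A real App would bind each kind to a concrete processor
        # (filter shader, transition renderer, ...) here.
        registry[kind] = info
    return registry
```

Because the parameters travel in the configuration file rather than in compiled code, updating them on the server is enough to change the effects applied by already-installed Apps.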
Step 306, when an adjustment instruction triggered for the first special effect video is received, adjusting the special effect information in the first special effect parameter set to obtain a second special effect parameter set.
Optionally, the test user may trigger an adjustment instruction for the first special effect video in the user interface of the test terminal. When the test terminal receives the adjustment instruction triggered for the first special effect video, the test terminal adjusts the special effect information in the first special effect parameter set to obtain the second special effect parameter set.
Optionally, a cancel button may be displayed in a user interface of the test terminal, the test user may trigger an adjustment instruction for the first special-effect video through the cancel button, and after the test user triggers the adjustment instruction, the test terminal may cancel a current processing effect on the test video. As shown in fig. 5, a first special effect video and a cancel button are displayed in the user interface 500, and a test user may trigger an adjustment instruction for the first special effect video by clicking the cancel button with a mouse.
Optionally, the test user may input new special effect information in the test terminal, and the test terminal adjusts the special effect information in the first special effect parameter set according to the special effect information input by the test user to obtain the second special effect parameter set. It should be noted that, in the process of previewing the first special effect video, the test user may observe whether the special effect of the first special effect video meets the expected effect. If it does not, the test user may trigger an adjustment instruction for the first special effect video and input new special effect information according to the special effect parameter that does not meet the expected effect, so that the test terminal adjusts the special effect information in the first special effect parameter set to obtain the second special effect parameter set.
For example, in the embodiment of the application, assuming that the filter effect in the first special effect video does not meet the expected effect, the test user may, after triggering the adjustment instruction, input new filter information in the test terminal. The test terminal then adjusts the special effect information in the first special effect parameter set according to the new filter information to obtain a second special effect parameter set, which may be as shown in table 2 below:
TABLE 2
| Special effect type | Special effect parameter | Start timestamp | End timestamp |
| Filter              | L2                       | T1_L2           | T3_L2         |
| Transition          | Z1                       | T1_Z1           | T2_Z1         |
| Split screen        | F1                       | T1_F1           | T2_F1         |
| Template            | M1                       | T1_M1           | T2_M1         |
Comparing table 1 and table 2, the test terminal adjusts the filter information (L1, T1_L1, T2_L1) in table 1 to (L2, T1_L2, T3_L2), obtaining the second special effect parameter set shown in table 2.
It should be noted that, in this embodiment of the present application, when the test terminal adjusts the special effect information in the first special effect parameter set, for each piece of special effect information, the test terminal may adjust at least one of a special effect parameter and a timestamp in the special effect information, which is not limited in this embodiment of the present application.
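The adjustment in step 306 — replacing one piece of special effect information (its parameter, its timestamps, or both) while keeping the rest — can be sketched like this, with an illustrative tuple layout that is not taken from the patent:

```python
# Each entry maps an effect type to (parameter, start timestamp, end timestamp).
def adjust_effect(effect_set, kind, new_info):
    """Return a second special effect parameter set with one entry replaced."""
    adjusted = dict(effect_set)   # leave the first set untouched
    adjusted[kind] = new_info
    return adjusted

first_set = {
    "filter":     ("L1", "T1_L1", "T2_L1"),
    "transition": ("Z1", "T1_Z1", "T2_Z1"),
}
# Mirror table 2: new filter parameter L2 with a new end timestamp T3_L2.
second_set = adjust_effect(first_set, "filter", ("L2", "T1_L2", "T3_L2"))
```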
Step 307, performing second special effect processing on the test video according to the second special effect parameter set to obtain a second special effect video.
In the embodiment of the application, a video processing tool is installed in the test terminal, and a special effect processing code is built into the video processing tool. The test terminal can perform the second special effect processing on the test video by executing the special effect processing code according to the second special effect parameter set, to obtain the second special effect video. Illustratively, taking the second special effect parameter set shown in table 2 as an example, the special effect processing code performs filter processing on the test video with the filter parameter L2 from T1_L2 to T3_L2 of the test video, performs transition processing with the transition parameter Z1 from T1_Z1 to T2_Z1, performs split screen processing with the split screen parameter F1 from T1_F1 to T2_F1, and performs template processing with the template parameter M1 from T1_M1 to T2_M1, finally obtaining the second special effect video.
Step 308, when a preview instruction for the second special effect video is received, previewing the second special effect video.
Optionally, after the test terminal performs a second special effect process on the test video according to the second special effect parameter set to obtain a second special effect video, a preview button may be displayed in a user interface of the test terminal, and a test user may trigger a preview instruction for the second special effect video through the preview button.
Step 309, when receiving a determination instruction triggered for the second special effect video, generating a special effect configuration file according to the second special effect parameter set, where the special effect configuration file includes the second special effect parameter set.
Optionally, the test user may trigger a determination instruction for the second special effect video in the user interface of the test terminal. When the test terminal receives the determination instruction triggered for the second special effect video, the test terminal generates a special effect configuration file according to the second special effect parameter set, where the special effect configuration file includes the second special effect parameter set. The format of the special effect configuration file may be the json format.
Step 310, uploading the special effect configuration file to a server so that the user terminal can conveniently acquire the special effect configuration file from the server, and performing special effect processing on the video to be processed according to the second special effect parameter set in the special effect configuration file.
Optionally, the test terminal may upload the special effect configuration file to the server through its communication connection with the server, and the server may store the special effect configuration file after receiving it. If the user terminal needs to perform special effect processing on a video to be processed, the user terminal can acquire the special effect configuration file from the server and perform special effect processing on the video to be processed according to the second special effect parameter set in the special effect configuration file.
It should be noted that the order of the steps of the video processing method provided in the embodiment of the present application may be appropriately adjusted, and steps may be added or removed as required. Any variation readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application, and is therefore not described again.
In summary, in the video processing method provided in the embodiment of the present application, the test terminal performs special effect processing on the test video according to the special effect parameter set to obtain the special effect video. When a determination instruction triggered for the special effect video is received, the test terminal generates a special effect configuration file according to the special effect parameter set and uploads it to the server. The special effect configuration file includes the special effect parameter set, which includes at least one type of special effect information, each type of special effect information including a special effect parameter. The user terminal can thus acquire the special effect configuration file from the server and perform special effect processing on the video to be processed according to the special effect parameter set in the special effect configuration file. In this way, the user terminal can perform special effect processing according to updated special effect parameters without updating the video processing App, which simplifies the process of performing special effect processing on the video by the user terminal.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to fig. 6, a block diagram of a video processing apparatus 600 according to an embodiment of the present application is shown. The video processing apparatus 600 may be a program component in the test terminal 120 in the implementation environment shown in fig. 1. As shown in fig. 6, the video processing apparatus 600 may include, but is not limited to:
an obtaining module 601, configured to obtain a test video;
a first processing module 602, configured to perform a first special effect process on a test video according to a first special effect parameter set to obtain a first special effect video, where the first special effect parameter set includes at least one type of special effect information, and each type of special effect information includes a special effect parameter;
a first generating module 603, configured to generate, when a determination instruction triggered for the first special effect video is received, a special effect configuration file according to the first special effect parameter set, where the special effect configuration file includes the first special effect parameter set;
the first uploading module 604 is configured to upload the special effect configuration file to the server, so that the user terminal can obtain the special effect configuration file from the server and perform special effect processing on the video to be processed according to the first special effect parameter set in the special effect configuration file.
In summary, with the video processing apparatus provided in the embodiment of the present application, the test terminal performs special effect processing on the test video according to the special effect parameter set to obtain the special effect video. When a determination instruction triggered for the special effect video is received, the test terminal generates a special effect configuration file according to the special effect parameter set and uploads it to the server. The special effect configuration file includes the special effect parameter set, which includes at least one type of special effect information, each type of special effect information including a special effect parameter. The user terminal can thus acquire the special effect configuration file from the server and perform special effect processing on the video to be processed according to the special effect parameter set in the special effect configuration file. In this way, the user terminal can perform special effect processing according to updated special effect parameters without updating the video processing App, which simplifies the process of performing special effect processing on the video by the user terminal.
Optionally, referring to fig. 7, which shows a block diagram of another video processing apparatus 600 provided in an embodiment of the present application, referring to fig. 7, on the basis of fig. 6, the video processing apparatus 600 further includes:
an adjusting module 605, configured to adjust the special effect information in the first special effect parameter set to obtain a second special effect parameter set when an adjustment instruction triggered for the first special effect video is received;
a second processing module 606, configured to perform second special effect processing on the test video according to the second special effect parameter set to obtain a second special effect video;
a second generating module 607, configured to generate a special effect configuration file according to the second special effect parameter set when a determination instruction triggered for the second special effect video is received, where the special effect configuration file includes the second special effect parameter set;
the second uploading module 608 is configured to upload the special effect configuration file to the server, so that the user terminal can obtain the special effect configuration file from the server, and perform special effect processing on the video to be processed according to the second special effect parameter set in the special effect configuration file.
With continued reference to fig. 7, the video processing apparatus 600 further includes:
the first preview module 609 is configured to preview the first special-effect video when a preview instruction for the first special-effect video is received;
the second preview module 610 is configured to preview the second special-effect video when a preview instruction for the second special-effect video is received.
Optionally, the special effect information includes a special effect parameter, and a start timestamp and an end timestamp corresponding to the special effect parameter, where the start timestamp indicates a start time of processing the test video according to the special effect parameter, and the end timestamp indicates an end time of processing the test video according to the special effect parameter.
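One possible in-memory shape for a piece of special effect information as just described — a parameter bracketed by its start and end timestamps — is sketched below; the class and field names are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class EffectInfo:
    param: str     # special effect parameter, e.g. a filter parameter "L1"
    start: float   # start time (seconds) of processing with this parameter
    end: float     # end time (seconds) of processing with this parameter

    def duration(self) -> float:
        """How long this effect is applied to the video."""
        return self.end - self.start
```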
Optionally, the special effect parameters include at least one of filter parameters, transition parameters, split screen parameters, or template parameters.
Optionally, the obtaining module 601 is configured to:
acquiring at least one test video segment and/or at least one test picture;
and generating a test video according to the at least one test video segment and/or the at least one test picture.
In summary, with the video processing apparatus provided in the embodiment of the present application, the test terminal performs special effect processing on the test video according to the special effect parameter set to obtain the special effect video. When a determination instruction triggered for the special effect video is received, the test terminal generates a special effect configuration file according to the special effect parameter set and uploads it to the server. The special effect configuration file includes the special effect parameter set, which includes at least one type of special effect information, each type of special effect information including a special effect parameter. The user terminal can thus acquire the special effect configuration file from the server and perform special effect processing on the video to be processed according to the special effect parameter set in the special effect configuration file. In this way, the user terminal can perform special effect processing according to updated special effect parameters without updating the video processing App, which simplifies the process of performing special effect processing on the video by the user terminal.
An embodiment of the present application provides a video processing apparatus, including a processor and a memory.
the memory is used for storing a computer program.
The processor is configured to execute the computer program stored in the memory, and implement the video processing method provided by the above embodiment.
Referring to fig. 8, a schematic structural diagram of a video processing apparatus 800 according to an embodiment of the present application is shown. The apparatus 800 may be a PC or a portable mobile terminal, such as a smart phone, a tablet computer, a Moving Picture Experts Group Audio Layer IV (MP4) player, a notebook computer, or a desktop computer. The apparatus 800 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, the apparatus 800 includes: a processor 801 and a memory 802.
The processor 801 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 801 may be implemented in at least one hardware form of a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), and a Programmable Logic Array (PLA). The processor 801 may also include a main processor and a coprocessor. The main processor is a processor for processing data in an awake state, also called a Central Processing Unit (CPU); the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 801 may be integrated with a Graphics Processing Unit (GPU), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 801 may further include an Artificial Intelligence (AI) processor for processing computing operations related to machine learning.
Memory 802 may include one or more computer-readable storage media, which may be non-transitory. Memory 802 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 802 is used to store at least one instruction for execution by the processor 801 to implement the video processing method provided by the embodiments of the present application.
In some embodiments, the apparatus 800 may further include: a peripheral interface 803 and at least one peripheral. The processor 801, memory 802 and peripheral interface 803 may be connected by bus or signal lines. Various peripheral devices may be connected to peripheral interface 803 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 804, a display screen 805, a camera assembly 806, audio circuitry 807, a positioning assembly 808, or a power supply 809.
The peripheral interface 803 may be used to connect at least one peripheral device associated with Input/Output (I/O) to the processor 801 and the memory 802. In some embodiments, the processor 801, memory 802, and peripheral interface 803 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 801, the memory 802, and the peripheral interface 803 may be implemented on separate chips or circuit boards, which are not limited by this embodiment.
The Radio Frequency circuit 804 is used for receiving and transmitting Radio Frequency (RF) signals, which are also called electromagnetic signals. The radio frequency circuitry 804 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 804 converts an electrical signal into an electromagnetic signal to be transmitted, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 804 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 804 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, the metropolitan area network, the intranet, various generations of mobile communication networks (2G, 3G, 4G and 5G), the Wireless local area network and/or the Wireless Fidelity (WiFi) network. In some embodiments, the rf circuit 804 may further include a Near Field Communication (NFC) related circuit, which is not limited in this application.
The display 805 is used to display a User Interface (UI). The UI may include graphics, text, icons, video, and any combination thereof. When the display 805 is a touch display, the display 805 also has the ability to capture touch signals on or above the surface of the display 805. The touch signal may be input to the processor 801 as a control signal for processing. At this point, the display 805 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 805 may be one, providing the front panel of the device 800; in other embodiments, the display 805 may be at least two, respectively disposed on different surfaces of the device 800 or in a folded design; in still other embodiments, the display 805 may be a flexible display, disposed on a curved surface or on a folded surface of the device 800. Even further, the display 805 may be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The display 805 may be an Organic Light-Emitting Diode (OLED) display.
The camera assembly 806 is used to capture images or video. Optionally, camera assembly 806 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each of the rear cameras is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize a panoramic shooting function and a Virtual Reality (VR) shooting function or other fusion shooting functions. In some embodiments, camera assembly 806 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 807 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 801 for processing or inputting the electric signals to the radio frequency circuit 804 to realize voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each at a different location of the device 800. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 801 or the radio frequency circuit 804 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 807 may also include a headphone jack.
The positioning component 808 is used to locate the current geographic location of the device 800 for navigation or Location Based Services (LBS). The positioning component 808 may be a positioning component based on the Global Positioning System (GPS) of the United States, the Beidou System of China, or the Galileo System of the European Union.
Power supply 809 is used to power the various components in device 800. The power supply 809 can be ac, dc, disposable or rechargeable. When the power supply 809 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the device 800 also includes one or more sensors 810. The one or more sensors 810 include, but are not limited to: acceleration sensor 811, gyro sensor 812, pressure sensor 813, fingerprint sensor 814, optical sensor 815 and proximity sensor 816.
The acceleration sensor 811 may detect the magnitude of acceleration in three coordinate axes of the coordinate system established with the apparatus 800. For example, the acceleration sensor 811 may be used to detect the components of the gravitational acceleration in three coordinate axes. The processor 801 may control the touch screen 805 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 811. The acceleration sensor 811 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 812 may detect a body direction and a rotation angle of the device 800, and the gyro sensor 812 may cooperate with the acceleration sensor 811 to acquire a 3D motion of the user with respect to the device 800. From the data collected by the gyro sensor 812, the processor 801 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 813 may be disposed on the side bezel of device 800 and/or underneath touch display 805. When the pressure sensor 813 is arranged on the side frame of the device 800, the holding signal of the user to the device 800 can be detected, and the processor 801 performs left-right hand identification or shortcut operation according to the holding signal collected by the pressure sensor 813. When the pressure sensor 813 is disposed at a lower layer of the touch display screen 805, the processor 801 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 805. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 814 is used for collecting a fingerprint of the user, and the processor 801 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 814, or the fingerprint sensor 814 identifies the identity of the user according to the collected fingerprint. Upon identifying that the user's identity is a trusted identity, the processor 801 authorizes the user to perform relevant sensitive operations including unlocking a screen, viewing encrypted information, downloading software, paying for and changing settings, etc. Fingerprint sensor 814 may be disposed on the front, back, or side of device 800. When a physical button or vendor Logo is provided on the device 800, the fingerprint sensor 814 may be integrated with the physical button or vendor Logo.
The optical sensor 815 is used to collect the ambient light intensity. In one embodiment, the processor 801 may control the display brightness of the touch display 805 based on the ambient light intensity collected by the optical sensor 815. Specifically, when the ambient light intensity is high, the display brightness of the touch display 805 is increased; when the ambient light intensity is low, the display brightness of the touch display 805 is decreased. In another embodiment, the processor 801 may also dynamically adjust the shooting parameters of the camera assembly 806 based on the ambient light intensity collected by the optical sensor 815.
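The ambient-light-driven brightness control described above can be sketched as follows. This is an illustrative sketch only, not part of the disclosed embodiment: the lux range and the linear mapping are assumptions, since the embodiment specifies no concrete values.

```python
def adjust_display_brightness(ambient_lux: float) -> float:
    """Map an ambient light reading (lux) to a display brightness level
    in [0.0, 1.0], so that brighter surroundings yield a brighter screen.

    MIN_LUX/MAX_LUX are assumed example bounds, not values from the patent.
    """
    MIN_LUX, MAX_LUX = 10.0, 10_000.0
    # Clamp the sensor reading, then interpolate linearly across the range.
    clamped = max(MIN_LUX, min(ambient_lux, MAX_LUX))
    return (clamped - MIN_LUX) / (MAX_LUX - MIN_LUX)
```

In a real device the mapping would typically be logarithmic and smoothed over time, but the monotonic principle (higher ambient intensity, higher brightness) is the one the paragraph describes.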
A proximity sensor 816, also known as a distance sensor, is typically provided on the front panel of the device 800. The proximity sensor 816 is used to capture the distance between the user and the front of the device 800. In one embodiment, when the proximity sensor 816 detects that the distance between the user and the front of the device 800 gradually decreases, the processor 801 controls the touch display 805 to switch from the bright-screen state to the dark-screen state; when the proximity sensor 816 detects that the distance between the user and the front of the device 800 gradually increases, the processor 801 controls the touch display 805 to switch from the dark-screen state to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in FIG. 8 does not constitute a limitation of the device 800, which may include more or fewer components than shown, combine certain components, or employ a different arrangement of components.
An embodiment of the present application provides a storage medium. When a program in the storage medium is executed by a processor, the video processing method provided by the above embodiments can be implemented. The storage medium may be non-transitory; for example, it may be a read-only memory (ROM), a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
The term "at least one of A or B" in this application describes an association relationship between associated objects and indicates that three relationships may exist; for example, "at least one of A or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. Similarly, "at least one of A, B, or C" indicates that seven relationships may exist: A alone, B alone, C alone, A and B together, A and C together, B and C together, and A, B, and C together. Similarly, "at least one of A, B, C, or D" indicates that fifteen relationships may exist: A alone, B alone, C alone, D alone, A and B together, A and C together, A and D together, B and C together, B and D together, C and D together, A, B, and C together, A, B, and D together, A, C, and D together, B, C, and D together, and A, B, C, and D together.
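The case counts above (three, seven, fifteen) are simply the number of nonempty subsets of n objects, 2^n − 1. A small illustrative check, not part of the patent text:

```python
from itertools import combinations

def at_least_one_cases(objects):
    """Enumerate every case covered by "at least one of ...":
    all nonempty combinations of the listed objects."""
    return [c for r in range(1, len(objects) + 1)
            for c in combinations(objects, r)]

# 2 objects -> 3 cases, 3 objects -> 7 cases, 4 objects -> 15 cases.
assert len(at_least_one_cases(["A", "B"])) == 3
assert len(at_least_one_cases(["A", "B", "C"])) == 7
assert len(at_least_one_cases(["A", "B", "C", "D"])) == 15
```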
The term "and/or" in this application likewise describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the preceding and following objects.
The terms "first" and "second" in this application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The term "plurality" means two or more unless expressly limited otherwise.
The term "at least one" in the present application means one or more, and "a plurality of" means two or more.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disc.
The above description is only exemplary of the present application and should not be taken as limiting the present application, and any modifications, equivalents, improvements and the like that are made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (12)

1. A method of video processing, the method comprising:
acquiring a test video;
performing first special effect processing on the test video according to a first special effect parameter set to obtain a first special effect video, wherein the first special effect parameter set comprises at least one type of special effect information, and each type of special effect information comprises a special effect parameter;
when a determination instruction triggered for the first special effect video is received, generating a special effect configuration file according to the first special effect parameter set, wherein the special effect configuration file comprises the first special effect parameter set;
uploading the special effect configuration file to a server, so that a user terminal can acquire the special effect configuration file from the server and perform special effect processing on a video to be processed according to the first special effect parameter set in the special effect configuration file.
2. The method of claim 1,
after performing a first special effect process on the test video according to a first special effect parameter set to obtain a first special effect video, the method further includes:
when an adjustment instruction triggered for the first special effect video is received, adjusting the special effect information in the first special effect parameter set to obtain a second special effect parameter set;
performing second special effect processing on the test video according to the second special effect parameter set to obtain a second special effect video;
when a determination instruction triggered for the second special effect video is received, generating a special effect configuration file according to the second special effect parameter set, wherein the special effect configuration file comprises the second special effect parameter set;
uploading the special effect configuration file to a server, so that a user terminal can acquire the special effect configuration file from the server and perform special effect processing on a video to be processed according to the second special effect parameter set in the special effect configuration file.
3. The method of claim 2,
before generating a special effects profile according to the first special effects parameter set, the method further comprises:
when a preview instruction for the first special effect video is received, previewing the first special effect video;
before generating a special effects profile according to the second special effects parameter set, the method further comprises:
when a preview instruction for the second special effect video is received, previewing the second special effect video.
4. The method of claim 1,
the special effect information comprises a special effect parameter and a starting time stamp and an ending time stamp corresponding to the special effect parameter, wherein the starting time stamp indicates the starting processing time of the special effect parameter, and the ending time stamp indicates the ending processing time of the special effect parameter.
5. The method according to any one of claims 1 to 4,
the special effect parameters comprise at least one of filter parameters, transition parameters, split screen parameters or template parameters.
6. A video processing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring a test video;
a first processing module, configured to perform a first special effect process on the test video according to a first special effect parameter set to obtain a first special effect video, where the first special effect parameter set includes at least one type of special effect information, and each type of special effect information includes a special effect parameter;
a first generating module, configured to generate a special effect configuration file according to the first special effect parameter set when a determination instruction triggered for the first special effect video is received, where the special effect configuration file comprises the first special effect parameter set;
a first uploading module, configured to upload the special effect configuration file to a server, so that a user terminal can acquire the special effect configuration file from the server and perform special effect processing on a video to be processed according to the first special effect parameter set in the special effect configuration file.
7. The apparatus of claim 6, further comprising:
an adjusting module, configured to adjust the special effect information in the first special effect parameter set to obtain a second special effect parameter set when an adjustment instruction triggered for the first special effect video is received;
a second processing module, configured to perform second special effect processing on the test video according to the second special effect parameter set to obtain a second special effect video;
a second generating module, configured to generate a special effect configuration file according to the second special effect parameter set when a determination instruction triggered for the second special effect video is received, where the special effect configuration file comprises the second special effect parameter set;
a second uploading module, configured to upload the special effect configuration file to a server, so that the user terminal can acquire the special effect configuration file from the server and perform special effect processing on a video to be processed according to the second special effect parameter set in the special effect configuration file.
8. The apparatus of claim 7, further comprising:
a first preview module, configured to preview the first special effect video when a preview instruction for the first special effect video is received; and
a second preview module, configured to preview the second special effect video when a preview instruction for the second special effect video is received.
9. The apparatus of claim 6,
the special effect information comprises a special effect parameter and a starting time stamp and an ending time stamp corresponding to the special effect parameter, wherein the starting time stamp indicates the starting processing time of the special effect parameter, and the ending time stamp indicates the ending processing time of the special effect parameter.
10. The apparatus according to any one of claims 6 to 9,
the special effect parameters comprise at least one of filter parameters, transition parameters, split screen parameters or template parameters.
11. A video processing apparatus, comprising a processor and a memory, wherein:
the memory for storing a computer program;
the processor, configured to execute the computer program stored in the memory, to implement the video processing method according to any one of claims 1 to 5.
12. A storage medium, characterized in that a program in the storage medium, when executed by a processor, is capable of implementing the video processing method according to any one of claims 1 to 5.
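Purely as an illustration of the workflow in claims 1 and 4 (the field names, values, and helper below are hypothetical and are not defined anywhere in the patent), a special effect configuration file carrying a special effect parameter set with per-parameter start and end timestamps might be serialized like this before being uploaded to the server:

```python
import json

def build_special_effect_profile(parameter_set):
    """Serialize a special effect parameter set (claim 1) into a
    configuration-file body that a user terminal could later download
    and apply to a video to be processed. Key names are illustrative."""
    return json.dumps({"special_effect_parameter_set": parameter_set},
                      ensure_ascii=False, indent=2)

# Each entry pairs a special effect parameter with the start/end
# timestamps (here in milliseconds) that bound its processing window,
# as described in claim 4.
profile = build_special_effect_profile([
    {"type": "filter", "parameter": "warm_tone", "start_ms": 0, "end_ms": 3000},
    {"type": "transition", "parameter": "fade", "start_ms": 3000, "end_ms": 3500},
])
```

On the user terminal, special effect processing would then consist of parsing this file and applying each parameter to the frames falling inside its timestamp window.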
CN201911135734.XA 2019-11-19 2019-11-19 Video processing method and device and storage medium Active CN110769313B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911135734.XA CN110769313B (en) 2019-11-19 2019-11-19 Video processing method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911135734.XA CN110769313B (en) 2019-11-19 2019-11-19 Video processing method and device and storage medium

Publications (2)

Publication Number Publication Date
CN110769313A true CN110769313A (en) 2020-02-07
CN110769313B CN110769313B (en) 2022-02-22

Family

ID=69338509

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911135734.XA Active CN110769313B (en) 2019-11-19 2019-11-19 Video processing method and device and storage medium

Country Status (1)

Country Link
CN (1) CN110769313B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111541936A (en) * 2020-04-02 2020-08-14 腾讯科技(深圳)有限公司 Video and image processing method and device, electronic equipment and storage medium
CN112637518A (en) * 2020-12-21 2021-04-09 北京字跳网络技术有限公司 Method, device, equipment and medium for generating simulated shooting special effect
CN112752034A (en) * 2020-03-16 2021-05-04 腾讯科技(深圳)有限公司 Video special effect verification method and device
CN113497898A (en) * 2020-04-02 2021-10-12 北京字节跳动网络技术有限公司 Video special effect configuration file generation method, video rendering method and device
EP4044604A4 (en) * 2020-06-28 2023-01-18 Tencent Technology (Shenzhen) Company Limited Video special effect processing method and apparatus, and electronic device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080095446A1 (en) * 2006-10-24 2008-04-24 Sunplus Technology Co., Ltd. Method and system for performing image processing in a computer apparatus
US20120050617A1 (en) * 2010-08-27 2012-03-01 Tomoji Mizutani Signal processing apparatus and method, and program
CN106817596A (en) * 2015-12-02 2017-06-09 徐文波 Act on the effect processing method and device of acquisition of media device
CN107749956A (en) * 2017-09-27 2018-03-02 北京嗨动视觉科技有限公司 Video source switches special efficacy realization device and video source switching special efficacy implementation method
CN108632540A (en) * 2017-03-23 2018-10-09 北京小唱科技有限公司 Method for processing video frequency and device
CN108769562A (en) * 2018-06-29 2018-11-06 广州酷狗计算机科技有限公司 The method and apparatus for generating special efficacy video
CN109525900A (en) * 2018-12-21 2019-03-26 广州华多网络科技有限公司 Method, apparatus, terminal and the storage medium of watermark are added in video

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080095446A1 (en) * 2006-10-24 2008-04-24 Sunplus Technology Co., Ltd. Method and system for performing image processing in a computer apparatus
US20120050617A1 (en) * 2010-08-27 2012-03-01 Tomoji Mizutani Signal processing apparatus and method, and program
CN106817596A (en) * 2015-12-02 2017-06-09 徐文波 Act on the effect processing method and device of acquisition of media device
CN108632540A (en) * 2017-03-23 2018-10-09 北京小唱科技有限公司 Method for processing video frequency and device
CN107749956A (en) * 2017-09-27 2018-03-02 北京嗨动视觉科技有限公司 Video source switches special efficacy realization device and video source switching special efficacy implementation method
CN108769562A (en) * 2018-06-29 2018-11-06 广州酷狗计算机科技有限公司 The method and apparatus for generating special efficacy video
CN109525900A (en) * 2018-12-21 2019-03-26 广州华多网络科技有限公司 Method, apparatus, terminal and the storage medium of watermark are added in video

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PENG Zhaolong: "Research and Implementation of Video Editing Technology Based on OpenGL", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112752034A (en) * 2020-03-16 2021-05-04 腾讯科技(深圳)有限公司 Video special effect verification method and device
CN112752034B (en) * 2020-03-16 2023-08-18 腾讯科技(深圳)有限公司 Video special effect verification method and device
CN111541936A (en) * 2020-04-02 2020-08-14 腾讯科技(深圳)有限公司 Video and image processing method and device, electronic equipment and storage medium
CN113497898A (en) * 2020-04-02 2021-10-12 北京字节跳动网络技术有限公司 Video special effect configuration file generation method, video rendering method and device
US11856152B2 (en) 2020-04-02 2023-12-26 Beijing Bytedance Network Technology Co., Ltd. Video special effect configuration file generation method and apparatus, and video rendering method and apparatus
EP4044604A4 (en) * 2020-06-28 2023-01-18 Tencent Technology (Shenzhen) Company Limited VIDEO SPECIAL EFFECTS PROCESSING METHOD AND APPARATUS, AND ELECTRONIC DEVICE
US12041372B2 (en) 2020-06-28 2024-07-16 Tencent Technology (Shenzhen) Company Limited Video special effect processing method and apparatus, and electronic device
CN112637518A (en) * 2020-12-21 2021-04-09 北京字跳网络技术有限公司 Method, device, equipment and medium for generating simulated shooting special effect
CN112637518B (en) * 2020-12-21 2023-03-24 北京字跳网络技术有限公司 Method, device, equipment and medium for generating simulated shooting special effect

Also Published As

Publication number Publication date
CN110769313B (en) 2022-02-22

Similar Documents

Publication Publication Date Title
CN108401124B (en) Video recording method and device
CN110769313B (en) Video processing method and device and storage medium
CN108965922B (en) Video cover generation method and device and storage medium
CN109144346B (en) Song sharing method and device and storage medium
CN110533585B (en) Image face changing method, device, system, equipment and storage medium
CN109922356B (en) Video recommendation method and device and computer-readable storage medium
CN112667835B (en) Works processing method, device, electronic device and storage medium
CN111061405B (en) Method, device and equipment for recording song audio and storage medium
CN111065001A (en) Video production method, device, equipment and storage medium
CN111083526B (en) Video transition method and device, computer equipment and storage medium
CN111142838A (en) Audio playing method and device, computer equipment and storage medium
CN108922506A (en) Song audio generation method, device and computer readable storage medium
CN111031394A (en) Video production method, device, equipment and storage medium
CN109117466B (en) Table format conversion method, device, equipment and storage medium
CN111276122A (en) Audio generation method and device and storage medium
CN110769120A (en) Method, device, equipment and storage medium for message reminding
CN110677713B (en) Video image processing method and device and storage medium
CN109783176B (en) Page switching method and device
CN109819314B (en) Audio and video processing method and device, terminal and storage medium
CN111369434B (en) Method, device, equipment and storage medium for generating spliced video covers
CN110263695B (en) Face position acquisition method and device, electronic equipment and storage medium
CN112118482A (en) Audio file playing method and device, terminal and storage medium
CN110134902B (en) Data information generating method, device and storage medium
CN108966026B (en) Method and device for making video file
CN110888710A (en) Method and device for adding subtitles, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant