CN111311665B - Video processing method and device and electronic equipment - Google Patents

Video processing method and device and electronic equipment

Info

Publication number
CN111311665B
CN111311665B (application CN202010168846.1A)
Authority
CN
China
Prior art keywords
preset range
position data
area
ground area
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010168846.1A
Other languages
Chinese (zh)
Other versions
CN111311665A (en)
Inventor
郭亨凯
王光伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN202010168846.1A
Publication of CN111311665A
Application granted
Publication of CN111311665B
Legal status: Active

Classifications

    • G06T 7/55: Depth or shape recovery from multiple images
    • G06T 19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • G06T 2200/08: Indexing scheme involving all processing steps from image acquisition to 3D model generation
    • G06T 2207/10016: Video; image sequence
    • G06T 2207/10028: Range image; depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Studio Devices (AREA)
  • Studio Circuits (AREA)

Abstract

Embodiments of the present disclosure provide a video processing method, a video processing apparatus, and an electronic device, belonging to the technical field of image processing. The video processing method comprises the following steps: acquiring video information captured by a camera within a corresponding preset range; distinguishing the ground area from the non-ground area within the preset range according to the video information; adding a decoration object to the ground area within the preset range; and, if the viewing angle of the camera changes, controlling the display parameters of the decoration object to change with the viewing angle, wherein the display parameters include at least position data and shape parameters. The disclosed scheme increases the diversity of real-time video display effects and improves the overall display and processing quality of the video.

Description

Video processing method and device and electronic equipment
Technical Field
The present disclosure relates to the technical field of image processing, and in particular to a video processing method, a video processing apparatus, and an electronic device.
Background
With the development of image processing technology, short-video applications have become increasingly widespread, and video processing techniques are gradually being refined. Existing video processing schemes capture content from a real scene as video data and apply simple decorative effects to regions such as the face of the person who is the subject of the video, so the display effect is limited.
Existing video processing schemes therefore suffer from the technical problem of a monotonous display effect.
Disclosure of Invention
In view of the above, embodiments of the present disclosure provide a video processing method, apparatus and electronic device, which at least partially solve the problems in the prior art.
In a first aspect, an embodiment of the present disclosure provides a video processing method, including:
acquiring video information captured by a camera within a corresponding preset range;
distinguishing a ground area and a non-ground area of the preset range according to the video information;
adding a decoration object in the ground area of the preset range;
and, if the viewing angle of the camera changes, controlling the display parameters of the decoration object to change with the viewing angle, wherein the display parameters include at least position data and shape parameters.
According to a specific implementation manner of the embodiment of the present disclosure, the step of distinguishing, according to the video information, the ground area and the non-ground area within the preset range includes:
projecting the video information, by means of simultaneous localization and mapping (SLAM), into a three-dimensional scene corresponding to the preset range;
determining position information of a plurality of target feature points in the three-dimensional scene;
dividing the ground area and the non-ground area of the three-dimensional scene corresponding to the preset range according to the position information of the plurality of target feature points.
According to a specific implementation manner of the embodiment of the present disclosure, the step of controlling the display parameters of the decoration object to change with the viewing angle if the viewing angle of the camera changes includes:
acquiring position data of a dynamic region within the preset range, wherein the dynamic region is at least a partial region corresponding to the current viewing angle of the camera;
updating the position data of the dynamic region according to the viewing angle of the camera;
and adjusting the display parameters of the decoration object according to the position data of the target point where the decoration object is located and the position data of the dynamic region.
According to a specific implementation manner of the embodiment of the present disclosure, after the step of obtaining the position data of the dynamic area within the preset range, the method further includes:
establishing an array;
storing position data of the dynamic region through the array;
the step of updating the position data of the dynamic region according to the viewing angle of the camera comprises the following steps:
determining a viewing-angle change parameter of the camera;
and adjusting the dynamic region according to the viewing-angle change parameter, and adjusting the position data of the dynamic region in the array.
According to a specific implementation manner of the embodiment of the present disclosure, the step of obtaining the position data of the dynamic area within the preset range includes:
determining a target reference point of the current view angle of the camera;
taking an area in a preset range around the target reference point as a dynamic area of the preset range;
and acquiring the position data of the dynamic area.
According to a specific implementation manner of the embodiment of the present disclosure, the step of determining the target reference point of the current view angle of the camera includes:
and taking the center point of the foreground object in the current frame of the camera as the target reference point.
According to a specific implementation manner of the embodiment of the present disclosure, the step of using the area within the preset range around the target reference point as the dynamic area of the preset range includes:
and taking the area within 2 meters around the target reference point as a dynamic area of the preset range.
According to a specific implementation manner of the embodiment of the present disclosure, the step of adjusting the dynamic area according to the viewing angle variation parameter, and adjusting the position data of the dynamic area in the array includes:
if the viewing-angle change parameter indicates a leftward movement by a preset distance, moving the dynamic region leftward by that distance, shifting the position data in the array rightward by the corresponding amount, and adding the position data of the newly covered region to the left end of the array.
According to a specific implementation manner of the embodiment of the present disclosure, after the step of dividing the ground area and the non-ground area of the three-dimensional scene corresponding to the preset area according to the position information of the plurality of target feature points, the method further includes:
calculating display parameters of the preset area, wherein the display parameters include a moving average, a visible count, and a confidence;
and optimizing the display effect of the three-dimensional scene according to the display parameters.
According to a specific implementation manner of the embodiment of the present disclosure, after the step of dividing the ground area and the non-ground area of the three-dimensional scene corresponding to the preset area according to the position information of the plurality of target feature points, the method further includes:
determining two sets of target feature points of the ground area;
and optimizing the shake error of the camera according to the position data of the two groups of target feature points in the ground area.
In a second aspect, an embodiment of the present disclosure provides a video processing apparatus, including:
the acquisition module is used for acquiring video information in a corresponding preset range acquired by the camera;
the distinguishing module is used for distinguishing the ground area and the non-ground area of the preset range according to the video information;
the decoration module is used for adding decoration objects in the ground area of the preset range;
and the display module is used for controlling the display parameters of the decoration objects to change along with the visual angle if the visual angle of the camera changes, wherein the display parameters at least comprise position data and shape parameters.
In a third aspect, embodiments of the present disclosure further provide an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the video processing method of the first aspect or any implementation of the first aspect.
In a fourth aspect, the disclosed embodiments also provide a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the video processing method of the first aspect or any implementation of the first aspect.
In a fifth aspect, embodiments of the present disclosure also provide a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the video processing method of the first aspect or any implementation of the first aspect.
The video processing scheme in the embodiments of the present disclosure comprises the following steps: acquiring video information captured by a camera within a corresponding preset range; distinguishing the ground area from the non-ground area within the preset range according to the video information; adding a decoration object to the ground area within the preset range; and, if the viewing angle of the camera changes, controlling the display parameters of the decoration object to change with the viewing angle, wherein the display parameters include at least position data and shape parameters. This scheme increases the diversity of real-time video display effects and improves the overall display and processing quality of the video.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings needed in the embodiments are briefly described below. The drawings in the following description show only some embodiments of the present disclosure; other drawings may be obtained from them by a person of ordinary skill in the art without inventive effort.
Fig. 1 is a schematic flow chart of a video processing method according to an embodiment of the disclosure;
fig. 2 is a partial flow chart of another video processing method according to an embodiment of the disclosure;
fig. 3 is a partial flow chart of another video processing method according to an embodiment of the disclosure;
fig. 4 is a partial flow chart of another video processing method according to an embodiment of the disclosure;
fig. 5 is a schematic structural diagram of a video processing apparatus according to an embodiment of the disclosure;
fig. 6 is a schematic diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
Other advantages and effects of the present disclosure will become readily apparent to those skilled in the art from the following disclosure, which describes embodiments of the present disclosure by way of specific examples. It will be apparent that the described embodiments are merely some, but not all embodiments of the present disclosure. The disclosure may be embodied or practiced in other different specific embodiments, and details within the subject specification may be modified or changed from various points of view and applications without departing from the spirit of the disclosure. It should be noted that the following embodiments and features in the embodiments may be combined with each other without conflict. All other embodiments, which can be made by one of ordinary skill in the art without inventive effort, based on the embodiments in this disclosure are intended to be within the scope of this disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the following claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the present disclosure, one skilled in the art will appreciate that one aspect described herein may be implemented independently of any other aspect, and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. In addition, such apparatus may be implemented and/or such methods practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should also be noted that the illustrations provided in the following embodiments merely illustrate the basic concepts of the disclosure by way of illustration, and only the components related to the disclosure are shown in the drawings and are not drawn according to the number, shape and size of the components in actual implementation, and the form, number and proportion of the components in actual implementation may be arbitrarily changed, and the layout of the components may be more complicated.
In addition, in the following description, specific details are provided in order to provide a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
The embodiment of the disclosure provides a video processing method. The video processing method provided in this embodiment may be performed by a computing device, which may be implemented as software, or as a combination of software and hardware, and the computing device may be integrally provided in a server, a terminal device, or the like.
Referring to fig. 1, a flowchart of a video processing method according to an embodiment of the disclosure is shown. As shown in fig. 1, the method mainly comprises the following steps:
s101, acquiring video information in a corresponding preset range acquired by a camera;
the video processing method provided by the embodiment is applied to the processing process of the video application program, in particular to the processing process of video editing and special effect display. The video processing method provided by the embodiment is applied to the electronic equipment with the video processing function, and the electronic equipment can be internally provided with or externally connected with a camera and is used for collecting video information in a preset range.
S102, distinguishing a ground area and a non-ground area of the preset range according to the video information;
after the real-time video is acquired according to the steps, ground segmentation is performed to distinguish a ground area and a non-ground area within a preset range, so that adaptive decoration operation or other video processing operation can be conveniently performed on the ground area.
According to a specific implementation manner of the embodiment of the present disclosure, as shown in fig. 2, the step of distinguishing, according to the video information, the ground area and the non-ground area in the preset range may include:
S201, projecting the video information, by means of simultaneous localization and mapping (SLAM), into a three-dimensional scene corresponding to the preset range;
S202, determining position information of a plurality of target feature points in the three-dimensional scene;
and S203, dividing the ground area and the non-ground area of the three-dimensional scene corresponding to the preset range according to the position information of the plurality of target feature points.
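As a rough illustration of S201 to S203, the sketch below separates hypothetical SLAM feature points into ground and non-ground sets using a simple height threshold. The patent does not specify the segmentation criterion, so the axis convention, the threshold rule, and the tolerance value here are assumptions.

```python
# Hypothetical sketch of ground segmentation from SLAM feature points.
# Points are (x, y, z) with y as the "up" axis; the ground is taken to be
# the lowest cluster of heights. These choices are illustrative assumptions.

def split_ground(points, tolerance=0.05):
    """Return (ground, non_ground) lists of (x, y, z) feature points."""
    if not points:
        return [], []
    # Keep every point within `tolerance` metres of the lowest observed height.
    floor_y = min(p[1] for p in points)
    ground = [p for p in points if p[1] - floor_y <= tolerance]
    non_ground = [p for p in points if p[1] - floor_y > tolerance]
    return ground, non_ground

pts = [(0.0, 0.00, 1.0), (1.0, 0.02, 2.0),   # near the floor
       (0.5, 1.60, 1.5), (0.2, 0.90, 0.8)]   # above the floor
ground, non_ground = split_ground(pts)
```

In a real system the threshold would typically be replaced by a robust plane fit (e.g. RANSAC) over the SLAM point cloud, but the split into two point sets is the same.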
S103, adding a decoration object in the ground area of the preset range;
after the ground area with the preset range is divided, a decoration object can be added to the ground area, for example, a flower is added to the ground to achieve a certain decoration effect.
S104, if the viewing angle of the camera changes, controlling the display parameters of the decoration object to change with the viewing angle, wherein the display parameters include at least position data and shape parameters.
After the decoration object has been added to the ground area corresponding to the initial viewing angle, if the viewing angle of the camera changes, display parameters such as the display position and display shape of the decoration object should change accordingly to preserve the real-time display effect.
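One common way to realize this behavior, which the sketch below assumes rather than takes from the patent, is to anchor the decoration at a fixed 3D world point and re-project it through the updated camera pose; the on-screen position and displayed scale then follow the viewing angle automatically. The pinhole model and all parameter names are assumptions.

```python
import math

# Illustrative sketch: re-project a decoration anchored at a fixed 3D world
# point whenever the camera pose changes (yaw rotation about the y axis
# plus translation), using a simple pinhole projection.

def project(world_point, cam_pos, cam_yaw, focal=500.0):
    """Return (u, v, scale): image coordinates of the anchor and a scale
    factor that shrinks with depth, driving the decoration's displayed size."""
    x = world_point[0] - cam_pos[0]
    y = world_point[1] - cam_pos[1]
    z = world_point[2] - cam_pos[2]
    c, s = math.cos(cam_yaw), math.sin(cam_yaw)
    xc = c * x - s * z          # camera-frame coordinates
    zc = s * x + c * z
    return focal * xc / zc, focal * y / zc, focal / zc

# Same decoration anchor, two camera poses: position and scale both change.
before = project((0.0, -1.0, 4.0), (0.0, 0.0, 0.0), 0.0)
after = project((0.0, -1.0, 4.0), (1.0, 0.0, 2.0), 0.0)
```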
According to a specific implementation manner of the embodiment of the present disclosure, storing the complete map data may require a large amount of memory; this requirement can be reduced by storing the data of only a partial region. As shown in fig. 3, the step of controlling the display parameters of the decoration object to change with the viewing angle if the viewing angle of the camera changes may include:
S301, acquiring position data of a dynamic region within the preset range, wherein the dynamic region is at least a partial region corresponding to the current viewing angle of the camera;
S302, updating the position data of the dynamic region according to the viewing angle of the camera;
S303, adjusting the display parameters of the decoration object according to the position data of the target point where the decoration object is located and the position data of the dynamic region.
Further, the map data is kept up to date by storing the position data of the dynamic region in a ring array. According to a specific implementation manner of the embodiment of the present disclosure, as shown in fig. 4, after the step of acquiring the position data of the dynamic region within the preset range, the method further includes:
S401, establishing an array;
S402, storing the position data of the dynamic region in the array;
the step of updating the position data of the dynamic region according to the viewing angle of the camera comprises:
S403, determining a viewing-angle change parameter of the camera;
S404, adjusting the dynamic region according to the viewing-angle change parameter, and adjusting the position data of the dynamic region in the array.
In addition, the step of acquiring the position data of the dynamic area within the preset range may further include:
determining a target reference point of the current view angle of the camera;
taking an area in a preset range around the target reference point as a dynamic area of the preset range;
and acquiring the position data of the dynamic area.
In another specific implementation manner of the embodiment of the present disclosure, the step of determining the target reference point of the current view angle of the camera includes:
and taking the center point of the foreground object in the current frame of the camera as the target reference point.
According to a specific implementation manner of the embodiment of the present disclosure, the step of using the area within the preset range around the target reference point as the dynamic area of the preset range includes:
and taking the area within 2 meters around the target reference point as a dynamic area of the preset range.
According to a specific implementation manner of the embodiment of the present disclosure, the step of adjusting the dynamic area according to the viewing angle variation parameter, and adjusting the position data of the dynamic area in the array includes:
if the viewing-angle change parameter indicates a leftward movement by a preset distance, moving the dynamic region leftward by that distance, shifting the position data in the array rightward by the corresponding amount, and adding the position data of the newly covered region to the left end of the array.
For example, the position of a person in the map is determined, and only the map data within a 2 m by 2 m area around that position is stored. If the person moves to the left, the position data in the array is shifted to the right, and the map data of the newly covered area on the left is written to the newly freed left end of the array, and vice versa, so that the array forms a ring surrounding the person. In this way, the map data is kept current while excessive memory consumption is avoided.
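The ring-array behavior described above can be sketched with a fixed-length `deque`; the window width, the cell naming, and the one-dimensional simplification are illustrative assumptions.

```python
from collections import deque

# Minimal sketch of the ring array: keep only the map cells in a fixed-width
# window around the person. When the window slides left, the stored data is
# shifted right (the deque's maxlen evicts the rightmost cell) and the newly
# covered left column is pushed onto the front.

WINDOW = 5  # number of map columns kept around the person (assumed value)

def make_window(load_cell, center):
    """Load the initial window of cells centred on `center`."""
    half = WINDOW // 2
    return deque((load_cell(c) for c in range(center - half, center + half + 1)),
                 maxlen=WINDOW)

def slide_left(window, load_cell, new_left_col):
    """The person moved one column left: push the newly visible left column;
    maxlen automatically drops the cell that fell off the right edge."""
    window.appendleft(load_cell(new_left_col))

load = lambda col: f"cell{col}"          # stand-in for reading real map data
w = make_window(load, center=10)         # holds columns 8..12
slide_left(w, load, new_left_col=7)      # now holds columns 7..11
```

The `maxlen` eviction is what makes the buffer behave as a ring: no explicit index arithmetic is needed when the window slides.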
According to a specific implementation manner of the embodiment of the present disclosure, after the step of dividing the ground area and the non-ground area of the three-dimensional scene corresponding to the preset area according to the position information of the plurality of target feature points, the method further includes:
calculating display parameters of the preset area, wherein the display parameters include a moving average, a visible count, and a confidence;
and optimizing the display effect of the three-dimensional scene according to the display parameters.
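As a hedged illustration of these display parameters, the sketch below tracks a moving average, a visible count, and a count-based confidence for one region. The patent does not give the formulas, so the exponential smoothing and the saturation constant are assumptions.

```python
# Illustrative per-region display statistics: a moving average of an observed
# quantity (e.g. the region's estimated height), a count of frames in which
# the region was visible, and a confidence derived from that count.

class RegionStats:
    def __init__(self, alpha=0.2):
        self.alpha = alpha        # smoothing factor (assumed value)
        self.moving_avg = None    # exponentially smoothed observation
        self.visible_count = 0    # frames in which the region was seen

    def observe(self, value):
        self.visible_count += 1
        if self.moving_avg is None:
            self.moving_avg = value
        else:
            # exponential moving average update
            self.moving_avg = (1 - self.alpha) * self.moving_avg + self.alpha * value

    @property
    def confidence(self):
        # grows with the number of sightings and saturates towards 1
        return self.visible_count / (self.visible_count + 5)

r = RegionStats()
for v in [0.0, 0.1, 0.1, 0.1, 0.1]:
    r.observe(v)
```

A renderer could then, for instance, fade in decorations only for regions whose confidence exceeds some threshold.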
According to a specific implementation manner of the embodiment of the present disclosure, after the step of dividing the ground area and the non-ground area of the three-dimensional scene corresponding to the preset area according to the position information of the plurality of target feature points, the method further includes:
determining two sets of target feature points of the ground area;
and optimizing the shake error of the camera according to the position data of the two groups of target feature points in the ground area.
In this embodiment, camera shake is mitigated by using the position data of two groups of feature points in the camera view obtained by SLAM. The two selected groups of points lie on the same plane, preferably the ground, so the coplanarity of the two groups is guaranteed; errors caused by camera shake can then be corrected, making the SLAM result more accurate.
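A simplified sketch of this idea, assuming the correction amounts to subtracting the mean height reported by the two coplanar ground groups from the camera's vertical coordinate; the patent does not give the actual formula, so this reconstruction is illustrative only.

```python
# Hedged sketch of shake correction: two groups of SLAM feature points known
# to lie on the ground should report height 0. Any deviation of their mean
# height is attributed to camera jitter and subtracted from the camera pose.

def jitter_offset(group_a, group_b):
    """Each group is a list of (x, y, z) ground points, with y as height.
    Returns the mean reported ground height (ideally 0)."""
    heights = [p[1] for p in group_a] + [p[1] for p in group_b]
    return sum(heights) / len(heights)

def stabilize(cam_y, group_a, group_b):
    """Correct the camera's vertical coordinate by the estimated jitter."""
    return cam_y - jitter_offset(group_a, group_b)

a = [(0, 0.04, 1), (1, 0.02, 2)]     # first ground group
b = [(2, -0.02, 1), (3, -0.04, 3)]   # second ground group
corrected = stabilize(1.50, a, b)
```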
In summary, the video processing method provided by the embodiments of the present disclosure comprises: acquiring video information captured by a camera within a corresponding preset range; distinguishing the ground area from the non-ground area within the preset range according to the video information; adding a decoration object to the ground area within the preset range; and, if the viewing angle of the camera changes, controlling the display parameters of the decoration object to change with the viewing angle, wherein the display parameters include at least position data and shape parameters. This scheme increases the diversity of real-time video display effects and improves the overall display and processing quality of the video.
Corresponding to the above method embodiment, referring to fig. 5, the embodiment of the present disclosure further provides a video processing apparatus 50, including:
the acquisition module 501 is configured to acquire video information in a corresponding preset range acquired by the camera;
a distinguishing module 502, configured to distinguish a ground area and a non-ground area in the preset range according to the video information;
a decoration module 503, configured to add a decoration object to the ground area within the preset range;
and the display module 504 is configured to control a display parameter of the decoration object to change with the viewing angle if the viewing angle of the camera changes, where the display parameter at least includes position data and a shape parameter.
The apparatus shown in fig. 5 may correspondingly execute the content in the foregoing method embodiment, and the portions not described in detail in this embodiment refer to the content described in the foregoing method embodiment, which are not described herein again.
Referring to fig. 6, an embodiment of the present disclosure also provides an electronic device 60, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the video processing method of the foregoing method embodiments.
The disclosed embodiments also provide a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the video processing method in the foregoing method embodiments.
The disclosed embodiments also provide a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the video processing method of the foregoing method embodiments.
Referring now to fig. 6, a schematic diagram of an electronic device 60 suitable for use in implementing embodiments of the present disclosure is shown. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 6 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 6, the electronic device 60 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 601, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic device 60 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
In general, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touchpad, keyboard, mouse, image sensor, microphone, accelerometer, gyroscope, etc.; an output device 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, magnetic tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 60 to communicate with other devices wirelessly or by wire to exchange data. While an electronic device 60 having various means is shown, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via communication means 609, or from storage means 608, or from ROM 602. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 601.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium, a computer readable storage medium, or any combination of the two. A computer readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable signal medium, by contrast, may include a data signal propagated in baseband or as part of a carrier wave, with computer readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to implement the solutions provided by the method embodiments described above.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software or by means of hardware. The name of a unit does not in any way constitute a limitation of the unit itself; for example, the first acquisition unit may also be described as a "unit that acquires at least two internet protocol addresses".
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof.
The foregoing descriptions are merely specific embodiments of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present disclosure shall be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (11)

1. A video processing method, comprising:
acquiring video information of a corresponding preset range collected by a camera;
distinguishing a ground area and a non-ground area of the preset range according to the video information;
adding a decoration object in the ground area of the preset range;
acquiring position data of a dynamic region within the preset range, and storing the position data of the dynamic region in an array, wherein the array is an annular array, and the dynamic region is at least a partial region corresponding to the current viewing angle of the camera;
if the viewing angle of the camera changes, determining a viewing-angle change parameter of the camera, adjusting the dynamic region according to the viewing-angle change parameter, and adjusting the position data of the dynamic region within the array;
and adjusting the display parameters of the decoration object according to the position data of the target point where the decoration object is located and the position data of the dynamic region, so that the display parameters of the decoration object change with the viewing angle, wherein the display parameters at least comprise position data and shape parameters.
2. The method of claim 1, wherein the step of distinguishing the ground area and the non-ground area of the preset range according to the video information comprises:
projecting the video information into a three-dimensional scene corresponding to the preset range by using simultaneous localization and mapping (SLAM) technology;
determining position information of a plurality of target feature points in the three-dimensional scene;
dividing a ground area and a non-ground area of the three-dimensional scene corresponding to the preset range according to the position information of the target feature points.
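The final step of claim 2, dividing the scene into ground and non-ground areas from the positions of SLAM feature points, can be illustrated with a minimal sketch. The patent does not specify the division criterion; the height-based heuristic below, the y-up coordinate convention, and the function name `split_ground` are all assumptions for illustration only.

```python
import numpy as np

def split_ground(points, height_tol=0.05):
    """Illustrative split of SLAM feature points into ground / non-ground.

    Assumes a y-up coordinate frame: the ground height is estimated as the
    median y of the lowest 20% of points, and points within height_tol of
    that height are classified as ground. All thresholds are assumptions.
    """
    pts = np.asarray(points, dtype=float)
    low = pts[:, 1] <= np.percentile(pts[:, 1], 20)   # lowest band of points
    ground_y = np.median(pts[:, 1][low])              # estimated ground height
    mask = np.abs(pts[:, 1] - ground_y) < height_tol
    return pts[mask], pts[~mask]
```

A production system would more plausibly fit a plane to the feature points (for example with RANSAC) rather than thresholding a single coordinate, but the claim only requires some division based on the target feature points' positions.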
3. The method according to claim 1, wherein the step of acquiring the position data of the dynamic region within the preset range includes:
determining a target reference point of the current viewing angle of the camera;
taking the area within a preset range around the target reference point as the dynamic region of the preset range;
and acquiring the position data of the dynamic region.
4. A method according to claim 3, wherein the step of determining a target reference point for the current view of the camera comprises:
taking the center point of the foreground object in the current frame captured by the camera as the target reference point.
5. The method of claim 4, wherein the step of taking the area within a preset range around the target reference point as the dynamic region of the preset range comprises:
taking the area within 2 meters around the target reference point as the dynamic region of the preset range.
6. The method of claim 5, wherein the step of adjusting the dynamic region according to the viewing-angle change parameter and adjusting the position data of the dynamic region within the array comprises:
if the viewing-angle change parameter indicates a preset distance moved leftwards, moving the dynamic region leftwards by the preset distance, shifting the position data in the array rightwards by an amount corresponding to the preset distance, and adding the position data of the newly visible portion of the dynamic region at the left end of the array.
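The annular-array update of claim 6 behaves like a fixed-capacity ring buffer: a leftward pan shifts stored entries toward the right and writes the newly visible region's position data at the left end. A minimal sketch, assuming Python's `collections.deque` as the ring and treating each entry as one position sample (both assumptions, not taken from the patent):

```python
from collections import deque

def pan_left(ring, new_left_entries):
    """Illustrative claim-6 update: shift stored position data right and
    insert the newly visible region's entries at the left end. The deque's
    fixed maxlen evicts overflow from the right automatically."""
    for entry in reversed(new_left_entries):
        ring.appendleft(entry)
    return ring

# A ring of capacity 4 holding position samples for the dynamic region.
ring = deque([10, 20, 30, 40], maxlen=4)
pan_left(ring, [0, 5])   # camera pans left; samples 0 and 5 become visible
```

The fixed capacity evicts entries from the right edge as new data enters at the left, which is one plausible realization of the claimed shift: data for areas panned out of view is simply dropped.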
7. The method according to claim 2, wherein after the step of dividing the ground area and the non-ground area of the three-dimensional scene corresponding to the preset range according to the position information of the plurality of target feature points, the method further comprises:
calculating display parameters of the preset range, wherein the display parameters comprise a sliding average, a visible count, and a confidence;
and optimizing the display effect of the three-dimensional scene according to the display parameters.
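Claim 7's display parameters (sliding average, visible count, confidence) might be maintained per feature point as sketched below. The exponential-moving-average form and the count-based confidence formula are assumptions; the patent names these quantities without giving their formulas.

```python
class FeaturePointStats:
    """Illustrative per-feature-point statistics for claim 7.

    Keeps a sliding (exponential moving) average of the observed position,
    a visible count, and a confidence derived from that count. The alpha
    and saturation values are assumed, not from the patent.
    """

    def __init__(self, alpha=0.2, saturation=10):
        self.alpha = alpha
        self.saturation = saturation
        self.avg = None          # sliding average of the observed position
        self.visible_count = 0   # how many frames the point was seen in

    def observe(self, position):
        self.visible_count += 1
        if self.avg is None:
            self.avg = position
        else:
            self.avg = self.alpha * position + (1 - self.alpha) * self.avg

    @property
    def confidence(self):
        # Confidence grows with how often the point has been observed.
        return min(1.0, self.visible_count / self.saturation)
```

Points with low confidence could then be excluded when rendering, which is one plausible way such parameters "optimize the display effect" of the three-dimensional scene.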
8. The method according to claim 2, wherein after the step of dividing the ground area and the non-ground area of the three-dimensional scene corresponding to the preset range according to the position information of the plurality of target feature points, the method further comprises:
determining two groups of target feature points in the ground area;
and correcting the shake error of the camera according to the position data of the two groups of target feature points in the ground area.
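Claim 8's shake correction rests on the observation that ground feature points are static, so any apparent frame-to-frame displacement of them is camera jitter. One assumed reading, averaging the displacements of the two groups to damp per-point noise, can be sketched as follows; the function name and the averaging scheme are illustrative, not taken from the patent.

```python
import numpy as np

def estimate_shake_offset(group_a_prev, group_a_cur, group_b_prev, group_b_cur):
    """Illustrative claim-8 estimate: since ground feature points do not move,
    their mean apparent displacement between two frames approximates the
    camera jitter. Averaging two independent groups damps per-point noise."""
    da = np.mean(np.asarray(group_a_cur) - np.asarray(group_a_prev), axis=0)
    db = np.mean(np.asarray(group_b_cur) - np.asarray(group_b_prev), axis=0)
    return (da + db) / 2.0  # offset to subtract from the estimated camera pose
```

Subtracting the returned offset from the camera pose (or from rendered object positions) would steady the decoration objects against small hand-shake, under the stated assumption that both groups lie on the static ground.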
9. A video processing apparatus, comprising:
the acquisition module is used for acquiring video information of a corresponding preset range collected by the camera;
the distinguishing module is used for distinguishing the ground area and the non-ground area of the preset range according to the video information;
the decoration module is used for adding decoration objects in the ground area of the preset range;
acquiring position data of a dynamic region within the preset range, and storing the position data of the dynamic region in an array, wherein the array is an annular array, and the dynamic region is at least a partial region corresponding to the current viewing angle of the camera;
the display module is used for determining a viewing-angle change parameter of the camera if the viewing angle of the camera changes, adjusting the dynamic region according to the viewing-angle change parameter, and adjusting the position data of the dynamic region within the array;
and adjusting the display parameters of the decoration object according to the position data of the target point where the decoration object is located and the position data of the dynamic region, so that the display parameters of the decoration object change with the viewing angle, wherein the display parameters at least comprise position data and shape parameters.
10. An electronic device, the electronic device comprising:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the video processing method of any one of the preceding claims 1-8.
11. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the video processing method of any one of the preceding claims 1-8.
CN202010168846.1A 2020-03-12 2020-03-12 Video processing method and device and electronic equipment Active CN111311665B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010168846.1A CN111311665B (en) 2020-03-12 2020-03-12 Video processing method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN111311665A CN111311665A (en) 2020-06-19
CN111311665B true CN111311665B (en) 2023-05-16

Family

ID=71145496

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010168846.1A Active CN111311665B (en) 2020-03-12 2020-03-12 Video processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111311665B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110062176A (en) * 2019-04-12 2019-07-26 北京字节跳动网络技术有限公司 Generate method, apparatus, electronic equipment and the computer readable storage medium of video
CN110288716A (en) * 2019-06-14 2019-09-27 北京达佳互联信息技术有限公司 Image processing method, device, electronic equipment and storage medium
CN110673735A (en) * 2019-09-30 2020-01-10 长沙自由视像信息科技有限公司 Holographic virtual human AR interaction display method, device and equipment

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10388069B2 (en) * 2015-09-09 2019-08-20 Futurewei Technologies, Inc. Methods and systems for light field augmented reality/virtual reality on mobile devices
CN105933343B (en) * 2016-06-29 2019-01-08 深圳市优象计算技术有限公司 A kind of code stream caching method for 720 degree of panoramic video netcasts
CN107665505B (en) * 2016-07-29 2021-04-06 成都理想境界科技有限公司 Method and device for realizing augmented reality based on plane detection
WO2018019272A1 (en) * 2016-07-29 2018-02-01 成都理想境界科技有限公司 Method and apparatus for realizing augmented reality on the basis of plane detection
CN108122234B (en) * 2016-11-29 2021-05-04 北京市商汤科技开发有限公司 Convolutional neural network training and video processing method and device and electronic equipment
CN109840949A (en) * 2017-11-29 2019-06-04 深圳市掌网科技股份有限公司 Augmented reality image processing method and device based on optical alignment
WO2019143722A1 (en) * 2018-01-18 2019-07-25 GumGum, Inc. Augmenting detected regions in image or video data
CN110288691B (en) * 2019-06-06 2023-04-07 北京字节跳动网络技术有限公司 Method, apparatus, electronic device and computer-readable storage medium for rendering image




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant