
CN106851046A - Video dynamic super-resolution processing method and system - Google Patents

Video dynamic super-resolution processing method and system

Info

Publication number
CN106851046A
Authority
CN
China
Prior art keywords
resolution
processing
pixel
super
motion vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611231237.6A
Other languages
Chinese (zh)
Inventor
韩睿
汤晓莉
郭若杉
李晨
刘壮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN201611231237.6A priority Critical patent/CN106851046A/en
Publication of CN106851046A publication Critical patent/CN106851046A/en
Pending legal-status Critical Current


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/14: Picture signal circuitry for video frequency region
    • H04N5/144: Movement detection
    • H04N5/145: Movement estimation
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/14: Picture signal circuitry for video frequency region
    • H04N5/21: Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/01: Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0117: Conversion of standards processed at pixel level, involving conversion of the spatial resolution of the incoming video signal

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Television Systems (AREA)

Abstract

The present invention relates to a video dynamic super-resolution processing method and system. The processing method includes: obtaining the motion vector of any pixel in the current frame to be processed according to motion estimation, and calculating the reliability of that motion vector; extracting, according to the motion vector of the pixel, the pixel's corresponding pixel in the previous high-resolution frame, and obtaining a first processing result by Kalman filtering based on the corresponding pixel; performing interpolation on the frame to be processed to obtain a second processing result; and performing a reliability-weighted fusion of the first and second processing results to obtain a super-resolution frame. The invention better preserves edge information and reduces jagged image edges, and effectively recovers more image detail without adding excessive extra time overhead. It is also robust, avoiding the disorder or motion blur in the super-resolution result that inaccurate motion estimation would otherwise cause.

Description

Video dynamic super-resolution processing method and system

Technical Field

The invention belongs to the field of video processing, in particular video super-resolution (SR) technology, and relates to a video dynamic super-resolution processing method and system.

Background Art

With the continuous advance of the TV panel industry, the resolution of display devices has increased rapidly, but the resolution of video content has not kept pace. For example, ultra-high-definition TVs are now increasingly common on the market, yet, owing to the limitations of image acquisition equipment, a large proportion of video sources are still standard-definition or high-definition, which greatly degrades the displayed visual quality. It is therefore important to study super-resolution (SR) algorithms that reconstruct high-resolution images from low-resolution image information.

According to the form of information used, current super-resolution methods can be roughly divided into two categories: static super-resolution (SSR) methods and dynamic super-resolution (DSR) methods. Static super-resolution methods aim to reconstruct a single high-resolution image from several low-quality, low-resolution images; when applied directly to video they tend to cause temporal discontinuities in the enlarged video, which limits their use in video super-resolution. Dynamic super-resolution methods were proposed mainly to address the problem of video resolution reconstruction.

Static super-resolution methods are currently the most studied and are usually divided into the following categories: interpolation-based methods, sample-learning-based methods, and reconstruction-based methods. Interpolation-based methods actually belong to image scaling, although much of the literature classifies them as super-resolution techniques. Interpolation mainly takes the pixel values of the neighboring original pixels, assigns them different weights, and uses the weighted sum as the value of the pixel to be interpolated. Common interpolation methods such as nearest-neighbor, bilinear, and bicubic interpolation are widely used, but they only change the size of the original low-resolution image without providing more information or higher frequencies, so the overall resolution of the image is still not improved. In sample-based methods, the choice of the sample library strongly affects the result, and a separate library is required for each magnification factor, which makes the complexity too high.
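As a concrete illustration of the weighted-sum idea behind these interpolation methods, the sketch below upscales a grayscale image by bilinear interpolation in plain NumPy (an illustrative example only; the function and variable names are not taken from the patent):

```python
import numpy as np

def bilinear_upscale(img: np.ndarray, scale: int) -> np.ndarray:
    """Upscale a 2-D grayscale image by an integer factor using bilinear interpolation."""
    h, w = img.shape
    out = np.empty((h * scale, w * scale), dtype=np.float64)
    for i in range(h * scale):
        for j in range(w * scale):
            y, x = i / scale, j / scale            # position in source coordinates
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            wy, wx = y - y0, x - x0
            # Weighted sum of the four neighbouring original pixels.
            out[i, j] = ((1 - wy) * (1 - wx) * img[y0, x0] +
                         (1 - wy) * wx * img[y0, x1] +
                         wy * (1 - wx) * img[y1, x0] +
                         wy * wx * img[y1, x1])
    return out

upscaled = bilinear_upscale(np.random.rand(32, 32), 2)   # 32x32 -> 64x64
```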

Dynamic super-resolution techniques are used to solve the video super-resolution problem. In fact, static super-resolution methods can be extended to video, but they often suffer from high complexity, high cost, and inter-frame discontinuities. For example, online texture synthesis introduced a super-resolution technique that requires no sample library (Database-Free Texture Synthesis, DFTS); its core idea is to improve the perceived resolution of the human eye rather than to recover the true signal. It uses neighboring pixel information at the same position in adjacent frames to synthesize high-frequency information and generate a high-resolution image with sharp edges and rich detail, but it is only suitable for video images containing many similar structures, and the information available for synthesis is limited. There are also dynamic super-resolution methods based on Kalman filtering, which obtain corresponding pixel information from adjacent frames through motion compensation and yield more reliable super-resolution images; however, such methods are constrained by the accuracy of motion estimation, and their robustness is not very good. In short, traditional dynamic super-resolution algorithms generally struggle to balance complexity against reconstruction quality, and algorithms that exploit inter-frame motion information usually require motion vectors with high sub-pixel precision; for videos whose motion the estimation algorithm cannot estimate accurately, or whose motion is complex, the results are often disordered.

Summary of the Invention

In order to solve the above problems in the prior art, namely how to improve image edges while ensuring the robustness of the super-resolution image, the present invention provides a video dynamic super-resolution processing method, comprising:

obtaining the motion vector of any pixel in the current frame to be processed according to motion estimation, and calculating the reliability of the motion vector;

extracting the pixel corresponding to said pixel in the previous high-resolution frame according to the motion vector of the pixel, and obtaining a first processing result by Kalman filtering based on the corresponding pixel;

performing interpolation on the frame to be processed to obtain a second processing result;

performing a weighted fusion of the first processing result and the second processing result using the reliability, a super-resolution frame being obtained after fusion.
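Taken together, the four steps amount to the per-frame pipeline sketched below. This is a schematic outline with our own naming: the motion estimation, Kalman filtering and interpolation stages are reduced to trivial stand-ins so that only the control flow and the reliability-weighted fusion (the preferred fusion formula given further below) remain visible.

```python
import numpy as np

def upscale_nearest(lr: np.ndarray, scale: int) -> np.ndarray:
    """Stand-in for the spatial interpolation branch (second processing result)."""
    return np.kron(lr, np.ones((scale, scale)))

def process_frame(lr_frame: np.ndarray, prev_hr: np.ndarray, scale: int) -> np.ndarray:
    # Step 1: motion vectors and their reliability.
    # Stand-in: assume zero motion everywhere and full reliability.
    reliability = np.ones(lr_frame.shape)

    # Step 2: motion-compensate the previous high-resolution frame and Kalman-filter it
    # (first processing result). With zero motion the compensated frame is prev_hr itself.
    z_k = prev_hr

    # Step 3: spatial interpolation of the current low-resolution frame (second processing result).
    z_s = upscale_nearest(lr_frame, scale)

    # Step 4: reliability-weighted fusion of the two branches.
    r = upscale_nearest(reliability, scale)
    return (r * z_k + 0.2 * z_s) / (r + 0.2)

lr = np.random.rand(8, 8)
prev_hr = upscale_nearest(lr, 2)
hr = process_frame(lr, prev_hr, scale=2)
```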

Preferably, obtaining the motion vector of any pixel in the current frame to be processed according to motion estimation includes:

performing motion estimation using the previous high-resolution frame and the current frame to be processed to obtain the motion vector of any pixel in the current frame to be processed.

Preferably, after the super-resolution frame is obtained, the method further includes:

deblurring the super-resolution frame to obtain a deblurred high-resolution image.

Preferably, the fusion formula used for the reliability-weighted fusion of the first processing result and the second processing result is:

$$[\hat{\bar{Z}}(t)]_q = \frac{[\bar{r}(t)]_q\,[\hat{\bar{Z}}_K(t)]_q + 0.2\,[\hat{\bar{Z}}_S(t)]_q}{[\bar{r}(t)]_q + 0.2}$$

where $[\hat{\bar{Z}}(t)]_q$ is the fusion result, $[\bar{r}(t)]_q$ is the reliability of the motion vector, $[\hat{\bar{Z}}_K(t)]_q$ is the first processing result, and $[\hat{\bar{Z}}_S(t)]_q$ is the second processing result.

The present invention further provides a video dynamic super-resolution processing system, the processing system comprising:

a motion estimation unit, configured to obtain the motion vector of any pixel in the current frame to be processed according to motion estimation and to calculate the reliability of the motion vector;

a first processing unit, configured to extract the pixel corresponding to said pixel in the previous high-resolution frame according to the motion vector of the pixel, to obtain a first processing result by Kalman filtering based on the corresponding pixel, and to perform interpolation on the frame to be processed to obtain a second processing result;

a weighted fusion unit, configured to perform a weighted fusion of the first processing result and the second processing result using the reliability, a super-resolution frame being obtained after fusion.

Preferably,

the motion estimation unit is specifically configured to perform motion estimation using the previous high-resolution frame and the current frame to be processed, so as to obtain the motion vector of any pixel in the current frame to be processed.

Preferably, the system further includes:

a second processing unit, configured to perform deblurring on the super-resolution frame to obtain a deblurred high-resolution image.

Preferably, the fusion formula used by the weighted fusion unit is:

$$[\hat{\bar{Z}}(t)]_q = \frac{[\bar{r}(t)]_q\,[\hat{\bar{Z}}_K(t)]_q + 0.2\,[\hat{\bar{Z}}_S(t)]_q}{[\bar{r}(t)]_q + 0.2}$$

where $[\hat{\bar{Z}}(t)]_q$ is the fusion result, $[\bar{r}(t)]_q$ is the reliability of the motion vector, $[\hat{\bar{Z}}_K(t)]_q$ is the first processing result, and $[\hat{\bar{Z}}_S(t)]_q$ is the second processing result.

Compared with the prior art, the present invention has at least the following advantages:

The design of the present invention better preserves edge information and reduces problems such as jagged image edges, and effectively recovers more image detail without adding excessive extra time overhead. At the same time it is robust, avoiding the disorder or motion blur in the super-resolution result that inaccurate motion estimation would otherwise cause.

Brief Description of the Drawings

Fig. 1 is a schematic flow chart of the video dynamic super-resolution processing method provided by the present invention;

Fig. 2 is a schematic diagram of the three-dimensional recursive search used by the motion estimation algorithm of the present invention;

Fig. 3 shows the motion vector reliability mapping curves used in the present invention;

Fig. 4 is a system architecture diagram of the video dynamic super-resolution processing provided by the present invention.

Detailed Description of the Embodiments

Preferred embodiments of the present invention are described below with reference to the accompanying drawings. Those skilled in the art should understand that these embodiments are only used to explain the technical principles of the present invention and are not intended to limit its scope of protection.

The present invention proposes a video dynamic super-resolution processing method. Specific embodiments of the invention are described in detail below in conjunction with the accompanying drawings.

As shown in Fig. 1, the method specifically includes the following steps:

Step 101: obtain the motion vector of any pixel in the current frame to be processed according to motion estimation, and calculate the reliability of the motion vector.

Here, obtaining the motion vector of any pixel in the current frame to be processed according to motion estimation includes:

performing motion estimation using the previous high-resolution frame and the current frame to be processed to obtain the motion vector of any pixel in the current frame to be processed.

Any known motion estimation method, such as full search or three-dimensional recursive search, can be used in this step. This embodiment adopts the three-dimensional recursive search algorithm shown in Fig. 2.

Let CS denote the motion vector candidate set, with separate candidate sets for the spatial motion estimators Sa and Sb in Fig. 2. The current block is identified by its coordinates in the two-dimensional image coordinate system, t is time, and Cx and Cy denote the horizontal and vertical components of a candidate motion vector. Block motion vectors that have already been estimated serve as spatial candidates, T denotes the time interval between two adjacent frames, and the zero motion vector is also included as a candidate. CSmax denotes the range of candidate motion vectors, and thus the search range of the three-dimensional recursive search algorithm, with horizontal components in [-N, +N] and vertical components in [-M, +M]. X·Y denotes the size of the matching block, typically 8×8 or 16×16. A random update motion vector is also included: its update value is a vector chosen at random from a lookup table, with separate horizontal and vertical update components. The above candidate sets are all restricted to the range CSmax.
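To make the candidate-based search concrete, the sketch below evaluates a small candidate set per block (the zero vector, the vectors already found for the left and upper neighbours, and one random update) with a sum-of-absolute-differences cost. This is a deliberately simplified stand-in for the three-dimensional recursive search, not the patent's exact algorithm; block size, search range and the random-update rule are illustrative choices:

```python
import numpy as np

def block_sad(cur, ref, by, bx, mv, bs):
    """Sum of absolute differences between a block in the current frame and its
    displaced counterpart in the reference frame."""
    h, w = ref.shape
    y = int(np.clip(by + mv[0], 0, h - bs))
    x = int(np.clip(bx + mv[1], 0, w - bs))
    return np.abs(cur[by:by + bs, bx:bx + bs] - ref[y:y + bs, x:x + bs]).sum()

def estimate_motion(cur, ref, bs=8, search=8):
    """Return a coarse block-level motion vector field from ref to cur."""
    rng = np.random.default_rng(0)
    h, w = cur.shape
    mv_field = np.zeros((h // bs, w // bs, 2), dtype=int)
    for bi in range(h // bs):
        for bj in range(w // bs):
            candidates = [(0, 0)]                       # zero motion vector
            if bj > 0:                                  # spatial predictor: left block
                candidates.append(tuple(mv_field[bi, bj - 1]))
            if bi > 0:                                  # spatial predictor: upper block
                candidates.append(tuple(mv_field[bi - 1, bj]))
            # Random update candidate drawn from the search range [-search, +search].
            candidates.append(tuple(rng.integers(-search, search + 1, size=2)))
            by, bx = bi * bs, bj * bs
            costs = [block_sad(cur, ref, by, bx, mv, bs) for mv in candidates]
            mv_field[bi, bj] = candidates[int(np.argmin(costs))]
    return mv_field
```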

Calculating the reliability of the motion vector specifically includes:

1) Calculating the motion vector variance

Let $MVx_{i,j}$ and $MVy_{i,j}$ denote the horizontal and vertical components of the motion vector of the input image at point $(i,j)$. The variance $V_{i,j}$ of the motion vectors within point $(i,j)$ and its neighborhood is computed from the means $mx_{i,j}$, $my_{i,j}$ and the variances $Vx_{i,j}$, $Vy_{i,j}$ of the horizontal and vertical components over a local window of height $H$ and width $W$ (both set to 3):

$$mx_{i,j}=\frac{1}{H\,W}\sum_{m}\sum_{n} MVx_{i+m,j+n},\qquad my_{i,j}=\frac{1}{H\,W}\sum_{m}\sum_{n} MVy_{i+m,j+n}$$

$$Vx_{i,j}=\frac{1}{H\,W}\sum_{m}\sum_{n}\bigl(MVx_{i+m,j+n}-mx_{i,j}\bigr)^{2},\qquad Vy_{i,j}=\frac{1}{H\,W}\sum_{m}\sum_{n}\bigl(MVy_{i+m,j+n}-my_{i,j}\bigr)^{2}$$

where $m$ and $n$ range over the local window.

The variance $V_{i,j}$ of the motion vectors at point $(i,j)$ and within its neighborhood is then formed from the horizontal and vertical variances $Vx_{i,j}$ and $Vy_{i,j}$.
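The windowed statistics described above can be computed as in the following sketch (our own straightforward implementation, assuming a 3×3 window and that the horizontal and vertical variances are simply summed into $V_{i,j}$):

```python
import numpy as np

def motion_vector_variance(mvx: np.ndarray, mvy: np.ndarray, win: int = 3) -> np.ndarray:
    """Per-pixel variance of the motion-vector field over a win x win neighbourhood."""
    pad = win // 2
    h, w = mvx.shape
    px = np.pad(mvx, pad, mode='edge')
    py = np.pad(mvy, pad, mode='edge')
    v = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            wx = px[i:i + win, j:j + win]
            wy = py[i:i + win, j:j + win]
            # Horizontal variance plus vertical variance of the local window.
            v[i, j] = wx.var() + wy.var()
    return v
```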

2) Motion vector reliability curve mapping

The variance of the motion vector is mapped to a motion vector reliability according to Fig. 3. The thresholds and the curve shape can be chosen freely, provided that a larger motion vector variance yields a lower reliability. Fig. 3 shows two forms of mapping curve; to reduce risk, the soft-threshold-controlled curve in the right-hand figure is chosen here, but the curve is designed to be somewhat "steep". The reason is that if the motion vector is not very accurate, fusing the information obtained from the reference frame with that of the current frame will inevitably cause motion blur; if the curve is set too flat, even fusing with the spatial-domain content brings little improvement, in which case the image result should degenerate to the spatial interpolation result. Moreover, for relatively complex moving content the human eye is far less sensitive to sharpness than for small-motion or static content. The curve thresholds are set to th1 = 4 and th2 = 6.
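One simple way to realise such a soft-threshold mapping is a piecewise-linear ramp that is 1 below th1 and 0 above th2 (a sketch: the description above fixes th1 = 4 and th2 = 6 and requires a monotonically decreasing, reasonably steep curve, but not this exact linear form):

```python
import numpy as np

def reliability_from_variance(v: np.ndarray, th1: float = 4.0, th2: float = 6.0) -> np.ndarray:
    """Map motion-vector variance to a reliability in [0, 1]; larger variance -> lower reliability."""
    r = (th2 - v) / (th2 - th1)        # 1 for v <= th1, falling linearly to 0 at v >= th2
    return np.clip(r, 0.0, 1.0)

print(reliability_from_variance(np.array([2.0, 5.0, 8.0])))   # -> [1.  0.5 0. ]
```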

Step 102: extract the pixel corresponding to said pixel in the previous high-resolution frame according to the motion vector of the pixel, and obtain a first processing result by Kalman filtering based on the corresponding pixel; perform interpolation on the frame to be processed to obtain a second processing result.

When Kalman filtering is applied, the final Kalman filtering result is obtained from a prediction step followed by an update step. Here $F(t)$ denotes the displacement matrix, the prediction is the high-resolution result of the previous moment carried forward by $F(t)$, $K(t)$ is the Kalman gain, the observation is the input low-resolution image, $D(t)$ denotes downsampling, a predicted covariance and an updated covariance (the latter used when processing the next frame) are maintained, $C_u(t)$ is the covariance matrix of the image error caused by motion estimation, and $C_w(t)$ is the covariance matrix of the random noise introduced by the image acquisition device. In the prediction step the previous high-resolution estimate and its covariance are propagated by $F(t)$, with $C_u(t)$ added to the covariance; in the update step the Kalman gain $K(t)$ is computed from the predicted covariance, $D(t)$ and $C_w(t)$, and the prediction is corrected by the gain-weighted difference between the low-resolution observation and the downsampled prediction, yielding the updated estimate and its updated covariance.
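As a rough illustration, the following sketch performs the predict/update recursion per pixel with scalar (diagonal) covariances. It is a simplified reading of the scheme described above, with the motion-compensated previous high-resolution frame acting as the prediction and an upsampled version of the low-resolution input acting as the observation; the code is not taken from the patent:

```python
import numpy as np

def kalman_update(z_pred: np.ndarray, p_pred: np.ndarray,
                  y_obs: np.ndarray, c_u: float, c_w: float):
    """One scalar Kalman predict/update step per high-resolution pixel.

    z_pred: motion-compensated previous high-resolution estimate (the prediction)
    p_pred: per-pixel estimate variance carried over from the previous frame
    y_obs : per-pixel observation (here: the upsampled low-resolution input)
    c_u   : variance of the error introduced by motion estimation
    c_w   : variance of the random noise from the acquisition device
    """
    p = p_pred + c_u                       # prediction: propagate the variance
    k = p / (p + c_w)                      # Kalman gain
    z_k = z_pred + k * (y_obs - z_pred)    # corrected high-resolution estimate
    p_new = (1.0 - k) * p                  # updated variance, kept for the next frame
    return z_k, p_new
```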

Step 103: perform a weighted fusion of the first processing result and the second processing result using the reliability; the super-resolution frame is obtained after fusion.

When performing the weighted fusion, the fusion formula using the reliability to combine the first processing result and the second processing result is:

$$[\hat{\bar{Z}}(t)]_q = \frac{[\bar{r}(t)]_q\,[\hat{\bar{Z}}_K(t)]_q + 0.2\,[\hat{\bar{Z}}_S(t)]_q}{[\bar{r}(t)]_q + 0.2}$$

where $[\hat{\bar{Z}}(t)]_q$ is the fusion result, $[\bar{r}(t)]_q$ is the reliability of the motion vector, $[\hat{\bar{Z}}_K(t)]_q$ is the first processing result (Kalman filtering), and $[\hat{\bar{Z}}_S(t)]_q$ is the second processing result (spatial interpolation).

The aim of the fusion is, first, to let the first processing result $[\hat{\bar{Z}}_K(t)]_q$ dominate, so that the sharpest super-resolution result is obtained when the motion estimation is accurate; and second, to reduce the contribution of the first processing result as the reliability decreases, so that the super-resolution result degenerates to the second processing result $[\hat{\bar{Z}}_S(t)]_q$.
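A direct per-pixel implementation of this fusion rule (using the formula stated above, with its constant weight of 0.2 on the spatial branch) might look as follows:

```python
import numpy as np

def fuse(z_kalman: np.ndarray, z_spatial: np.ndarray, reliability: np.ndarray) -> np.ndarray:
    """Reliability-weighted fusion of the Kalman-filtered and interpolated results.

    reliability == 0 degenerates to the spatial interpolation result,
    reliability == 1 keeps the Kalman-filtered result dominant.
    """
    r = np.clip(reliability, 0.0, 1.0)
    return (r * z_kalman + 0.2 * z_spatial) / (r + 0.2)
```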

As shown in Fig. 4, the accuracy of the previously generated high-resolution frame directly affects the next generated high-resolution frame. To obtain an ideal high-resolution image, a certain number of low-resolution frames must pass through several rounds of Kalman filtering. This patent therefore uses the edge-guidance information that has already been obtained to speed up convergence and improve accuracy. To avoid adding too much complexity, only the first high-resolution frame is initialized as an edge-guided interpolated high-resolution image, rather than performing edge-guided interpolation for every frame, because the iteration along the time axis passes the edge-guidance information acquired from the current frame on to the next frame, yielding a high-resolution image sequence that converges faster and more accurately without excessive additional computation or complexity.

The accuracy of the motion estimation directly affects the video super-resolution result. The motion estimation method used in this patent has sub-pixel precision; this precision is determined both by the design of the motion estimation algorithm itself and by the accuracy of the information in the reference frame used for estimation. The motion-compensation reference frame chosen here is the high-resolution frame estimated for the previous frame, before deblurring. A macroscopic object distinguishable by the human eye has a certain size in the image, so the motion vector of a point on the object and those of its surrounding points should be fairly close; if the motion vectors differ greatly, the motion vector of that point may be inaccurate. For a block (sub-image) whose motion vector is completely unreliable (reliability 0), the super-resolution result is replaced directly by the spatial-domain result; for a completely reliable motion vector (reliability 1), the Kalman filtering result is used; for reliabilities between 0 and 1, the spatial interpolation result and the Kalman filtering result are fused according to the reliability.

Further, after the super-resolution frame is obtained, the method further includes:

deblurring the super-resolution frame to obtain a deblurred high-resolution image.

The deblurring method adopted here is based on maximum a posteriori (MAP) estimation, but is not limited to this method. The deblurred high-resolution image is obtained by solving:

$$\hat{X}(t)=\arg\min_{X(t)}\Bigl\{\bigl\|H\,X(t)-\hat{\bar{Z}}(t)\bigr\|_{2}^{2}+\lambda\,J\bigl(X(t)\bigr)\Bigr\}$$

where $\hat{X}(t)$ denotes the deblurred high-resolution image, $H$ is the low-pass (blur) filter, and $J(X(t))$ is the regularization function. The regularization term supplies prior information to cope with the ill-posed nature of super-resolution and is used to improve convergence and suppress the influence of noise and motion estimation errors; $\lambda$ is the regularization parameter balancing the fidelity term on the left of the formula against the regularization term on the right, the latter appearing in the cost function as a penalty term. Regularization terms take many forms, and some are designed for particular image-processing requirements (such as preserving edges or removing motion-smear noise).

The minimization problem is solved with the steepest gradient descent method, in the following form:

$$\hat{X}^{n+1}(t)=\hat{X}^{n}(t)-\beta\,\nabla\Bigl\{\bigl\|H\,\hat{X}^{n}(t)-\hat{\bar{Z}}(t)\bigr\|_{2}^{2}+\lambda\,J\bigl(\hat{X}^{n}(t)\bigr)\Bigr\}$$

where $\beta$ is the iteration step size and the gradient is taken with respect to $\hat{X}^{n}(t)$. The iteration terminates when the following condition is met, where $\eta$ is the threshold for terminating the iteration, $n$ is the current iteration count, and $iter_{max}$ is the maximum number of iterations:

$$\bigl\|\hat{X}^{n+1}(t)-\hat{X}^{n}(t)\bigr\|\le\eta \quad\text{or}\quad n\ge iter_{max}$$
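The steepest-descent loop can be sketched as below. This is a toy example: the low-pass operator H is taken to be a 3×3 box filter and the regulariser J a plain Tikhonov term, both illustrative assumptions rather than choices made by the patent:

```python
import numpy as np

def box_blur(x: np.ndarray) -> np.ndarray:
    """3x3 box filter standing in for the low-pass operator H."""
    p = np.pad(x, 1, mode='edge')
    return sum(p[i:i + x.shape[0], j:j + x.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def deblur(z: np.ndarray, lam: float = 0.05, beta: float = 0.5,
           eta: float = 1e-4, iter_max: int = 100) -> np.ndarray:
    """Minimise ||H x - z||^2 + lam * ||x||^2 by steepest gradient descent."""
    x = z.astype(np.float64)
    for _ in range(iter_max):
        residual = box_blur(x) - z
        # Gradient of the fidelity term (the box filter is symmetric, so H^T = H)
        # plus the gradient of the Tikhonov regulariser.
        grad = 2.0 * box_blur(residual) + 2.0 * lam * x
        x_new = x - beta * grad
        # Terminate when the update drops below eta or iter_max is reached.
        if np.abs(x_new - x).max() <= eta:
            return x_new
        x = x_new
    return x
```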

Based on the same idea as the method provided above, the present invention also provides a video dynamic super-resolution processing system, including:

a motion estimation unit, configured to obtain the motion vector of any pixel in the current frame to be processed according to motion estimation and to calculate the reliability of the motion vector; specifically, configured to perform motion estimation using the previous high-resolution frame and the current frame to be processed to obtain the motion vector of any pixel in the current frame to be processed;

a processing unit, configured to extract the pixel corresponding to said pixel in the previous high-resolution frame according to the motion vector of the pixel, to obtain a first processing result by Kalman filtering based on the corresponding pixel, and to perform interpolation on the frame to be processed to obtain a second processing result;

a weighted fusion unit, configured to perform a weighted fusion of the first processing result and the second processing result using the reliability, a super-resolution frame being obtained after fusion.

The fusion formula used by the weighted fusion unit is:

$$[\hat{\bar{Z}}(t)]_q = \frac{[\bar{r}(t)]_q\,[\hat{\bar{Z}}_K(t)]_q + 0.2\,[\hat{\bar{Z}}_S(t)]_q}{[\bar{r}(t)]_q + 0.2}$$

where $[\hat{\bar{Z}}(t)]_q$ is the fusion result, $[\bar{r}(t)]_q$ is the reliability of the motion vector, $[\hat{\bar{Z}}_K(t)]_q$ is the first processing result, and $[\hat{\bar{Z}}_S(t)]_q$ is the second processing result.

The system further includes a second processing unit, configured to perform deblurring on the super-resolution frame to obtain a deblurred high-resolution image.

Those skilled in the art should appreciate that the modules and method steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of electronic hardware and software, the composition and steps of each example have been described above generally in terms of their functions. Whether these functions are performed in electronic hardware or in software depends on the specific application and design constraints of the technical solution. Those skilled in the art may implement the described functions in different ways for each particular application, but such implementations should not be considered to exceed the scope of the present invention.

The technical solution of the present invention has thus been described with reference to the preferred embodiments shown in the accompanying drawings. However, those skilled in the art will readily understand that the protection scope of the present invention is obviously not limited to these specific embodiments. Without departing from the principles of the present invention, those skilled in the art may make equivalent changes or substitutions to the relevant technical features, and the technical solutions resulting from such changes or substitutions will all fall within the protection scope of the present invention.

Claims (8)

1. A video dynamic super-resolution processing method is characterized by comprising the following steps:
obtaining a motion vector of any pixel in a current frame to be processed according to motion estimation, and calculating the reliability of the motion vector;
extracting a pixel corresponding to the pixel in a previous high-resolution frame according to the motion vector of the pixel, and obtaining a first processing result by utilizing Kalman filtering processing according to the corresponding pixel;
carrying out interpolation processing on the frame to be processed to obtain a second processing result;
and performing weighted fusion on the first processing result and the second processing result by using the reliability, and obtaining a super-resolution frame after fusion.
2. The video dynamic super-resolution processing method according to claim 1, wherein said obtaining a motion vector of any pixel in a current frame to be processed according to motion estimation comprises:
and performing motion estimation by using the previous high-resolution frame and the current frame to be processed to obtain a motion vector of any pixel in the current frame to be processed.
3. The video dynamic super-resolution processing method according to claim 1, further comprising, after obtaining the super-resolution frame:
and carrying out deblurring processing on the super-resolution frame to obtain a deblurred high-resolution image.
4. The video dynamic super-resolution processing method according to claim 1, wherein the fusion formula for performing weighted fusion on the first processing result and the second processing result by using the reliability is as follows:
$$[\hat{\bar{Z}}(t)]_q = \bigl([\bar{r}(t)]_q\,[\hat{\bar{Z}}_K(t)]_q + 0.2\,[\hat{\bar{Z}}_S(t)]_q\bigr) \,/\, \bigl([\bar{r}(t)]_q + 0.2\bigr)$$
wherein $[\hat{\bar{Z}}(t)]_q$ is the fusion result, $[\bar{r}(t)]_q$ is the reliability of the motion vector, $[\hat{\bar{Z}}_K(t)]_q$ is the first processing result, and $[\hat{\bar{Z}}_S(t)]_q$ is the second processing result.
5. A video dynamic super-resolution processing system, the processing system comprising:
the motion estimation unit is used for obtaining a motion vector of any pixel in the current frame to be processed according to motion estimation and calculating the reliability of the motion vector;
the first processing unit is used for extracting a pixel corresponding to the pixel in a previous high-resolution frame according to the motion vector of the pixel, and obtaining a first processing result by utilizing Kalman filtering processing according to the corresponding pixel; carrying out interpolation processing on the frame to be processed to obtain a second processing result;
and the weighted fusion unit is used for carrying out weighted fusion on the first processing result and the second processing result by utilizing the reliability, and obtaining a super-resolution frame after fusion.
6. The video dynamic super-resolution processing system according to claim 5,
the motion estimation unit is specifically configured to perform motion estimation by using a previous high-resolution frame and a current frame to be processed to obtain a motion vector of any pixel in the current frame to be processed.
7. The video dynamic super-resolution processing system according to claim 5, further comprising:
and the second processing unit is used for carrying out deblurring processing on the super-resolution frame to obtain a deblurred high-resolution image.
8. The video dynamic super-resolution processing system according to claim 5, wherein the fusion formula of the weighted fusion unit for weighted fusion is:
$$[\hat{\bar{Z}}(t)]_q = \bigl([\bar{r}(t)]_q\,[\hat{\bar{Z}}_K(t)]_q + 0.2\,[\hat{\bar{Z}}_S(t)]_q\bigr) \,/\, \bigl([\bar{r}(t)]_q + 0.2\bigr)$$
wherein $[\hat{\bar{Z}}(t)]_q$ is the fusion result, $[\bar{r}(t)]_q$ is the reliability of the motion vector, $[\hat{\bar{Z}}_K(t)]_q$ is the first processing result, and $[\hat{\bar{Z}}_S(t)]_q$ is the second processing result.
CN201611231237.6A 2016-12-28 2016-12-28 Video dynamic super-resolution processing method and system Pending CN106851046A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611231237.6A CN106851046A (en) 2016-12-28 2016-12-28 Video dynamic super-resolution processing method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611231237.6A CN106851046A (en) 2016-12-28 2016-12-28 Video dynamic super-resolution processing method and system

Publications (1)

Publication Number Publication Date
CN106851046A true CN106851046A (en) 2017-06-13

Family

ID=59113671

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611231237.6A Pending CN106851046A (en) 2016-12-28 2016-12-28 Video dynamic super-resolution processing method and system

Country Status (1)

Country Link
CN (1) CN106851046A (en)

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108132054A (en) * 2017-12-20 2018-06-08 百度在线网络技术(北京)有限公司 For generating the method and apparatus of information
CN109640084A (en) * 2018-12-14 2019-04-16 网易(杭州)网络有限公司 Video flowing noise-reduction method, device and storage medium
CN109637502A (en) * 2018-12-25 2019-04-16 宁波迪比亿贸易有限公司 String array layout mechanism
CN110263699A (en) * 2019-06-17 2019-09-20 睿魔智能科技(深圳)有限公司 Method of video image processing, device, equipment and storage medium
CN110662030A (en) * 2018-06-29 2020-01-07 北京字节跳动网络技术有限公司 Video processing method and device
CN111489292A (en) * 2020-03-04 2020-08-04 北京思朗科技有限责任公司 Super-resolution reconstruction method and device for video stream
US20200413044A1 (en) 2018-09-12 2020-12-31 Beijing Bytedance Network Technology Co., Ltd. Conditions for starting checking hmvp candidates depend on total number minus k
US11134244B2 (en) 2018-07-02 2021-09-28 Beijing Bytedance Network Technology Co., Ltd. Order of rounding and pruning in LAMVR
US11134267B2 (en) 2018-06-29 2021-09-28 Beijing Bytedance Network Technology Co., Ltd. Update of look up table: FIFO, constrained FIFO
US11140385B2 (en) 2018-06-29 2021-10-05 Beijing Bytedance Network Technology Co., Ltd. Checking order of motion candidates in LUT
US11140383B2 (en) 2019-01-13 2021-10-05 Beijing Bytedance Network Technology Co., Ltd. Interaction between look up table and shared merge list
US11146785B2 (en) 2018-06-29 2021-10-12 Beijing Bytedance Network Technology Co., Ltd. Selection of coded motion information for LUT updating
US11159807B2 (en) 2018-06-29 2021-10-26 Beijing Bytedance Network Technology Co., Ltd. Number of motion candidates in a look up table to be checked according to mode
US11159817B2 (en) 2018-06-29 2021-10-26 Beijing Bytedance Network Technology Co., Ltd. Conditions for updating LUTS
WO2022068682A1 (en) * 2020-09-30 2022-04-07 华为技术有限公司 Image processing method and apparatus
US11528500B2 (en) 2018-06-29 2022-12-13 Beijing Bytedance Network Technology Co., Ltd. Partial/full pruning when adding a HMVP candidate to merge/AMVP
US11528501B2 (en) 2018-06-29 2022-12-13 Beijing Bytedance Network Technology Co., Ltd. Interaction between LUT and AMVP
US11589071B2 (en) 2019-01-10 2023-02-21 Beijing Bytedance Network Technology Co., Ltd. Invoke of LUT updating
CN115861142A (en) * 2022-12-21 2023-03-28 上海闻泰电子科技有限公司 Image fusion method and device, electronic equipment and storage medium
US11641483B2 (en) 2019-03-22 2023-05-02 Beijing Bytedance Network Technology Co., Ltd. Interaction between merge list construction and other tools
US11895318B2 (en) 2018-06-29 2024-02-06 Beijing Bytedance Network Technology Co., Ltd Concept of using one or multiple look up tables to store motion information of previously coded in order and use them to code following blocks
US11956464B2 (en) 2019-01-16 2024-04-09 Beijing Bytedance Network Technology Co., Ltd Inserting order of motion candidates in LUT

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101383898A (en) * 2007-09-07 2009-03-11 索尼株式会社 Image processing device, method and computer program
CN103514580A (en) * 2013-09-26 2014-01-15 香港应用科技研究院有限公司 Method and system for obtaining super-resolution images optimized for viewing experience

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101383898A (en) * 2007-09-07 2009-03-11 索尼株式会社 Image processing device, method and computer program
CN103514580A (en) * 2013-09-26 2014-01-15 香港应用科技研究院有限公司 Method and system for obtaining super-resolution images optimized for viewing experience

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MICHAEL ELAD,ARIE FEUER: "Super-Resolution Reconstruction of Image Sequences", 《IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE》 *
SINA FARSIU,MICHAEL ELAD,PEYMAN MILANFAR: "Video-to-Video Dynamic Super-Resolution for Grayscale and Color Sequences", 《EURASIP JOURNAL ON APPLIED SIGNAL PROCESSING》 *
汤晓莉, 韩睿, 郭若杉: "Edge-guided fast dynamic super-resolution algorithm", 《微电子学与计算机》 (Microelectronics & Computer) *

Cited By (50)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108132054A (en) * 2017-12-20 2018-06-08 百度在线网络技术(北京)有限公司 For generating the method and apparatus of information
US11146786B2 (en) 2018-06-20 2021-10-12 Beijing Bytedance Network Technology Co., Ltd. Checking order of motion candidates in LUT
US11706406B2 (en) 2018-06-29 2023-07-18 Beijing Bytedance Network Technology Co., Ltd Selection of coded motion information for LUT updating
US11877002B2 (en) 2018-06-29 2024-01-16 Beijing Bytedance Network Technology Co., Ltd Update of look up table: FIFO, constrained FIFO
CN110662030A (en) * 2018-06-29 2020-01-07 北京字节跳动网络技术有限公司 Video processing method and device
US12167018B2 (en) 2018-06-29 2024-12-10 Beijing Bytedance Network Technology Co., Ltd. Interaction between LUT and AMVP
US12058364B2 (en) 2018-06-29 2024-08-06 Beijing Bytedance Network Technology Co., Ltd. Concept of using one or multiple look up tables to store motion information of previously coded in order and use them to code following blocks
US11245892B2 (en) 2018-06-29 2022-02-08 Beijing Bytedance Network Technology Co., Ltd. Checking order of motion candidates in LUT
US11528501B2 (en) 2018-06-29 2022-12-13 Beijing Bytedance Network Technology Co., Ltd. Interaction between LUT and AMVP
US11528500B2 (en) 2018-06-29 2022-12-13 Beijing Bytedance Network Technology Co., Ltd. Partial/full pruning when adding a HMVP candidate to merge/AMVP
US11134267B2 (en) 2018-06-29 2021-09-28 Beijing Bytedance Network Technology Co., Ltd. Update of look up table: FIFO, constrained FIFO
CN110662030B (en) * 2018-06-29 2022-06-14 北京字节跳动网络技术有限公司 A video processing method and device
US12034914B2 (en) 2018-06-29 2024-07-09 Beijing Bytedance Network Technology Co., Ltd Checking order of motion candidates in lut
US11140385B2 (en) 2018-06-29 2021-10-05 Beijing Bytedance Network Technology Co., Ltd. Checking order of motion candidates in LUT
US11973971B2 (en) 2018-06-29 2024-04-30 Beijing Bytedance Network Technology Co., Ltd Conditions for updating LUTs
US11146785B2 (en) 2018-06-29 2021-10-12 Beijing Bytedance Network Technology Co., Ltd. Selection of coded motion information for LUT updating
US11695921B2 (en) 2018-06-29 2023-07-04 Beijing Bytedance Network Technology Co., Ltd Selection of coded motion information for LUT updating
US11153557B2 (en) 2018-06-29 2021-10-19 Beijing Bytedance Network Technology Co., Ltd. Which LUT to be updated or no updating
US11895318B2 (en) 2018-06-29 2024-02-06 Beijing Bytedance Network Technology Co., Ltd Concept of using one or multiple look up tables to store motion information of previously coded in order and use them to code following blocks
US11909989B2 (en) 2018-06-29 2024-02-20 Beijing Bytedance Network Technology Co., Ltd Number of motion candidates in a look up table to be checked according to mode
US11159817B2 (en) 2018-06-29 2021-10-26 Beijing Bytedance Network Technology Co., Ltd. Conditions for updating LUTS
US11159807B2 (en) 2018-06-29 2021-10-26 Beijing Bytedance Network Technology Co., Ltd. Number of motion candidates in a look up table to be checked according to mode
US11153559B2 (en) 2018-07-02 2021-10-19 Beijing Bytedance Network Technology Co., Ltd. Usage of LUTs
US11153558B2 (en) 2018-07-02 2021-10-19 Beijing Bytedance Network Technology Co., Ltd. Update of look-up tables
US11134243B2 (en) 2018-07-02 2021-09-28 Beijing Bytedance Network Technology Co., Ltd. Rules on updating luts
US11463685B2 (en) 2018-07-02 2022-10-04 Beijing Bytedance Network Technology Co., Ltd. LUTS with intra prediction modes and intra mode prediction from non-adjacent blocks
US11134244B2 (en) 2018-07-02 2021-09-28 Beijing Bytedance Network Technology Co., Ltd. Order of rounding and pruning in LAMVR
US11159787B2 (en) 2018-09-12 2021-10-26 Beijing Bytedance Network Technology Co., Ltd. Conditions for starting checking HMVP candidates depend on total number minus K
US11997253B2 (en) 2018-09-12 2024-05-28 Beijing Bytedance Network Technology Co., Ltd Conditions for starting checking HMVP candidates depend on total number minus K
US20210297659A1 (en) 2018-09-12 2021-09-23 Beijing Bytedance Network Technology Co., Ltd. Conditions for starting checking hmvp candidates depend on total number minus k
US20200413044A1 (en) 2018-09-12 2020-12-31 Beijing Bytedance Network Technology Co., Ltd. Conditions for starting checking hmvp candidates depend on total number minus k
CN109640084A (en) * 2018-12-14 2019-04-16 网易(杭州)网络有限公司 Video flowing noise-reduction method, device and storage medium
CN109640084B (en) * 2018-12-14 2021-11-16 网易(杭州)网络有限公司 Video stream noise reduction method and device and storage medium
CN109637502A (en) * 2018-12-25 2019-04-16 宁波迪比亿贸易有限公司 String array layout mechanism
CN109637502B (en) * 2018-12-25 2023-01-20 泉州市望海机械科技有限公司 String array layout mechanism
US12368880B2 (en) 2019-01-10 2025-07-22 Beijing Bytedance Network Technology Co., Ltd. Invoke of LUT updating
US11589071B2 (en) 2019-01-10 2023-02-21 Beijing Bytedance Network Technology Co., Ltd. Invoke of LUT updating
US11140383B2 (en) 2019-01-13 2021-10-05 Beijing Bytedance Network Technology Co., Ltd. Interaction between look up table and shared merge list
US11909951B2 (en) 2019-01-13 2024-02-20 Beijing Bytedance Network Technology Co., Ltd Interaction between lut and shared merge list
US11956464B2 (en) 2019-01-16 2024-04-09 Beijing Bytedance Network Technology Co., Ltd Inserting order of motion candidates in LUT
US11962799B2 (en) 2019-01-16 2024-04-16 Beijing Bytedance Network Technology Co., Ltd Motion candidates derivation
US11641483B2 (en) 2019-03-22 2023-05-02 Beijing Bytedance Network Technology Co., Ltd. Interaction between merge list construction and other tools
US12401820B2 (en) 2019-03-22 2025-08-26 Beijing Bytedance Network Technology Co., Ltd. Interaction between merge list construction and other tools
CN110263699A (en) * 2019-06-17 2019-09-20 睿魔智能科技(深圳)有限公司 Method of video image processing, device, equipment and storage medium
CN110263699B (en) * 2019-06-17 2021-10-22 睿魔智能科技(深圳)有限公司 Video image processing method, device, equipment and storage medium
WO2020253103A1 (en) * 2019-06-17 2020-12-24 睿魔智能科技(深圳)有限公司 Video image processing method, device, apparatus, and storage medium
CN111489292A (en) * 2020-03-04 2020-08-04 北京思朗科技有限责任公司 Super-resolution reconstruction method and device for video stream
CN111489292B (en) * 2020-03-04 2023-04-07 北京集朗半导体科技有限公司 Super-resolution reconstruction method and device for video stream
WO2022068682A1 (en) * 2020-09-30 2022-04-07 华为技术有限公司 Image processing method and apparatus
CN115861142A (en) * 2022-12-21 2023-03-28 上海闻泰电子科技有限公司 Image fusion method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN106851046A (en) Video dynamic super-resolution processing method and system
Xu et al. Quadratic video interpolation
CN103824273B (en) Super-resolution reconstruction method based on compound motion and self-adaptive nonlocal prior
TWI455588B (en) Bi-directional, local and global motion estimation based frame rate conversion
JP3898606B2 (en) Motion vector detection method and apparatus, and frame interpolation image creation method and apparatus
CN104103050B (en) A kind of real video restored method based on local policy
KR101987079B1 (en) Method for removing noise of upscaled moving picture with dynamic parameter based on machine learning
CN102136144A (en) Image registration reliability model and reconstruction method of super-resolution image
Su et al. Super-resolution without dense flow
CN101247489A (en) A method for real-time reproduction of digital TV details
CN106169173B (en) A Method of Image Interpolation
Mahajan et al. Adaptive and non-adaptive image interpolation techniques
Jeong et al. Multi-frame example-based super-resolution using locally directional self-similarity
CN115578255A (en) Super-resolution reconstruction method based on inter-frame sub-pixel block matching
CN107392854A (en) A kind of joint top sampling method based on local auto-adaptive gain factor
CN117689541A (en) Multi-region classification video super-resolution reconstruction method with temporal redundancy optimization
CN106056540A (en) Video time-space super-resolution reconstruction method based on robust optical flow and Zernike invariant moment
CN104376544B (en) Non-local super-resolution reconstruction method based on multi-region dimension zooming compensation
Zhang et al. Crosszoom: Simultaneous motion deblurring and event super-resolving
CN105931189B (en) A video super-resolution method and device based on an improved super-resolution parametric model
CN106920213B (en) Method and system for acquiring high-resolution image
CN109658361A (en) A kind of moving scene super resolution ratio reconstruction method for taking motion estimation error into account
Huang et al. Algorithm and architecture design of multirate frame rate up-conversion for ultra-HD LCD systems
CN103914807A (en) Non-locality image super-resolution method and system for zoom scale compensation
CN119295326A (en) An infrared image enhancement method based on pseudo-noise and convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170613
