
WO2010046990A1 - Interpolation frame generation device, frame rate conversion device, display device, interpolation frame generation method, program therefor, and recording medium on which the program is recorded


Info

Publication number
WO2010046990A1
WO2010046990A1 (PCT/JP2008/069272)
Authority
WO
WIPO (PCT)
Prior art keywords
frame
character
input
interpolation
vector
Prior art date
Application number
PCT/JP2008/069272
Other languages
English (en)
Japanese (ja)
Inventor
篤 松野
完 池田
浩之 吉田
Original Assignee
パイオニア株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by パイオニア株式会社
Priority to PCT/JP2008/069272
Publication of WO2010046990A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/01Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0127Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level by changing the field or frame frequency of the incoming video signal, e.g. frame rate converter
    • H04N7/013Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level by changing the field or frame frequency of the incoming video signal, e.g. frame rate converter the incoming video signal comprising different parts having originally different frame rate, e.g. video and graphics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/01Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0135Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes
    • H04N7/014Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes involving the use of motion vectors

Definitions

  • The present invention relates to an interpolation frame generation device, a frame rate conversion device, a display device, an interpolation frame generation method, a program thereof, and a recording medium on which the program is recorded.
  • Conventionally, a configuration for converting the frame rate of an input video composed of a plurality of frames is known (see, for example, Patent Document 1).
  • In such a configuration, an interpolation frame is generated using a motion vector for the scrolled character area.
  • For the portion other than the character area, any of the following processes can be applied: generating no interpolation frame, performing linear interpolation processing, or performing replacement processing using a neighboring frame.
  • An object of the present invention is to provide an interpolation frame generation device, a frame rate conversion device, a display device, an interpolation frame generation method, a program thereof, and a recording medium on which the program is recorded, each of which can appropriately generate an interpolation frame with guaranteed quality.
  • The interpolation frame generation device of the present invention generates the interpolation frame when converting the frame rate of an input video, composed of a plurality of input frames that can be regarded as input at an input synchronization timing based on an input image signal having a predetermined input frequency, into an output video composed of the input frames output at an output synchronization timing based on an output image signal having a predetermined output frequency and interpolation frames interpolated between the input frames.
  • The device includes: a character detection unit that detects a character scroll area moving in a predetermined direction within the input frame; a vector acquisition unit that acquires, for each input frame, the motion in the input frame as a motion vector; and an interpolation distance detection unit that detects, as the interpolation distance, the interval between the output synchronization timing at which the interpolation frame is output and the input synchronization timing of the input frame used to generate the interpolation frame.
  • It further includes a vector frame rate conversion processing unit that sets a non-character partial vector by adjusting, based on the interpolation distance and a non-character partial gain set to a predetermined value, the magnitude of the first motion vector corresponding to the portion other than the character scroll area, generates a non-character composition frame corresponding to the motion based on the non-character partial vector, sets a character partial vector by adjusting, based on the interpolation distance and a character partial gain set to a value equal to or greater than the non-character partial gain, the magnitude of the second motion vector corresponding to the character scroll area, and generates a character composition frame corresponding to the motion based on the character partial vector.
  • Finally, an interpolation frame generation unit generates the interpolation frame by synthesizing the portion other than the character scroll area in the non-character composition frame and the character scroll area in the character composition frame.
  • The interpolation frame generation device of the present invention may alternatively be configured as follows, for the same frame rate conversion from an input video composed of a plurality of input frames that can be regarded as input at an input synchronization timing based on an input image signal having a predetermined input frequency into an output video composed of the input frames output at an output synchronization timing based on an output image signal having a predetermined output frequency and interpolation frames interpolated between the input frames.
  • The device includes: a character detection unit that detects a character scroll area moving in a predetermined direction within the input frame; a vector acquisition unit that acquires, for each input frame, the motion in the input frame as a motion vector; an interpolation distance detection unit that detects, as the interpolation distance, the interval between the output synchronization timing at which the interpolation frame is output and the input synchronization timing of the input frame used to generate the interpolation frame; and a vector acquisition accuracy determination unit that determines whether the continuity of the first motion vector, corresponding to the portion other than the character scroll area in the input frame used for generating the interpolation frame and in the input frames adjacent to it, is higher than a predetermined level.
  • It further includes: a vector frame rate conversion processing unit that sets a non-character partial vector by adjusting the magnitude of the first motion vector and generates a non-character composition frame in which the portion other than the character scroll area is moved based on the non-character partial vector, and that sets a character partial vector by adjusting the magnitude of the second motion vector corresponding to the character scroll area and generates a character composition frame in which the character scroll area is moved based on the character partial vector; and a weighted average frame rate conversion processing unit that generates a weighted average composition frame by executing linear interpolation processing on the pair of input frames corresponding to the input synchronization timings before and after the output synchronization timing for which the interpolation distance is detected.
  • An interpolation frame generation unit then generates the interpolation frame by synthesizing the portion other than the character scroll area in the non-character composition frame and the character scroll area in the character composition frame when the continuity of the first motion vector is determined to be high, and by synthesizing the portion other than the character scroll area in the weighted average composition frame and the character scroll area in the character composition frame when the continuity is determined to be low.
  • The frame rate conversion apparatus of the present invention includes the interpolation frame generation apparatus described above and an interpolation control unit that interpolates the interpolation frames generated by the interpolation frame generation apparatus between the input frames and causes the display unit to display the output video obtained by the frame rate conversion.
  • The interpolation frame generation method of the present invention is a method in which a calculation means generates the interpolation frame when converting the frame rate of an input video, composed of a plurality of input frames that can be regarded as input at an input synchronization timing based on an input image signal having a predetermined input frequency, into an output video based on an output image signal having a predetermined output frequency.
  • The calculation means performs: a character detection step of detecting a character scroll area moving in a predetermined direction within the input frame; a vector acquisition step of acquiring, for each input frame, the motion in the input frame as a motion vector; and an interpolation distance detection step of detecting, as the interpolation distance, the interval between the output synchronization timing at which the interpolation frame is output and the input synchronization timing of the input frame used to generate the interpolation frame.
  • It also performs a vector frame rate conversion processing step of setting a non-character partial vector by adjusting, based on the interpolation distance and a non-character partial gain, the magnitude of the first motion vector corresponding to the portion other than the character scroll area, generating a non-character composition frame corresponding to the motion based on that vector, setting a character partial vector by adjusting, based on the interpolation distance and a character partial gain set to a value equal to or greater than the non-character partial gain, the magnitude of the second motion vector corresponding to the character scroll area, and generating a character composition frame corresponding to the motion based on the character partial vector, followed by an interpolation frame generation step of generating the interpolation frame by synthesizing the portion other than the character scroll area in the non-character composition frame and the character scroll area in the character composition frame.
  • The interpolation frame generation method of the present invention may alternatively have the calculation means perform: a character detection step of detecting a character scroll area moving in a predetermined direction within the input frame; a vector acquisition step of acquiring, for each input frame, the motion in the input frame as a motion vector; and an interpolation distance detection step of detecting, as the interpolation distance, the interval between the output synchronization timing at which the interpolation frame is output and the input synchronization timing of the input frame used for generating the interpolation frame.
  • It further performs a vector frame rate conversion processing step of generating the non-character composition frame and the character composition frame, and a weighted average frame rate conversion processing step of generating a weighted average composition frame by executing linear interpolation processing on the pair of input frames corresponding to the input synchronization timings before and after the output synchronization timing for which the interpolation distance is detected.
  • In the interpolation frame generation step, when the continuity of the first motion vector is determined to be high, the interpolation frame is generated by synthesizing the portion other than the character scroll area in the non-character composition frame and the character scroll area in the character composition frame; when the continuity is determined to be low, the interpolation frame is generated by synthesizing the portion other than the character scroll area in the weighted average composition frame and the character scroll area in the character composition frame.
  • The interpolation frame generation program of the present invention causes a calculation means to execute the above-described interpolation frame generation method.
  • Alternatively, the interpolation frame generation program of the present invention causes the calculation means to function as the above-described interpolation frame generation device.
  • The recording medium of the present invention records the above-described interpolation frame generation program so as to be readable by the calculation means.
  • FIG. 1 is a block diagram illustrating a schematic configuration of a display device according to the first embodiment of the present invention.
  • FIG. 2 is a schematic diagram showing an input video.
  • FIG. 3 is a schematic diagram illustrating a generation state of a non-character composition frame.
  • FIG. 4 is a schematic diagram showing a generation state of a character composition frame.
  • The display device 100 includes a display unit 110 and a frame rate conversion device 120.
  • The display unit 110 is connected to the frame rate conversion device 120.
  • The display unit 110 displays the output video whose frame rate has been converted under the control of the frame rate conversion device 120.
  • Examples of the display unit 110 include a PDP (Plasma Display Panel), a liquid crystal panel, an organic EL (Electro Luminescence) panel, a CRT (Cathode-Ray Tube), an FED (Field Emission Display), and an electrophoretic display panel.
  • The frame rate conversion apparatus 120 converts the frame rate of an input video composed of input frames F (hereinafter, the a-th input frame is referred to as input frame Fa as appropriate) input based on a 24 Hz input vertical synchronization signal into an output video composed of a plurality of output frames output based on a 60 Hz output vertical synchronization signal.
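As an illustrative sketch (not from the patent itself), the 24 Hz to 60 Hz relationship above can be shown by mapping each 60 Hz output synchronization timing to the index of its nearest 24 Hz input frame:

```python
def nearest_input_frame(output_index, fin=24, fout=60):
    """Index of the input frame whose input synchronization timing is
    closest to the given output synchronization timing. Illustrative
    only; 24 Hz / 60 Hz are the frequencies of the embodiment."""
    t_out = output_index / fout      # output timing in seconds
    return round(t_out * fin)        # nearest 24 Hz input frame index

# Five 60 Hz output timings span two 24 Hz input frames.
print([nearest_input_frame(i) for i in range(5)])  # -> [0, 0, 1, 1, 2]
```

Every input frame thus serves several output timings, and the interpolation frames fill the gaps between them.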
  • The frame rate conversion apparatus 120 includes an interpolation frame generation apparatus 130 as an arithmetic unit and an interpolation control unit 140.
  • The interpolation frame generation apparatus 130 includes a frame memory 131, a character detection unit 132, a vector acquisition unit 133, a gain control unit 134 that also functions as a vector acquisition accuracy determination unit, an interpolation distance ratio recognition unit 135 as an interpolation distance detection unit, a vector frame rate conversion processing unit 136, and an interpolation frame generation unit 138.
  • The frame memory 131 acquires the image signal from the image signal output unit 10, temporarily stores the input frame F based on the image signal, and outputs it as appropriate to the vector acquisition unit 133 and the vector frame rate conversion processing unit 136.
  • The character detection unit 132 detects, for example, a character scroll area C that scrolls from the right side to the left side in the input frame F (hereinafter, the character scroll area of the c-th input frame F is referred to as character scroll area Cc as appropriate). It then outputs scroll position information specifying the position of the character scroll area C to the vector acquisition unit 133.
  • The vector acquisition unit 133 acquires the input frame F(a+1) based on the image signal from the image signal output unit 10 and the input frame Fa temporarily stored in the frame memory 131. Then, as shown in FIG. 3, it acquires the motion of the portions other than the character scroll area C in the input frames F(a+1) and Fa as a first input video detection vector V(a+1) and as local area vectors (not shown). In FIG. 3, of the character scroll area C and the object Z present in the input frame F, only the object Z is illustrated.
  • When acquiring the first input video detection vector V(a+1), the vector acquisition unit 133 sets one motion detection block composed of the portion of the input frame F(a+1) excluding the region within a predetermined distance of the outer edge (not shown). This motion detection block is divided into a plurality of local areas. That is, the motion detection block has a first block size composed of a first number of pixels (not shown).
  • For each input frame F(a+1), the vector acquisition unit 133 acquires the motion in the motion detection block, that is, the motion in almost the entire input frame F(a+1), as one first input video detection vector V(a+1), and outputs it to the gain control unit 134.
  • As a method for obtaining the first input video detection vector V(a+1), for example, the method described in Japanese Examined Patent Publication No. Sho 62-62109 (hereinafter, pattern matching method) or the method described in Japanese Patent Laid-Open No. Sho 62-206980 (hereinafter, iterative gradient method) can be used.
  • When the pattern matching method is used, a plurality of blocks having the same number of pixels as the motion detection block of the input frame F(a+1), each shifted in a different direction with respect to the input frame Fa, are set as past blocks.
  • Among the plurality of past blocks, the one having the highest correlation with the motion detection block is detected, and the first input video detection vector V(a+1) is obtained based on the detected past block and the motion detection block.
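A minimal sketch of the pattern matching idea: exhaustively compare the current motion detection block against shifted past blocks and keep the best match. The sum of absolute differences is assumed here as the correlation measure; the cited publications may use a different one.

```python
import numpy as np

def match_block(cur_block, prev_frame, search=2):
    """Find the shift (dy, dx) of the past block in prev_frame that best
    matches cur_block (minimum sum of absolute differences). cur_block
    is assumed to sit at offset (search, search) in prev_frame's
    coordinates, so a result of (0, 0) means no motion."""
    h, w = cur_block.shape
    best_sad, best_off = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = prev_frame[search + dy:search + dy + h,
                              search + dx:search + dx + w]
            sad = np.abs(cand - cur_block).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_off = sad, (dy, dx)
    return best_off

prev = np.arange(64, dtype=float).reshape(8, 8)
cur = prev[3:7, 2:6]           # the block, moved down by one row
print(match_block(cur, prev))  # -> (1, 0)
```

A real motion estimator would search a much larger range and subdivide the block into the local areas described above; this only shows the matching step.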
  • When the iterative gradient method is used, an initial displacement vector optimal for detecting the first input video detection vector V(a+1) is selected. Then, by starting the calculation from a value close to the true first input video detection vector V(a+1) of the motion detection block, the number of gradient-method iterations is reduced and the true first input video detection vector V(a+1) is detected.
  • The vector acquisition unit 133 also acquires the motion in each local area as a local area vector and outputs it to the gain control unit 134. That is, it detects the motion in each local area, composed of a second number of pixels smaller than the first number, by the same processing as that used to acquire the first input video detection vector V(a+1). Note that a process different from that used when acquiring the first input video detection vector V(a+1) may instead be applied to acquire the local area vectors.
  • The vector acquisition unit 133 further acquires the motion of the character scroll area C in the input frames F(a+1) and Fa as the second input video detection vector B(a+1). In FIG. 4 and FIG. 13 to be described later, of the character scroll area C and the object Z present in the input frame F, only the character scroll area C is illustrated. Also, to facilitate understanding of the contents of the present invention, the description uses schematic diagrams in which the character scroll area C, which is actually scrolling to the left, is shown scrolling from the lower side to the upper side. Specifically, when acquiring the second input video detection vector B(a+1), the vector acquisition unit 133 sets one motion detection block containing the character scroll area C in the input frame F(a+1).
  • This motion detection block is divided into a plurality of local areas. That is, the motion detection block has a second block size composed of a number of pixels (not shown). Then, for each input frame F(a+1), the vector acquisition unit 133 acquires the motion in the motion detection block, that is, the motion in the character scroll area C, as one second input video detection vector B(a+1), and outputs it to the vector frame rate conversion processing unit 136.
  • Based on the continuity of the first input video detection vector V acquired by the vector acquisition unit 133, the gain control unit 134 increases or decreases the non-character partial gain, which is set to a value between 0 and 1, as shown in FIG. 3.
  • Specifically, when the vector acquisition unit 133 cannot acquire the first input video detection vector V, the gain control unit 134 determines that the first input video detection vector V has no continuity, that is, that its continuity is lower than a predetermined level.
  • When the first input video detection vector V can be acquired and the number of local area vectors matching the first input video detection vector V is equal to or greater than a threshold, it is determined that the first input video detection vector V has continuity, that is, that its continuity is higher than the predetermined level. If the number of matching local area vectors is less than the threshold, it is determined that the first input video detection vector V has no continuity.
  • Alternatively, the gain control unit 134 may determine the acquisition accuracy of the first input video detection vector V as follows.
  • When the vector acquisition unit 133 cannot acquire the first input video detection vector V, it is determined that the vector has no continuity and the acquisition accuracy is low. When the first input video detection vector V can be acquired, the variance of the local area vectors is calculated: if the variance is equal to or less than a threshold value, it is determined that the first input video detection vector V has continuity and the acquisition accuracy is high; if the variance is greater than the threshold value, it may be determined that the vector has no continuity and the acquisition accuracy is low.
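The two determination schemes just described can be sketched as follows. The threshold values are hypothetical, not taken from the patent:

```python
import numpy as np

def continuity_by_count(global_vec, local_vecs, min_match=3):
    """Count-based check: the vector is continuous when it was acquired
    and at least min_match local area vectors match it exactly."""
    if global_vec is None:            # vector could not be acquired
        return False
    matches = sum(1 for v in local_vecs if tuple(v) == tuple(global_vec))
    return matches >= min_match

def continuity_by_variance(global_vec, local_vecs, max_var=1.0):
    """Variance-based check: continuous when the vector was acquired and
    the variance of the local area vectors is at or below a threshold."""
    if global_vec is None:
        return False
    return np.asarray(local_vecs, dtype=float).var() <= max_var
```

Either predicate would drive the gain control described next: a result of False lowers the non-character partial gain, True raises it.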
  • When the first input video detection vector V has continuity, the gain control unit 134 performs a process of increasing the non-character partial gain by a predetermined value, for example, increasing it by 0.25.
  • For example, as shown in FIG. 3, when the first input video detection vector V8 is acquired and determined to be continuous and the non-character partial gain at the output synchronization timing of the input frame F7 is set to 0.25, the non-character partial gains corresponding to the non-character composition frames L14 and L15 are sequentially increased by 0.25.
  • The gain control unit 134 decreases the non-character partial gain by a predetermined value, for example by 0.25, when the first input video detection vector V has no continuity and the non-character partial gain is set to a value larger than 0. For example, as shown in FIG. 3, when the erroneous first input video detection vector V7 is acquired, it is determined that there is no continuity; since the non-character partial gain corresponding to the non-character composition frame L11 is set to 1, the non-character partial gains corresponding to the non-character composition frames L12 and L13 and the input frame F7 are sequentially decreased by 0.25.
  • When the first input video detection vector V has continuity and the non-character partial gain is already set to 1, or when there is no continuity and the non-character partial gain is already set to 0, the gain control unit 134 maintains the current setting of the non-character partial gain.
  • In this manner, the gain control unit 134 increases or decreases the non-character partial gain within the range of 0 to 1, and outputs the increased or decreased non-character partial gain to the vector frame rate conversion processing unit 136.
  • Note that the increment and decrement of the non-character partial gain are not limited to 0.25 and may be, for example, 0.1 or 0.5. The amount of increase and the amount of decrease may also differ from each other.
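The gain stepping above can be sketched in a few lines (step of 0.25 as in the example, clamped to [0, 1]; 0.1 or 0.5 would work the same way):

```python
def update_non_character_gain(gain, continuous, step=0.25):
    """Raise the non-character partial gain when the first input video
    detection vector is judged continuous, lower it when it is not, and
    keep the result within [0, 1] (a gain already at 1 or 0 is kept)."""
    gain += step if continuous else -step
    return min(1.0, max(0.0, gain))

print(update_non_character_gain(0.25, True))   # -> 0.5
print(update_non_character_gain(1.0, False))   # -> 0.75
```

Running this once per output synchronization timing reproduces the ramping of the gain across the composition frames L12 to L15 in the FIG. 3 example.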
  • The gain control unit 134 always sets the character partial gain corresponding to the character scroll area C detected by the character detection unit 132 to 1 and outputs this gain to the vector frame rate conversion processing unit 136.
  • In the character scroll area C, the moving speed and moving direction are substantially constant, so the motion vector acquisition accuracy is higher than for the object Z in the portion other than the character scroll area C. For this reason, the character partial gain is always set to 1, and the non-character partial gain is set within a range equal to or less than the character partial gain.
  • The interpolation distance ratio recognition unit 135 obtains the input vertical synchronization signal of the input frame F and the output vertical synchronization signal of the non-character composition frame L from the synchronization signal output unit 20, recognizes the input frame F whose input synchronization timing has the shortest interval to the output synchronization timing of the non-character composition frame L, and recognizes the interval between that output synchronization timing and input synchronization timing as the interpolation distance. Further, as shown in FIGS. 3 and 4, a value obtained by dividing the interpolation distance by the interval of the output synchronization timing is calculated as the interpolation distance ratio.
  • The interpolation distance ratio is a positive value when the output synchronization timing of the non-character composition frame L is close to the input synchronization timing of the preceding input frame F, and a negative value when it is close to the input synchronization timing of the following input frame F.
  • The interpolation distance ratio is 0 when there is an input frame F whose output synchronization timing coincides with its input synchronization timing. The interpolation distance ratio recognition unit 135 then outputs the interpolation distance ratio to the vector frame rate conversion processing unit 136.
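Under the 24 Hz input / 60 Hz output figures of the embodiment, the interpolation distance ratio could be computed along these lines (an illustrative sketch using the sign convention stated above):

```python
def interpolation_distance_ratio(t_out, fin=24.0, fout=60.0):
    """Interval from the output synchronization timing t_out (seconds)
    to the nearest input synchronization timing, divided by the output
    interval. Positive when the preceding input frame is nearer,
    negative when the following one is, 0 when the timings coincide."""
    in_period = 1.0 / fin
    nearest_in = round(t_out / in_period) * in_period
    return (t_out - nearest_in) * fout

# Output timing 2/60 s sits half an output period before input timing
# 1/24 s, giving a ratio of about -0.5.
print(interpolation_distance_ratio(2 / 60))
```

With a character partial gain of 1, a ratio of -0.5 yields the second gain of -0.50 used in the FIG. 4 example.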
  • The vector frame rate conversion processing unit 136 generates a non-character composition frame L as shown in FIG. 3 and a character composition frame P as shown in FIG. 4, and outputs them to the interpolation frame generation unit 138.
  • Hereinafter, the c-th non-character composition frame L and character composition frame P are referred to as non-character composition frame Lc and character composition frame Pc, as appropriate.
  • The vector frame rate conversion processing unit 136 obtains the first gain by multiplying the interpolation distance ratio and the non-character partial gain. Then, the first input video use vector Kc (c is a natural number) is set by multiplying the first input video detection vector V(a+1) by the first gain. For example, as shown in FIG. 3, when the first gain corresponding to the non-character composition frame L15 is -0.375, the first input video use vector K15 is set by multiplying the first input video detection vector V8 by -0.375.
  • The vector frame rate conversion processing unit 136 generates a non-character composition frame L in which the object Z in the portion other than the character scroll area C has moved based on the first input video use vector Kc. For example, as shown in FIG. 3, the non-character composition frame L15, in which the object Z8 of the input frame F8 has moved, is generated based on the first input video use vector K15. Then, as shown in FIG. 3, the vector frame rate conversion processing unit 136 outputs the non-character composition frame L to the interpolation frame generation unit 138 for use in generating the interpolation frame at the predetermined output synchronization timing.
  • Note that when the non-character composition frame L is generated, the first input video use vector K, generated based on the first gain set to a value equal to or less than the interpolation distance ratio, is used. Therefore, the amount of motion in the non-character composition frame L may be smaller than in the character composition frame P, described in detail later, which is generated using the second input video use vector D based on the interpolation distance ratio.
  • The vector frame rate conversion processing unit 136 also obtains the second gain by multiplying the interpolation distance ratio and the character partial gain. Then, the second input video use vector Dc (c is a natural number) is set by multiplying the second input video detection vector B(a+1) by the second gain. For example, as shown in FIG. 4, when the second gain corresponding to the character composition frame P15 is -0.50, the second input video use vector D15 is set by multiplying the second input video detection vector B8 by -0.50.
  • The vector frame rate conversion processing unit 136 generates a character composition frame Pc in which the character scroll area C has moved based on the second input video use vector Dc. For example, as shown in FIG. 4, the character composition frame P15, in which the character scroll area C8 of the input frame F8 has moved, is generated based on the second input video use vector D15. Then, as shown in FIG. 4, the vector frame rate conversion processing unit 136 outputs the character composition frame P to the interpolation frame generation unit 138 for use in generating the interpolation frame at the predetermined output synchronization timing.
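A minimal sketch of this scaling-and-shifting step: the use vector is the detection vector times (partial gain x interpolation distance ratio), and the composition frame is the input frame shifted by that vector. np.roll wraps around at the frame edges, which a real implementation would handle differently:

```python
import numpy as np

def composition_frame(frame, detection_vec, partial_gain, dist_ratio):
    """Shift the input frame by the use vector obtained from
    partial gain x interpolation distance ratio x detection vector,
    e.g. 1 x -0.50 x B8 for character composition frame P15."""
    gain = partial_gain * dist_ratio
    use_vec = np.rint(np.asarray(detection_vec) * gain).astype(int)
    return np.roll(frame, shift=tuple(use_vec), axis=(0, 1))

f = np.zeros((4, 4)); f[0, 0] = 1.0            # one bright pixel
shifted = composition_frame(f, (2, 0), 1.0, 0.5)  # use vector (1, 0)
print(int(shifted[1, 0]))  # -> 1
```

The same function covers both frame types: the non-character composition frame uses the first detection vector with the stepped non-character partial gain, the character composition frame uses the second detection vector with the gain fixed to 1.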
  • The interpolation frame generation unit 138 synthesizes the non-character composition frame L and the character composition frame P from the vector frame rate conversion processing unit 136 to generate an interpolation frame (not shown).
  • Specifically, the interpolation frame generation unit 138 synthesizes the portion other than the character scroll areas C10 to C17 in the non-character composition frames L10 to L17 with the character scroll areas in the character composition frames P10 to P17 whose output synchronization timings coincide with those of the non-character composition frames L10 to L17, thereby generating the interpolation frames output at the predetermined output synchronization timings, and outputs them to the interpolation control unit 140.
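The synthesis itself amounts to a per-pixel selection between the two composition frames. In this sketch the character scroll area is represented by a boolean mask, which is an assumption about the data layout, not something the patent specifies:

```python
import numpy as np

def synthesize_interpolation_frame(non_char_frame, char_frame, char_mask):
    """Take the character scroll area from the character composition
    frame and everything else from the non-character composition frame."""
    return np.where(char_mask, char_frame, non_char_frame)

L = np.zeros((2, 4))            # non-character composition frame
P = np.ones((2, 4))             # character composition frame
mask = np.zeros((2, 4), bool)
mask[1, :] = True               # bottom row is the character scroll area
out = synthesize_interpolation_frame(L, P, mask)
print(out.tolist())  # -> [[0.0, 0.0, 0.0, 0.0], [1.0, 1.0, 1.0, 1.0]]
```

In the second embodiment's fallback, the non-character frame argument would simply be the weighted average composition frame instead.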
  • The interpolation frame generation unit 138 outputs, to the interpolation control unit 140, the input frame F corresponding to a timing at which the first gain is 0, and does not output the input frame F at timings where the first gain is other than 0.
  • The interpolation control unit 140 acquires the input frames and interpolation frames from the interpolation frame generation unit 138 and outputs them based on the output synchronization timing, causing the display unit 110 to display the video.
  • FIG. 5 is a flowchart showing the operation of the display device.
  • First, the character scroll area C in the input frame F is detected (step S1), and the first and second input video detection vectors V and B and the local area vectors are acquired (step S2). Thereafter, the interpolation distance ratio is recognized based on the input vertical synchronization signal and the output vertical synchronization signal (step S3), and the non-character partial gain and the character partial gain are set (step S4).
  • Then, the frame rate conversion apparatus 120 generates the non-character composition frame L corresponding to the non-character partial gain and the character composition frame P corresponding to the character partial gain (step S5). An interpolation frame obtained by synthesizing the non-character composition frame L and the character composition frame P is then generated (step S6), and the video is displayed on the display unit 110 (step S7).
  • the interpolation frame generation device 130 of the display device 100 detects the character scroll area C included in the input frame F, acquires the movement of the character scroll area C as the second input video detection vector B, and acquires the movement of the object Z existing in a portion other than the character scroll area C as the first input video detection vector V. Further, the interpolation distance ratio is calculated by dividing the interpolation distance, which is the interval between a predetermined output synchronization timing and the input synchronization timing of a predetermined input frame F, by the output synchronization timing interval. Then, the first input video use vector K is set by multiplying the interpolation distance ratio, the first input video detection vector V, and the non-character partial gain that increases or decreases in the range of 0 to 1.
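The two quantities defined above, the interpolation distance ratio and the first input video use vector K, can be sketched as follows. The function names and the time-based signature are illustrative assumptions; the arithmetic follows the text (ratio = interpolation distance / output interval, K = ratio × V × gain).

```python
def interpolation_distance_ratio(output_sync_time, input_sync_time, output_interval):
    """Interpolation distance (gap between the output synchronization timing
    and the input synchronization timing of input frame F) divided by the
    output synchronization timing interval."""
    return (output_sync_time - input_sync_time) / output_interval

def first_use_vector(ratio, detection_vector_v, non_char_gain):
    """K = interpolation distance ratio x first input video detection
    vector V x non-character partial gain (gain varies in [0, 1])."""
    vx, vy = detection_vector_v
    return (ratio * non_char_gain * vx, ratio * non_char_gain * vy)
```

For an output timing a quarter of the way between two input timings, a detected motion of (8, 4) pixels with a gain of 0.5 yields a use vector of (1.0, 0.5).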
  • a non-character composition frame L having a motion amount based on the first input video use vector K is generated.
  • similarly, the second input video use vector D is set by multiplying the interpolation distance ratio, the second input video detection vector B, and the character partial gain fixed to 1, and a character composition frame P having a motion amount based on this second input video use vector D is generated.
  • the non-character composition frame L and the character composition frame P are combined to generate an interpolation frame. For this reason, for the portion other than the character scroll area C in the interpolation frame, the image of the non-character composition frame L generated based on the first input video detection vector V corresponding to the movement of the object Z and the non-character partial gain is used, so the quality of the portion other than the character scroll area C can be guaranteed. Further, for the character scroll area C in the interpolation frame, the image of the character composition frame P generated based on the second input video detection vector B corresponding to the movement of the character scroll area C and the character partial gain, whose value is equal to or greater than the non-character partial gain, is used, so the quality of the character scroll area C can be guaranteed.
  • the interpolation frame generation device 130 increases the non-character partial gain when the first input video detection vector V is continuous, and decreases it when continuity is absent, and generates an interpolation frame using the non-character composition frame L based on the increased or decreased non-character partial gain. For this reason, when the first input video detection vector V is continuous, an interpolation frame in which the object Z lies at a position close to its movement locus can be generated. On the other hand, when the first input video detection vector V is not continuous, reducing the non-character partial gain, as compared with keeping it fixed, limits the deviation from the movement trajectory of the object Z in the generated interpolation frames. Therefore, even when the first input video detection vector V is not continuous, errors in the movement of the object Z can be minimized.
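The continuity-driven gain adjustment described above can be sketched as a simple clamped update. The step size of 0.25 is an assumption (the excerpt does not state the first embodiment's step); only the direction of the update and the [0, 1] clamp follow the text.

```python
def update_non_char_gain(gain, vector_is_continuous, step=0.25):
    """Raise the non-character partial gain toward 1 while the first input
    video detection vector V stays continuous; lower it toward 0 when
    continuity is lost, limiting deviation from the object's trajectory."""
    if vector_is_continuous:
        return min(1.0, gain + step)
    return max(0.0, gain - step)
```

The clamps keep the gain inside [0, 1] no matter how long a continuous or discontinuous run lasts.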
  • FIG. 6 is a block diagram illustrating a schematic configuration of the display device.
  • FIG. 7 is a schematic diagram showing setting control of the vector correspondence gain and the weighted average correspondence gain.
  • FIG. 8 and FIG. 9 are schematic diagrams showing a generation state of a non-character composition frame or a weighted average composition frame.
  • the display device 200 includes a display unit 110 and a frame rate conversion device 220. Further, the frame rate conversion device 220 includes an interpolation frame generation device 230 as an arithmetic means and an interpolation control unit 140.
  • the interpolation frame generation device 230 includes a frame memory 131, a character detection unit 132, a vector acquisition unit 133, a gain control unit 234 that also functions as a vector acquisition accuracy determination unit, an interpolation distance ratio recognition unit 235 as an interpolation distance detection unit, a vector frame rate conversion processing unit 136, a weighted average frame rate conversion processing unit 237, and an interpolation frame generation unit 238.
  • as shown in FIG. 7, the gain control unit 234 increases or decreases the non-character partial gain, set to 1 in advance, and the weighted average correspondence gain, set to 0 in advance, in steps of 0.25 within the range of 0 to 1. Further, at the time of this increase or decrease, the gains are controlled so that at least one of the non-character partial gain and the weighted average correspondence gain is always 0.
  • the initial set values of the non-character partial gain and the weighted average correspondence gain are not limited to 1 and may be 0 or 0.5.
  • the increase/decrease amount of the non-character partial gain and the weighted average correspondence gain is not limited to 0.25, and may be, for example, 0.1 or 0.5. The amount by which a gain is increased may also differ from the amount by which it is decreased, and the increase/decrease amounts may differ between the non-character partial gain and the weighted average correspondence gain.
  • the gain control unit 234 determines the acquisition accuracy of the first input video detection vector V by the same processing as the gain control unit 134 of the first embodiment. When the acquisition accuracy is high, it determines whether the weighted average correspondence gain can be reduced. When it determines that the weighted average correspondence gain can be reduced because it is greater than 0, it decreases the weighted average correspondence gain by 0.25. On the other hand, when it determines that the weighted average correspondence gain is 0 and cannot be reduced, it maintains the state in which both the non-character partial gain and the weighted average correspondence gain are 0 for one non-character composition frame L, and then increases the non-character partial gain by 0.25.
  • likewise, when the acquisition accuracy is low, the gain control unit 234 determines whether the non-character partial gain can be reduced. If the non-character partial gain is greater than 0, it determines that the gain can be reduced and decreases it by 0.25. If it determines that the gain is 0 and cannot be reduced, it maintains the state in which both the non-character partial gain and the weighted average correspondence gain are 0 for one non-character composition frame L, and then increases the weighted average correspondence gain by 0.25. Note that when the non-character partial gain or the weighted average correspondence gain is 1 and it is determined to increase it, the value of 1 is maintained. The gain control unit 234 then outputs the non-character partial gain to the vector frame rate conversion processing unit 136 and the weighted average correspondence gain to the weighted average frame rate conversion processing unit 237.
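The switching behavior of the two gains can be sketched as a small state machine. This is an interpretive sketch: the 0.25 step, the [0, 1] clamp, and the initial values follow the text, while the one-frame hold at (0, 0) emerges from the ordering of the checks rather than being modeled explicitly.

```python
class GainControllerSketch:
    """Mimics the described rule: at least one of the two gains is always 0,
    and the active gain is walked down to 0 before the other is walked up."""

    def __init__(self, step=0.25):
        self.step = step
        self.non_char = 1.0   # non-character partial gain
        self.weighted = 0.0   # weighted average correspondence gain

    def update(self, accuracy_high):
        if accuracy_high:
            if self.weighted > 0:          # still reducible
                self.weighted = max(0.0, self.weighted - self.step)
            else:                          # weighted already 0: raise the other
                self.non_char = min(1.0, self.non_char + self.step)
        else:
            if self.non_char > 0:
                self.non_char = max(0.0, self.non_char - self.step)
            else:
                self.weighted = min(1.0, self.weighted + self.step)
        return self.non_char, self.weighted
```

Feeding a run of low-accuracy frames walks the non-character gain 1.0 → 0.75 → 0.5 → 0.25 → 0, and only then does the weighted average gain start rising, so "at least one gain is 0" holds throughout.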
  • furthermore, when the acquisition accuracy of the first input video detection vector V shifts from below a predetermined level to above it and the weighted average correspondence gain becomes 0, the gain control unit 234 outputs to the interpolation frame generation unit 238 an output selection signal directing it to use, for interpolation frame generation, the non-character composition frame Lc generated by the vector frame rate conversion processing unit 136. Conversely, when the acquisition accuracy shifts from above the predetermined level to below it and the non-character partial gain becomes 0, the gain control unit 234 outputs to the interpolation frame generation unit 238 an output selection signal directing it to use the weighted average composition frame Mc.
  • the gain control unit 234 always sets the character part gain corresponding to the character scroll area C to 1 and outputs it to the vector frame rate conversion processing unit 136.
  • the interpolation distance ratio recognition unit 235 calculates and recognizes the interpolation distance ratio by the same process as the interpolation distance ratio recognition unit 135 of the first embodiment, and outputs the interpolation distance ratio to the vector frame rate conversion processing unit 136 and the weighted average frame rate conversion processing unit 237.
  • the vector frame rate conversion processing unit 136 generates a non-character composition frame L and a character composition frame P, and outputs them to the interpolation frame generation unit 238.
  • the weighted average frame rate conversion processing unit 237 generates a weighted average synthesis frame Mc by executing linear interpolation processing based on the weighted average correspondence gain. Specifically, the weighted average frame rate conversion processing unit 237 calculates the third gain by multiplying the absolute value of the interpolation distance ratio and the weighted average corresponding gain. Then, the reference surface weighted average weight and the target surface weighted average weight are calculated by substituting the third gain into the following equations (1) and (2).
  • the weighted average frame rate conversion processing unit 237 then generates, as the weighted average composition frame Mc, an image in which the colors of the pixels at each corresponding position in the input frame Fa and the input frame F(a+1) are mixed at a ratio based on the reference plane weighted average weight and the target plane weighted average weight.
  • when the interpolation position is closer to the past input frame Fa, the reference plane frame corresponding to the weighted average composition frame Mc is recognized as the past input frame Fa. In the figures, "P" indicates that the reference plane frame has been set to the past input frame Fa, and "U" indicates that it has been set to the future input frame F(a+1).
  • in this case, as the color of each predetermined pixel in the weighted average composition frame Mc, the weighted average frame rate conversion processing unit 237 applies a color in which the colors of the pixels at the corresponding positions in the input frame Fa and the input frame F(a+1) are mixed at ratios corresponding to the reference plane weighted average weight and the target plane weighted average weight, respectively.
  • on the other hand, when the interpolation position is closer to the future input frame F(a+1), the reference plane frame corresponding to the weighted average composition frame Mc is recognized as the future input frame F(a+1). Then, as the color of each predetermined pixel in the weighted average composition frame Mc, a color in which the colors of the pixels at the corresponding positions in the input frame F(a+1) and the input frame Fa are mixed at ratios corresponding to the reference plane weighted average weight and the target plane weighted average weight, respectively, is applied.
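The choice of reference plane frame ("P" = past Fa, "U" = future F(a+1)) appears to follow which input frame the interpolation position is nearer to, judging from the M19/M20 example that follows. This nearest-frame rule is an inference, not stated verbatim in the excerpt.

```python
def reference_plane(position_between_frames):
    """position_between_frames lies in [0, 1): 0 is the past frame Fa and
    1 is the future frame F(a+1). Return 'P' (past) when the interpolation
    position is nearer Fa, 'U' (future) when it is nearer F(a+1)."""
    return 'P' if position_between_frames < 0.5 else 'U'
```

Under this reading, a frame inserted 0.1 of the way from F8 to F9 takes F8 as its reference plane, while one inserted 0.7 of the way takes F9.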
  • the weighted average frame rate conversion processing unit 237 generates a weighted average synthesis frame M19 to be inserted at a position close to the input frame F8 between the input frame F8 and the input frame F9.
  • the input frame F8 is recognized as the reference plane frame of the weighted average composition frame M19. As the color of the position corresponding to the object Z8 on the weighted average composition frame M19, a color in which the color of the object Z8 and the color of the corresponding position on the input frame F9 are mixed at a ratio of 0.9:0.1 is applied. Likewise, as the color of the position corresponding to the object Z9, a color in which the color of the corresponding position on the input frame F8 and the color of the object Z9 are mixed at a ratio of 0.9:0.1 is applied.
  • similarly, for the weighted average composition frame M20, which is inserted at a position close to the input frame F9, the input frame F9 is recognized as the reference plane frame. As the color of the position corresponding to the object Z8, a color in which the color of the corresponding position on the input frame F9 and the color of the object Z8 are mixed at a ratio of 0.7:0.3 is applied, and as the color of the position corresponding to the object Z9, a color in which the color of the object Z9 and the color of the corresponding position on the input frame F8 are mixed at a ratio of 0.7:0.3 is applied.
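The mixes in this example (0.9:0.1 for M19 near F8, 0.7:0.3 for M20 near F9) are consistent with reading equations (1) and (2), which are not reproduced in this excerpt, as: third gain = |interpolation distance ratio| × weighted average correspondence gain, reference plane weight = 1 − third gain, target plane weight = third gain. The sketch below works under that assumption and should not be taken as the patent's exact formulas.

```python
def weighted_average_weights(interp_distance_ratio, weighted_avg_gain):
    """Assumed reconstruction of equations (1)/(2): the reference plane
    (the nearer input frame) keeps most of the weight and the other
    frame receives the remainder."""
    third_gain = abs(interp_distance_ratio) * weighted_avg_gain
    return 1.0 - third_gain, third_gain  # (reference weight, target weight)

def blend_pixel(ref_color, tgt_color, ref_weight, tgt_weight):
    """Per-channel mix of the two pixels at a corresponding position."""
    return tuple(ref_weight * r + tgt_weight * t
                 for r, t in zip(ref_color, tgt_color))
```

With a gain of 1 and distance ratios of 0.1 and 0.3 from the respective reference frames, this reproduces the 0.9:0.1 and 0.7:0.3 mixes above.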
  • the weighted average frame rate conversion processing unit 237 outputs the weighted average synthesis frame M to the interpolation frame generation unit 238.
  • note that the vector frame rate conversion processing unit 136 and the weighted average frame rate conversion processing unit 237 generate the non-character composition frame Lc and the weighted average composition frame Mc, using the immediately preceding input frame F as necessary, and output them as the non-character composition frame L and the weighted average composition frame M, respectively.
  • based on the output selection signal from the gain control unit 234, the interpolation frame generation unit 238 combines one of the non-character composition frame L from the vector frame rate conversion processing unit 136 and the weighted average composition frame M from the weighted average frame rate conversion processing unit 237 with the character composition frame P to generate an interpolation frame.
  • specifically, at the timings between the input frame F5 and the input frame F7, where the weighted average correspondence gain is 0, the interpolation frame generation unit 238 generates an interpolation frame by combining the non-character composition frame L with the character composition frame P having the same output synchronization timing. At the timings where the non-character partial gain is 0, it generates an interpolation frame by combining the weighted average composition frame M with the character composition frame P whose output synchronization timing coincides with that weighted average composition frame M. Then, the interpolation frame generation unit 238 outputs the generated interpolation frame to the interpolation control unit 140.
  • FIG. 10 is a flowchart showing the operation of the display device.
  • the interpolation frame generation device 230 of the display device 200 recognizes the interpolation distance ratio (step S11) after performing the processes of steps S1 and S2, and sets the non-character partial gain, the character partial gain, and the weighted average correspondence gain (step S12). Then, the interpolation frame generation device 230 performs vector frame rate conversion processing (step S13) and weighted average frame rate conversion processing (step S14). That is, in step S13, a non-character composition frame L and a character composition frame P are generated, and in step S14, a weighted average composition frame M is generated.
  • the frame rate conversion apparatus 220 recognizes either the non-character composition frame L or the weighted average composition frame M as the frame to be combined with the character composition frame P, according to the settings of the non-character partial gain and the weighted average correspondence gain (step S15). Then, the recognized frame and the character composition frame P are combined to generate an interpolation frame (step S16), and the process of step S7 is performed.
  • the interpolated frame generation device 230 of the display device 200 generates a non-character combining frame L and a character combining frame P, similarly to the interpolated frame generating device 130 of the first embodiment. Further, the interpolation frame generation device 230 calculates the reference plane weighted average weight and the target plane weighted average weight based on the above-described equations (1) and (2), and the input frame Fa and the input frame F (a + 1). A weighted average combining frame M is generated in which the colors of the pixels at the corresponding positions are set to colors obtained by mixing the colors corresponding to the reference surface weighted average weight and the target surface weighted average weight.
  • when the acquisition accuracy of the first input video detection vector V is high, an interpolation frame is generated by combining the non-character composition frame L and the character composition frame P; when the acquisition accuracy is low, an interpolation frame is generated by combining the weighted average composition frame M and the character composition frame P.
  • for this reason, for the portion other than the character scroll area C in the interpolation frame, when the acquisition accuracy of the first input video detection vector V is high, the image of the non-character composition frame L generated based on the first input video detection vector V corresponding to the movement of the object Z and the non-character partial gain is used, so the quality of the portion other than the character scroll area C can be guaranteed.
  • when the acquisition accuracy of the first input video detection vector V is low, an image of the weighted average composition frame M in which the colors of two consecutive input frames F are mixed is used, so the process of generating the frame to be combined with the character composition frame P can be simplified compared with the case where the non-character composition frame L is used. Further, for the character scroll area C in the interpolation frame, the image of the character composition frame P generated based on the second input video detection vector B corresponding to the movement of the character scroll area C and the character partial gain, whose value is equal to or greater than the non-character partial gain, is used, so the quality of the character scroll area C can be guaranteed.
  • the character partial gain may be fixed to 1 as shown in FIG. 4, and the non-character partial gain may be fixed to 0.75 as shown in FIG. Further, the character partial gain may be fixed to 1 as shown in FIG. 4 and the non-character partial gain may be increased or decreased within a range of 0 to 0.75 as shown in FIG.
  • in these cases, a character composition frame P as shown in FIG. 4 is generated based on the character partial gain, a non-character composition frame L as shown in FIGS. 11 and 12 is generated based on the non-character partial gain, and an interpolation frame is generated by combining the character composition frame P and the non-character composition frame L that have the same output synchronization timing.
  • the character partial gain may be fixed to 0.75 as shown in FIG. 13, and the non-character partial gain may be fixed to 0.25 as shown in FIG.
  • in this case, a character composition frame P as shown in FIG. 13 is generated based on the character partial gain, and a non-character composition frame L as shown in FIG. 14 is generated based on the non-character partial gain.
  • note that step S13 may be performed after step S14, or may be performed simultaneously with step S14.
  • in the second embodiment, the interpolation frame generation unit 238 was illustrated as using one of the non-character composition frame L and the weighted average composition frame M to generate an interpolation frame based on the vector acquisition accuracy. However, a composition frame obtained by taking a weighted average of the non-character composition frame L and the weighted average composition frame M may be used to generate the interpolation frame. That is, when the vector acquisition accuracy is higher than a predetermined value, the weight of the non-character composition frame L in the weighted average is made larger than when the vector acquisition accuracy is below the predetermined value, which can reduce the shock of the change.
  • note that image blurring is smaller in the configuration in which one of the non-character composition frame L and the weighted average composition frame M is used to generate the interpolation frame, so the image quality is improved in that configuration.
  • although the interpolation frame generation device of the present invention has been exemplified as applied to a display device, it may be applied to any configuration that converts the frame rate of an input video and displays it.
  • the present invention may be applied to a playback device or a recording / playback device.
  • each function described above is constructed as a program, but it may also be configured by hardware such as a circuit board or an element such as an IC (Integrated Circuit), and can be used in any form.
  • as described above, the interpolation frame generation device 130 acquires the movement of the character scroll area C included in the input frame F as the second input video detection vector B, and acquires the movement of the object Z existing in a portion other than the character scroll area C as the first input video detection vector V.
  • a non-character combining frame L is generated based on the first input video detection vector V and the non-character partial gain that increases or decreases in the range of 0 to 1.
  • a character synthesis frame P is generated based on the second input video detection vector B and the character partial gain fixed to 1. Then, the non-character combining frame L and the character combining frame P are combined to generate an interpolation frame.
  • for the portion other than the character scroll area C in the interpolation frame, the image of the non-character composition frame L generated based on the first input video detection vector V corresponding to the movement of the object Z and the non-character partial gain is used, so the quality of the portion other than the character scroll area C can be guaranteed. Further, for the character scroll area C in the interpolation frame, the image of the character composition frame P generated based on the second input video detection vector B corresponding to the movement of the character scroll area C and the character partial gain, whose value is equal to or greater than the non-character partial gain, is used, so the quality of the character scroll area C can be guaranteed.
  • the interpolation frame generation device 230 generates a non-character composition frame L and a character composition frame P, and also generates a weighted average composition frame M in which the color of each pixel at the corresponding positions in the input frame Fa and the input frame F(a+1) is set to a color obtained by mixing the colors of those pixels at ratios corresponding to the reference plane weighted average weight and the target plane weighted average weight. Then, when the acquisition accuracy of the first input video detection vector V is high, an interpolation frame is generated by combining the non-character composition frame L and the character composition frame P; when the acquisition accuracy is low, an interpolation frame is generated by combining the weighted average composition frame M and the character composition frame P.
  • for this reason, for the portion other than the character scroll area C in the interpolation frame, when the acquisition accuracy of the first input video detection vector V is high, the image of the non-character composition frame L generated based on the first input video detection vector V corresponding to the movement of the object Z and the non-character partial gain is used, so the quality of the portion other than the character scroll area C can be guaranteed.
  • when the acquisition accuracy is low, an image of the weighted average composition frame M in which the colors of two consecutive input frames F are mixed is used, so the process of generating the frame to be combined with the character composition frame P can be simplified compared with the case where the non-character composition frame L is used. Further, for the character scroll area C in the interpolation frame, the image of the character composition frame P generated based on the second input video detection vector B corresponding to the movement of the character scroll area C and the character partial gain, whose value is equal to or greater than the non-character partial gain, is used, so the quality of the character scroll area C can be guaranteed.
  • the present invention can be used as an interpolation frame generation device, a frame rate conversion device, a display device, an interpolation frame generation method, a program thereof, and a recording medium on which the program is recorded.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Television Systems (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

An interpolation frame generation device of a frame rate conversion device of a display device acquires: the movement of a character scroll area (C) included in an input frame as a second input video detection vector; and the movement of an object present in a portion other than the character scroll area (C) as a first input video detection vector. A first input video use vector is set by multiplying the first input video detection vector, an interpolation distance ratio, and a non-character partial gain that varies in the range of 0 to 1, whereby a non-character composition frame based on the first input video use vector is generated. A second input video use vector is set by multiplying the second input video detection vector, the interpolation distance ratio, and a character partial gain fixed to 1, whereby a character composition frame based on the second input video use vector is generated. By combining the non-character composition frame and the character composition frame, an interpolation frame is generated.
PCT/JP2008/069272 2008-10-23 2008-10-23 Dispositif de génération de trame d'interpolation, dispositif de conversion de taux de trame, dispositif d'affichage, procédé de génération de trame d'interpolation, programme associé, et support d'enregistrement sur lequel son programme est enregistré WO2010046990A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2008/069272 WO2010046990A1 (fr) 2008-10-23 2008-10-23 Dispositif de génération de trame d'interpolation, dispositif de conversion de taux de trame, dispositif d'affichage, procédé de génération de trame d'interpolation, programme associé, et support d'enregistrement sur lequel son programme est enregistré

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2008/069272 WO2010046990A1 (fr) 2008-10-23 2008-10-23 Dispositif de génération de trame d'interpolation, dispositif de conversion de taux de trame, dispositif d'affichage, procédé de génération de trame d'interpolation, programme associé, et support d'enregistrement sur lequel son programme est enregistré

Publications (1)

Publication Number Publication Date
WO2010046990A1 true WO2010046990A1 (fr) 2010-04-29

Family

ID=42119051

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2008/069272 WO2010046990A1 (fr) 2008-10-23 2008-10-23 Dispositif de génération de trame d'interpolation, dispositif de conversion de taux de trame, dispositif d'affichage, procédé de génération de trame d'interpolation, programme associé, et support d'enregistrement sur lequel son programme est enregistré

Country Status (1)

Country Link
WO (1) WO2010046990A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2553125A (en) * 2016-08-24 2018-02-28 Snell Advanced Media Ltd Comparing video sequences using fingerprints
CN109729298A (zh) * 2017-10-27 2019-05-07 联咏科技股份有限公司 图像处理方法与图像处理装置
WO2022158132A1 (fr) * 2021-01-22 2022-07-28 ソニーセミコンダクタソリューションズ株式会社 Dispositif de traitement vidéo, procédé de traitement vidéo et dispositif d'affichage vidéo

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6126382A (ja) * 1984-07-17 1986-02-05 Kokusai Denshin Denwa Co Ltd <Kdd> 動き量を用いた動画像フレームレート変換方式
JPH10501953A (ja) * 1995-04-11 1998-02-17 フィリップス エレクトロニクス ネムローゼ フェンノートシャップ 動き補償されたフィールドレート変換
WO2003055211A1 (fr) * 2001-12-13 2003-07-03 Sony Corporation Processeur de signaux d'image et procede de traitement
JP2004023673A (ja) * 2002-06-19 2004-01-22 Sony Corp 動きベクトル検出装置及び方法、動き補正装置及び方法
JP2007329952A (ja) * 2006-02-28 2007-12-20 Sharp Corp 画像表示装置及び方法、画像処理装置及び方法
JP2008107753A (ja) * 2006-09-28 2008-05-08 Sharp Corp 画像表示装置及び方法、画像処理装置及び方法
WO2008136116A1 (fr) * 2007-04-26 2008-11-13 Pioneer Corporation Contrôleur de génération de trame d'interpolation, convertisseur de taux de trame, dispositif d'affichage, procédé pour commander la génération d'une trame d'interpolation, programme pour celui-ci, et support d'enregistrement stockant le programme

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6126382A (ja) * 1984-07-17 1986-02-05 Kokusai Denshin Denwa Co Ltd <Kdd> 動き量を用いた動画像フレームレート変換方式
JPH10501953A (ja) * 1995-04-11 1998-02-17 フィリップス エレクトロニクス ネムローゼ フェンノートシャップ 動き補償されたフィールドレート変換
WO2003055211A1 (fr) * 2001-12-13 2003-07-03 Sony Corporation Processeur de signaux d'image et procede de traitement
JP2004023673A (ja) * 2002-06-19 2004-01-22 Sony Corp 動きベクトル検出装置及び方法、動き補正装置及び方法
JP2007329952A (ja) * 2006-02-28 2007-12-20 Sharp Corp 画像表示装置及び方法、画像処理装置及び方法
JP2008107753A (ja) * 2006-09-28 2008-05-08 Sharp Corp 画像表示装置及び方法、画像処理装置及び方法
WO2008136116A1 (fr) * 2007-04-26 2008-11-13 Pioneer Corporation Contrôleur de génération de trame d'interpolation, convertisseur de taux de trame, dispositif d'affichage, procédé pour commander la génération d'une trame d'interpolation, programme pour celui-ci, et support d'enregistrement stockant le programme

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2553125A (en) * 2016-08-24 2018-02-28 Snell Advanced Media Ltd Comparing video sequences using fingerprints
US10395121B2 (en) 2016-08-24 2019-08-27 Snell Advanced Media Limited Comparing video sequences using fingerprints
GB2553125B (en) * 2016-08-24 2022-03-09 Grass Valley Ltd Comparing video sequences using fingerprints
CN109729298A (zh) * 2017-10-27 2019-05-07 联咏科技股份有限公司 图像处理方法与图像处理装置
CN109729298B (zh) * 2017-10-27 2020-11-06 联咏科技股份有限公司 图像处理方法与图像处理装置
WO2022158132A1 (fr) * 2021-01-22 2022-07-28 ソニーセミコンダクタソリューションズ株式会社 Dispositif de traitement vidéo, procédé de traitement vidéo et dispositif d'affichage vidéo
US12198657B2 (en) 2021-01-22 2025-01-14 Sony Semiconductor Solutions Corporation Image processing device, image processing method, and image display device

Similar Documents

Publication Publication Date Title
US7965303B2 (en) Image displaying apparatus and method, and image processing apparatus and method
JP4157579B2 (ja) 画像表示装置及び方法、画像処理装置及び方法
JP4359223B2 (ja) 映像補間装置とこれを用いたフレームレート変換装置,映像表示装置
JP4525692B2 (ja) 画像処理装置、画像処理方法、画像表示装置
US20070097260A1 (en) Moving image display device and method for moving image display
US20100214473A1 (en) Image display system, image display apparatus, and control method for image display apparatus
WO2010046990A1 (fr) Dispositif de génération de trame d'interpolation, dispositif de conversion de taux de trame, dispositif d'affichage, procédé de génération de trame d'interpolation, programme associé, et support d'enregistrement sur lequel son programme est enregistré
WO2011155258A1 (fr) Appareil de traitement d'image, son procédé, appareil d'affichage d'image et son procédé
EP1761045A2 (fr) Dispositif de traitement d'image et procédé de conversion de vidéo à balayage entrelacé en video à balayage progressif
JPWO2008136116A1 (ja) 内挿フレーム作成制御装置、フレームレート変換装置、表示装置、内挿フレーム作成制御方法、そのプログラム、および、そのプログラムを記録した記録媒体
CN100401763C (zh) 运动补偿设备和方法
JP5208381B2 (ja) 動画像フレームレート変換装置および動画像フレームレート変換方法
JP2009055340A (ja) 画像表示装置及び方法、画像処理装置及び方法
WO2010046989A1 (fr) Dispositif de conversion de taux de trame, dispositif de traitement d'images, dispositif d'affichage, procédé de conversion de taux de trame, programme associé, et support d'enregistrement sur lequel le programme est enregistré
US20070024608A1 (en) Apparatus for controlling displaying images and method of doing the same
JP2009181067A (ja) 画像表示装置及び方法、画像処理装置及び方法
JP2006154751A (ja) 動画ボケ改善のための信号処理
JP2008017321A (ja) 画像処理装置及び画像処理方法
JP4355347B2 (ja) 画像表示装置及び方法、画像処理装置及び方法
JP5219646B2 (ja) 映像処理装置及び映像処理装置の制御方法
JP5077037B2 (ja) 画像処理装置
JP2008193730A (ja) 画像表示装置及び方法、画像処理装置及び方法
KR20050039644A (ko) 데이터 처리 장치 및 데이터 처리 방법, 프로그램, 및기록 매체
JP6218575B2 (ja) 画像処理装置及びその制御方法
JP4736456B2 (ja) 走査線補間装置、映像表示装置、映像信号処理装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08877556

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

NENP Non-entry into the national phase

Ref country code: JP

122 Ep: pct application non-entry in european phase

Ref document number: 08877556

Country of ref document: EP

Kind code of ref document: A1

点击 这是indexloc提供的php浏览器服务,不要输入任何密码和下载